Dataset schema (column, dtype, min, max):

Column                             Dtype          Min      Max
Available Count                    int64          1        31
AnswerCount                        int64          1        35
GUI and Desktop Applications       int64          0        1
Users Score                        int64          -17      588
Q_Score                            int64          0        6.79k
Python Basics and Environment      int64          0        1
Score                              float64        -1       1.2
Networking and APIs                int64          0        1
Question                           stringlengths  15       7.24k
Database and SQL                   int64          0        1
Tags                               stringlengths  6        76
CreationDate                       stringlengths  23       23
System Administration and DevOps   int64          0        1
Q_Id                               int64          469      38.2M
Answer                             stringlengths  15       7k
Data Science and Machine Learning  int64          0        1
ViewCount                          int64          13       1.88M
is_accepted                        bool           2 classes
Web Development                    int64          0        1
Other                              int64          1        1
Title                              stringlengths  15       142
A_Id                               int64          518      72.2M
2
2
1
0
2
0
0
0
I'm trying to use the boost.python library in a C++ project (Windows + VS9) but it always tries to link against python25.lib. Is it possible to link with version 2.6.x of Python? Thanks
0
python,boost
2010-04-14T08:35:00.000
0
2,635,933
You could try putting -lpython26 when linking
0
712
false
0
1
boost python version
2,636,724
2
2
1
1
2
0
0.099668
0
I'm trying to use the boost.python library in a C++ project (Windows + VS9) but it always tries to link against python25.lib. Is it possible to link with version 2.6.x of Python? Thanks
0
python,boost
2010-04-14T08:35:00.000
0
2,635,933
You need to recompile the boost-python library, pointing Boost.Build to the needed Python version. P.S. This heals the problem of undefined references while linking with the needed library. I believe you've already turned off autolinking.
0
712
false
0
1
boost python version
2,636,711
3
3
0
0
5
0
0
1
I'm playing around with sockets in C/Python and I wonder what the most efficient way is to send headers from a Python dictionary to the client socket. My ideas: (1) use a send call for every header. Pros: no memory allocation needed. Cons: many send calls -- probably error prone; error management could be rather complicated. (2) use a buffer. Pros: one send call, error checking a lot easier. Cons: need a buffer :-) malloc/realloc could be rather slow, and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-)
0
python,c,sockets,buffer,send
2010-04-14T15:04:00.000
0
2,638,490
Unless you're sending a truly huge amount of data, you're probably better off using one buffer. If you use a geometric progression for growing your buffer size, the number of allocations becomes an amortized constant, and the time spent allocating the buffer generally follows suit.
0
867
false
0
1
What is faster: multiple `send`s or using buffering?
2,638,568
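A minimal sketch of the single-buffer idea from the answer above (not from the original post; Python's bytearray already over-allocates geometrically, so repeated appends cost an amortized O(1) per byte):

```python
# Sketch: collect all header lines into one growing buffer, then a
# single send() call suffices. The header names/values are invented.

def build_message(headers):
    """Join header lines into one buffer for a single send()."""
    buf = bytearray()  # bytearray grows geometrically under the hood
    for name, value in headers.items():
        buf += ("%s: %s\r\n" % (name, value)).encode("ascii")
    buf += b"\r\n"     # blank line terminates the header block
    return bytes(buf)

msg = build_message({"Content-Type": "text/html", "Content-Length": "42"})
```

After building, `sock.sendall(msg)` would transmit the whole header block in one call instead of one call per header.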
3
3
0
0
5
0
0
1
I'm playing around with sockets in C/Python and I wonder what the most efficient way is to send headers from a Python dictionary to the client socket. My ideas: (1) use a send call for every header. Pros: no memory allocation needed. Cons: many send calls -- probably error prone; error management could be rather complicated. (2) use a buffer. Pros: one send call, error checking a lot easier. Cons: need a buffer :-) malloc/realloc could be rather slow, and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-)
0
python,c,sockets,buffer,send
2010-04-14T15:04:00.000
0
2,638,490
A send() call implies a round-trip to the kernel (the part of the OS which deals with the hardware directly). It has a unit cost of about a few hundred clock cycles. This is harmless unless you are trying to call send() millions of times. Usually, buffering is about calling send() only once in a while, when "enough data" has been gathered. "Enough" does not mean "the whole message" but something like "enough bytes so that the unit cost of the kernel round-trip is dwarfed". As a rule of thumb, an 8-kB buffer (8192 bytes) is traditionally considered good. Anyway, for all performance-related questions, nothing beats an actual measurement. Try it. Most of the time, there isn't any actual performance problem worth worrying about.
0
867
false
0
1
What is faster: multiple `send`s or using buffering?
2,638,599
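The ~8 kB rule of thumb above can be sketched as a tiny buffering wrapper. The class and the fake socket below are illustrative assumptions, not part of the original answer:

```python
# Sketch: gather writes in memory and call send() only once roughly a
# buffer's worth of data (default 8192 bytes) has accumulated.

class BufferedSender:
    def __init__(self, sock, limit=8192):
        self.sock, self.limit, self.buf = sock, limit, bytearray()

    def write(self, data):
        self.buf += data
        if len(self.buf) >= self.limit:
            self.flush()

    def flush(self):
        if self.buf:
            self.sock.send(bytes(self.buf))
            self.buf.clear()

# A fake socket stands in for a real connection in this demonstration.
class FakeSock:
    def __init__(self):
        self.calls = []
    def send(self, data):
        self.calls.append(data)

s = FakeSock()
w = BufferedSender(s, limit=8)
for chunk in (b"ab", b"cd", b"ef", b"gh"):
    w.write(chunk)   # send() fires once 8 bytes have accumulated
w.flush()            # no-op here; the buffer is already empty
```

Four write() calls result in a single send() of `b"abcdefgh"`, illustrating how the kernel round-trips are amortized.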
3
3
0
3
5
0
1.2
1
I'm playing around with sockets in C/Python and I wonder what the most efficient way is to send headers from a Python dictionary to the client socket. My ideas: (1) use a send call for every header. Pros: no memory allocation needed. Cons: many send calls -- probably error prone; error management could be rather complicated. (2) use a buffer. Pros: one send call, error checking a lot easier. Cons: need a buffer :-) malloc/realloc could be rather slow, and using a (too) big buffer to avoid realloc calls wastes memory. Any tips for me? Thanks :-)
0
python,c,sockets,buffer,send
2010-04-14T15:04:00.000
0
2,638,490
Because of the way TCP congestion control works, it's more efficient to send data all at once. TCP maintains a window of how much data it will allow to be "in the air" (sent but not yet acknowledged). TCP measures the acknowledgments coming back to figure out how much data it can have "in the air" without causing congestion (i.e., packet loss). If there isn't enough data coming from the application to fill the window, TCP can't make accurate measurements so it will conservatively shrink the window. If you only have a few, small headers and your calls to send are in rapid succession, the operating system will typically buffer the data for you and send it all in one packet. In that case, TCP congestion control isn't really an issue. However, each call to send involves a context switch from user mode to kernel mode, which incurs CPU overhead. In other words, you're still better off buffering in your application. There is (at least) one case where you're better off without buffering: when your buffer is slower than the context switching overhead. If you write a complicated buffer in Python, that might very well be the case. A buffer written in CPython is going to be quite a bit slower than the finely optimized buffer in the kernel. It's quite possible that buffering would cost you more than it buys you. When in doubt, measure. One word of caution though: premature optimization is the root of all evil. The difference in efficiency here is pretty small. If you haven't already established that this is a bottleneck for your application, go with whatever makes your life easier. You can always change it later.
0
867
true
0
1
What is faster: multiple `send`s or using buffering?
2,639,059
3
6
0
5
12
1
0.16514
0
A fellow developer on a project I am on believes that doctests are as good as unit-tests, and that if a piece of code is doctested, it does not need to be unit-tested. I do not believe this to be the case. Can anyone provide some solid, ideally cited, examples either for or against the argument that doctests replace the need for unit-tests? Thank you -Daniel EDIT: Can anyone provide a reference showing that doctesting should not replace unit-testing?
0
python,unit-testing,doctest
2010-04-15T01:57:00.000
0
2,642,282
There's a concrete example in the Python standard library that persuades me that doctests alone aren't always enough, namely the decimal module. It has over 60000 individual testcases (in Lib/test/decimaltestdata); if all those were rewritten as doctests, the decimal module would become very unwieldy indeed. It's possible the number of tests could be slimmed down whilst still giving good coverage, but many of the numerical algorithms are sufficiently complicated that you need huge numbers of individual tests to cover all possible combinations of branches.
0
3,655
false
0
1
Does Python doctest remove the need for unit-tests?
2,643,423
3
6
0
4
12
1
0.132549
0
A fellow developer on a project I am on believes that doctests are as good as unit-tests, and that if a piece of code is doctested, it does not need to be unit-tested. I do not believe this to be the case. Can anyone provide some solid, ideally cited, examples either for or against the argument that doctests replace the need for unit-tests? Thank you -Daniel EDIT: Can anyone provide a reference showing that doctesting should not replace unit-testing?
0
python,unit-testing,doctest
2010-04-15T01:57:00.000
0
2,642,282
Doctests are great for some uses: working and up-to-date documentation; sample tests embedded in docstrings; spikes or design phases when a class's API is not really clear. Unit tests are better in different cases: when you need a clear and somewhat complex setup/teardown; when trying to get better coverage of all cases, including corner cases; for keeping tests independent from each other. In other words, at least for my own use, doctests are great when you focus on explaining what you are doing (docs, but also design phases) but more of a burden when you intend to use tests as a seat belt for refactoring or code coverage.
0
3,655
false
0
1
Does Python doctest remove the need for unit-tests?
2,643,550
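As an illustration of the trade-off discussed above, here is a minimal doctest (the `mean` function is invented for the sketch) executed through the standard doctest machinery:

```python
# Sketch: a docstring example that doubles as a test. The DocTestFinder/
# DocTestRunner pair runs it without relying on module-level discovery.
import doctest

def mean(xs):
    """Average of a non-empty list.

    >>> mean([1, 2, 3])
    2.0
    """
    return sum(xs) / len(xs)

runner = doctest.DocTestRunner()
for test in doctest.DocTestFinder().find(mean, "mean"):
    runner.run(test)
# runner.tries counts examples run; runner.failures counts mismatches
```

This is exactly the kind of test doctests are good at: one readable example per function. The corner cases (empty list, huge inputs) are better left to separate unit tests, as the answers above argue.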
3
6
0
0
12
1
0
0
A fellow developer on a project I am on believes that doctests are as good as unit-tests, and that if a piece of code is doctested, it does not need to be unit-tested. I do not believe this to be the case. Can anyone provide some solid, ideally cited, examples either for or against the argument that doctests replace the need for unit-tests? Thank you -Daniel EDIT: Can anyone provide a reference showing that doctesting should not replace unit-testing?
0
python,unit-testing,doctest
2010-04-15T01:57:00.000
0
2,642,282
I think this is the wrong way to think about doctests. Doctests are documentation. They complement regular unit tests. Think of doctests as documentation examples that happen to be tested. The doctests should be there to illustrate the function to human users. The unit tests should test all the code, even the corner cases. If you add doctests for corner cases, or dozens of doctests, that will just make your docstring hard to read.
0
3,655
false
0
1
Does Python doctest remove the need for unit-tests?
16,453,043
2
3
0
1
2
1
0.066568
0
I've done what I shouldn't have done and written 4 modules (6 hours or so) without running any tests along the way. I have a method inside of /mydir/__init__.py called get_hash(), and a class inside of /mydir/utils.py called SpamClass. /mydir/utils.py imports get_hash() from /mydir/__init__. /mydir/__init__.py imports SpamClass from /mydir/utils.py. Both the class and the method work fine on their own, but for some reason if I try to import /mydir/, I get an import error saying "Cannot import name get_hash" from /mydir/__init__.py. The only stack trace is the line saying that __init__.py imported SpamClass. The next line is where the error occurs in SpamClass when trying to import get_hash. Why is this?
0
python,python-import
2010-04-15T16:20:00.000
0
2,647,088
In absence of more information, I would say you have a circular import that you aren't working around. The simplest, most obvious fix is to not put anything in mydir/__init__.py that you want to use from any module inside mydir. So, move your get_hash function to another module inside the mydir package, and import that module where you need it.
0
493
false
0
1
Python - import error
2,647,367
2
3
0
2
2
1
0.132549
0
I've done what I shouldn't have done and written 4 modules (6 hours or so) without running any tests along the way. I have a method inside of /mydir/__init__.py called get_hash(), and a class inside of /mydir/utils.py called SpamClass. /mydir/utils.py imports get_hash() from /mydir/__init__. /mydir/__init__.py imports SpamClass from /mydir/utils.py. Both the class and the method work fine on their own, but for some reason if I try to import /mydir/, I get an import error saying "Cannot import name get_hash" from /mydir/__init__.py. The only stack trace is the line saying that __init__.py imported SpamClass. The next line is where the error occurs in SpamClass when trying to import get_hash. Why is this?
0
python,python-import
2010-04-15T16:20:00.000
0
2,647,088
To add to what the others have said, another good approach to avoiding circular import problems is to avoid from module import stuff. If you just do standard import module at the top of each script, and write module.stuff in your functions, then by the time those functions run, the import will have finished and the module members will all be available. You then also don't have to worry about situations where some modules can update/change one of their members (or have it monkey-patched by a naughty third party). If you'd imported from the module, you'd still have your old, out-of-date copy of the member. Personally, I only use from-import for simple, dependency-free members that I'm likely to refer to a lot: in particular, symbolic constants.
0
493
false
0
1
Python - import error
2,647,459
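The stale-copy effect described in the answer above can be demonstrated without real files by registering a throwaway module object (the `config_mod` name is invented for this sketch):

```python
# Sketch: `from module import name` snapshots the value at import time,
# while `import module` keeps a live reference to the module.
import sys
import types

mod = types.ModuleType("config_mod")
mod.setting = "old"
sys.modules["config_mod"] = mod      # make it importable

from config_mod import setting       # snapshot of the current value
import config_mod                    # live reference

config_mod.setting = "new"           # updated (or monkey-patched) later

print(setting)                       # still "old": the stale copy
print(config_mod.setting)            # "new": seen through the module
```

This is why always writing `module.stuff` avoids both the stale-copy surprise and many circular-import failures: by the time the function body runs, the module has finished importing.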
2
4
0
7
4
1
1
0
I recently came across a simple but nasty bug. I had a list and I wanted to find the smallest member in it. I used Python's built-in min(). Everything worked great until, in some strange scenario, the list was empty (due to strange user input I could not have anticipated). My application crashed with a ValueError (BTW - not documented in the official docs). I have very extensive unit tests and I regularly check coverage to avoid surprises like this. I also use Pylint (everything is integrated in PyDev) and I never ignore warnings, yet I failed to catch this bug before my users did. Is there anything I can change in my methodology to avoid these kinds of runtime errors? (which would have been caught at compile time in Java / C#?). I'm looking for something more than wrapping my code with a big try-except. What else can I do? How many other built-in Python functions are hiding nasty surprises like this???
0
python,unit-testing,exception-handling,runtime-error,code-coverage
2010-04-15T18:00:00.000
0
2,647,790
The problem here is that malformed external input crashed your program. The solution is to exhaustively unit test possible input scenarios at the boundaries of your code. You say your unit tests are 'extensive', but you clearly hadn't tested for this possibility. Code coverage is a useful tool, but it's important to remember that covering code is not the same as thoroughly testing it. Thorough testing is a combination of covering usage scenarios as well as lines of code. The methodology I use is to trust internal callers, but never to trust external callers or input. So I explicitly don't unit test for the empty list case in any code beyond the first function that receives the external input. But that input function should be exhaustively covered. In this case I think the library's exception is reasonable behaviour - it makes no sense to ask for the min of an empty list. The library can't legitimately set a value such as 0 for you, since you may be dealing with negative numbers, for example. I think the empty list should never have reached the code that asks for the min - it should have been identified at input, and either raised an exception there, or set it to 0 if that works for you, or whatever else it is that does work for you.
0
784
false
0
1
Python Pre-testing for exceptions when coverage fails
2,648,113
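One possible sketch of the boundary-validation advice above. Note that Python 3.4+ also lets min() take a `default`, which sidesteps the ValueError when a sentinel value fits your domain (the answer rightly warns that 0 is not always a safe sentinel):

```python
# Sketch: validate external input at the boundary, so inner code can
# trust that the list is non-empty.

def smallest(items):
    if not items:                         # check at the input boundary
        raise ValueError("no items supplied")
    return min(items)

# Alternative (Python 3.4+): supply a sentinel when one makes sense.
lowest = min([], default=0)               # 0 instead of a ValueError
```

The guard version keeps the failure close to the bad input; the `default` version is only appropriate when the sentinel is a meaningful value for your data.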
2
4
0
4
4
1
0.197375
0
I recently came across a simple but nasty bug. I had a list and I wanted to find the smallest member in it. I used Python's built-in min(). Everything worked great until, in some strange scenario, the list was empty (due to strange user input I could not have anticipated). My application crashed with a ValueError (BTW - not documented in the official docs). I have very extensive unit tests and I regularly check coverage to avoid surprises like this. I also use Pylint (everything is integrated in PyDev) and I never ignore warnings, yet I failed to catch this bug before my users did. Is there anything I can change in my methodology to avoid these kinds of runtime errors? (which would have been caught at compile time in Java / C#?). I'm looking for something more than wrapping my code with a big try-except. What else can I do? How many other built-in Python functions are hiding nasty surprises like this???
0
python,unit-testing,exception-handling,runtime-error,code-coverage
2010-04-15T18:00:00.000
0
2,647,790
Even in Java/C#, a class of exceptions, the RuntimeErrors, is unchecked and will not be detected by the compiler (that's why they're called RuntimeError, not CompileError). In Python, certain exceptions such as KeyboardInterrupt are particularly hairy since they can be raised at practically any arbitrary point in the program. "I'm looking for something more than wrapping my code with a big try-except." Anything but that, please. It is much better to let exceptions get to the user and halt the program rather than letting errors pass silently (Zen of Python). Unlike Java, Python does not require all exceptions to be caught, because requiring all exceptions to be caught makes it too easy for programmers to ignore the exception (by writing a blank exception handler). Just relax, let the error halt; let the user report it to you, so you can fix it. The other alternative is you stepping into a debugger for forty-two hours because the customer's data is getting corrupted everywhere due to a blank mandatory exception handler. So, what you should change in your methodology is the thinking that exceptions are bad; they're not pretty, but they're better than the alternatives.
0
784
false
0
1
Python Pre-testing for exceptions when coverage fails
2,647,972
1
1
1
2
1
0
0.379949
0
I'm looking to do some basic encryption of server messages which would be encrypted with C++ and decrypted using Python server side. I was wondering if anyone knew if there were good solutions that were simpler or more lightweight than Keyczar. I see that supports both C++ and python, but would using Crypto++ and PyCrypto be simpler for a newbie that just wants to get something up and running for the time being? Or should I use Keyczar for python and Crypto++ for the C++ end? The C++ libraries seem to have dependencies to hundreds of files.
0
python,c++,encryption,cryptography
2010-04-16T01:41:00.000
0
2,650,073
The C++ libraries seem to have dependencies to hundreds of files. I don't know much about Python, but that is absolutely normal for C++. I'd recommend Crypto++ -- it's a great, easy-to-use library, and it's public domain, meaning you won't have any license problems with it. EDIT: Keep in mind that a large library with lots of code does not mean that you're going to pay in terms of object code. If there are functions you don't use (Crypto++ supports hundreds of algorithms), they won't be compiled into the resulting binary.
0
942
false
0
1
Lightweight cryptography toolkit(s) for C++ and Python
2,650,311
2
2
0
2
0
1
0.197375
0
I have this encryption algorithm written in C++, but the values that have to be encrypted are taken as input and stored in a file by a Python program. So how can I call this C++ program from Python?
0
python
2010-04-16T07:58:00.000
0
2,651,466
Look for the subprocess module. It is the recommended way to invoke processes from within Python. The os.system function is a viable alternative sometimes, if your needs are very simple (no pipes, simple arguments, etc.)
0
83
false
0
1
how to call a c++ file from python without using any of the spam bindings?
2,651,534
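A minimal sketch of the recommended subprocess approach; here the Python interpreter itself stands in for the hypothetical C++ executable:

```python
# Sketch: run another program and capture its output. In the real use
# case, the argument list would name the compiled C++ binary instead.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-c", "print('encrypted')"],  # stand-in command
    capture_output=True,   # collect stdout/stderr
    text=True,             # decode bytes to str
    check=True,            # raise CalledProcessError on non-zero exit
)
print(result.stdout.strip())
```

Compared with os.system, subprocess.run gives you the exit code, the captured output, and safe argument passing without shell quoting.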
2
2
0
0
0
1
0
0
I have this encryption algorithm written in C++, but the values that have to be encrypted are taken as input and stored in a file by a Python program. So how can I call this C++ program from Python?
0
python
2010-04-16T07:58:00.000
0
2,651,466
The os.system function will invoke an arbitrary command-line from python.
0
83
false
0
1
how to call a c++ file from python without using any of the spam bindings?
2,651,490
1
3
0
5
9
0
0.321513
1
What is the advantage of using a Python VirtualBox API instead of using XPCOM?
0
python,virtualbox,xpcom
2010-04-16T10:26:00.000
0
2,652,146
I would generally recommend against either one. If you need to use virtualization programmatically, take a look at libvirt, which gives you cross-platform and cross-hypervisor support and lets you do kvm/xen/vz/vmware later on. That said, the SOAP API is using two extra abstraction layers (the client and server side of the HTTP transaction), which is pretty clearly then just calling the XPCOM interface. If you need local-host-only support, use XPCOM; the extra indirection of libvirt/SOAP doesn't help you. If you need to access VirtualBox on various hosts across multiple client machines, use SOAP or libvirt. If you want cross-platform support, or to run your code on Linux, use libvirt.
0
14,030
false
0
1
What is the advantage of using Python Virtualbox API?
2,655,522
2
5
0
2
6
1
0.07983
0
I was going over some pages from WikiVS, that I quote from: because lambdas in Python are restricted to expressions and cannot contain statements I would like to know what would be a good example (or more) where this restriction would be, preferably compared to the Ruby language. Thank you for your answers, comments and feedback!
0
python,ruby,lambda,restriction
2010-04-16T16:02:00.000
0
2,654,425
Instead of f=lambda s:pass you can do f=lambda s:None.
0
2,261
false
0
1
Restrictons of Python compared to Ruby: lambda's
3,209,231
2
5
0
1
6
1
0.039979
0
I was going over some pages from WikiVS, that I quote from: because lambdas in Python are restricted to expressions and cannot contain statements I would like to know what would be a good example (or more) where this restriction would be, preferably compared to the Ruby language. Thank you for your answers, comments and feedback!
0
python,ruby,lambda,restriction
2010-04-16T16:02:00.000
0
2,654,425
lambda is simply a shortcut way in Python to define a function that returns a simple expression. This isn't a restriction in any meaningful way. If you need more than a single expression then just use a function: there is nothing you can do with a lambda that you cannot do with a function. The only disadvantages to using a function instead of a lambda are that the function has to be defined on 1 or more separate lines (so you may lose some locality compared to the lambda), and you have to invent a name for the function (but if you can't think of one then f generally works). All the other reasons people think they have to use a lambda (such as accessing nested variables or generating lots of lambdas with separate default arguments) will work just as well with a function. The big advantage of using a named function is of course that when it goes wrong you get a meaningful stack trace. I had that bite me yesterday when I got a stack trace involving a lambda and no context about which lambda it was.
0
2,261
false
0
1
Restrictons of Python compared to Ruby: lambda's
2,654,789
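The lambda-vs-named-function point above, sketched; the function's name is what makes tracebacks readable:

```python
# Sketch: anything a lambda can do, a named function can do too, and
# the named version identifies itself in stack traces.

square_l = lambda x: x * x

def square(x):
    """Same behaviour, but tracebacks will say 'square'."""
    return x * x

assert square_l(3) == square(3) == 9
assert square.__name__ == "square"
assert square_l.__name__ == "<lambda>"  # why lambda tracebacks are opaque
```

Every lambda reports its name as `<lambda>`, so when several of them appear in a traceback there is no way to tell which one failed, which is the "bite" the answer describes.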
1
4
0
4
7
1
1.2
0
I've got a python script that calls a bunch of functions, each of which writes output to stdout. Sometimes when I run it, I'd like to send the output in an e-mail (along with a generated file). I'd like to know how I can capture the output in memory so I can use the email module to build the e-mail. My ideas so far were: use a memory-mapped file (but it seems like I have to reserve space on disk for this, and I don't know how long the output will be) bypass all this and pipe the output to sendmail (but this may be difficult if I also want to attach the file)
0
python,stream
2010-04-16T17:04:00.000
0
2,654,834
You said that your script "calls a bunch of functions" so I'm assuming that they're Python functions accessible from your program. I'm also assuming you're using print to generate the output in all these functions. If that's the case, you can just replace sys.stdout with a StringIO.StringIO, which will intercept all the stuff you're writing. Then you can finally call the .getvalue() method on your StringIO to get everything that has been sent to the output channel. (Note that this does not capture the output of external programs started via the subprocess module: they write to the real file descriptor, not to the sys.stdout object.) This is a cheap way. I'd recommend that you do your output using the logging module. You'll have much more control over how it does its output and you can control it more easily as well.
0
4,960
true
0
1
Capturing stdout within the same process in Python
2,654,886
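In Python 3 spelling, the sys.stdout swap described above looks like this (io.StringIO replaces the Python 2 StringIO.StringIO, and contextlib.redirect_stdout packages the swap-and-restore; note the method is getvalue()):

```python
# Sketch: capture everything print() writes into an in-memory buffer,
# e.g. to paste it into an email body afterwards.
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):       # sys.stdout is swapped inside the block
    print("report line 1")
    print("report line 2")

captured = buf.getvalue()        # everything that was printed
```

Once captured, the string can be handed to the email module as the message body; no memory-mapped file or sendmail pipe is needed.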
1
6
0
0
24
1
0
0
When I write business logic, my code often depends on the current time. For example, the algorithm which looks at each unfinished order and checks if an invoice should be sent (which depends on the number of days since the job was ended). In these cases creating an invoice is not triggered by an explicit user action but by a background job. Now this creates a problem for me when it comes to testing: I can test invoice creation itself easily. However, it is hard to create an order in a test and check that the background job identifies the correct orders at the correct time. So far I found two solutions: (1) In the test setup, calculate the job dates relative to the current date. Downside: the code becomes quite complicated as there are no explicit dates written anymore. Sometimes the business logic is pretty complex for edge cases, so it becomes hard to debug due to all these relative dates. (2) I have my own date/time accessor functions which I use throughout my code. In the test I just set a current date and all modules get this date. So I can simulate an order creation in February and check that the invoice is created in April easily. Downside: 3rd-party modules do not use this mechanism, so it's really hard to integrate+test these. The second approach was way more successful for me after all. Therefore I'm looking for a way to set the time Python's datetime+time modules return. Setting the date is usually enough; I don't need to set the current hour or second (even though this would be nice). Is there such a utility? Is there an (internal) Python API that I can use?
0
python,unit-testing,datetime,testing,time
2010-04-17T10:22:00.000
0
2,658,026
There might be a few ways of doing this, like creating the orders (with the current timestamp) and then changing that value in the DB directly by some external process (assuming the data is in the DB). I'll suggest something else: have you thought about running your application in a virtual machine, setting the time to, say, February, creating orders, and then just changing the VM's time? This approach is the closest you can get to the real-life situation.
0
12,107
false
0
1
How to change the date/time in Python for all modules?
2,658,048
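The question's second approach (an overridable time accessor) can be sketched as below. The function and hook names are invented for illustration, and unittest.mock.patch can achieve the same effect without a hand-rolled hook:

```python
# Sketch: route every "what day is it?" query through one accessor
# that tests can pin to a fixed date.
import datetime

_fake_today = None   # tests set this; production code leaves it None

def today():
    """Business code calls this, never datetime.date.today() directly."""
    return _fake_today if _fake_today is not None else datetime.date.today()

def invoice_due(order_end, grace_days=60):
    """Has enough time passed since the order ended to send the invoice?"""
    return (today() - order_end).days >= grace_days

# In a test, pin the clock to simulate "it is now April 2010":
_fake_today = datetime.date(2010, 4, 1)
print(invoice_due(datetime.date(2010, 1, 15)))  # 76 days since the end
print(invoice_due(datetime.date(2010, 3, 15)))  # only 17 days
```

The known downside, as the question notes, holds here too: third-party modules that call datetime directly bypass the hook, which is where patching (or the VM trick from the answer) comes in.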
2
7
0
1
2
0
0.028564
0
I have an executable that at run time should take configuration parameters from a script file. This way I don't need to re-compile the code for every configuration change. Right now I have all the configuration values in a .h file. Every time I change it I need to re-compile. The platform is C, gcc under Linux. What is the best solution for this problem? I looked it up on Google and saw XML, Python and Lua bindings for C. Is using a separate scripting language the best approach? If so, which one would you recommend for my need? Addendum: What if I would like to mirror data structures in the script files? If I have an array of structures, for example, is there an easy way to store and load it? Thanks
0
python,c,linux,gcc,lua
2010-04-19T13:49:00.000
1
2,667,866
How much configuration do you need that it needs to be a "script file"? I just keep a little chunk of code handy that's an ini-format parser.
0
828
false
0
1
Configuration files for C in linux
2,667,907
2
7
0
0
2
0
0
0
I have an executable that at run time should take configuration parameters from a script file. This way I don't need to re-compile the code for every configuration change. Right now I have all the configuration values in a .h file. Every time I change it I need to re-compile. The platform is C, gcc under Linux. What is the best solution for this problem? I looked it up on Google and saw XML, Python and Lua bindings for C. Is using a separate scripting language the best approach? If so, which one would you recommend for my need? Addendum: What if I would like to mirror data structures in the script files? If I have an array of structures, for example, is there an easy way to store and load it? Thanks
0
python,c,linux,gcc,lua
2010-04-19T13:49:00.000
1
2,667,866
You could reread the configuration file when a signal such as SIGUSR1 is received.
0
828
false
0
1
Configuration files for C in linux
2,667,901
1
2
0
24
10
1
1.2
0
Python has the idea of metaclasses that, if I understand correctly, allow you to modify an object of a class at the moment of construction. You are not modifying the class, but instead the object that is to be created then initialized. Python (at least as of 3.0 I believe) also has the idea of class decorators. Again if I understand correctly, class decorators allow the modifying of the class definition at the moment it is being declared. Now I believe there is an equivalent feature or features to the class decorator in Ruby, but I'm currently unaware of something equivalent to metaclasses. I'm sure you can easily pump any Ruby object through some functions and do what you will to it, but is there a feature in the language that sets that up like metaclasses do? So again, Does Ruby have something similar to Python's metaclasses? Edit I was off on the metaclasses for Python. A metaclass and a class decorator do very similar things it appears. They both modify the class when it is defined but in different manners. Hopefully a Python guru will come in and explain better on these features in Python. But a class or the parent of a class can implement a __new__(cls[,..]) function that does customize the construction of the object before it is initialized with __init__(self[,..]). Edit This question is mostly for discussion and learning about how the two languages compare in these features. I'm familiar with Python but not Ruby and was curious. Hopefully anyone else who has the same question about the two languages will find this post helpful and enlightening.
0
python,ruby,metaprogramming,metaclass
2010-04-20T14:39:00.000
0
2,676,007
Ruby doesn't have metaclasses. There are some constructs in Ruby which some people sometimes wrongly call metaclasses but they aren't (which is a source of endless confusion). However, there's a lot of ways to achieve the same results in Ruby that you would do with metaclasses. But without telling us what exactly you want to do, there's no telling what those mechanisms might be. In short: Ruby doesn't have metaclasses Ruby doesn't have any one construct that corresponds to Python's metaclasses Everything that Python can do with metaclasses can also be done in Ruby But there is no single construct, you will use different constructs depending on what exactly you want to do Any one of those constructs probably has other features as well that do not correspond to metaclasses (although they probably correspond to something else in Python) While you can do anything in Ruby that you can do with metaclasses in Python, it might not necessarily be straightforward Although often there will be a more Rubyish solution that is elegant Last but not least: while you can do anything in Ruby that you can do with metaclasses in Python, doing it might not necessarily be The Ruby Way So, what are metaclasses exactly? Well, they are classes of classes. So, let's take a step back: what are classes exactly? Classes … are factories for objects define the behavior of objects define on a metaphysical level what it means to be an instance of the class For example, the Array class produces array objects, defines the behavior of arrays and defines what "array-ness" means. Back to metaclasses. Metaclasses … are factories for classes define the behavior of classes define on a metaphysical level what it means to be a class In Ruby, those three responsibilities are split across three different places: the Class class creates classes and defines a little bit of the behavior the individual class's eigenclass defines a little bit of the behavior of the class the concept of "classness" is hardwired into the interpreter, which also implements the bulk of the behavior (for example, you cannot inherit from Class to create a new kind of class that looks up methods differently, or something like that – the method lookup algorithm is hardwired into the interpreter) So, those three things together play the role of metaclasses, but neither one of those is a metaclass (each one only implements a small part of what a metaclass does), nor is the sum of those the metaclass (because they do much more than that). Unfortunately, some people call eigenclasses of classes metaclasses. (Until recently, I was one of those misguided souls, until I finally saw the light.) Other people call all eigenclasses metaclasses. (Unfortunately, one of those people is the author of one of the most popular tutorials on Ruby metaprogramming and the Ruby object model.) Some popular libraries add a metaclass method to Object that returns the object's eigenclass (e.g. ActiveSupport, Facets, metaid). Some people call all virtual classes (i.e. eigenclasses and include classes) metaclasses. Some people call Class the metaclass. Even within the Ruby source code itself, the word "metaclass" is used to refer to things that are not metaclasses.
0
1,693
true
0
1
What is Ruby's analog to Python Metaclasses?
2,678,233
1
7
0
2
89
1
0.057081
0
I am using py.test for unit testing my python program. I wish to debug my test code with the python debugger the normal way (by which I mean pdb.set_trace() in the code) but I can't make it work. Putting pdb.set_trace() in the code doesn't work (it raises IOError: reading from stdin while output is captured). I have also tried running py.test with the option --pdb, but that doesn't seem to do the trick if I want to explore what happens before my assertion. It breaks when an assertion fails, and moving on from that line means terminating the program. Does anyone know a way to get debugging working, or are debugging and py.test just not meant to be used together?
0
python,unit-testing,pdb
2010-04-20T21:28:00.000
0
2,678,792
Simply use pytest --trace test_your_test.py. This will invoke the Python debugger at the start of the test.
0
70,637
false
0
1
Can I debug with python debugger when using py.test somehow?
66,974,346
1
2
0
5
6
1
1.2
0
I have some binary data produced as base-256 bytestrings in Python (2.x). I need to read these into JavaScript, preserving the ordinal value of each byte (char) in the string. If you'll allow me to mix languages, I want to encode a string s in Python such that ord(s[i]) == s.charCodeAt(i) after I've read it back into JavaScript. The cleanest way to do this seems to be to serialize my Python strings to JSON. However, json.dump doesn't like my bytestrings, despite fiddling with the ensure_ascii and encoding parameters. Is there a way to encode bytestrings to Unicode strings that preserves ordinal character values? Otherwise I think I need to encode the characters above the ASCII range into JSON-style \u1234 escapes; but a codec like this does not seem to be among Python's codecs. Is there an easy way to serialize Python bytestrings to JSON, preserving char values, or do I need to write my own encoder?
0
python,json
2010-04-21T02:31:00.000
0
2,679,936
Is there a way to encode bytestrings to Unicode strings that preserves ordinal character values? The byte -> unicode transformation is called decode, not encode. But yes, decoding with a codec such as iso-8859-1 should indeed "preserve ordinal character values" as you wish.
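A minimal sketch of the round trip the answer describes (shown here in Python 3 terms, where str is unicode): iso-8859-1 maps byte 0xNN to codepoint U+00NN, so every ordinal survives JSON serialization.

```python
import json

# Decode with iso-8859-1 (a 1:1 byte-to-codepoint mapping), serialize
# to JSON, and the ordinal of every character is preserved.
raw = bytes(range(256))              # arbitrary base-256 data
text = raw.decode("iso-8859-1")      # byte value == ord(char)
payload = json.dumps(text)           # non-ASCII chars become \uNNNN escapes

# JSON.parse of `payload` in JavaScript would yield a string where
# charCodeAt(i) equals the original byte value; same for the Python side:
restored = json.loads(payload)
assert all(ord(restored[i]) == raw[i] for i in range(256))
```

On Python 2 the equivalent decode is `s.decode('iso-8859-1')` on a str bytestring, producing a unicode object that json.dump accepts.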
0
1,629
true
0
1
Serializing Python bytestrings to JSON, preserving ordinal character values
2,679,957
1
1
0
4
2
1
1.2
0
I have a script in python that needs to read iso-8859-1 files and also write in that encoding. I am running the script in an environment where all locales are set to utf-8. Is there a way to specify in my python scripts that all file accesses have to use the iso-8859-1 encoding?
0
python,encoding,file-io
2010-04-21T09:33:00.000
0
2,681,713
Python doesn't really listen to the environment when it comes to reading and writing files in a particular encoding. It only listens to the environment when it comes to encoding unicode written to stdout, if stdout is connected to a terminal. When reading and writing files in Python 2.x, you deal with bytestrings (the str type) by default. They're encoded data. You have to decode the data you read by hand, and encode what you want to write. Or you can use codecs.open() to open the files, which will do the encoding for you. In Python 3.x, you open files either in binary mode, in which case you get bytes, or you open it in text mode, in which case you should specify an encoding just like with codecs.open() in Python 2.x. None of these are affected by environment variables; you either read bytes, or you specify the encoding.
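A short sketch of the explicit-encoding approach, using io.open (which takes the same encoding keyword as codecs.open on Python 2.6+, and is the built-in open on Python 3):

```python
import io
import os
import tempfile

# Pass the encoding explicitly instead of relying on the locale.
path = os.path.join(tempfile.mkdtemp(), "data.txt")

with io.open(path, "w", encoding="iso-8859-1") as f:
    f.write(u"caf\xe9")                  # U+00E9 written as one byte, 0xE9

with io.open(path, "rb") as f:
    assert f.read() == b"caf\xe9"        # the bytes on disk are latin-1

with io.open(path, "r", encoding="iso-8859-1") as f:
    assert f.read() == u"caf\xe9"        # decodes correctly, locale ignored
```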
0
294
true
0
1
Is there a way to set the encoding for all files read and written by python
2,682,226
1
1
0
2
0
0
1.2
0
Assuming that I have the following directory structure for a Python project: config/ scripts/ src/ where should a fabric deployment script go? I assume that it should be in scripts, obviously, but it seems more appropriate to me to reserve scripts for the actual code that fires up the project.
0
python,deployment,fabric
2010-04-22T14:09:00.000
0
2,691,528
This is really a preference thing -- however there are a couple places I like, depending on the situation. Most frequently, and particularly in cases like yours where the fabfile is tied to a piece of software, I like to put it in the project directory. I view fabfiles as akin to Makefiles in this case, so this feels like a natural place. (e.g. for your example, put the fabfile in the same directory holding config/ scripts/ and src/) In other cases, I use fab for information gathering. Specifically I run a few commands and pull files from a series of servers. Similarly I initiate various tests on remote hosts. In these cases I like to set up a special directory for the fabfile (called tests, or whatever) and pull data to the relevant subdirectory. Finally I have a few fabfiles I keep in $HOME/lib. These do some remote tasks that I frequently deal with. One of these is for setting up new pylons projects on my dev server. I have rpaste set up as an alias to fab -f $HOME/lib/rpaste.py. This allows me to select the target action at will.
0
454
true
0
1
Where to store deployment scripts
2,692,615
1
4
0
34
75
1
1
0
I had never noticed the __path__ attribute that gets defined on some of my packages before today. According to the documentation: Packages support one more special attribute, __path__. This is initialized to be a list containing the name of the directory holding the package’s __init__.py before the code in that file is executed. This variable can be modified; doing so affects future searches for modules and subpackages contained in the package. While this feature is not often needed, it can be used to extend the set of modules found in a package. Could somebody explain to me what exactly this means and why I would ever want to use it?
0
python,path,module
2010-04-23T14:22:00.000
0
2,699,287
If you change __path__, you can force the interpreter to look in a different directory for modules belonging to that package. This would allow you to, e.g., load different versions of the same module based on runtime conditions. You might do this if you wanted to use different implementations of the same functionality on different platforms.
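A runnable sketch of that extension mechanism (all directory and module names below are made up for illustration): the package's __init__.py appends an extra directory to __path__, and a module placed there becomes importable as a submodule.

```python
import os
import sys
import tempfile
import textwrap

# Build a tiny package whose __init__.py extends __path__ with a
# sibling "extras" directory.
base = tempfile.mkdtemp()
pkg = os.path.join(base, "mypkg")
extras = os.path.join(base, "extras")
os.makedirs(pkg)
os.makedirs(extras)

with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""
        import os
        # Also look for submodules in ../extras, not just this directory.
        _extra = os.path.join(os.path.dirname(__file__), "..", "extras")
        __path__.append(_extra)
    """))

with open(os.path.join(extras, "plugin.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, base)
from mypkg import plugin   # found via the extended __path__
print(plugin.VALUE)        # 42
```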
0
39,895
false
0
1
What is __path__ useful for?
2,699,333
2
2
0
5
2
1
0.462117
0
I used python 2.5 and imported a file named "irit.py" from the C:\util\Python25\Lib\site-packages directory. This file imports the file "_irit.pyc" which is in the same directory. It worked well and did what I wanted. Then, I tried the same thing with python version 2.6.4. "irit.py", which is in C:\util\Python26\Lib\site-packages, was imported, but "_irit.pyc" (which is in the same directory of 26, like before) wasn't found. I got the error message: File "C:\util\Python26\lib\site-packages\irit.py", line 5, in import _irit ImportError: DLL load failed: The specified module could not be found. Can someone help me understand the problem and how to fix it? Thanks, Almog.
0
python,import,version,pyc
2010-04-24T16:50:00.000
0
2,705,304
"DLL load failed" can't directly refer to the .pyc, since that's a bytecode file, not a DLL; a DLL would be .pyd on Windows. So presumably that _irit.pyc bytecode file tries to import some .pyd and that .pyd is not available in a 2.6-compatible version in the appropriate directory. Unfortunately it also appears that the source file _irit.py isn't around either, so the error messages end up less informative than they could be. I'd try to run python -v, which gives verbose messages on all module loading and unloading actions -- maybe that will let you infer the name of the missing .pyd when you compare its behavior in 2.5 and 2.6.
0
1,712
false
0
1
How to import *.pyc file from different version of python?
2,705,337
2
2
0
1
2
1
0.099668
0
I used python 2.5 and imported a file named "irit.py" from the C:\util\Python25\Lib\site-packages directory. This file imports the file "_irit.pyc" which is in the same directory. It worked well and did what I wanted. Then, I tried the same thing with python version 2.6.4. "irit.py", which is in C:\util\Python26\Lib\site-packages, was imported, but "_irit.pyc" (which is in the same directory of 26, like before) wasn't found. I got the error message: File "C:\util\Python26\lib\site-packages\irit.py", line 5, in import _irit ImportError: DLL load failed: The specified module could not be found. Can someone help me understand the problem and how to fix it? Thanks, Almog.
0
python,import,version,pyc
2010-04-24T16:50:00.000
0
2,705,304
Pyc files are not guaranteed to be compatible across python versions, so even if you fix the missing dll, you could still run into problems.
0
1,712
false
0
1
How to import *.pyc file from different version of python?
2,706,673
1
2
0
2
0
0
0.197375
1
I am writing a python script that downloads a file given by a URL. Unfortunately the URL is in the form of a PHP script i.e. www.website.com/generatefilename.php?file=5233 If you visit the link in a browser, you are prompted to download the actual file and extension. I need to send this link to the downloader, but I can't send the downloader the PHP link. How would I get the full file name in a usable variable?
0
php,python,url,scripting
2010-04-24T19:36:00.000
0
2,705,856
What you need to do is examine the Content-Disposition header sent by the PHP script. it will look something like: Content-Disposition: attachment; filename=theFilenameYouWant As to how you actually examine that header it depends on the python code you're currently using to fetch the URL. If you post some code I'll be able to give a more detailed answer.
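The parsing side can be sketched with a small hypothetical helper (the function name is made up; with urllib2 the header value would come from something like response.info().getheader("Content-Disposition")):

```python
# Hypothetical helper: pull the filename out of a Content-Disposition
# header value of the form 'attachment; filename="..."'.
def filename_from_disposition(value):
    for part in value.split(";"):
        part = part.strip()
        if part.lower().startswith("filename="):
            return part.split("=", 1)[1].strip('"')
    return None

print(filename_from_disposition('attachment; filename="report-5233.pdf"'))
# -> report-5233.pdf
```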
0
171
false
0
1
I want the actual file name that is returned by a PHP script
2,705,877
1
6
0
6
29
1
1
1
What are the best practices for extending an existing Python module – in this case, I want to extend the python-twitter package by adding new methods to the base API class. I've looked at tweepy, and I like that as well; I just find python-twitter easier to understand and extend with the functionality I want. I have the methods written already – I'm trying to figure out the most Pythonic and least disruptive way to add them into the python-twitter package module, without changing this modules’ core.
0
python,module,tweepy,python-module,python-twitter
2010-04-24T20:12:00.000
0
2,705,964
Don't add them to the module. Subclass the classes you want to extend and use your subclasses in your own module, not changing the original stuff at all.
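A sketch of that pattern; the Api class below is a stand-in for illustration, not the real python-twitter class or its methods:

```python
# Stand-in for the real class (imagine: from twitter import Api).
class Api(object):
    def get_user(self, name):
        return {"screen_name": name}

# Your own module: extend by subclassing, leaving python-twitter untouched.
class ExtendedApi(Api):
    def get_user_shouting(self, name):
        # New functionality layered on top of the inherited method.
        return self.get_user(name)["screen_name"].upper()

api = ExtendedApi()
print(api.get_user_shouting("alice"))   # ALICE
```

Code that previously constructed Api can simply construct ExtendedApi instead; upgrades to python-twitter won't clobber your additions.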
0
33,640
false
0
1
How do I extend a python module? Adding new functionality to the `python-twitter` package
2,705,976
1
1
0
3
1
0
1.2
0
I am designing a python web app, where people can have an email sent to them on a particular day. So a user puts in his email and a date in a form and it gets stored in my database. My script would then search through the database looking for all records with today's date, retrieve the emails, send them out, and delete the entries from the table. Is it possible to have a setup where the script starts up automatically at a given time, say 1 pm every day, sends out the email and then quits? If I have a continuously running script, I might go over the CPU limit of my shared web hosting. Or is the effect negligible? Ali
0
python,email
2010-04-25T15:15:00.000
0
2,708,705
Is it possible to have a setup, where the script starts up automatically at a give time, say 1 pm everyday, sends out the email and then quits? It's surely possible in general, but it entirely depends on what your shared web hosting provider is offering you. For these purposes, you'd use some kind of cron in any version or variant of Unix, Google App Engine, and so on. But since you tell us nothing about your provider and what services it offers you, we can't guess whether it makes such functionality available at all, or in what form. (Incidentally: this isn't really a programming question, so, if you want to post more details and get help, you might have better luck at serverfault.com, the companion site to stackoverflow.com that deals with system administration questions).
0
168
true
1
1
Python script repeated auto start up
2,708,720
2
2
0
4
25
1
0.379949
0
We think about whether we should convert a quite large python web application to Python 3 in the near future. All experiences, possible challenges or guidelines are highly appreciated.
0
python,python-3.x
2010-04-26T09:20:00.000
0
2,712,283
For each third-party library that you use, make sure it has Python 3 support. A lot of the major Python libraries have been migrated to 3 now. Check the docs and mailing lists for the libraries. When all the libraries you depend on are supported, I suggest you go for it.
0
918
false
0
1
Make the Move to Python 3 - Best practices
2,712,972
2
2
0
13
25
1
1.2
0
We think about whether we should convert a quite large python web application to Python 3 in the near future. All experiences, possible challenges or guidelines are highly appreciated.
0
python,python-3.x
2010-04-26T09:20:00.000
0
2,712,283
My suggestion is that you stick with Python 2.6+, but simply add the -3 flag to warn you about incompatibilities with Python 3.0. Then you can make sure your Python 2.6 code can be easily upgraded to Python 3.0 via 2to3, without actually making that jump quite yet. I would suggest you hold back at the moment, because you may at some point want to use a library and find out that it is only available for 2.6 and not 3.0; if you make sure to clean up the things flagged by -3, then you will be easily able to make the jump, but you will also be able to take advantage of the code that is only available for 2.6+ and which is not yet ready for 3.0.
0
918
true
0
1
Make the Move to Python 3 - Best practices
2,712,306
1
2
0
2
2
0
1.2
0
A rather confusing sequence of events happened, according to my log file, and I am about to put a lot of the blame on the Python logger, which is a bold claim. I thought I should get some second opinions about whether what I am saying could be true. I am trying to explain why there are several large gaps in my log file (around two minutes at a time) during stressful periods for my application when it is missing deadlines. I am using Python's logging module on a remote server, and have set up, with a configuration file, all logs of severity ERROR or higher to be emailed to me. Typically, only one error will be sent at a time, but during periods of sustained problems, I might get a dozen in a minute - annoying, but nothing that should stress SMTP. I believe that, after a short spurt of such messages, the Python logging system (or perhaps the SMTP system it is sitting on) is encountering errors or congestion. The call to Python's log is then BLOCKING for two minutes, causing my thread to miss its deadlines. (I was smart enough to move the logging until after the critical path of the application - so I don't care if logging takes me a few seconds, but two minutes is far too long.) This seems like a rather awkward architecture (both for a logging system that can freeze up, and for an SMTP system (Ubuntu, sendmail) that cannot handle dozens of emails in a minute**), so this surprises me, but it exactly fits the symptoms. Has anyone had any experience with this? Can anyone describe how to stop it from blocking? ** EDIT # 2 : I actually counted. 170 emails in two hours. Forget the previous edit. I counted wrong. It's late here...
0
python,logging,smtp
2010-04-27T14:27:00.000
0
2,722,036
Stress-testing was revealing: My logging configuration sent critical messages to SMTPHandler, and debug messages to a local log file. For testing I created a moderately large number of threads (e.g. 50) that waited for a trigger, and then simultaneously tried to log either a critical message or a debug message, depending on the test. Test #1: All threads send critical messages: It revealed that the first critical message took about .9 seconds to send. The second critical message took around 1.9 seconds to send. The third longer still, quickly adding up. It seems that the messages that go to email block waiting for each other to complete the send. Test #2: All threads send debug messages: These ran fairly quickly, from hundreds to thousands of microseconds. Test #3: A mix of both. It was clear from the results that debug messages were also being blocked waiting for critical messages' emails to go out. So, it wasn't that 2 minutes meant there was a timeout. It was that the two minutes represented a large number of threads blocked waiting in the queue. Why were there so many critical messages being sent at once? That's the irony. There was a logging.debug() call inside a method that included a network call. I had some code monitoring the speed of the method (to see if the network call was taking too long). If so, it (of course) logged a critical error that sent an email. The next thread then blocked on the logging.debug() call, meaning it missed the deadline, triggering another email, triggering another thread to run slowly. The 2 minute delay in one thread wasn't a network timeout. It was one thread waiting for another thread that was blocked for 1 minute 57 - because it was waiting for another thread blocked for 1 minute 55, etc. etc. etc. This isn't very pretty behaviour from SMTPHandler.
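The serialization effect described above is easy to reproduce: logging.Handler.handle() runs emit() under a per-handler lock, so a slow emit (here simulated with a sleep standing in for an SMTP send) stalls every thread that logs through it.

```python
import logging
import threading
import time

class SlowHandler(logging.Handler):
    def emit(self, record):
        time.sleep(0.05)        # stand-in for a slow SMTP send

log = logging.getLogger("smtp-demo")
log.addHandler(SlowHandler())
log.setLevel(logging.DEBUG)
log.propagate = False           # keep the demo quiet

start = time.time()
threads = [threading.Thread(target=log.critical, args=("boom",))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# The four emits ran one after another behind the handler lock,
# so the total is at least 4 * 0.05s despite the threads being concurrent.
assert elapsed >= 4 * 0.05
print("%.2fs for 4 concurrent log calls" % elapsed)
```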
0
701
true
0
1
Could Python's logging SMTP Handler be freezing my thread for 2 minutes?
2,734,655
2
8
0
0
4
1
0
0
In the question
0
python,coding-style
2010-04-27T15:53:00.000
0
2,722,758
the Python STDLIB
0
885
false
0
1
Any good python open source projects exemplifying coding standards and best practices?
2,723,111
2
8
0
1
4
1
0.024995
0
In the question
0
python,coding-style
2010-04-27T15:53:00.000
0
2,722,758
You can't read too much source. I think a good idea would be to take some Pythonistas (Raymond Hettinger and Ian Bicking come to mind) and fish out their code from their projects or from other sources like ActiveState and go through them.
0
885
false
0
1
Any good python open source projects exemplifying coding standards and best practices?
2,723,221
2
4
0
1
3
1
0.049958
0
The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties? I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are: to make a class "Ball" with those properties and some methods, then store a list of objects of the class, e.g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on; to make an array of numbers for each property, then for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on; To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with C++ extension? Update: I never expected any performance gain from parallel arrays per se, but in a mixed environment like Python + Numpy (or whatever SlowScriptingLanguage + FastNativeLibrary) using them may (or may not?) let you move more work out of your slow scripting code and into the fast native library.
0
python,performance,data-structures,numpy
2010-04-27T18:13:00.000
0
2,723,790
I think it depends on what you're going to be doing with them, and how often you're going to be working with (all attributes of one particle) vs (one attribute of all particles). The former is better suited to the object approach; the latter is better suited to the array approach. I was facing a similar problem (although in a different domain) a couple of years ago. The project got deprioritized before I actually implemented this phase, but I was leaning towards a hybrid approach, where in addition to the Ball class I would have an Ensemble class. The Ensemble would not be a list or other simple container of Balls, but would have its own attributes (which would be arrays) and its own methods. Whether the Ensemble is created from the Balls, or the Balls from the Ensemble, depends on how you're going to construct them. One of my coworkers was arguing for a solution where the fundamental object was an Ensemble which might contain only one Ball, so that no calling code would ever have to know whether you were operating on just one Ball (do you ever do that for your application?) or on many.
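A sketch of that hybrid (plain lists stand in for the numpy arrays the question mentions, and all names are illustrative): the Ensemble owns the parallel arrays for whole-system work, while Ball objects are thin per-particle views.

```python
class Ensemble(object):
    """Owns the parallel arrays; good for all-particles operations."""
    def __init__(self):
        self.xs, self.ys, self.masses = [], [], []

    def add(self, x, y, mass):
        self.xs.append(x)
        self.ys.append(y)
        self.masses.append(mass)
        return Ball(self, len(self.xs) - 1)   # hand back a per-ball view

    def total_mass(self):                     # array-style bulk work
        return sum(self.masses)

class Ball(object):
    """Thin view into one slot of the Ensemble's arrays."""
    def __init__(self, ensemble, i):
        self._e, self._i = ensemble, i

    @property
    def x(self):
        return self._e.xs[self._i]

    @property
    def mass(self):
        return self._e.masses[self._i]

box = Ensemble()
b = box.add(x=1.0, y=2.0, mass=3.5)
box.add(x=0.0, y=0.0, mass=1.5)
assert b.x == 1.0 and box.total_mass() == 5.0
```

With numpy, the lists become arrays and bulk operations like total_mass become vectorized calls, which is where the parallel-array layout pays off.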
1
1,102
false
0
1
List of objects or parallel arrays of properties?
2,726,598
2
4
0
2
3
1
0.099668
0
The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties? I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are: to make a class "Ball" with those properties and some methods, then store a list of objects of the class, e.g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on; to make an array of numbers for each property, then for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on; To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with C++ extension? Update: I never expected any performance gain from parallel arrays per se, but in a mixed environment like Python + Numpy (or whatever SlowScriptingLanguage + FastNativeLibrary) using them may (or may not?) let you move more work out of your slow scripting code and into the fast native library.
0
python,performance,data-structures,numpy
2010-04-27T18:13:00.000
0
2,723,790
Having an object for each ball in this example is certainly better design. Parallel arrays are really a workaround for languages that do not support proper objects. I wouldn't use them in a language with OO capabilities unless it's a tiny case that fits within a function (and maybe not even then) or if I've run out of every other optimization option and the profiler shows that property access is the culprit. This applies twice as much to Python as to C++, as the former places a large emphasis on readability and elegance.
1
1,102
false
0
1
List of objects or parallel arrays of properties?
2,723,845
2
4
0
0
0
1
0
0
If I obfuscated python code, would it provide the same level of 'security' as c#/java obfuscation? i.e. it makes things a little hard, but really you can still reverse engineer it if you really want to; it's just a bit cryptic.
0
c#,java,python,obfuscation
2010-04-27T20:34:00.000
0
2,724,885
Python code gets compiled to bytecode (.pyc) files as it is imported. You can distribute those .pyc files instead of the .py source code files, and the Python interpreter should be able to load them. While Python bytecode is more "obfuscated" than Python source code, it's still relatively easy to disassemble Python bytecode -- but, then again, it's not that hard to disassemble Java bytecode, either.
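A small sketch of producing a .pyc explicitly (rather than waiting for an import to create it), using the stdlib py_compile module; the module written here is a throwaway for illustration:

```python
import os
import py_compile
import tempfile

# Write a throwaway module and compile it to bytecode by hand.
src = os.path.join(tempfile.mkdtemp(), "secret.py")
with open(src, "w") as f:
    f.write("ANSWER = 42\n")

pyc = py_compile.compile(src, cfile=src + "c")   # .../secret.pyc
assert os.path.exists(pyc)

# The .pyc can be shipped without secret.py -- though, as noted above,
# the stdlib dis and marshal modules make picking it apart straightforward.
```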
0
293
false
1
1
Can python code (say if I used djangno) be obfuscated to the same 'level' as c#/java?
2,724,925
2
4
0
0
0
1
0
0
If I obfuscated python code, would it provide the same level of 'security' as c#/java obfuscation? i.e. it makes things a little hard, but really you can still reverse engineer it if you really want to; it's just a bit cryptic.
0
c#,java,python,obfuscation
2010-04-27T20:34:00.000
0
2,724,885
Obfuscation doesn't provide security. What you describe isn't security. If you distribute your Python program or your Java program or your C program, it is vunerable. What protects you from people using what you distributed unfairly is the law and people not being jerks. Obfuscation not only provides no security, it has the potential of breaking working code, hurting performance, and ruining documentation.
0
293
false
1
1
Can python code (say if I used djangno) be obfuscated to the same 'level' as c#/java?
2,725,016
2
5
0
0
4
0
0
0
I am working on a project in which I have to develop bio-passwords based on a user's keystroke style. Suppose a user types a password 20 times; his keystrokes are recorded, like holdtime: time for which a particular key is pressed. digraph time: time it takes to press a different key. Suppose a user types the password "COMPUTER". I need to know the time for which every key is pressed, something like: holdtime for the above password is C-- 200ms O-- 130ms M-- 150ms P-- 175ms U-- 320ms T-- 230ms E-- 120ms R-- 300ms The rationale behind this is that every user will have a different holdtime. Say an old person is typing the password: he will take more time than a student. And it will be unique to a particular person. To do this project, I need to record the time for each key pressed. I would greatly appreciate it if anyone can guide me on how to get these times. Editing from here.. Language is not important, but I would prefer it in C. I am more interested in getting the dataset.
0
c++,python,c,linux,unix
2010-04-28T00:55:00.000
0
2,726,176
The answer is conditionally "yes". If your languages/environment has interactive keyboard support that offers Key-Down and Key-Up events, then you catch both events and time the difference between them. This would be trivially easy in JavaScript on a web page, which would also be the easiest way to show off your work to a wider audience.
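Whatever environment delivers the key-down/key-up events, the timing arithmetic is the same; a language-neutral sketch (shown in Python, with made-up event tuples of key, event kind, and a millisecond timestamp):

```python
# Pair each key-down with its key-up to get per-key hold times.
def hold_times(events):
    pressed = {}   # key -> timestamp of its unmatched "down"
    holds = []
    for key, kind, ts in events:
        if kind == "down":
            pressed[key] = ts
        elif kind == "up" and key in pressed:
            holds.append((key, ts - pressed.pop(key)))
    return holds

sample = [("C", "down", 0), ("C", "up", 200),
          ("O", "down", 250), ("O", "up", 380)]
print(hold_times(sample))   # [('C', 200), ('O', 130)]
```

Digraph times fall out similarly, as the gap between consecutive key-down timestamps.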
0
3,012
false
0
1
Can I get the amount of time for which a key is pressed on a keyboard
2,726,201
2
5
0
0
4
0
0
0
I am working on a project in which I have to develop bio-passwords based on a user's keystroke style. Suppose a user types a password 20 times; his keystrokes are recorded, like holdtime: time for which a particular key is pressed. digraph time: time it takes to press a different key. Suppose a user types the password "COMPUTER". I need to know the time for which every key is pressed, something like: holdtime for the above password is C-- 200ms O-- 130ms M-- 150ms P-- 175ms U-- 320ms T-- 230ms E-- 120ms R-- 300ms The rationale behind this is that every user will have a different holdtime. Say an old person is typing the password: he will take more time than a student. And it will be unique to a particular person. To do this project, I need to record the time for each key pressed. I would greatly appreciate it if anyone can guide me on how to get these times. Editing from here.. Language is not important, but I would prefer it in C. I am more interested in getting the dataset.
0
c++,python,c,linux,unix
2010-04-28T00:55:00.000
0
2,726,176
If you read from the terminal in non-canonical mode, you can read each keystroke as it's pressed. You won't see keydown/keyup events, like you could if you trapped X events, but it's probably easier, especially if you're just running in a console or terminal.
0
3,012
false
0
1
Can I get the amount of time for which a key is pressed on a keyboard
2,726,199
2
3
0
1
1
0
0.066568
0
I have written up a python script that allows a user to input a message, his email, and the time at which they would like the email sent. This is all stored in a mysql database. However, how do I get the script to execute at the said time and date? Will it require a cron job? I mean, say at 2:15 on april 20th, the script will search the database for all times of 2:15, and send out those emails. But what about emails for 2:16? I am using a shared hosting provider, so I can't have a continuously running script. Thanks
0
python,mysql,email,reminders
2010-04-28T19:06:00.000
0
2,732,407
A cronjob every minute or so would do it. If you're considering this, you might like to mind two things: 1 - How many e-mails are expected to be sent per minute? If it takes you 1 second to send an e-mail and you have 100 e-mails per minute, you won't finish your queue. 2 - What will happen if one job starts before the last one finishes? Be careful not to send e-mails twice. You need either to make sure the first process ends (risk: you can drop an e-mail eventually), prevent the next process from starting (risk: first process hangs the whole queue) or make them work in parallel (risk: synchronization problems). If you take daramarak's suggestion - make your script add a new cron job at the end - you run the risk of the whole system collapsing if one error occurs.
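One way to avoid dropping the 2:16 email when the job fires at 2:15 is to select everything that is due and not yet sent, rather than matching the exact minute. A hedged sketch of that selection logic (the record layout is made up; in SQL it would be a WHERE send_at <= NOW() AND sent = 0 query plus an UPDATE):

```python
import datetime

def due_reminders(reminders, now):
    """Return every reminder that is due and has not been sent yet."""
    return [r for r in reminders
            if not r["sent"] and r["send_at"] <= now]

now = datetime.datetime(2010, 4, 20, 14, 16)
queue = [
    {"email": "a@example.com",
     "send_at": datetime.datetime(2010, 4, 20, 14, 15), "sent": False},
    {"email": "b@example.com",
     "send_at": datetime.datetime(2010, 4, 20, 14, 16), "sent": False},
    {"email": "c@example.com",
     "send_at": datetime.datetime(2010, 4, 20, 18, 0), "sent": False},
]

# Both the 14:15 and 14:16 reminders are picked up in one pass.
due = due_reminders(queue, now)
print([r["email"] for r in due])   # ['a@example.com', 'b@example.com']
```

After sending, each picked-up record would be marked sent (or deleted) so an overlapping cron run doesn't mail it twice.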
0
2,308
false
0
1
Timed email reminder in python
2,733,383
2
3
0
2
1
0
1.2
0
I have written up a python script that allows a user to input a message, his email, and the time at which they would like the email sent. This is all stored in a mysql database. However, how do I get the script to execute at the said time and date? Will it require a cron job? I mean, say at 2:15 on april 20th, the script will search the database for all times of 2:15, and send out those emails. But what about emails for 2:16? I am using a shared hosting provider, so I can't have a continuously running script. Thanks
0
python,mysql,email,reminders
2010-04-28T19:06:00.000
0
2,732,407
If you cannot have a continuously running script, something must trigger it, so that would have to rely on your OS internals. In a unix environment a cron job, as you self state, would do the trick. Set cron to run the script, and make the script wait for a given time and then continue running and sending until the next email is more than this given time away. Then make your script add a new cron job for a new wakeup time.
0
2,308
true
0
1
Timed email reminder in python
2,732,645
2
2
0
8
7
1
1
0
Are there any disadvantages about using eggs through easy-install compared to the "traditional" packages/modules/libs?
0
python,comparison,egg
2010-04-28T22:46:00.000
0
2,733,629
Using eggs does cause a long sys.path, which has to be searched and when it's really long that search can take a while. Only when you get a hundred entries or so is this going to be a problem (but installing a hundred eggs via easy_install is certainly possible).
0
688
false
0
1
Disadvantage of Python eggs?
2,734,885
2
2
0
8
7
1
1.2
0
Are there any disadvantages about using eggs through easy-install compared to the "traditional" packages/modules/libs?
0
python,comparison,egg
2010-04-28T22:46:00.000
0
2,733,629
One (potential) disadvantage is that eggs are zipped by default unless zip_safe=False is set in their setup() function in setup.py. If an egg is zipped, you can't get at the files in it (without unzipping it, obviously). If the module itself uses non-source files (such as templates) it will probably specify zip_safe=False, but another consequence is that you cannot effectively step into zipped modules using pdb, the Python debugger. That is, you can, but you won't be able to see the source or navigate properly.
0
688
true
0
1
Disadvantage of Python eggs?
2,733,647
2
4
0
2
4
0
0.099668
1
I'm almost afraid to post this question, there has to be an obvious answer I've overlooked, but here I go: Context: I am creating a blog for educational purposes (I want to learn python and web.py). I've decided that my blog has posts, so I've created a Post class. I've also decided that posts can be created, read, updated, or deleted (so CRUD). So in my Post class, I've created methods that respond to the POST, GET, PUT, and DELETE HTTP methods. So far so good. The current problem I'm having is a conceptual one: I know that sending a PUT HTTP message (with an edited Post) to, e.g., /post/52 should update post with id 52 with the body contents of the HTTP message. What I do not know is how to conceptually correctly serve the (HTML) edit page. Will doing it like this: /post/52/edit violate the idea of a URI, as 'edit' is not a resource, but an action? On the other side, though, could it be considered a resource since all that URI will respond to is a GET method, which will only return an HTML page? So my ultimate question is this: How do I serve an HTML page intended for user editing in a RESTful manner?
0
python,rest,web.py
2010-05-01T14:43:00.000
0
2,750,341
Instead of calling it /post/52/edit, what if you called it /post/52/editor? Now it is a resource. Dilemma averted.
0
267
false
1
1
Is www.example.com/post/21/edit a RESTful URI? I think I know the answer, but have another question
2,750,368
2
4
0
4
4
0
0.197375
1
I'm almost afraid to post this question; there has to be an obvious answer I've overlooked, but here I go: Context: I am creating a blog for educational purposes (I want to learn Python and web.py). I've decided that my blog has posts, so I've created a Post class. I've also decided that posts can be created, read, updated, or deleted (so CRUD). So in my Post class, I've created methods that respond to the POST, GET, PUT, and DELETE HTTP methods. So far so good. The current problem I'm having is a conceptual one: I know that sending a PUT HTTP message (with an edited post) to, e.g., /post/52 should update post with id 52 with the body contents of the HTTP message. What I do not know is how to conceptually correctly serve the (HTML) edit page. Will doing it like this: /post/52/edit violate the idea of a URI, since 'edit' is not a resource but an action? On the other side, though, could it be considered a resource, since all that URI will respond to is a GET method that will only return an HTML page? So my ultimate question is this: How do I serve an HTML page intended for user editing in a RESTful manner?
0
python,rest,web.py
2010-05-01T14:43:00.000
0
2,750,341
Another RESTful approach is to use the query string for modifiers: /post/52?edit=1 Also, don't get too hung up on the purity of the REST model. If your app doesn't fit neatly into the model, break the rules.
0
267
false
1
1
Is www.example.com/post/21/edit a RESTful URI? I think I know the answer, but have another question
2,750,379
1
4
0
1
17
0
0.049958
0
I'd like to write some Python unit tests for my Google App Engine. How can I set that up? Does someone happen to have some sample code which shows how to write a simple test?
0
python,unit-testing,google-app-engine
2010-05-01T17:32:00.000
1
2,750,911
Since GAE is based on webhooks, it can be easy to set up your own testing framework for all relevant URLs in your app.yaml. You can test it on a sample dataset on the development server (start the dev server with the --datastore_path option) and assert writes to the database or webhook responses.
0
5,767
false
1
1
Google App Engine Python Unit Tests
4,059,206
1
3
0
0
1
0
0
0
I am configuring a distutils-based setup.py for a python module that is to be installed on a heterogeneous set of resources. Due to the heterogeneity, the location where the module is installed is not the same on each host; however, distutils picks the host-specific location. I find that the module is installed without o+rx permissions using distutils (in spite of setting umask ahead of running setup.py). One solution is to manually correct this problem; however, I would like an automated means that works on heterogeneous install targets. For example, is there a way to extract the ending location of the installation from within setup.py? Any other suggestions?
0
python,permissions,distutils,setup.py
2010-05-02T15:39:00.000
0
2,753,966
I find that the module is installed without o+rx permissions using distutils I don't remember right now if distutils copies the files with their rights as-is or if it just copies the contents. (in spite of setting umask ahead of running setup.py) I'm not sure how umask and file copying from Python should interact; does umask apply to the system calls, or does it need to be explicitly heeded by Python code? For example, is there a way to extract the ending location of the installation from within setup.py? There is one, a bit convoluted. What would you do with that information?
0
1,654
false
0
1
setting permissions of python module (python setup install)
7,931,034
1
3
0
4
2
0
0.26052
1
I am trying to migrate a legacy mailing list to a new web forum software and was wondering if mailman has an export option or an API to get all lists, owners, members and membership types.
0
python,api,mailman
2010-05-03T05:41:00.000
0
2,756,311
Probably too late, but the list_members LISTNAME command (executed from a shell) will give you all the members of a list, and list_admins LISTNAME will give you the owners. What do you mean by membership type? list_members does have an option to filter on digest vs. non-digest members. I don't think there's a way to get the moderation flag without writing a script for use with withlist.
0
2,988
false
1
1
Does Mailman have an API or an export lists, users and owners option?
3,154,975
1
1
1
0
0
0
1.2
0
Imagine I have a video playing.. Can I have some sort of motion graphics being played 'over' that video.. Like say the moving graphics is on an upper layer than the video, which would be the lower layer.. I am comfortable in a C++ and Python, so a solution that uses these two will be highly appreciated.. Thank you in advance, Rishi..
0
c++,python,graphics,video,video-processing
2010-05-03T17:00:00.000
0
2,759,738
I'm not sure I understand the question correctly, but a video file is a sequence of pictures that you can extract (for instance with the OpenCV library's C++ interface) and then use wherever you want. You can play the video on the sides of an OpenGL 3D cube (available in all OpenGL tutorials) with other 3D elements around it. Of course you can also display it in a conventional 2D interface and draw stuff on top of it, but for this you need a graphical UI. Is this what you meant, or am I completely lost?
0
169
true
0
1
Navigation graphics overlayed over video
2,760,860
1
3
0
0
3
1
0
0
Are classes necessary for creating methods (defs) in Python?
0
python
2010-05-03T20:57:00.000
0
2,761,145
It depends on your definition of "method". In some sense, no, classes aren't necessary for creating methods in Python, because there are no methods anyway in Python. There are only procedures (which, for some strange reason, are called functions in Python). You can create a procedure anywhere you like. A method is just syntactic sugar for a procedure assigned to an attribute. In another sense, yes, classes are necessary for creating methods. It follows pretty much from the definition of what a method is in Python: a procedure stuck into a class's __dict__. (Note, however, that this means that you do not have to be inside a class definition to create method, you can create a procedure anywhere and any way you like and stick it into the class afterwards.) [Note: I have simplified a bit when it comes to exactly what a method is, how they are synthesized, how they are represented and how you can create your own.]
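To make that last point concrete, here is a minimal sketch (the class and function names are made up for illustration) of a procedure created outside any class and stuck into the class afterwards:

```python
class Greeter:
    pass

# An ordinary function defined at module level -- no class in sight.
def greet(self, name):
    return "Hello, %s" % name

# Stick it into the class after the fact; it now behaves as a method
# on every instance, exactly as if it had been defined inside the class.
Greeter.greet = greet

g = Greeter()
print(g.greet("world"))  # Hello, world
```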
0
380
false
0
1
Are classes necessary for creating methods (defs) in Python?
2,761,222
1
3
0
6
1
1
1
0
I think in the past python scripts would run off CGI, which would create a new thread for each process. I am a newbie so I'm not really sure, what options do we have? Is the web server pipeline that python works under any more/less effecient than say php?
0
python,webserver
2010-05-04T16:16:00.000
1
2,767,013
You can still use CGI if you want, but the normal approach these days is using WSGI on the Python side, e.g. through mod_wsgi on Apache or via bridges to FastCGI on other web servers. At least with mod_wsgi, I know of no inefficiencies with this approach. BTW, your description of CGI ("create a new thread for each process") is inaccurate: what it does is create a new process for each query's service (and that process typically needs to open a database connection, import all needed modules, etc etc, which is what may make it slow even on platforms where forking a process, per se, is pretty fast, such as all Unix variants).
0
232
false
0
1
When deploying python, what web server options do we have? is the process inefficient at all?
2,767,055
5
6
0
1
3
0
0.033321
0
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
python,file,r,performance
2010-05-05T01:24:00.000
0
2,770,030
What do you mean by "file manipulation"? Are you talking about moving files around, deleting, copying, etc.? In that case I would use a shell, e.g., bash. If you're talking about reading in the data, performing calculations, and perhaps writing out a new file, then you could probably use Python or R. Unless maintenance is an issue, I would just leave it as R and find other fish to fry, as you're not going to see enough of a speedup to justify your time and effort in porting that code.
1
2,365
false
0
1
R or Python for file manipulation
2,770,393
5
6
0
1
3
0
0.033321
0
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
python,file,r,performance
2010-05-05T01:24:00.000
0
2,770,030
Know where the time is being spent. If your R scripts are bottlenecked on disk IO (and that is very possible in this case), then you could rewrite them in hand-optimized assembly and be no faster. As always with optimization, if you don't measure first, you're just pissing into the wind. If they're not bottlenecked on disk IO, you would likely see more benefit from improving the algorithm than changing the language.
1
2,365
false
0
1
R or Python for file manipulation
2,770,138
5
6
0
0
3
0
0
0
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
python,file,r,performance
2010-05-05T01:24:00.000
0
2,770,030
My guess is that you probably won't see much of a speed-up in time. When comparing high-level languages, overhead in the language is typically not to blame for performance problems. Typically, the problem is your algorithm. I'm not very familiar with R, but you may find speed-ups by reading larger chunks of data into memory at once vs smaller chunks (less system calls). If R doesn't have the ability to change something like this, you will probably find that python can be much faster simply because of this ability.
1
2,365
false
0
1
R or Python for file manipulation
2,770,071
5
6
0
0
3
0
0
0
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
python,file,r,performance
2010-05-05T01:24:00.000
0
2,770,030
R data manipulation has rules for it to be fast. The basics are: vectorize use data.frames as little as possible (for example, in the end) Search for R time optimization and profiling and you will find many resources to help you.
1
2,365
false
0
1
R or Python for file manipulation
2,771,903
5
6
0
10
3
0
1.2
0
I have 4 reasonably complex r scripts that are used to manipulate csv and xml files. These were created by another department where they work exclusively in r. My understanding is that while r is very fast when dealing with data, it's not really optimised for file manipulation. Can I expect to get significant speed increases by converting these scripts to python? Or is this something of a waste of time?
0
python,file,r,performance
2010-05-05T01:24:00.000
0
2,770,030
I write in both R and Python regularly. I find Python modules for writing, reading and parsing information easier to use, maintain and update. Little niceties like the way python lets you deal with lists of items over R's indexing make things much easier to read. I highly doubt you will gain any significant speed-up by switching the language. If you are becoming the new "maintainer" of these scripts and you find Python easier to understand and extend, then I'd say go for it. Computer time is cheap ... programmer time is expensive. If you have other things to do then I'd just limp along with what you've got until you have a free day to putz with them. Hope that helps.
1
2,365
true
0
1
R or Python for file manipulation
2,770,354
1
4
0
0
2
0
0
0
I'm looking for a nice tutorial or framework for developing Python written web applications. I've done lots in PHP, but very little in Python or Ruby and figured I'd start with the first one alphabetically.
0
php,python
2010-05-05T03:33:00.000
0
2,770,426
I just started Python for the web myself. I also came from a PHP background and felt that PHP is fine when it comes to making regular stuff like forums, blogs and so on. Sadly, though, I didn't like PHP when it came to creating more complex systems. I looked at a few of the different frameworks out there and came to the conclusion that if I want a framework that applies as much to the web as to client- and server-side programming, TurboGears 2 would suit me best. It seems like one of the "newer" frameworks, and it looks like they have a solid community that will keep them going for years to come. Oh, lol, this post sounds like an ad. Sorry ;) Anyway, I like setups where you can work with the hardware on the server, which means CGI/WSGI or just TurboGears :)
0
617
false
1
1
Beginning python for the web
4,735,025
2
2
0
1
4
1
0.099668
0
Hey, I'm totally lost on this topic. Yesterday I was doing profiling using Python's profiler module for some script I'm working on, and the unit for time spent was a 'CPU second'. Can anyone remind me of the definition? For example, for some profiling I got: 200.750 CPU seconds. What is that supposed to mean? In another case, for a time-consuming process, I got: -347.977 CPU seconds, a negative number! Is there any way I can convert that time to calendar time? Cheers,
0
python,profiling
2010-05-05T08:25:00.000
0
2,771,561
A CPU second is one second that your process is actually scheduled on the CPU. It can be significantly smaller than the elapsed real time in the case of a busy system, and it can be higher if your process runs on multiple cores (if the count is per-process, not per-thread). It should never be negative, though...
0
1,333
false
0
1
Python profiler and CPU seconds
2,771,660
2
2
0
8
4
1
1.2
0
Hey, I'm totally lost on this topic. Yesterday I was doing profiling using Python's profiler module for some script I'm working on, and the unit for time spent was a 'CPU second'. Can anyone remind me of the definition? For example, for some profiling I got: 200.750 CPU seconds. What is that supposed to mean? In another case, for a time-consuming process, I got: -347.977 CPU seconds, a negative number! Is there any way I can convert that time to calendar time? Cheers,
0
python,profiling
2010-05-05T08:25:00.000
0
2,771,561
Roughly speaking, a CPU time of, say, 200.75 seconds means that if only one processor worked on the task and that processor were working on it all the time, it would have taken 200.75 seconds. CPU time can be contrasted with wall clock time, which means the actual time elapsed from the start of the task to the end of the task on a clock hanging on the wall of your room. The two are not interchangeable and there is no way to convert one to the other unless you know exactly how your task was scheduled and distributed among the CPU cores of your system. The CPU time can be less than the wall clock time if the task was distributed among multiple CPU cores, and it can be more if the system was under a heavy load and your task was interrupted temporarily by other tasks.
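As an illustrative aside, you can watch the two clocks diverge yourself. This sketch uses time.process_time, which is Python 3.3+ and is not necessarily what the profiler used internally; treat it as an approximation of the same idea:

```python
import time

wall_start = time.time()            # wall clock
cpu_start = time.process_time()     # CPU seconds used by this process

sum(i * i for i in range(10 ** 6))  # CPU-bound work: advances both clocks
time.sleep(0.2)                     # idle wait: advances only the wall clock

wall = time.time() - wall_start
cpu = time.process_time() - cpu_start
print("wall: %.2fs  cpu: %.2fs" % (wall, cpu))
```

The sleep shows up in the wall time but not in the CPU time, which is exactly why the two cannot be converted into each other without knowing how the task was scheduled.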
0
1,333
true
0
1
Python profiler and CPU seconds
2,771,828
2
3
0
1
0
1
0.066568
0
I've been using Ruby as my main scripting language for years but switched to .NET several years ago. I'd like to continue using Ruby (primarily for testing) BUT the toolset for IronRuby is really nonexistent. Why? In Python, meanwhile, there are project templates and full intellisense support. Why isn't there something like that for IronRuby? The only thing I've been able to find on it is "there are no plans for VS integration at this time." Why???
0
ironpython,ironruby,ironpython-studio
2010-05-05T20:57:00.000
0
2,776,721
Similar support for IronRuby is arriving in Visual Studio shortly. It may take another couple of months, but it will get there. They first needed to get the language implementation right.
0
580
false
1
1
Why doesn't IronRuby have the same tools that IronPython does?
2,789,483
2
3
0
1
0
1
0.066568
0
I've been using Ruby as my main scripting language for years but switched to .NET several years ago. I'd like to continue using Ruby (primarily for testing) BUT the toolset for IronRuby is really nonexistent. Why? In Python, meanwhile, there are project templates and full intellisense support. Why isn't there something like that for IronRuby? The only thing I've been able to find on it is "there are no plans for VS integration at this time." Why???
0
ironpython,ironruby,ironpython-studio
2010-05-05T20:57:00.000
0
2,776,721
IronRuby has been out for 4 weeks, IronPython for 4 years. Developing an IDE takes months, if not years. When exactly where they supposed to squeeze that in? Also, I believe the IronRuby team is smaller than the IronPython team. There actually is a Ruby plugin for Visual Studio produced by SapphireSteel. It's called Ruby in Steel. Unfortunately, they currently only support MRI, YARV and JRuby. They did have IronRuby support at one point, but they removed it, because a) none of their customers actually used it, b) IronRuby was still changing faster than they could adapt and c) some of the IronRuby developers announced that Microsoft is considering developing IronRuby support for Visual Studio in the future and SapphireSteel didn't see much business sense in trying to compete with Microsoft. Also, Visual Studio is not the only IDE on the planet. MonoDevelop has an open bug for IronRuby support, for example. And I'm pretty confident that it wouldn't be too hard to add IronRuby support to NetBeans: it already supports JRuby, MRI and YARV.
0
580
false
1
1
Why doesn't IronRuby have the same tools that IronPython does?
2,778,756
1
13
0
15
112
1
1
0
What is the most lightweight way to create a random string of 30 characters like the following? ufhy3skj5nca0d2dfh9hwd2tbk9sw1 And a hexadecimal number of 30 digits like the following? 8c6f78ac23b4a7b8c0182d7a89e9b1
0
python
2010-05-06T15:23:00.000
0
2,782,229
Note: random.choice(string.hexdigits) is incorrect, because string.hexdigits returns 0123456789abcdefABCDEF (both lowercase and uppercase), so you will get a biased result, with the hex digit 'c' twice as likely to appear as the digit '7'. Instead, just use random.choice('0123456789abcdef').
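A minimal sketch of both variants, built on a uniform character set so no digit is favoured over another:

```python
import random
import string

# 30 random lowercase hex digits -- uniform, unlike string.hexdigits,
# which lists the letters twice (once lowercase, once uppercase).
hex_id = ''.join(random.choice('0123456789abcdef') for _ in range(30))

# 30 random lowercase alphanumeric characters for the general string case.
rand_id = ''.join(random.choice(string.ascii_lowercase + string.digits)
                  for _ in range(30))

print(hex_id)
print(rand_id)
```

For anything security-sensitive, random.SystemRandom().choice would be the safer drop-in.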
0
115,807
false
0
1
Most lightweight way to create a random string and a random hexadecimal number
15,462,293
1
4
0
0
1
0
0
0
I would like to know if is there any way to convert a plain unicode string to HTML in Genshi, so, for example, it renders newlines as <br/>. I want this to render some text entered in a textarea. Thanks in advance!
0
python,html,newline,genshi
2010-05-07T07:03:00.000
0
2,786,803
Maybe use a <pre> tag.
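If the goal really is <br/> tags rather than preformatted text, here is a minimal plain-Python sketch using the stdlib html module (Python 3). In Genshi you would presumably wrap the result in Markup so the template doesn't re-escape it — treat that as an assumption about your setup:

```python
import html

def text_to_html(text):
    # Escape the user input first so '<' and '&' can't inject markup,
    # then turn newlines into explicit <br/> tags.
    return html.escape(text).replace('\n', '<br/>')

print(text_to_html("a < b\nsecond line"))  # a &lt; b<br/>second line
```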
0
764
false
1
1
Print string as HTML
2,786,862
1
8
0
39
89
1
1
0
If a path such as b/c/ does not exist in ./a/b/c , shutil.copy("./blah.txt", "./a/b/c/blah.txt") will complain that the destination does not exist. What is the best way to create both the destination path and copy the file to this path?
0
python
2010-05-08T11:00:00.000
0
2,793,789
Use os.makedirs to create the directory tree.
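A minimal sketch combining the two calls (the helper name is made up for illustration; on Python 3.2+ you could pass exist_ok=True to makedirs instead of the isdir guard):

```python
import os
import shutil

def copy_with_dirs(src, dst):
    # Create any missing intermediate directories, then copy the file.
    dst_dir = os.path.dirname(dst)
    if dst_dir and not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    shutil.copy(src, dst)
```

The isdir guard matters because os.makedirs raises OSError if the directory already exists (at least before exist_ok was added).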
0
114,233
false
0
1
create destination path for shutil.copy files
2,793,824
2
2
0
0
2
1
0
0
I'm having a problem where I have a queue set up in shared mode and multiple consumers bound to it. The issue is that it appears that rabbitmq is serializing the messages, that is, only one consumer at a time is able to run. I need this to be parallel, however, I can't seem to figure out how. Each consumer is running in its own process. There are plenty of messages in the queue. I'm using py-amqplib to interface with RabbitMQ. Any thoughts?
0
python,parallel-processing,rabbitmq
2010-05-08T17:30:00.000
0
2,794,994
What about prefetching (QoS)? On small queues I give the appearance of parallelism by declaring the queue, getting the number of messages currently available, attaching a consumer, consuming the messages, and then closing it once that number of messages has been consumed. Closing the channel without acknowledging the messages makes them available to other consumers; poll the queue quickly enough and you could have a parallel-ish solution.
0
926
false
0
1
RabbitMQ serializing messages from queue with multiple consumers
3,437,797
2
2
0
0
2
1
0
0
I'm having a problem where I have a queue set up in shared mode and multiple consumers bound to it. The issue is that it appears that rabbitmq is serializing the messages, that is, only one consumer at a time is able to run. I need this to be parallel, however, I can't seem to figure out how. Each consumer is running in its own process. There are plenty of messages in the queue. I'm using py-amqplib to interface with RabbitMQ. Any thoughts?
0
python,parallel-processing,rabbitmq
2010-05-08T17:30:00.000
0
2,794,994
Refefer, the preferred AMQP model seems to be a queue-per-connected-consumer. You should create a "direct" exchange and agree upon a routing key that your consumers will all listen for. Then, each consumer that connects should create an exclusive, private, not-durable queue, and use queue_bind() to subscribe their queue to messages matching the public routing key on the exchange. Using this arrangement, my workers are getting to operate in parallel instead of having their operations serialized!
0
926
false
0
1
RabbitMQ serializing messages from queue with multiple consumers
6,284,649
1
4
0
2
15
0
0.099668
0
I need to write a module which will be used from both CPython and IronPython. What's the best way to detect IronPython, since I need a slightly different behaviour in that case? I noticed that sys.platform is "win32" on CPython, but "cli" on IronPython. Is there another preferred/standard way of detecting it?
0
python,ironpython,version,detection
2010-05-08T18:54:00.000
0
2,795,240
The "cli" value (= Common Language Infrastructure = .NET = IronPython) is probably a reliable check. As far as I know, you can access .NET libraries within IronPython, so you could try importing a .NET library and catch the exception it throws when .NET is not available (as in CPython).
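A small sketch of both checks — the sys.platform test from the question, plus platform.python_implementation(), which is available from Python 2.6 onward and names the interpreter explicitly:

```python
import sys
import platform

# sys.platform is "cli" under IronPython, "win32"/"linux"/... under CPython.
is_ironpython = sys.platform == 'cli'

# More explicit: returns 'CPython', 'IronPython', 'Jython' or 'PyPy'.
impl = platform.python_implementation()

print(is_ironpython, impl)
```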
0
3,087
false
0
1
Best way to detect IronPython
2,795,246
1
2
0
4
1
0
0.379949
0
I want to see if there is a microphone active using Python. How can I do it? Thanks in advance!
0
python,python-2.7,microphone
2010-05-09T12:06:00.000
0
2,797,572
Microphones are analog devices; most APIs probably couldn't even tell you whether a microphone is plugged in. Your computer just reads data from one of your sound card's input channels. What you probably want to know is whether the input channels are turned on or off. Determining that is highly platform-specific.
0
1,773
false
0
1
How to see if there is one microphone active using python?
2,797,821
1
7
0
0
14
1
0
0
I have a bunch of files. Some have Unix line endings, many are DOS. I'd like to test each file to see if it is DOS formatted before I switch the line endings. How would I do this? Is there a flag I can test for? Something similar?
0
python,bash,file,line-breaks,line-endings
2010-05-09T18:16:00.000
1
2,798,627
DOS line breaks are \r\n; Unix uses only \n. So just search for \r\n.
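A minimal Python sketch of that check (the function name is made up for illustration):

```python
def has_dos_line_endings(path):
    # Read in binary mode so Python doesn't translate the line endings
    # before we get a chance to look at them.
    with open(path, 'rb') as f:
        return b'\r\n' in f.read()
```

For very large files you would want to scan in chunks instead of reading everything at once, but the idea is the same.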
0
21,186
false
0
1
How can I detect DOS line breaks in a file?
2,798,651
2
2
0
2
4
1
1.2
0
I have used the 2to3 utility to convert code from the command line. What I would like to do is run it basically as a unit test, even if it tests the file rather than parts (functions, methods, ...) as would be normal for a unit test. It does not need to be a unittest, and I don't want to automatically convert the files; I just want to monitor the py3 compliance of files in a unittest-like manner. I can't seem to find any documentation or examples for this. An example and/or documentation would be great.
0
python,unit-testing,python-2to3
2010-05-10T04:00:00.000
0
2,800,231
Simply use the -3 option with Python 2.6+ to be informed of Python 3 compliance.
0
349
true
0
1
use/run python's 2to3 as or like a unittest
2,800,242
2
2
0
1
4
1
0.099668
0
I have used the 2to3 utility to convert code from the command line. What I would like to do is run it basically as a unit test, even if it tests the file rather than parts (functions, methods, ...) as would be normal for a unit test. It does not need to be a unittest, and I don't want to automatically convert the files; I just want to monitor the py3 compliance of files in a unittest-like manner. I can't seem to find any documentation or examples for this. An example and/or documentation would be great.
0
python,unit-testing,python-2to3
2010-05-10T04:00:00.000
0
2,800,231
If you are trying to verify the code will work in Python 3.x, I would suggest a script that copies the source files to a new directory, runs 2to3 on them, then copies the unit tests to the directory and runs them. This may seem slightly inelegant, but is consistent with the spirit of unit testing. You are making a series of assertions that you believe ought to be true about the external behavior of the code, regardless of implementation. If the converted code passes your unit tests, you can consider your code to support Python 3.
0
349
false
0
1
use/run python's 2to3 as or like a unittest
2,800,372
1
2
0
1
2
0
0.099668
0
I've got a Python module which is distributed on PyPI, and therefore installable using easy_install. It depends on lxml, which in turn depends on libxslt1-dev. I'm unable to install libxslt1-dev with easy_install, so it doesn't work to put it in install_requires. Is there any way I can get setuptools to install it instead of resorting to apt-get?
0
python,installation,packaging,setuptools
2010-05-11T08:05:00.000
0
2,808,956
It's better to use apt-get to install lxml (or any Python package that has C extensions) and then pull pure-Python packages from PyPI. Also, I generally try to avoid using easy_install for top-level installs; rather, I create a virtual env using virtualenv and then use the easy_install created by virtualenv to keep my setups clean. This strategy has been working successfully for me in a couple of production environments.
0
1,090
false
0
1
Installing Python egg dependencies without apt-get
2,809,025
3
4
0
4
13
1
0.197375
0
Why can Lisp with all its dynamic features be statically compiled but Python cannot (without losing all its dynamic features)?
0
python,lisp,compilation,dynamic-languages
2010-05-11T17:27:00.000
0
2,812,954
Actually, there isn't anything that stops you from statically compiling a Python program; it's just that no one has written such a compiler so far (I personally find Python's runtime to be very simple compared to CL's). You could say that the difference lies in details like "how much time was spent on actually writing compilers, and does the language have a formal specification of how to write one". Let's address those points: Lisp compilers have been evolving for over 40 years now, with work starting back in the 70's if not earlier (I'm not sure of my dates, too lazy to google the exact ones). That creates a massive chunk of lore about how to write a compiler. OTOH, Python was nominally designed as a "teaching language", and as such compilers weren't that important. Lack of specification - Python doesn't have a single source specifying the exact semantics of the language. Sure, you can point to PEP documents, but that doesn't change the fact that the only real spec is the source of the main implementation, CPython. Which, nota bene, is a simple compiler of sorts (into bytecode). As for whether it is possible - Python uses quite a simple structure to deal with symbols etc., namely its dictionaries. You can treat them as the symbol table of a program. You can tag the data types to recognize primitive ones and get the rest based on stored naming and internal structure. The rest of the language is also quite simple. The only bit missing is the actual work to implement it and make it run correctly.
0
2,658
false
0
1
Lisp vs Python -- Static Compilation
2,819,922
3
4
0
13
13
1
1.2
0
Why can Lisp with all its dynamic features be statically compiled but Python cannot (without losing all its dynamic features)?
0
python,lisp,compilation,dynamic-languages
2010-05-11T17:27:00.000
0
2,812,954
There is nothing that prevents static compilation of Python. It's a bit less efficient because Python exposes more of the mutable local scope; also, to retain some of the dynamic properties (e.g. eval) you need to include the compiler with the compiled program, but nothing prevents that either. That said, research shows that most Python programs, while dynamic under static analysis, are rather static and monomorphic at runtime. This means that runtime JIT compilation approaches work much better on Python programs. See unladen-swallow, PyPy, and Psyco for approaches that compile Python into machine code, but also IronPython and Jython, which use virtual machines originally intended for static languages to compile Python into machine code.
0
2,658
true
0
1
Lisp vs Python -- Static Compilation
2,813,126
3
4
0
4
13
1
0.197375
0
Why can Lisp with all its dynamic features be statically compiled but Python cannot (without losing all its dynamic features)?
0
python,lisp,compilation,dynamic-languages
2010-05-11T17:27:00.000
0
2,812,954
Python can be 'compiled', where compilation is seen as a translation from one Turing-complete language (source code) to another (object code). However, in Lisp the target is assembly, something which is theoretically possible with Python (proven) but not feasible. The true reason, however, is less flattering. Lisp is in many ways a revolutionary language that pioneered, in its dialects, a lot of the features in programming languages we are used to today. In Lisps, however, they just 'follow' logically from the basics of the language. Languages which are inspired by the raw expressive power of Lisps, such as JavaScript, Ruby, Perl and Python, are necessarily interpreted, because getting those features into a language with an 'Algol-like syntax' is just hard. Lisp gains these features from being 'homoiconic': there is no essential difference between a Lisp program and a Lisp data structure. Lisp programs are data structures; they are structural descriptions of a program in S-expressions, if you like. Therefore a compiled Lisp program effectively 'interprets itself' without the need for a lexer and all that stuff; a Lisp program could just be seen as a manually entered parse tree. This necessitates a syntax which many people find counter-intuitive to work with, so there have been a lot of attempts to transport the raw expressive power of the paradigm to a more readable syntax, which means that it's infeasible, but not impossible, to compile such languages to assembly. Also, compiling Python to assembly would possibly be slower and larger than 'half-interpreting' it on a virtual machine; a lot of features in Python depend upon syntactic analysis. The above, though, is written by a huge Lisp fanboy; keep that conflict of interest in mind.
0
2,658
false
0
1
Lisp vs Python -- Static Compilation
2,854,477
3
4
0
3
2
1
0.148885
0
In C++ instance variables are private by default; in Python variables are public by default. I have two questions regarding this: 1: Why does Python make all members public by default? 2: People say member data should be private. What if I make my data public? What are the disadvantages of this approach? Why is it bad design?
0
c++,python
2010-05-13T05:26:00.000
0
2,824,579
I can't comment on Python, but in C++, structs provide public access by default. The primary reason you want a private part of your class is that, without one, it is impossible to guarantee your invariants are satisfied. If you have a string class, for instance, that is supposed to keep track of the length of the string, you need to be able to track insertions. But if the underlying char* member is public, you can't do that. Anybody can just come along and tack something onto the end, or overwrite your null terminator, or call delete[] on it, or whatever. When you call your length() member, you just have to hope for the best.
0
303
false
0
1
what if i keep my class members are public?
2,824,708
3
4
0
1
2
1
0.049958
0
In C++ instance variables are private by default; in Python variables are public by default. I have two questions regarding this: 1: Why does Python make all members public by default? 2: People say member data should be private. What if I make my data public? What are the disadvantages of this approach? Why is it bad design?
0
c++,python
2010-05-13T05:26:00.000
0
2,824,579
It's really a question of language design philosophies. I favour the Python camp so might come down a little heavy handedly on the C++ style but the bottom line is that in C++ it's possible to forcibly prevent users of your class from accessing certain internal parts. In Python, it's a matter of convention and stating that it's internal. Some applications might want to access the internal member for non-malignant purposes (eg. documentation generators). Some users who know what they're doing might want to do the same. People who want to shoot themselves in the foot twiddling with the internal details are not protected from suicide. Like Dennis said "Anybody can just come along and tack something onto the end, or overwrite your null terminator". Python treats the user like an adult and expects her to take care of herself. C++ protects the user as one would a child.
0
303
false
0
1
what if i keep my class members are public?
2,824,800
3
4
0
13
2
1
1.2
0
In C++ instance variables are private by default; in Python variables are public by default. I have two questions regarding this: 1: Why does Python make all members public by default? 2: People say member data should be private. What if I make my data public? What are the disadvantages of this approach? Why is it bad design?
0
c++,python
2010-05-13T05:26:00.000
0
2,824,579
You can use a leading underscore in the name to tell readers of the code that the name in question is an internal detail and they must not rely on it remaining in future versions. Such a convention is really all you need -- why weigh the language down with an enforcement mechanism? Data, just like methods, should be public (named without a leading underscore) if they're part of your class's designed API which you intend to support going forward. In C++, or Java, that's unlikely to happen because if you want to change the data member into an accessor method, you're out of luck -- you'll have to break your API and every single client of the class will have to change. In Python, and other languages supporting a property-like construct, that's not the case -- you can always replace a data member with a property which calls accessor methods transparently, the API does not change, nor does client code. So, in Python and other languages with property-like constructs (I believe .NET languages are like that, at source-code level though not necessarily at bytecode level), you may as well leave your data public when it's part of the API and no accessors are currently needed (you can always add accessor methods to later implementation releases if need be, and not break the API). So it's not really a general OO issue, it's language specific: does a given language support a property-like construct. Python does.
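A tiny sketch of the point above (class and attribute names are made up): a plain public attribute can later be replaced by a property without breaking client code.

```python
class Temperature:
    """First release: a plain public data attribute, no accessors."""
    def __init__(self, celsius):
        self.celsius = celsius


class TemperatureV2:
    """Later release: same API, but assignments now go through a property."""
    def __init__(self, celsius):
        self.celsius = celsius  # routed through the setter below

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        # the accessor can now enforce an invariant transparently
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = value


# client code is identical for both versions: the API did not change
t = TemperatureV2(20.0)
t.celsius = 25.0
print(t.celsius)
```

In C++ or Java, turning `t.celsius` into `t.getCelsius()` would force every caller to change; here the attribute syntax survives the switch.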
0
303
true
0
1
what if i keep my class members are public?
2,824,654
1
3
0
0
16
0
0
0
I am looking for a good end-to-end testing framework in Python, where the tests can be written in Python and managed in a comfortable way. I know there are many unit testing frameworks, but I am looking for bigger scope, something like Test Director with support for reports etc., where a whole system is under test.
0
python,testing,automated-tests,system-testing
2010-05-13T12:32:00.000
0
2,826,734
TTCN-3 is quite a good test framework for black-box testing, and the commercial tools come with a lot of reporting support. It is not in Python, though.
0
22,530
false
0
1
Good automated system testing framework in python
6,698,351
1
1
0
0
0
0
0
1
How can I distinguish between a broadcast message and a message sent directly to my IP? I'm doing this in Python.
0
python
2010-05-13T21:09:00.000
0
2,830,326
Basically what you need to do is create a raw socket, receive a datagram, and examine the destination address in the header. If that address is a broadcast address for the network adapter the socket is bound to, then you're golden. I don't know how to do this in Python, so I suggest looking for examples of raw sockets and go from there. Bear in mind, you will need root access to use raw sockets, and you had better be real careful if you plan on sending using a raw socket. As you might imagine, this will not be a fun thing to do. I suggest trying to find a way to avoid doing this.
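The parsing half of the answer above can be sketched without root privileges: once you have raw datagram bytes, the IPv4 destination address lives at bytes 16-19 of the header. The header below is hand-built for illustration; real raw-socket capture would still need root.

```python
import socket
import struct

def destination_ip(packet: bytes) -> str:
    # bytes 16-19 of an IPv4 header hold the destination address
    return socket.inet_ntoa(packet[16:20])

def is_broadcast(packet: bytes, broadcast_addr: str) -> bool:
    # compare against the broadcast address of the bound adapter
    return destination_ip(packet) == broadcast_addr

# minimal 20-byte IPv4 header: version/IHL, TOS, total length, ID,
# flags/fragment, TTL, protocol (17 = UDP), checksum, src, dst
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20, 0, 0, 64, 17, 0,
                     socket.inet_aton("10.0.0.1"),
                     socket.inet_aton("192.168.1.255"))

print(destination_ip(header), is_broadcast(header, "192.168.1.255"))
```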
0
83
false
0
1
Distinguishing between broadcasted messages and direct messages
2,830,485
1
4
0
38
45
1
1.2
0
I cannot seem to find a good simple explanation of what python does differently when running with the -O or optimize flag.
0
python,optimization
2010-05-13T21:14:00.000
0
2,830,358
assert statements are completely eliminated, as are statement blocks of the form if __debug__: ... (so you can put your debug code in such statements blocks and just run with -O to avoid that debug code). With -OO, in addition, docstrings are also eliminated.
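A quick illustration of both effects (the function and variable names are mine): save this as a script and compare `python script.py` with `python -O script.py`.

```python
def debug_active():
    # __debug__ is True normally and False under -O;
    # under -O the whole block below is compiled away
    if __debug__:
        return True
    return False

# assert statements are stripped entirely under -O, so this
# side effect only happens in a normal (non-optimized) run
checks = []
assert checks.append("ran") is None

print(debug_active(), checks)
```

Without `-O` this prints `True ['ran']`; with `-O` it prints `False []`, because both the `assert` and the `if __debug__:` block were eliminated at compile time.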
0
17,421
true
0
1
What are the implications of running python with the optimize flag?
2,830,411
2
3
0
0
3
1
0
0
I want to make my component faster. I am using JavaScript and jQuery to build it and JSON objects to communicate with the component; the back end is Python. Are there any suggestions for making the component faster?
0
javascript,jquery,python,google-apps
2010-05-14T05:45:00.000
0
2,832,064
If speed is the issue, and you discover by profiling that JS is the culprit, then I would look into replacing the jQuery with vanilla JavaScript, or a more optimized library. As jQuery tries to do 'everything' and trains its users into wrapping everything in $(), it's bound to introduce unnecessary method calls (I've seen a single call to $() result in up to 100+ method calls).
0
141
false
0
1
how to increase Speed of a component made from Javascript or JQuery?
2,832,539
2
3
0
1
3
1
1.2
0
I want to make my component faster. I am using JavaScript and jQuery to build it and JSON objects to communicate with the component; the back end is Python. Are there any suggestions for making the component faster?
0
javascript,jquery,python,google-apps
2010-05-14T05:45:00.000
0
2,832,064
Set up some profiling to see what takes time to process. Then decide whether you want to try to optimize the JavaScript and client code, the communication up/down with the server, or the actual speed of the Python execution. When you have decided what you want to make faster, you can post samples of that to this site and people will probably be willing to help you.
0
141
true
0
1
how to increase Speed of a component made from Javascript or JQuery?
2,832,159
1
3
0
3
0
0
1.2
0
I need to write some scripts to carry out some tasks on my server (running Ubuntu Server 8.04 LTS). The tasks are to be run periodically, so I will be running the scripts as cron jobs. I have divided the tasks into "group A" and "group B", because (in my mind at least) they are a bit different. Task Group A: Import data from a file and possibly reformat it. By reformatting, I mean doing things like sanitizing the data, possibly normalizing it, and/or running calculations on 'columns' of the data. Import the munged data into a database. For now, I am using MySQL for the vast majority of imports, although some files will be imported into an SQLite database. Note: the files will be mostly text files, although some of the files are in a binary format (my own proprietary format, written by a C++ application I developed). Task Group B: Extract data from the database. Perform calculations on the data and either insert into or update tables in the database. My coding experience is primarily as a C/C++ developer, although I have been using PHP as well for the last 2 years or so (plus a few other languages which are not relevant for the purpose of this question). I am from a Windows background, so I am still finding my feet in the Linux environment. My question is this: I need to write scripts to perform the tasks I described above. Although I suppose I could write a few C++ applications to be used in the shell scripts, I think it may be better to write them in a scripting language, but this may be a flawed assumption. My thinking is that it would be easier to modify things in a script (no need to rebuild etc. for changes to functionality). Additionally, data munging in C++ tends to involve more lines of code than "natural" scripting languages such as Perl, Python, etc. Assuming that the majority of people on here agree that scripting is the way to go, herein lies my dilemma: which scripting language do I use to perform the tasks above (given my background)?
My gut instinct tells me that Perl (shudder) would be the most obvious choice for performing all of the above tasks. BUT (and that is a big BUT) the mere mention of Perl makes my toes curl, as I had a very, very bad experience with it a while back (I bought the Perl Camel book plus 'Data Munging with Perl' many years ago, but could still not 'grok' it; it just felt too alien). The syntax seems quite unnatural to me, despite how many times I have tried to learn it, so if possible I would really like to give it a miss. As for PHP (which I already know), I am also not sure it is a good candidate for scripting on the CLI (I have not seen many examples of how to do this, so I may be wrong). The last thing I must mention is that IF I have to learn a new language in order to do this, I cannot afford (time constraint) to spend more than a day learning the key commands/features required (I can always learn the details of the language later, once I have actually deployed the scripts). So, which scripting language would you recommend (PHP, Python, Perl, [insert your favorite here]) and, most importantly, WHY? Or should I just stick to writing little C++ applications that I call in a shell script? Lastly, if you have suggested a scripting language, can you please show with a FEW lines (Perl mongers, I'm looking in your direction [nothing too cryptic!]) how I can use the language you suggested to do what I am trying to do, i.e.:
load a CSV file into some kind of data structure where you can access data columns easily for data manipulation; dump the columnar data into a MySQL table; load data from a MySQL table into a data structure that allows columns/rows to be accessed in the scripting language. Hopefully, the snippets will allow me to quickly spot the languages that would pose the steepest learning curve for me, as well as those that are simple, elegant and efficient (hopefully those two criteria [elegance and shallow learning curve] are not orthogonal, though I suspect they might be).
0
php,python,perl,shell,data-munging
2010-05-14T10:10:00.000
1
2,833,312
import data from a file and possibly reformat it: Python excels at this. Be sure to read up on the csv module so you don't waste time reinventing it yourself. For binary data, you may have to use the struct module. [If you wrote the C++ program that produces the binary data, consider rewriting that program to stop using binary data. Your life will be simpler in the long run. Disk storage is cheaper than your time; highly compressed binary formats cost more than they're worth.] Import the munged data into a database. Extract data from the database. Perform calculations on the data and either insert or update tables in the database: use the MySQLdb module for MySQL; SQLite support is built into Python. Often, you'll want to use object-relational mapping rather than write your own SQL. Look at SQLObject and SQLAlchemy for this. Also, before doing too much of this, buy a good book on data warehousing. Your two "task groups" sound like you're starting down the data warehousing road. It's easy to get this all fouled up through poor database design. Learn what a "star schema" is before you do anything else.
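A rough sketch of the csv-module piece of this advice (the file contents and column names are invented, and a StringIO stands in for a real file on disk):

```python
import csv
import io

# in practice you'd do: with open("data.csv", newline="") as raw: ...
raw = io.StringIO("name,qty,price\nwidget,3,2.50\ngadget,1,9.99\n")

# DictReader gives easy column access by header name
rows = list(csv.DictReader(raw))

# sanitize / run calculations on 'columns' before loading into MySQL/SQLite
total = sum(int(r["qty"]) * float(r["price"]) for r in rows)
print(rows[0]["name"], total)
```

From here each `dict` in `rows` maps naturally onto a parameterized INSERT statement.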
0
819
true
0
1
Data munging and data import scripting
2,833,559
3
5
0
0
4
0
0
0
I would like to know what certifications are available for programming, like Zend for PHP and Sun certification for Java. What are the others? JavaScript? C++? Python? etc. Please give me some suggestions for other available certifications.
0
php,javascript,python,programming-languages,certificate
2010-05-15T09:53:00.000
0
2,839,663
For Linux (which implies Perl/Bash): CompTIA certifications and Red Hat Certified Engineer.
0
3,805
false
1
1
What are the most valuable certification available for Programming?
2,840,048
3
5
0
16
4
0
1
0
I would like to know what certifications are available for programming, like Zend for PHP and Sun certification for Java. What are the others? JavaScript? C++? Python? etc. Please give me some suggestions for other available certifications.
0
php,javascript,python,programming-languages,certificate
2010-05-15T09:53:00.000
0
2,839,663
Most valuable thing for a developer: being able to show you can convert requirements into working and maintainable software. Certifications generally are worth very little, except in a few niches that demand them (or at least ask, until they give up and get someone who puts practice before pieces of paper).
0
3,805
false
1
1
What are the most valuable certification available for Programming?
2,839,684
3
5
0
7
4
0
1
0
I would like to know what certifications are available for programming, like Zend for PHP and Sun certification for Java. What are the others? JavaScript? C++? Python? etc. Please give me some suggestions for other available certifications.
0
php,javascript,python,programming-languages,certificate
2010-05-15T09:53:00.000
0
2,839,663
Let me be bold and say that your Experience is your best certificate.
0
3,805
false
1
1
What are the most valuable certification available for Programming?
2,839,748
1
2
0
5
5
0
0.462117
0
I'm working with web2py, and for some reason web2py seems to fail to notice when code has changed in certain cases. I can't really narrow it down, but from time to time changes in the code are not reflected; web2py obviously has the old version cached somewhere. The only thing that helps is quitting web2py and restarting it (I'm using the internal server). Any hints? Thank you!
0
python,caching,web2py
2010-05-15T13:07:00.000
0
2,840,201
web2py does cache your code (for speed), except on Google App Engine. That is not the problem: if you edit code in models, views or controllers, you see the effect immediately. The problem may be modules; if you edit code in modules you will not see the effect immediately, unless you import them with local_import('module', reload=True), or restart web2py. If that is also not your problem, then your browser is caching something. Please bring this question to the web2py mailing list, where we can help more. P.S. If you are using the latest web2py, it no longer comes with CherryPy; the built-in web server is called Rocket.
0
1,547
false
1
1
Prevent web2py from caching?
2,840,650
4
5
0
4
0
0
0.158649
0
I know Ruby right now, however I want to learn a new language. I am running Ubuntu 10.04 right now but I am going to get a Mac later this summer. Anyways I want something more for GUI development. I was wondering if I should learn C on Ubuntu right now, and then learn Objective-C when I get an iMac? Will learning C give me an edge? Or should I just learn Python on Ubuntu and then learn Objective-C when I get a new computer?
0
python,c,objective-c,linux,macos
2010-05-15T16:54:00.000
1
2,840,932
It's frequently helpful to learn programming languages in the order they were created. The folks that wrote Objective-C clearly had C and its syntax, peculiarities, and features in mind when they defined the language. It can't hurt you to learn C now. You may have some insight into why Objective-C is structured the way it is later. C has a great, classic book on it, The C Programming Language by Kernighan & Ritchie, which is short and easy to digest if you already have another language under your belt.
0
835
false
0
1
If I start learning C on Ubuntu will it give me an edge when I start learning Objective-C later this summer?
2,840,981
4
5
0
1
0
0
0.039979
0
I know Ruby right now, however I want to learn a new language. I am running Ubuntu 10.04 right now but I am going to get a Mac later this summer. Anyways I want something more for GUI development. I was wondering if I should learn C on Ubuntu right now, and then learn Objective-C when I get an iMac? Will learning C give me an edge? Or should I just learn Python on Ubuntu and then learn Objective-C when I get a new computer?
0
python,c,objective-c,linux,macos
2010-05-15T16:54:00.000
1
2,840,932
Sure, Objective-C is quite a bit easier to learn if you know C, and quite a few books on Objective-C even assume you know C. Also consider learning a bit about MacRuby for GUI development ;)
0
835
false
0
1
If I start learning C on Ubuntu will it give me an edge when I start learning Objective-C later this summer?
2,840,961
4
5
0
0
0
0
0
0
I know Ruby right now, however I want to learn a new language. I am running Ubuntu 10.04 right now but I am going to get a Mac later this summer. Anyways I want something more for GUI development. I was wondering if I should learn C on Ubuntu right now, and then learn Objective-C when I get an iMac? Will learning C give me an edge? Or should I just learn Python on Ubuntu and then learn Objective-C when I get a new computer?
0
python,c,objective-c,linux,macos
2010-05-15T16:54:00.000
1
2,840,932
Learning C will definitely help, as Objective-C inherits many of its properties and adds to them. You could learn Objective-C from 'Learn Objective-C on the Mac'; this one's really a great book. Then, if you plan to learn Cocoa, get 'Learn Cocoa on the Mac' or the one by James Davidson; they should give you a fine head start. You can then consider moving on to the one by Hillegass and, for a stunner, the 'Objective-C Developer Handbook' by David Chisnall; this is a keeper, and you can read it in a month or two. For the compiler I would point you to Clang, though a GCC and GNUstep combination will work. Clang is the better choice if you want to work with Objective-C 2.0 features, and it is under heavy development.
0
835
false
0
1
If I start learning C on Ubuntu will it give me an edge when I start learning Objective-C later this summer?
2,909,728
4
5
0
0
0
0
0
0
I know Ruby right now, however I want to learn a new language. I am running Ubuntu 10.04 right now but I am going to get a Mac later this summer. Anyways I want something more for GUI development. I was wondering if I should learn C on Ubuntu right now, and then learn Objective-C when I get an iMac? Will learning C give me an edge? Or should I just learn Python on Ubuntu and then learn Objective-C when I get a new computer?
0
python,c,objective-c,linux,macos
2010-05-15T16:54:00.000
1
2,840,932
Yes. Learn how to program in C.
0
835
false
0
1
If I start learning C on Ubuntu will it give me an edge when I start learning Objective-C later this summer?
5,783,941
1
1
0
1
2
1
0.197375
0
I'm writing a simple parser of .git/* files. I have covered almost everything: objects, refs, pack files, etc. But I have a problem. Let's say I have a big 300M repository (in a pack file) and I want to find all the commits which changed the file /some/deep/inside/file. What I'm doing now is: fetch the last commit; find the file in it by fetching its tree and recursively descending the subtrees until I reach the file; additionally, check the hash of each subfolder on the way to the file, and if one of them is the same as in the commit before, assume the file was not changed (because its parent dir didn't change); then store the hash of the file and fetch the parent commit; find the file again and check whether its hash changed; if yes, then the original commit (i.e. the one before the parent) changed the file. And I repeat this over and over until I reach the very first commit. This solution works, but it sucks. In the worst-case scenario, the first search can take even 3 minutes (for a 300M pack). Is there any way to speed it up? I tried to avoid putting such large objects in memory, but right now I don't see any other way. And even then, the initial memory load will take forever :( Greets, and thanks for any help!
0
python,git
2010-05-15T21:40:00.000
0
2,841,863
That's the basic algorithm that git uses to track changes to a particular file. That's why "git log -- some/path/to/file.txt" is a comparatively slow operation, compared to many other SCM systems where it would be simple (e.g. in CVS, P4 et al., each repo file is a server file with the file's history). It shouldn't take so long to evaluate, though: the amount you ever have to keep in memory is quite small. You already mentioned the main point: remember the tree IDs going down the path, so you can quickly eliminate commits that didn't even touch that subtree. It's rare for tree objects to be very big, just like directories on a filesystem (unsurprisingly). Are you using the pack index? If you're not, then you essentially have to unpack the entire pack to find this out, since trees could be at the end of a long delta chain. If you have an index, you'll still have to apply deltas to get your tree objects, but at least you should be able to find them quickly. Keep a cache of applied deltas, since obviously it's very common for trees to reuse the same or similar bases; most tree object changes just change 20 bytes of a previous tree object. So if, in order to get tree T1, you have to start with object T8 and apply Td7 to get T7, then T6, etc., it's entirely likely that these other trees T2 through T8 will be referenced again.
0
75
false
0
1
How does git fetches commits associated to a file?
2,844,342
1
2
0
0
2
1
0
0
I'm using Python in a webapp (CGI for testing, FastCGI for production) that needs to send an occasional email (when a user registers or something else important happens). Since communicating with an SMTP server takes a long time, I'd like to spawn a thread for the mail function so that the rest of the app can finish the request without waiting for the email to finish sending. I tried using thread.start_new(func, (args)), but the parent returns and exits before the sending is complete, thereby killing the sending thread before it does anything useful. Is there any way to keep the process alive long enough for the child thread to finish?
0
python,multithreading,smtp,cgi
2010-05-17T15:50:00.000
0
2,850,566
You might want to use threading.enumerate if you have multiple workers and want to see which one(s) are still running. Another alternative is threading.Event: the main thread sets the event and starts the worker thread off; the worker thread clears the event when it finishes its work, and the main thread checks whether the event is set or cleared to figure out whether it can exit.
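A minimal sketch of the Event idea, inverted here into the more common form where the worker sets the event when it is done and the main thread waits on it (the slow SMTP conversation is simulated with a sleep):

```python
import threading
import time

done = threading.Event()

def send_mail_worker():
    time.sleep(0.1)  # stands in for the slow SMTP conversation
    done.set()       # signal the main thread that sending finished

t = threading.Thread(target=send_mail_worker)
t.start()

# ... main thread finishes handling the request here ...

# before the process exits, wait (with a timeout) for the worker
finished = done.wait(timeout=5.0)
print("mail sent:", finished)
```

`wait(timeout=...)` returns True as soon as the worker calls `set()`, so the process lingers only as long as the send actually takes.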
0
3,627
false
0
1
Parent Thread exiting before Child Threads [python]
2,851,122
3
4
0
0
1
1
0
0
I decided to rewrite all our Bash scripts in Python (there are not that many of them) as my first Python project. The reason is that although I am quite fluent in Bash, I feel it's a somewhat archaic language, and since our system is in the first stages of its development, I think switching to Python now is the right thing to do. Are there scripts that should always be written in Bash? For example, we have an init.d daemon script; is it OK to use Python for it? We run CentOS. Thanks.
0
python,linux,bash,scripting
2010-05-17T20:13:00.000
1
2,852,397
Certain scripts that I write simply involve looping over a glob in some directories and then executing a piped series of commands on the matches. This kind of thing is much more tedious in Python.
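For comparison, a pure-Python stand-in for a shell idiom like `for f in *.txt; do wc -l "$f"; done` might look as follows (temp files are created so the snippet is self-contained); it is indeed wordier than the one-liner:

```python
import glob
import os
import tempfile

# set up a scratch directory with some sample files
workdir = tempfile.mkdtemp()
for name, text in [("a.txt", "one\ntwo\n"), ("b.txt", "three\n")]:
    with open(os.path.join(workdir, name), "w") as f:
        f.write(text)

# the Python equivalent of globbing and counting lines per file
counts = {}
for path in sorted(glob.glob(os.path.join(workdir, "*.txt"))):
    with open(path) as f:
        counts[os.path.basename(path)] = sum(1 for _ in f)

print(counts)
```

For pipelines of external commands you would reach for the subprocess module instead, which adds even more ceremony relative to a bash pipe.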
0
387
false
0
1
What scripts should not be ported from bash to python?
2,853,719