Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
454,944 | 2009-01-18T10:42:00.000 | 1 | 0 | 0 | 0 | python,django,message-queue | 456,389 | 8 | false | 1 | 0 | Just add the emails to a database, and then write another script, run by some task scheduler utility (cron comes to mind), to send the emails. | 4 | 44 | 0 | I have an application in Django that needs to send a large number of emails to users in various use cases. I don't want to handle this synchronously within the application for obvious reasons.
Has anyone any recommendations for a message queuing server which integrates well with Python, or they have used on a Django project? The rest of my stack is Apache, mod_python, MySQL. | Advice on Python/Django and message queues | 0.024995 | 0 | 0 | 20,229 |
455,075 | 2009-01-18T12:51:00.000 | 1 | 0 | 1 | 0 | python | 7,974,540 | 10 | false | 0 | 0 | What I like to do (with my TI-83) is, instead of doing all my math by hand, program my calculator to do the problem, then do the rest of the problems with the new program. It's fun and you get your homework done, so you could do this in Python for a fun project (or several). | 7 | 4 | 0 | Just like the title asks. I've been learning Python for a while now and I'd say I'm pretty decent with it. I'm looking for a medium or large project to keep me busy for quite a while. Your suggestions are greatly appreciated. | What's a good medium/large project for a Python programmer? | 0.019997 | 0 | 0 | 3,256 |
455,075 | 2009-01-18T12:51:00.000 | 1 | 0 | 1 | 0 | python | 456,338 | 10 | false | 0 | 0 | Anything that hasn't been done to death... no need for yet another clone of popular app x | 7 | 4 | 0 | Just like the title asks. I've been learning Python for a while now and I'd say I'm pretty decent with it. I'm looking for a medium or large project to keep me busy for quite a while. Your suggestions are greatly appreciated. | What's a good medium/large project for a Python programmer? | 0.019997 | 0 | 0 | 3,256 |
455,075 | 2009-01-18T12:51:00.000 | -1 | 0 | 1 | 0 | python | 455,211 | 10 | false | 0 | 0 | I think the best thing you can do now is spend time learning a new technology, preferably including a new programming language. | 7 | 4 | 0 | Just like the title asks. I've been learning Python for a while now and I'd say I'm pretty decent with it. I'm looking for a medium or large project to keep me busy for quite a while. Your suggestions are greatly appreciated. | What's a good medium/large project for a Python programmer? | -0.019997 | 0 | 0 | 3,256 |
455,075 | 2009-01-18T12:51:00.000 | 0 | 0 | 1 | 0 | python | 455,119 | 10 | false | 0 | 0 | If I had the time to code something just for the fun and the experience, I would personally start an open source project for something that people need and which does not already exist.
You can search the Web for a list of missing open-source projects, or you can base it on your own experience (for example, I would personally love to have some way to synchronize my iPhone with Thunderbird+Lightning: I hear there's a solution through Google Calendars, but I would like a solution without external servers). | 7 | 4 | 0 | Just like the title asks. I've been learning Python for a while now and I'd say I'm pretty decent with it. I'm looking for a medium or large project to keep me busy for quite a while. Your suggestions are greatly appreciated. | What's a good medium/large project for a Python programmer? | 0 | 0 | 0 | 3,256 |
455,075 | 2009-01-18T12:51:00.000 | 0 | 0 | 1 | 0 | python | 455,111 | 10 | false | 0 | 0 | Consider something that does the following:
is multithreaded and preferably includes a need for synchronization
reads/writes data to a remote database (or even a local database)
reads from a web service and includes XML parsing
outputs XML/HTML
There are a number of example projects you could do, but if you accomplish all the above, then it will surely give sufficient exposure. | 7 | 4 | 0 | Just like the title asks. I've been learning Python for a while now and I'd say I'm pretty decent with it. I'm looking for a medium or large project to keep me busy for quite a while. Your suggestions are greatly appreciated. | What's a good medium/large project for a Python programmer? | 0 | 0 | 0 | 3,256 |
455,075 | 2009-01-18T12:51:00.000 | 15 | 0 | 1 | 0 | python | 455,190 | 10 | false | 0 | 0 | Find a local charitable orgainzation with a lousy web presence. Solve their problem. Help other people. Learn more Python. Everyone wins. | 7 | 4 | 0 | Just like the title asks. I've been learning Python for a while now and I'd say I'm pretty decent with it. I'm looking for a medium or large project to keep me busy for quite a while. Your suggestions are greatly appreciated. | What's a good medium/large project for a Python programmer? | 1 | 0 | 0 | 3,256 |
455,075 | 2009-01-18T12:51:00.000 | 6 | 0 | 1 | 0 | python | 455,151 | 10 | true | 0 | 0 | What are you interested in doing? You could write a whole host of database programs, for keeping track of recipes, CDs, contacts, self-tests, etc.
Basically, write code to load/save to a database and enforce some business rules, then expose it via a web service. Then make both a web front end and a graphical application front end (using Tk/wxWidgets/Qt; Qt 4.5 will be LGPL, yay) that talk with the web service.
That should give you practice with creating/talking with web services (something more and more companies are doing) along with both main ways of creating a GUI. | 7 | 4 | 0 | Just like the title asks. I've been learning Python for a while now and I'd say I'm pretty decent with it. I'm looking for a medium or large project to keep me busy for quite a while. Your suggestions are greatly appreciated. | What's a good medium/large project for a Python programmer? | 1.2 | 0 | 0 | 3,256 |
455,552 | 2009-01-18T17:39:00.000 | 53 | 0 | 1 | 0 | python,eclipse,debugging,exception,pydev | 6,655,894 | 3 | false | 0 | 0 | This was added by the PyDev author, under Run > Manage Python Exception Breakpoints | 1 | 48 | 0 | Is it possible to get the pydev debugger to break on exception? | Break on exception in pydev | 1 | 0 | 0 | 12,226 |
455,717 | 2009-01-18T19:01:00.000 | 2 | 0 | 1 | 0 | python,version-control,python-3.x | 455,725 | 5 | false | 0 | 0 | For development, option 3 is too cumbersome. Maintaining two branches is the easiest way, although how you do that will vary between VCSes. Many DVCSes will be happier with separate repos (with a common ancestry to help merging), while a centralized VCS will probably be easier to work with using two branches. Option 1 is possible, but you may miss something to merge, and it is a bit more error-prone IMO.
For distribution, I'd use option 3 as well if possible. All 3 options are valid anyway and I have seen variations on these models from times to times. | 4 | 9 | 0 | Suppose I've developed a general-purpose end user utility written in Python. Previously, I had just one version available which was suitable for Python later than version 2.3 or so. It was sufficient to say, "download Python if you need to, then run this script". There was just one version of the script in source control (I'm using Git) to keep track of.
With Python 3, this is no longer necessarily true. For the foreseeable future, I will need to simultaneously develop two different versions, one suitable for Python 2.x and one suitable for Python 3.x. From a development perspective, I can think of a few options:
Maintain two different scripts in the same branch, making improvements to both simultaneously.
Maintain two separate branches, and merge common changes back and forth as development proceeds.
Maintain just one version of the script, plus check in a patch file that converts the script from one version to the other. When enough changes have been made that the patch no longer applies cleanly, resolve the conflicts and create a new patch.
I am currently leaning toward option 3, as the first two would involve a lot of error-prone tedium. But option 3 seems messy and my source control system is supposed to be managing patches for me.
For distribution packaging, there are more options to choose from:
Offer two different download packages, one suitable for Python 2 and one suitable for Python 3 (the user will have to know to download the correct one for whatever version of Python they have).
Offer one download package, with two different scripts inside (and then the user has to know to run the correct one).
One download package with two version-specific scripts, and a small stub loader that can run in both Python versions, that runs the correct script for the Python version installed.
Again I am currently leaning toward option 3 here, although I haven't tried to develop such a stub loader yet.
Any other ideas? | Python 3 development and distribution challenges | 0.07983 | 0 | 0 | 745 |
455,717 | 2009-01-18T19:01:00.000 | 1 | 0 | 1 | 0 | python,version-control,python-3.x | 455,770 | 5 | false | 0 | 0 | I would start by migrating to 2.6, which is very close to python 3.0. You might even want to wait for 2.7, which will be even closer to python 3.0.
And then, once you have migrated to 2.6 (or 2.7), I suggest you simply keep just one version of the script, with things like "if PY3K:... else:..." in the rare places where it will be mandatory. Of course it's not the kind of code we developers like to write, but then you don't have to worry about managing multiple scripts or branches or patches or distributions, which will be a nightmare.
Whatever you choose, make sure you have thorough tests with 100% code coverage.
Good luck! | 4 | 9 | 0 | Suppose I've developed a general-purpose end user utility written in Python. Previously, I had just one version available which was suitable for Python later than version 2.3 or so. It was sufficient to say, "download Python if you need to, then run this script". There was just one version of the script in source control (I'm using Git) to keep track of.
With Python 3, this is no longer necessarily true. For the foreseeable future, I will need to simultaneously develop two different versions, one suitable for Python 2.x and one suitable for Python 3.x. From a development perspective, I can think of a few options:
Maintain two different scripts in the same branch, making improvements to both simultaneously.
Maintain two separate branches, and merge common changes back and forth as development proceeds.
Maintain just one version of the script, plus check in a patch file that converts the script from one version to the other. When enough changes have been made that the patch no longer applies cleanly, resolve the conflicts and create a new patch.
I am currently leaning toward option 3, as the first two would involve a lot of error-prone tedium. But option 3 seems messy and my source control system is supposed to be managing patches for me.
For distribution packaging, there are more options to choose from:
Offer two different download packages, one suitable for Python 2 and one suitable for Python 3 (the user will have to know to download the correct one for whatever version of Python they have).
Offer one download package, with two different scripts inside (and then the user has to know to run the correct one).
One download package with two version-specific scripts, and a small stub loader that can run in both Python versions, that runs the correct script for the Python version installed.
Again I am currently leaning toward option 3 here, although I haven't tried to develop such a stub loader yet.
Any other ideas? | Python 3 development and distribution challenges | 0.039979 | 0 | 0 | 745 |
455,717 | 2009-01-18T19:01:00.000 | 2 | 0 | 1 | 0 | python,version-control,python-3.x | 455,831 | 5 | false | 0 | 0 | I don't think I'd take this path at all. It's painful whichever way you look at it. Really, unless there's strong commercial interest in keeping both versions simultaneously, this is more headache than gain.
I think it makes more sense to just keep developing for 2.x for now, at least for a few months, up to a year. At some point it will simply be time to declare a final, stable version for 2.x and develop the next ones for 3.x+.
For example, I won't switch to 3.x until some of the major frameworks go that way: PyQt, matplotlib, numpy, and some others. And I don't really mind if at some point they stop 2.x support and just start developing for 3.x, because I'll know that in a short time I'll be able to switch to 3.x too. | 4 | 9 | 0 | Suppose I've developed a general-purpose end user utility written in Python. Previously, I had just one version available which was suitable for Python later than version 2.3 or so. It was sufficient to say, "download Python if you need to, then run this script". There was just one version of the script in source control (I'm using Git) to keep track of.
With Python 3, this is no longer necessarily true. For the foreseeable future, I will need to simultaneously develop two different versions, one suitable for Python 2.x and one suitable for Python 3.x. From a development perspective, I can think of a few options:
Maintain two different scripts in the same branch, making improvements to both simultaneously.
Maintain two separate branches, and merge common changes back and forth as development proceeds.
Maintain just one version of the script, plus check in a patch file that converts the script from one version to the other. When enough changes have been made that the patch no longer applies cleanly, resolve the conflicts and create a new patch.
I am currently leaning toward option 3, as the first two would involve a lot of error-prone tedium. But option 3 seems messy and my source control system is supposed to be managing patches for me.
For distribution packaging, there are more options to choose from:
Offer two different download packages, one suitable for Python 2 and one suitable for Python 3 (the user will have to know to download the correct one for whatever version of Python they have).
Offer one download package, with two different scripts inside (and then the user has to know to run the correct one).
One download package with two version-specific scripts, and a small stub loader that can run in both Python versions, that runs the correct script for the Python version installed.
Again I am currently leaning toward option 3 here, although I haven't tried to develop such a stub loader yet.
Any other ideas? | Python 3 development and distribution challenges | 0.07983 | 0 | 0 | 745 |
455,717 | 2009-01-18T19:01:00.000 | 0 | 0 | 1 | 0 | python,version-control,python-3.x | 455,763 | 5 | false | 0 | 0 | Whichever option for development is chosen, most potential issues could be alleviated with thorough unit testing to ensure that the two versions produce matching output. That said, option 2 seems most natural to me: applying changes from one source tree to another source tree is a task (most) version control systems were designed for, so why not take advantage of the tools they provide to ease this?
For distribution, it is difficult to say without 'knowing your audience'. Power Python users would probably appreciate not having to download two copies of your software, yet for a more general user base it should probably 'just work'.
With Python 3, this is no longer necessarily true. For the foreseeable future, I will need to simultaneously develop two different versions, one suitable for Python 2.x and one suitable for Python 3.x. From a development perspective, I can think of a few options:
Maintain two different scripts in the same branch, making improvements to both simultaneously.
Maintain two separate branches, and merge common changes back and forth as development proceeds.
Maintain just one version of the script, plus check in a patch file that converts the script from one version to the other. When enough changes have been made that the patch no longer applies cleanly, resolve the conflicts and create a new patch.
I am currently leaning toward option 3, as the first two would involve a lot of error-prone tedium. But option 3 seems messy and my source control system is supposed to be managing patches for me.
For distribution packaging, there are more options to choose from:
Offer two different download packages, one suitable for Python 2 and one suitable for Python 3 (the user will have to know to download the correct one for whatever version of Python they have).
Offer one download package, with two different scripts inside (and then the user has to know to run the correct one).
One download package with two version-specific scripts, and a small stub loader that can run in both Python versions, that runs the correct script for the Python version installed.
Again I am currently leaning toward option 3 here, although I haven't tried to develop such a stub loader yet.
Any other ideas? | Python 3 development and distribution challenges | 0 | 0 | 0 | 745 |
456,001 | 2009-01-18T22:09:00.000 | 35 | 1 | 1 | 0 | python,class,static-methods | 456,008 | 7 | true | 0 | 0 | There are none. This is what modules are for: grouping related functions. Using a class full of static methods makes me cringe from Javaitis. The only time I would use a static function is if the function is an integral part of the class. (In fact, I'd probably want to use a class method anyway.) | 5 | 20 | 0 | I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions? | Is there any advantage in using a Python class? | 1.2 | 0 | 0 | 6,805 |
456,001 | 2009-01-18T22:09:00.000 | 2 | 1 | 1 | 0 | python,class,static-methods | 456,081 | 7 | false | 0 | 0 | Classes are only useful when you have a set of functionality that interacts with a set of data (instance properties) that needs to be persisted between function calls and referenced in a discrete fashion.
If your class contains nothing other than static methods, then your class is just syntactic cruft, and straight functions are much clearer and are all that you need.
456,001 | 2009-01-18T22:09:00.000 | 0 | 1 | 1 | 0 | python,class,static-methods | 456,013 | 7 | false | 0 | 0 | I agree with Benjamin. Rather than having a bunch of static methods, you should probably have a bunch of functions. And if you want to organize them, you should think about using modules rather than classes. However, if you want to refactor your code to be OO, that's another matter. | 5 | 20 | 0 | I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions? | Is there any advantage in using a Python class? | 0 | 0 | 0 | 6,805 |
456,001 | 2009-01-18T22:09:00.000 | 1 | 1 | 1 | 0 | python,class,static-methods | 456,016 | 7 | false | 0 | 0 | Not only are there no advantages, but it makes things slower than using a module full of functions. There's much less need for static methods in Python than there is for them in Java or C#; they are used in very special cases. | 5 | 20 | 0 | I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions? | Is there any advantage in using a Python class? | 0.028564 | 0 | 0 | 6,805 |
456,001 | 2009-01-18T22:09:00.000 | 0 | 1 | 1 | 0 | python,class,static-methods | 456,222 | 7 | false | 0 | 0 | Depends on the nature of the functions. If they're not strongly related (a minimal number of calls between them) and they don't have any state, then yes, I'd say dump them into a module. However, you could be shooting yourself in the foot if you ever need to modify the behavior, as you're throwing inheritance out the window. So my answer is maybe, and be sure you look at your particular scenario rather than always assuming a module is the best way to collect a set of methods. | 5 | 20 | 0 | I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions? | Is there any advantage in using a Python class? | 0 | 0 | 0 | 6,805 |
456,884 | 2009-01-19T08:32:00.000 | 8 | 1 | 0 | 0 | python,c++,c,swig,cython | 3,167,276 | 10 | false | 0 | 1 | An observation: Based on the benchmarking conducted by the pybindgen developers, there is no significant difference between boost.python and swig. I haven't done my own benchmarking to verify how much of this depends on the proper use of the boost.python functionality.
Note also that there may be a reason that pybindgen seems to be in general quite a bit faster than swig and boost.python: it may not produce as versatile a binding as the other two. For instance, exception propagation, call argument type checking, etc. I haven't had a chance to use pybindgen yet but I intend to.
Boost is in general quite a big package to install, and last I saw you can't just install Boost.Python; you pretty much need the whole Boost library. As others have mentioned, compilation will be slow due to heavy use of template programming, which also means typically rather cryptic error messages at compile time.
Summary: given how easy SWIG is to install and use, that it generates decent bindings that are robust and versatile, and that one interface file allows your C++ DLL to be available from several other languages like Lua, C#, and Java, I would favor it over boost.python. But unless you really need multi-language support I would take a close look at PyBindGen because of its purported speed, and pay close attention to the robustness and versatility of the bindings it generates. | 3 | 70 | 0 | I found the bottleneck in my python code, played around with Psyco etc. Then decided to write a C/C++ extension for performance.
With the help of swig you almost don't need to care about arguments etc. Everything works fine.
Now my question: swig creates a quite large py-file which does a lot of 'checkings' and 'PySwigObject' before calling the actual .pyd or .so code.
Does anyone of you have any experience whether there is some more performance to gain if you hand-write this file or let swig do it. | Extending python - to swig, not to swig or Cython | 1 | 0 | 0 | 32,045 |
456,884 | 2009-01-19T08:32:00.000 | 6 | 1 | 0 | 0 | python,c++,c,swig,cython | 461,364 | 10 | false | 0 | 1 | There be dragons here. Don't swig, don't boost. For any complicated project the code you have to fill in yourself to make them work becomes unmanageable quickly. If it's a plain C API to your library (no classes), you can just use ctypes. It will be easy and painless, and you won't have to spend hours trawling through the documentation for these labyrinthine wrapper projects trying to find the one tiny note about the feature you need. | 3 | 70 | 0 | I found the bottleneck in my python code, played around with Psyco etc. Then decided to write a C/C++ extension for performance.
With the help of swig you almost don't need to care about arguments etc. Everything works fine.
Now my question: swig creates a quite large py-file which does a lot of 'checkings' and 'PySwigObject' before calling the actual .pyd or .so code.
Does anyone of you have any experience whether there is some more performance to gain if you hand-write this file or let swig do it. | Extending python - to swig, not to swig or Cython | 1 | 0 | 0 | 32,045 |
456,884 | 2009-01-19T08:32:00.000 | 3 | 1 | 0 | 0 | python,c++,c,swig,cython | 456,894 | 10 | false | 0 | 1 | If it's not a big extension, boost::python might also be an option; it executes faster than SWIG because you control what's happening, but it'll take longer to dev.
Anyway, SWIG's overhead is acceptable if the amount of work within a single call is large enough. For example, if your issue is that you have some medium-sized logic block you want to move to C/C++, but that block is called frequently within a tight loop, you might have to avoid SWIG; but I can't really think of any real-world examples except for scripted graphics shaders. | 3 | 70 | 0 | I found the bottleneck in my python code, played around with Psyco etc. Then decided to write a C/C++ extension for performance.
With the help of swig you almost don't need to care about arguments etc. Everything works fine.
Now my question: swig creates a quite large py-file which does a lot of 'checkings' and 'PySwigObject' before calling the actual .pyd or .so code.
Does anyone of you have any experience whether there is some more performance to gain if you hand-write this file or let swig do it. | Extending python - to swig, not to swig or Cython | 0.059928 | 0 | 0 | 32,045 |
456,926 | 2009-01-19T08:52:00.000 | 0 | 1 | 0 | 0 | java,c++,python,c | 460,265 | 3 | false | 0 | 0 | I want to test the memory utilization, but after executing the code I am unable to do so.
As I am new to this, please help me further.
Say we have 3 virtual machines: V1, V2, V3.
For V1 - set shared resources to High
For V2 - set shared resources to Normal
For V3 - set shared resources to Normal
This means that if the total is 2 GB, then V1 gets 1 GB and V2 and V3 get 512 MB each. So I want to test programmatically how it works if someone changes the Shares, Reservation, or Limit. | 2 | 1 | 0 | Please tell me C++/Java code which utilizes more than 70% of memory.
For example, we have 3 virtual machines, and for memory resources we want to test the
memory utilization according to the memory resources allocated by the user. | Code to utilize memory more than 70% | 0 | 0 | 0 | 352 |
456,926 | 2009-01-19T08:52:00.000 | 4 | 1 | 0 | 0 | java,c++,python,c | 456,948 | 3 | false | 0 | 0 | Which memory? On a 64 bit platform, a 64 bit process can use far more than 4GB. You'd be filling swap for hours before you hit those limits.
If you want to test "70% of physical RAM", you might discover that you cannot allocate 70% of the 32-bit address space. A significant amount is already claimed by the OS. | 2 | 1 | 0 | Please tell me C++/Java code which utilizes more than 70% of memory.
For example, we have 3 virtual machines, and for memory resources we want to test the
memory utilization according to the memory resources allocated by the user. | Code to utilize memory more than 70% | 0.26052 | 0 | 0 | 352 |
457,207 | 2009-01-19T10:43:00.000 | 0 | 0 | 0 | 0 | python,pdf,pypdf2,pypdf | 459,639 | 7 | false | 1 | 0 | Acrobat Javascript API has a setPageBoxes method, but Adobe doesn't provide any Python code samples. Only C++, C# and VB. | 2 | 23 | 0 | I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python. | Cropping pages of a .pdf file | 0 | 0 | 0 | 40,532 |
457,207 | 2009-01-19T10:43:00.000 | 0 | 0 | 0 | 0 | python,pdf,pypdf2,pypdf | 459,523 | 7 | false | 1 | 0 | You can convert the PDF to PostScript (pdftops or pdf2ps) and then use text processing on the PostScript file. After that you can convert the output back to PDF.
This works nicely if the PDFs you want to process are all generated by the same application and are somewhat similar. If they come from different sources it is usually too hard to process the PostScript files - the structure varies too much. But even then you might be able to fix page sizes and the like with a few regular expressions. | 7 | 23 | 0 | I was wondering if anyone had any experience in working programmatically with .pdf files. I have a .pdf file and I need to crop every page down to a certain size.
After a quick Google search I found the pyPdf library for python but my experiments with it failed. When I changed the cropBox and trimBox attributes on a page object the results were not what I had expected and appeared to be quite random.
Has anyone had any experience with this? Code examples would be well appreciated, preferably in python. | Cropping pages of a .pdf file | 0 | 0 | 0 | 40,532 |
458,311 | 2009-01-19T16:58:00.000 | 6 | 0 | 1 | 0 | python,easy-install,egg | 1,878,505 | 4 | false | 0 | 0 | sh setuptools-0.6c9-py2.5.egg | 1 | 17 | 0 | I have Python 2.6 and I want to install easy _ install module. The problem is that the only available installation package of easy _ install for Python 2.6 is an .egg file! What should I do? | How do I install an .egg file without easy_install in Windows? | 1 | 0 | 0 | 25,527 |
458,340 | 2009-01-19T17:05:00.000 | 14 | 0 | 0 | 0 | python,qt,layout,printing,gpl | 458,353 | 10 | false | 1 | 0 | There's LaTeX. Not sure if that falls into the "as easy to use as html" category, but it's not hard. | 1 | 15 | 0 | I'm using Python and Qt 4.4 and I have to print some pages. Initially I thought I'd use HTML with CSS to produce those pages. But HTML has some limitations.
Now the question is: is there anything that's better than HTML but just (or almost) as easy to use? Additionally, it should be GPL-compatible.
Edit:
kdgregory & Mark G: The most obvious limitation is that I can't specify the printer margins. There is another problem: How do I add page numbers?
Jeremy French: One thing I have to print is a list of all the products someone ordered which can spread over a few pages. | Is there a better layout language than HTML for printing? | 1 | 0 | 0 | 6,190 |
460,068 | 2009-01-20T03:43:00.000 | 3 | 0 | 0 | 1 | python,sockets,twisted,multiprocess | 460,245 | 1 | true | 0 | 0 | It sounds like you might need to keep a reference to the transport (or protocol) along with the bytes that just came in on that protocol in your 'event' object. That way responses that came in on a connection go out on the same connection.
If things don't need to be processed serially perhaps you should think about setting up functors that can handle the data in parallel to remove the need for queueing. Just keep in mind that you will need to protect critical sections of your code.
Edit:
Judging from your other question about evaluating your server design it would seem that processing in parallel may not be possible for your situation, so my first suggestion stands. | 1 | 2 | 0 | I have a "manager" process on a node, and several worker processes. The manager is the actual server who holds all of the connections to the clients. The manager accepts all incoming packets and puts them into a queue, and then the worker processes pull the packets out of the queue, process them, and generate a result. They send the result back to the manager (by putting them into another queue which is read by the manager), but here is where I get stuck: how do I send the result to a specific socket? When dealing with the processing of the packets on a single process, it's easy, because when you receive a packet you can reply to it by just grabbing the "transport" object in-context. But how would I do this with the method I'm using? | Python/Twisted - Sending to a specific socket object? | 1.2 | 0 | 1 | 829 |
460,144 | 2009-01-20T04:38:00.000 | 6 | 0 | 0 | 1 | python,tcp,twisted,packet | 460,224 | 3 | true | 0 | 0 | In the dataReceived method you get back the data as a string of indeterminate length meaning that it may be a whole message in your protocol or it may only be part of the message that some 'client' sent to you. You will have to inspect the data to see if it comprises a whole message in your protocol.
I'm currently using Twisted on one of my projects to implement a protocol, and decided to use the struct module to pack/unpack my data. The protocol I am implementing has a fixed header size, so I don't construct any messages until I've read at least HEADER_SIZE bytes. The total message size is declared in this header data portion.
I guess you don't really need to define a message length as part of your protocol but it helps. If you didn't define one you would have to have a special delimiter that determines when a message begins/ends. Sort of how the FIX protocol uses the SOH byte to delimit fields. Though it does have a required field that tells you how long a message is (just not how many fields are in a message). | 3 | 6 | 0 | In Twisted when implementing the dataReceived method, there doesn't seem to be any examples which refer to packets being fragmented. In every other language this is something you manually implement, so I was just wondering if this is done for you in twisted already or what? If so, do I need to prefix my packets with a length header? Or do I have to do this manually? If so, what way would that be? | Python/Twisted - TCP packet fragmentation? | 1.2 | 0 | 0 | 3,827 |
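The fixed-size-header scheme described above can be sketched as a small, protocol-agnostic framing buffer. The 4-byte big-endian length prefix here is an assumption (not the FIX layout); in a Twisted protocol you would call feed() from dataReceived:

```python
import struct

HEADER = struct.Struct("!I")  # assumed 4-byte big-endian length prefix

class MessageBuffer:
    """Accumulates a TCP byte stream and yields only whole length-prefixed
    messages, the way a dataReceived handler must, since data may arrive
    in arbitrary fragments."""
    def __init__(self):
        self._buf = b""

    def feed(self, data):
        self._buf += data
        messages = []
        while len(self._buf) >= HEADER.size:
            (length,) = HEADER.unpack_from(self._buf)
            if len(self._buf) < HEADER.size + length:
                break  # the body hasn't fully arrived yet
            start = HEADER.size
            messages.append(self._buf[start:start + length])
            self._buf = self._buf[start + length:]
        return messages
```

feed() tolerates any fragmentation: call it with whatever chunk just arrived and it returns only the messages that are complete so far.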
460,144 | 2009-01-20T04:38:00.000 | 6 | 0 | 0 | 1 | python,tcp,twisted,packet | 461,477 | 3 | false | 0 | 0 | When dealing with TCP, you should really forget all notion of 'packets'. TCP is a stream protocol - you stream data in and data streams out the other side. Once the data is sent, it is allowed to arrive in as many or as few blocks as it wants, as long as the data all arrives in the right order. You'll have to manually do the delimitation as with other languages, with a length field, or a message type field, or a special delimiter character, etc. | 3 | 6 | 0 | In Twisted when implementing the dataReceived method, there doesn't seem to be any examples which refer to packets being fragmented. In every other language this is something you manually implement, so I was just wondering if this is done for you in twisted already or what? If so, do I need to prefix my packets with a length header? Or do I have to do this manually? If so, what way would that be? | Python/Twisted - TCP packet fragmentation? | 1 | 0 | 0 | 3,827 |
460,144 | 2009-01-20T04:38:00.000 | 2 | 0 | 0 | 1 | python,tcp,twisted,packet | 817,378 | 3 | false | 0 | 0 | You can also use a LineReceiver protocol | 3 | 6 | 0 | In Twisted when implementing the dataReceived method, there doesn't seem to be any examples which refer to packets being fragmented. In every other language this is something you manually implement, so I was just wondering if this is done for you in twisted already or what? If so, do I need to prefix my packets with a length header? Or do I have to do this manually? If so, what way would that be? | Python/Twisted - TCP packet fragmentation? | 0.132549 | 0 | 0 | 3,827 |
462,068 | 2009-01-20T16:37:00.000 | 1 | 0 | 0 | 0 | python,netbeans,project | 462,107 | 1 | false | 1 | 0 | Python support is in beta, and as someone who has worked with NB for the past 2 years, I can say that even release versions are buggy and sometimes crash. Early Ruby support was also very shaky. | 1 | 1 | 0 | Can you give me some links, or explain how to configure an existing Python project in NetBeans?
I've been trying it these days and it keeps crashing; also, code navigation doesn't work well and I have problems with debugging. Surely these problems are related to my limited experience with Python, and I also need support with trivial things such as organizing source folders, imports, etc. Thank you very much.
Valerio | Python with Netbeans 6.5 | 0.197375 | 0 | 0 | 263 |
462,320 | 2009-01-20T17:44:00.000 | 1 | 0 | 0 | 0 | wxpython,wxwidgets,shaped-window | 462,387 | 1 | true | 0 | 1 | Using a menu is a no-go, because wxWidgets can't put widgets on a menu. Using the shaped frame would be possible in principle, but the problem is then to get the position of the button you clicked, to display the window at the right position. I tried to do that back then, but didn't have luck (in C++ wxWidgets). Maybe this situation changed in between though, good luck.
You can also try a wxComboCtrl, which allows you to have a custom popup window. That one could then display the radio boxes and the input control. | 1 | 1 | 0 | I'm looking for a way to implement this design in wxPython on Linux...
I have a toolbar with a button, when the button is pressed a popup should appear, mimicking an extension of the toolbar (like a menu), and this popup should show two columns of radio buttons (say 2x5) and a text box...
My main problem is that the toolbar is small in height, so the popup has to overflow the bounds of the window/client area..
I thought of two possible implementations:
by using a wxMenu, since a menu can be drawn outside the client area. I fear that the layout possibilities aren't flexible enough for my goal
by using a shaped frame. Pressing the button would re-shape the frame and draw the needed widgets as requested.
My question is: am I missing something / wrong on something? :) Is this doable at all? | Window-overflowing widget in wxWidgets | 1.2 | 0 | 0 | 551 |
462,933 | 2009-01-20T20:20:00.000 | 1 | 0 | 0 | 0 | python,wxpython,transparency,opacity | 464,706 | 2 | false | 0 | 1 | You probably need some graphics rendering widget. As far as I know, in wxPython you can use either built-in wxGraphicsContext or pyCairo directly. Cairo is more powerful. However, I don't know the details. | 2 | 1 | 0 | I am adding some wx.StaticText objects on top of my main wx.Frame, which already has a background image applied. However, the StaticText always seems to draw with a solid (opaque) background color, hiding the image. I have tried creating a wx.Color object and changing the alpha value there, but that yields no results. Is there any way I can put text on the frame and have the background shine through? And furthermore, is it possible to make the text itself translucent? Thanks. | Is it possible to make text translucent in wxPython? | 0.099668 | 0 | 0 | 472 |
462,933 | 2009-01-20T20:20:00.000 | 0 | 0 | 0 | 0 | python,wxpython,transparency,opacity | 598,202 | 2 | false | 0 | 1 | I would try drawing with aggdraw onto a small canvas.
Any Static Text uses the platform's native label machinery, so you don't get that sort of control over it. | 2 | 1 | 0 | I am adding some wx.StaticText objects on top of my main wx.Frame, which already has a background image applied. However, the StaticText always seems to draw with a solid (opaque) background color, hiding the image. I have tried creating a wx.Color object and changing the alpha value there, but that yields no results. Is there any way I can put text on the frame and have the background shine through? And furthermore, is it possible to make the text itself translucent? Thanks. | Is it possible to make text translucent in wxPython? | 0 | 0 | 0 | 472 |
463,714 | 2009-01-21T00:38:00.000 | 11 | 0 | 0 | 0 | python,django,internationalization,translation | 463,928 | 2 | false | 1 | 0 | The fuzzy marker is added to the .po file by makemessages. When you have a new string (with no translations), it looks for similar strings, and includes them as the translation, with the fuzzy marker. This means, this is a crude match, so don't display it to the user, but it could be a good start for the human translator.
It isn't a Django behavior, it comes from the gettext facility. | 1 | 9 | 0 | I have a medium sized Django project, (running on AppEngine if it makes any difference), and have all the strings living in .po files like they should.
I'm seeing strange behavior where certain strings just don't translate. They show up in the .po file when I run make_messages, with the correct file locations marked where my {% trans %} tags are. The translations are in place and look correct compared to other strings on either side of them. But when I display the page in question, about 1/4 of the strings simply don't translate.
Digging into the relevant generated .mo file, I don't see either the msgid or the msgstr present.
Has anybody seen anything similar to this? Any idea what might be happening?
trans tags look correct
.po files look correct
no errors during compile_messages | Django missing translation of some strings. Any idea why? | 1 | 0 | 0 | 2,057 |
463,963 | 2009-01-21T02:54:00.000 | -3 | 0 | 1 | 1 | python,ruby,bash,shell,parallel-processing | 463,981 | 12 | false | 0 | 0 | Can you elaborate on what you mean by "in parallel"? It sounds like you need to implement some sort of locking in the queue so your entries are not selected twice, etc., and the commands run only once.
Most queue systems cheat -- they just write a giant to-do list, then select e.g. ten items, work them, and select the next ten items. There's no parallelization.
If you provide some more details, I'm sure we can help you out. | 3 | 45 | 0 | I have a list/queue of 200 commands that I need to run in a shell on a Linux server.
I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer.
When a process finishes I want the next command to be "popped" from the queue and executed.
Does anyone have code to solve this problem?
Further elaboration:
There's 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done.
The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly. | Parallel processing from a command queue on Linux (bash, python, ruby... whatever) | -0.049958 | 0 | 0 | 22,010
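A minimal sketch of the thread-pool-over-a-queue scheme the question describes, using only the Python standard library (the worker count and command strings are illustrative):

```python
import queue
import subprocess
import threading

def run_queue(commands, max_workers=10):
    """Run shell commands with at most max_workers going at once.
    Each thread pops the next command when its current one finishes,
    and dies when the queue is empty."""
    q = queue.Queue()
    for cmd in commands:
        q.put(cmd)

    def worker():
        while True:
            try:
                cmd = q.get_nowait()
            except queue.Empty:
                return  # no more work: this thread dies
            subprocess.call(cmd, shell=True)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(max_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # all threads dead => all the work is done
```

Each thread dies when get_nowait() raises Empty, so joining the threads means every command in the queue has run exactly once.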
463,963 | 2009-01-21T02:54:00.000 | 7 | 0 | 1 | 1 | python,ruby,bash,shell,parallel-processing | 464,007 | 12 | false | 0 | 0 | GNU make (and perhaps other implementations as well) has the -j argument, which governs how many jobs it will run at once. When a job completes, make will start another one. | 3 | 45 | 0 | I have a list/queue of 200 commands that I need to run in a shell on a Linux server.
I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer.
When a process finishes I want the next command to be "popped" from the queue and executed.
Does anyone have code to solve this problem?
Further elaboration:
There's 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done.
The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly. | Parallel processing from a command queue on Linux (bash, python, ruby... whatever) | 1 | 0 | 0 | 22,010
463,963 | 2009-01-21T02:54:00.000 | 13 | 0 | 1 | 1 | python,ruby,bash,shell,parallel-processing | 628,543 | 12 | false | 0 | 0 | For this kind of job PPSS was written: Parallel Processing Shell Script. Google for that name and you will find it; I won't linkspam.
I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer.
When a process finishes I want the next command to be "popped" from the queue and executed.
Does anyone have code to solve this problem?
Further elaboration:
There's 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done.
The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly. | Parallel processing from a command queue on Linux (bash, python, ruby... whatever) | 1 | 0 | 0 | 22,010
464,040 | 2009-01-21T03:59:00.000 | 0 | 0 | 0 | 0 | python,post,get | 464,086 | 6 | false | 0 | 0 | Python is only a language; to get GET and POST data, you need a web framework or toolkit written in Python. Django is one; as Charlie points out, the cgi and urllib standard modules are others. Also available are TurboGears, Pylons, CherryPy, web.py, mod_python, FastCGI, etc., etc.
In Django, your view functions receive a request argument which has request.GET and request.POST. Other frameworks will do it differently. | 1 | 142 | 0 | In PHP you can just use $_POST for POST and $_GET for GET (Query string) variables. What's the equivalent in Python? | How are POST and GET variables handled in Python? | 0 | 0 | 0 | 198,745 |
464,314 | 2009-01-21T07:14:00.000 | 1 | 0 | 0 | 0 | python | 464,347 | 3 | true | 0 | 0 | When you say the last minute, do you mean the exact last seconds or the last full minute from x:00 to x:59? The latter will be easier to implement and would probably give accurate results. You have one prev variable holding the value of the hits for the previous minute. Then you have a current value that increments every time there is a new hit. You return the value of prev to the users. At the change of the minute you swap prev with current and reset current.
If you want higher analysis you could split the minute in 2 to 6 slices. You need a variable or list entry for every slice. Let's say you have 6 slices of 10 seconds. You also have an index variable pointing to the current slice (0..5). For every hit you increment a temp variable. When the slice is over, you replace the value of the indexed variable with the value of temp, reset temp and move the index forward. You return the sum of the slice variables to the users. | 3 | 2 | 0 | This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?
I was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas? | Python - Hits per minute implementation? | 1.2 | 0 | 1 | 303 |
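The slice scheme from the answer above can be sketched like this, using the 6x10-second example. The injectable clock and the class name are illustrative choices, there only to make the rolling logic easy to test:

```python
import time

class HitCounter:
    """Approximate hits-in-the-last-minute using N fixed time slices."""
    def __init__(self, slices=6, slice_seconds=10, clock=time.time):
        self.n = slices
        self.slice_seconds = slice_seconds
        self.clock = clock
        self.counts = [0] * slices
        self.current_slice = self._slice()

    def _slice(self):
        # Absolute slice number since the epoch.
        return int(self.clock() // self.slice_seconds)

    def _advance(self):
        now = self._slice()
        # Zero out every slice that has expired since the last hit/query.
        steps = min(now - self.current_slice, self.n)
        for i in range(1, steps + 1):
            self.counts[(self.current_slice + i) % self.n] = 0
        self.current_slice = now

    def hit(self):
        self._advance()
        self.counts[self.current_slice % self.n] += 1

    def total(self):
        self._advance()
        return sum(self.counts)
```

Because the window is built from fixed slices, total() is approximate at the slice boundaries, which matches the "good enough" rolling figure the question asks for.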
464,314 | 2009-01-21T07:14:00.000 | 1 | 0 | 0 | 0 | python | 464,329 | 3 | false | 0 | 0 | For what it's worth, your implementation above won't work if you don't receive a packet every second, as the next second entry won't necessarily be reset to 0.
Either way, afaik the "correct" way to do this, ala logs analysis, is to keep a limited record of all the queries you receive. So just chuck the query, time received etc. into a database, and then simple database queries will give you the use over a minute, or any minute in the past. Not sure whether this is too heavyweight for you, though. | 3 | 2 | 0 | This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?
I was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas? | Python - Hits per minute implementation? | 0.066568 | 0 | 1 | 303 |
464,314 | 2009-01-21T07:14:00.000 | 3 | 0 | 0 | 0 | python | 464,322 | 3 | false | 0 | 0 | A common pattern for solving this in other languages is to let the thing being measured simply increment an integer. Then you leave it to the listening client to determine intervals and frequencies.
So you basically do not let the socket server know about stuff like "minutes", because that's a feature the observer calculates. Then you can also support multiple listeners with different interval resolution.
I suppose you want some kind of ring-buffer structure to do the rolling logging. | 3 | 2 | 0 | This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?
I was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas? | Python - Hits per minute implementation? | 0.197375 | 0 | 1 | 303 |
464,342 | 2009-01-21T07:33:00.000 | 1 | 0 | 1 | 0 | python,list,sorting | 464,357 | 21 | false | 0 | 0 | Well, the naive approach (combine the 2 lists into a large one and sort it) will be O(N*log(N)) complexity. On the other hand, if you implement the merge manually (I do not know of any ready-made code in the Python libs for this, but I'm no expert) the complexity will be O(N), which is clearly faster.
The idea is described very well in the post by Barry Kelly. | 2 | 82 | 0 | I have two lists of objects. Each list is already sorted by a property of the object that is of the datetime type. I would like to combine the two lists into one sorted list. Is the best way just to do a sort or is there a smarter way to do this in Python? | Combining two sorted lists in Python | 0.009524 | 0 | 0 | 123,744
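For what it's worth, in modern Python the linear merge already exists in the standard library as heapq.merge (and since Python 3.5 it accepts a key= function, which fits the datetime-property case in the question):

```python
import heapq

a = [1, 3, 4, 7]
b = [0, 2, 5, 6, 8, 9]

# heapq.merge consumes both already-sorted inputs lazily and yields
# one sorted stream in O(N) total work (no full re-sort).
merged = list(heapq.merge(a, b))
```

For objects sorted by a datetime attribute you would write something like heapq.merge(a, b, key=operator.attrgetter("created")), where "created" is whatever the attribute is actually called.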
464,342 | 2009-01-21T07:33:00.000 | -1 | 0 | 1 | 0 | python,list,sorting | 51,053,496 | 21 | false | 0 | 0 | Hope this helps. Pretty simple and straightforward:
l1 = [1, 3, 4, 7]
l2 = [0, 2, 5, 6, 8, 9]
l3 = l1 + l2
l3.sort()
print (l3)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] | 2 | 82 | 0 | I have two lists of objects. Each list is already sorted by a property of the object that is of the datetime type. I would like to combine the two lists into one sorted list. Is the best way just to do a sort or is there a smarter way to do this in Python? | Combining two sorted lists in Python | -0.009524 | 0 | 0 | 123,744 |
464,543 | 2009-01-21T09:15:00.000 | 1 | 1 | 0 | 1 | python,unit-testing,twisted | 465,422 | 4 | false | 1 | 0 | I think you chose the wrong direction. It's true that the Trial docs are very light. But Trial is based on unittest and only adds some things to deal with the reactor loop and asynchronous calls (it's not easy to write tests that deal with Deferreds). All your tests that don't involve Deferreds/asynchronous calls will be exactly like normal unittest tests.
The trial command is a test runner (a bit like nose), so you don't have to write test suites for your tests. You will save time with it. On top of that, the trial command can output profiling and coverage information. Just run trial -h for more info.
But in any case, the first thing you should ask yourself is which kind of tests you need the most: unit tests, integration tests or system tests (black-box). It's possible to do all of them with Trial, but it's not necessarily always the best fit. | 3 | 3 | 0 | I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML statically, and then write some tests on that static data using unittest.
My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach?
Also, using this method has several disadvantages, like not being able to access the database layer in order to build/rebuild the schema; and when is the test client going to connect to the server: per each unit test, or before running the test suite? | unit testing for an application server | 0.049958 | 0 | 0 | 1,357
464,543 | 2009-01-21T09:15:00.000 | 1 | 1 | 0 | 1 | python,unit-testing,twisted | 464,870 | 4 | true | 1 | 0 | "My question is: Is this a correct approach?"
It's what you chose. You made a lot of excuses, so I'm assuming that you're pretty well fixed on this course. It's not the best, but you've already listed all your reasons for doing it (and then asked follow-up questions on this specific course of action). "correct" doesn't enter into it anymore, so there's no answer to this question.
"what kind of tests are covered with this approach?"
They call it "black-box" testing. The application server is a black box that has a few inputs and outputs, and you can't test any of its internals. It's considered one acceptable form of testing because it tests the bottom-line external interfaces for acceptable behavior.
If you have problems, it turns out to be useless for doing diagnostic work. You'll find that you need to also do white-box testing on the internal structures.
"not being able to access the database layer in order to build/rebuild the schema,"
Why not? This is Python. Write a separate tool that imports that layer and does database builds.
"when will the test client going to connect to the server: per each unit test or before running the test suite?"
Depends on the intent of the test. Depends on your use cases. What happens in the "real world" with your actual intended clients?
You'll want to test client-like behavior, making connections the way clients make connections.
Also, you'll want to test abnormal behavior, like clients dropping connections or doing things out of order, or unconnected. | 3 | 3 | 0 | I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML statically, and then write some tests on that static data using unittest.
My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach?
Also, using this method has several disadvantages, like not being able to access the database layer in order to build/rebuild the schema; and when is the test client going to connect to the server: per each unit test, or before running the test suite? | unit testing for an application server | 1.2 | 0 | 0 | 1,357
464,543 | 2009-01-21T09:15:00.000 | 0 | 1 | 0 | 1 | python,unit-testing,twisted | 464,596 | 4 | false | 1 | 0 | I haven't used Twisted before, and the Twisted/Trial documentation isn't stellar from what I just saw, but it'll likely take you 2-3 days to correctly implement the test system you describe above. Now, like I said, I have no idea about Trial, but I guess you could probably get it working in 1-2 days, since you already have a Twisted application. So if Trial gives you more coverage in less time, I'd go with Trial.
But remember, this is just an answer from a very cursory look at the docs. | 3 | 3 | 0 | I wrote an application server (using python & twisted) and I want to start writing some tests. But I do not want to use Twisted's Trial due to time constraints and not having time to play with it now. So here is what I have in mind: write a small test client that connects to the app server and makes the necessary requests (the communication protocol is some in-house XML), store the received XML statically, and then write some tests on that static data using unittest.
My question is: Is this a correct approach and if yes, what kind of tests are covered with this approach?
Also, using this method has several disadvantages, like not being able to access the database layer in order to build/rebuild the schema; and when is the test client going to connect to the server: per each unit test, or before running the test suite? | unit testing for an application server | 0 | 0 | 0 | 1,357
465,605 | 2009-01-21T14:50:00.000 | 1 | 0 | 1 | 0 | python,readline,ipython | 488,461 | 3 | false | 0 | 0 | It looks like I can use readline.set_completion_display_matches_hook([function]) (new in Python 2.6) to display the results. The completer would return a list of possibilities as usual, but would also store the results of inspect.classify_class_attrs(cls) where applicable. The completion_display_matches_hook would have to hold a reference to the completer to retrieve the most recent list of completions plus the classification information I am looking for because only receives a list of match names in its arguments. Then the hook displays the list of completions in a pleasing way. | 2 | 5 | 0 | When an object has hundreds of methods, tab completion is hard to use. More often than not the interesting methods are the ones defined or overridden by the inspected object's class and not its base classes.
How can I get IPython to group its tab completion possibilities so the methods and properties defined in the inspected object's class come first, followed by those in base classes?
It looks like the undocumented inspect.classify_class_attrs(cls) function along with inspect.getmro(cls) give me most of the information I need (these were originally written to implement python's help(object) feature).
By default readline displays completions alphabetically, but the function used to display completions can be replaced with ctypes or the readline module included with Python 2.6 and above. I've overridden readline's completions display and it works great.
Now all I need is a method to merge per-class information (from inspect.* per above) with per-instance information, sort the results by method resolution order, pretty print and paginate.
For extra credit, it would be great to store the chosen autocompletion, and display the most popular choices first next time autocomplete is attempted on the same object. | How do I make IPython organize tab completion possibilities by class? | 0.066568 | 0 | 0 | 2,102 |
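The ordering logic can be kept separate from the readline plumbing. A rough sketch: classify each completion by whether it is defined directly on the object's class, here with a crude vars(cls) membership test. (inspect.classify_class_attrs gives the richer per-MRO data mentioned above, and wiring the result into readline.set_completion_display_matches_hook is left out.)

```python
def group_matches(obj, matches):
    """Split completion candidates into (own, inherited): names defined
    directly on type(obj) first, names coming from base classes after."""
    cls = type(obj)
    own = [m for m in matches if m in vars(cls)]
    inherited = [m for m in matches if m not in vars(cls)]
    return own, inherited

# Illustrative classes for trying the grouping out.
class Base:
    def ping(self):
        pass

class Child(Base):
    def pong(self):
        pass
```

A completion display hook would then print the "own" group first, then the "inherited" group, instead of one alphabetical list.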
465,605 | 2009-01-21T14:50:00.000 | 1 | 0 | 1 | 0 | python,readline,ipython | 467,430 | 3 | false | 0 | 0 | I don't think this can be accomplished easily. There's no mechanism in IPython to perform it in any case.
Initially I had thought you could modify IPython's source to change the order (e.g. by changing the dir2() function in genutils.py). However, it looks like readline alphabetically sorts the completions you pass to it, so this won't work (at least not without a lot more effort), though you could perhaps exclude methods on the base class completely.
How can I get IPython to group its tab completion possibilities so the methods and properties defined in the inspected object's class come first, followed by those in base classes?
It looks like the undocumented inspect.classify_class_attrs(cls) function along with inspect.getmro(cls) give me most of the information I need (these were originally written to implement python's help(object) feature).
By default readline displays completions alphabetically, but the function used to display completions can be replaced with ctypes or the readline module included with Python 2.6 and above. I've overridden readline's completions display and it works great.
Now all I need is a method to merge per-class information (from inspect.* per above) with per-instance information, sort the results by method resolution order, pretty print and paginate.
For extra credit, it would be great to store the chosen autocompletion, and display the most popular choices first next time autocomplete is attempted on the same object. | How do I make IPython organize tab completion possibilities by class? | 0.066568 | 0 | 0 | 2,102 |
465,795 | 2009-01-21T15:43:00.000 | 3 | 0 | 1 | 0 | python,perl,metadata | 465,840 | 8 | false | 0 | 0 | The simplest way to do what you want is this...
>>> text = "this is some of the sample text"
>>> words = [word for word in set(text.split(" ")) if len(word) > 3]
>>> words
['this', 'some', 'sample', 'text']
I don't know of any standard module that does this, but it wouldn't be hard to replace the limit on three-letter words with a lookup into a set of common English words. | 2 | 18 | 0 | I suppose I could take a text and remove high-frequency English words from it. By keywords, I mean that I want to extract the words that are most characteristic of the content of the text (tags). It doesn't have to be perfect; a good approximation is fine for my needs.
Has anyone done anything like that? Do you know of a Perl or Python library that does that?
Lingua::EN::Tagger is exactly what I asked for; however, I need a library that can work for French text too. | What is a simple way to generate keywords from a text? | 0.07486 | 0 | 0 | 4,188
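Swapping the three-letter cutoff for a stopword lookup, as the first answer suggests, is a one-line change. The stopword set below is a tiny illustrative sample; for real use (including the French case) you would load a full list from a file or a library such as NLTK:

```python
# Tiny illustrative stopword sample; not a real stopword list.
STOPWORDS = {"this", "is", "some", "of", "the", "a", "an", "and"}

def keywords(text):
    """Return the words of `text` that are not common stopwords."""
    return {w for w in text.lower().split() if w not in STOPWORDS}
```

Swapping STOPWORDS for a French list is all it takes to handle French text with the same function.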
465,795 | 2009-01-21T15:43:00.000 | 0 | 0 | 1 | 0 | python,perl,metadata | 469,690 | 8 | false | 0 | 0 | I think the most accurate way that still maintains a semblance of simplicity would be to count the word frequencies in your source, then weight them according to their frequencies in common English (or whatever other language) usage.
Words that appear less frequently in common use, like "coffeehouse" are more likely to be a keyword than words that appear more often, like "dog." Still, if your source mentions "dog" 500 times and "coffeehouse" twice it's more likely that "dog" is a keyword even though it's a common word.
Deciding on the weighting scheme would be the difficult part. | 2 | 18 | 0 | I suppose I could take a text and remove high-frequency English words from it. By keywords, I mean that I want to extract the words that are most characteristic of the content of the text (tags). It doesn't have to be perfect; a good approximation is fine for my needs.
Has anyone done anything like that? Do you know of a Perl or Python library that does that?
Lingua::EN::Tagger is exactly what I asked for; however, I need a library that can work for French text too. | What is a simple way to generate keywords from a text? | 0 | 0 | 0 | 4,188
466,684 | 2009-01-21T19:40:00.000 | 1 | 0 | 1 | 1 | python,operating-system | 466,755 | 7 | false | 0 | 0 | It looks like you want to get a lot more information than the standard Python library offers. If I were you, I would download the source code for 'ps' or 'top', or the Gnome/KDE version of the same, or any number of system monitoring/graphing programs which are more likely to have all the necessary Unix cross platform bits, see what they do, and then make the necessary native calls with ctypes.
It's trivial to detect the platform. For example, with ctypes you might try to load libc.so; if that throws an exception, try to load 'msvcrt.dll', and so on. Not to mention simply checking the operating system's name with os.name. Then just delegate calls from your new cross-platform API to the appropriate platform-specific (sorry) implementation.
When you're done, don't forget to upload the resulting package to pypi. | 1 | 25 | 0 | Using Python, how can information such as CPU usage, memory usage (free, used, etc), process count, etc be returned in a generic manner so that the same code can be run on Linux, Windows, BSD, etc?
Alternatively, how could this information be returned on all the above systems with the code specific to that OS being run only if that OS is indeed the operating environment? | How can I return system information in Python? | 0.028564 | 0 | 0 | 29,587 |
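A minimal sketch of the dispatch idea described in the answer above: detect the platform, then delegate to a platform-specific implementation. Only the POSIX branch is real here; the Windows branch is a stub you would fill in with ctypes calls, as the answer suggests:

```python
import os
import platform

def get_loadavg():
    """Return the 1-minute load average in a platform-dependent way.
    Only the POSIX branch is implemented; the Windows branch is a stub
    (assumption: you would implement it with ctypes/kernel32 calls)."""
    if os.name == "posix":
        return os.getloadavg()[0]  # works on Linux, BSD, macOS
    elif os.name == "nt":
        raise NotImplementedError("implement with ctypes + kernel32 here")
    else:
        raise NotImplementedError(platform.system())
```

The same pattern (check `os.name`, delegate per platform) extends to memory usage, process counts, and so on.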
466,897 | 2009-01-21T20:43:00.000 | 1 | 1 | 1 | 0 | .net,python,ruby,ironpython,ironruby | 466,930 | 7 | false | 0 | 0 | IronPython/IronRuby are built to work on the .net virtual machine, so they are as you say essentially platform specific.
Apparently they are compatible with Python and Ruby as long as you don't use any of the .net framework in your programs. | 6 | 5 | 0 | I'm curious about how .NET will affect Python and Ruby applications.
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts? | How will Python and Ruby applications be affected by .NET? | 0.028564 | 0 | 0 | 501 |
466,897 | 2009-01-21T20:43:00.000 | 0 | 1 | 1 | 0 | .net,python,ruby,ironpython,ironruby | 467,031 | 7 | false | 0 | 0 | You answer your first question with the second one: if you don't use anything from .Net, only the original libs provided by the language implementation, you could interpret your *.py or *.rb file with another implementation and it should work.
The advantage would be: if you're a .Net shop, you usually take care of having the right framework installed on client machines, etc. If you now want Python or Ruby code, you need to support another "framework": distribute installs, take care of versioning problems, etc. So there are two advantages: using the .Net framework's power inside another language, plus keeping distribution/maintenance as simple as possible. | 6 | 5 | 0 | I'm curious about how .NET will affect Python and Ruby applications.
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts? | How will Python and Ruby applications be affected by .NET? | 0 | 0 | 0 | 501 |
466,897 | 2009-01-21T20:43:00.000 | 0 | 1 | 1 | 0 | .net,python,ruby,ironpython,ironruby | 467,067 | 7 | false | 0 | 0 | It would be cool to run Rails/Django under IIS rather than Apache/Mongrel-type solutions | 6 | 5 | 0 | I'm curious about how .NET will affect Python and Ruby applications.
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts? | How will Python and Ruby applications be affected by .NET? | 0 | 0 | 0 | 501 |
466,897 | 2009-01-21T20:43:00.000 | 1 | 1 | 1 | 0 | .net,python,ruby,ironpython,ironruby | 467,145 | 7 | false | 0 | 0 | If you create a library or framework, people can use it on .NET with their .NET code. That's pretty cool for them, and for you!
When developing an application, if you use .NET's facilities with abandon then you lose "cross-platformity", which is not always an issue.
If you wrap these uses with an internal API, you can later replace the .NET implementations with pure Python, wrapped C (for CPython), or Java (for Jython).
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts? | How will Python and Ruby applications be affected by .NET? | 0.028564 | 0 | 0 | 501 |
466,897 | 2009-01-21T20:43:00.000 | 1 | 1 | 1 | 0 | .net,python,ruby,ironpython,ironruby | 467,385 | 7 | false | 0 | 0 | According to the Mono page, IronPython is compatible with Mono's implementation of the .Net runtime, so executables should work both on Windows and Linux. | 6 | 5 | 0 | I'm curious about how .NET will affect Python and Ruby applications.
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts? | How will Python and Ruby applications be affected by .NET? | 0.028564 | 0 | 0 | 501 |
466,897 | 2009-01-21T20:43:00.000 | 2 | 1 | 1 | 0 | .net,python,ruby,ironpython,ironruby | 1,285,474 | 7 | false | 0 | 0 | Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
IronRuby currently ships with most of the core ruby standard library, and support for ruby gems.
This means that it will support pretty much any native ruby app that doesn't rely on C extensions.
The flipside is that it will be possible to write native ruby apps in IronRuby that don't rely on the CLR, and those will be portable to MRI.
Whether or not people choose to create or use extensions for their apps using the CLR is the same question as to whether people create or use C extensions for MRI - one is no more portable than the other.
There is a side-question of "because it is so much easier to create IronRuby extensions in C# than it is to create CRuby extensions in C, will people create extensions where they should be sticking to native ruby code?", but that's entirely subjective.
On the whole though, I think anything that makes creating extensions easier is a big win.
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts?
Performance: IronRuby is already faster for the most part than MRI 1.8, and isn't far off MRI 1.9, and things will only improve in future. I think python is similar in this respect.
Deployment: As people have mentioned, running a native ruby cross-platform rails app inside IIS is an attractive proposition to some windows-based developers, as it lets them better integrate with existing servers/management infrastructure/etc
Stability: While MRI 1.9 is much better than 1.8 was, I don't think anyone could disagree that CLR has a much better garbage collector and base runtime than C ruby does. | 6 | 5 | 0 | I'm curious about how .NET will affect Python and Ruby applications.
Will applications written in IronPython/IronRuby be so specific to the .NET environment, that they will essentially become platform specific?
If they don't use any of the .NET features, then what is the advantage of IronPython/IronRuby over their non .NET counterparts? | How will Python and Ruby applications be affected by .NET? | 0.057081 | 0 | 0 | 501 |
467,878 | 2009-01-22T02:38:00.000 | 5 | 0 | 1 | 0 | python | 467,928 | 1 | true | 0 | 0 | You would have to find an algorithm that cuts off more than one unwanted permutation after a single check, in order to gain anything. The obvious strategy is to build the permutations sequentially, for example, in a tree. Each cut then eliminates a whole branch.
edit:
Example: in the set (A B C D), let's say that B and C, and A and D are not allowed to be neighbours.
(A) (B) (C) (D)
/ | \ / | \ / | \ / | \
AB AC AD BA BC BD CA CB CD DA DB DC
| \ | \ X / \ X / \ / \ X / \ X / \ / \
ABC ABD ACB ACD BAC BAD BDA BDC CAB CAD CDA CDB DBA DBC DCA DCB
X | X | | X X | | X X | | X | X
ABDC ACDB BACD BDCA CABD CDBA DBAC DCAB
v v v v v v v v
Each of the strings without parentheses needs a check. As you see, the Xs (where subtrees have been cut off) save checks, one if they are in the third row, but four if they are in the second row. We saved 24 of 60 checks here and got down to 36. However, there are only 24 permutations overall anyway, so if checking the restrictions (as opposed to building the lists) is the bottleneck, we would have been better off to just construct all the permutations and check them at the end... IF the checks couldn't be optimized when we go this way.
Now, as you see, the checks only need to be performed on the new part of each list. This makes the checks much leaner; actually, we divide the check that would be needed for a full permutation into small chunks. In the above example, we only have to look whether the added letter is allowed to stand besides the last one, not all the letters before.
However, even if we first construct and then filter, the checks could be cut short as soon as a no-no is encountered. So, on checking, there is no real gain compared to the first-build-then-filter algorithm; there is rather the danger of further overhead through more function calls.
What we do save is the time to build the lists, and the peak memory consumption. Building a list is generally rather fast, but peak memory consumption might be a consideration if the number of objects gets larger. For first-build-then-filter, both grow linearly with the number of objects. For the tree version, they grow more slowly, depending on the constraints. From a certain number of objects and rules on, there is also actual check saving.
In general, I think you would need to try out and time the two algorithms. If you really have only 5 objects, stick to the simple (filter rules (build-permutations set)). If your number of objects gets large, the tree algorithm will at some point perform noticeably better (you know, big O).
Um. Sorry, I got into lecture mode; bear with me. | 1 | 1 | 0 | I have a list of objects (for the sake of example, let's say 5). I want a list of some of the possible permutations. Specifically, given that some pairs are not together, and some triples don't make sandwiches, how can I generate all other permutations? I realize that I generate all of them first and check that they work, but I think it would be faster to not even consider the pairs and triples that don't work.
Am I wrong that it would be faster to check first and generate later?
How would I do it? | Permutations in python, with a twist | 1.2 | 0 | 0 | 590 |
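The subtree-pruning idea from the answer above can be sketched as a recursive generator. Adjacency constraints (pairs that may not stand next to each other, as in the answer's A/B/C/D example) are passed in as unordered pairs:

```python
def constrained_perms(items, forbidden_pairs):
    """Yield permutations of items, pruning any prefix whose last two
    elements form a forbidden adjacent pair (cutting the whole subtree)."""
    forbidden = {frozenset(p) for p in forbidden_pairs}

    def extend(prefix, remaining):
        if not remaining:
            yield list(prefix)
            return
        for i, item in enumerate(remaining):
            if prefix and frozenset((prefix[-1], item)) in forbidden:
                continue  # prune: no permutation with this prefix is valid
            yield from extend(prefix + [item], remaining[:i] + remaining[i + 1:])

    yield from extend([], list(items))
```

Run on the example from the answer, `constrained_perms("ABCD", [("B", "C"), ("A", "D")])` yields exactly the eight permutations left at the bottom of the tree (ABDC, ACDB, BACD, BDCA, CABD, CDBA, DBAC, DCAB).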
468,736 | 2009-01-22T11:22:00.000 | 2 | 0 | 0 | 0 | python,django,django-templates | 468,751 | 2 | true | 1 | 0 | "This seems wasteful" Why does it seem that way?
Every template is a mix of tags and text. In your case some block of text has already been visited by a template engine. So what? Once it's been transformed it's just text and passes through the next template engine very, very quickly.
Do you have specific performance problems? Are you not meeting your transaction throughput requirements? Is there a specific problem?
Is the code too complex? Is it hard to maintain? Does it break all the time?
I think your solution is adequate. I'm not sure template tags in dynamic content is good from a debugging point of view, but from a basic "template rendering" point of view, it is fine. | 1 | 1 | 0 | I've got a CMS that takes some dynamic content and renders it using a standard template. However I am now using template tags in the dynamic content itself so I have to do a render_to_string and then pass the results of that as a context variable to render_to_response. This seems wasteful.
What's a better way to do this? | Templates within templates. How to avoid rendering twice? | 1.2 | 0 | 0 | 561 |
470,139 | 2009-01-22T17:41:00.000 | 1 | 0 | 1 | 0 | python,evaluation,operator-precedence | 470,163 | 6 | false | 0 | 0 | Think of it as 1 + (+1*(+1*2)). The first + is an operator and the following plus signs are the sign of the second operand (= 2).
Just like 1---2 is the same as 1 - (-(-2)), or 1 - (-1*(-1*2)) | 2 | 32 | 0 | How does Python evaluate the expression 1+++2?
No matter how many + signs I put in between, it prints 3 as the answer. Can anyone please explain this behavior?
And for 1--2 it prints 3, and for 1---2 it prints -1 | Why does 1+++2 = 3? | 0.033321 | 0 | 0 | 6,102 |
470,139 | 2009-01-22T17:41:00.000 | 4 | 0 | 1 | 0 | python,evaluation,operator-precedence | 470,160 | 6 | false | 0 | 0 | 1+(+(+2)) = 3
1 - (-2) = 3
1 - (-(-2)) = -1 | 2 | 32 | 0 | How does Python evaluate the expression 1+++2?
No matter how many + signs I put in between, it prints 3 as the answer. Can anyone please explain this behavior?
And for 1--2 it prints 3, and for 1---2 it prints -1 | Why does 1+++2 = 3? | 0.132549 | 0 | 0 | 6,102 |
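You can verify the parsing described in the answers above directly: the first +/- binds as the binary operator, and every further sign is a unary operator applied to the right-hand operand:

```python
# The first +/- is the binary operator; the rest are unary signs on 2.
assert 1 + (+(+2)) == 3     # what the parser sees for 1+++2
assert 1 - (-2) == 3        # 1--2
assert 1 - (-(-2)) == -1    # 1---2
assert eval("1+++2") == 3
assert eval("1---2") == -1
```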
471,191 | 2009-01-22T22:57:00.000 | 4 | 0 | 1 | 0 | python,compilation | 30,850,028 | 10 | false | 0 | 0 | We use compiled code to distribute to users who do not have access to the source code. Basically, it stops inexperienced programmers from accidentally changing something or fixing bugs without telling us. | 3 | 284 | 0 | Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something?
I also notice that some files in my application get compiled into .pyc while others do not. Why is this? | Why compile Python code? | 0.07983 | 0 | 0 | 222,453 |
471,191 | 2009-01-22T22:57:00.000 | 2 | 0 | 1 | 0 | python,compilation | 471,222 | 10 | false | 0 | 0 | Yep, performance is the main reason and, as far as I know, the only reason.
If some of your files aren't getting compiled, maybe Python isn't able to write to the .pyc file, perhaps because of the directory permissions or something. Or perhaps the uncompiled files just aren't ever getting loaded... (scripts/modules only get compiled when they first get loaded) | 3 | 284 | 0 | Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something?
I also notice that some files in my application get compiled into .pyc while others do not. Why is this? | Why compile Python code? | 0.039979 | 0 | 0 | 222,453 |
471,191 | 2009-01-22T22:57:00.000 | 7 | 0 | 1 | 0 | python,compilation | 471,217 | 10 | false | 0 | 0 | There's certainly a performance difference when running a compiled script. If you run normal .py scripts, Python compiles them every time they are run, and this takes time. On modern machines this is hardly noticeable, but as the script grows it may become more of an issue. | 3 | 284 | 0 | Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something?
I also notice that some files in my application get compiled into .pyc while others do not. Why is this? | Why compile Python code? | 1 | 0 | 0 | 222,453 |
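A minimal sketch of forcing that compilation step yourself with the standard-library `py_compile` module (the throwaway temp file here stands in for any module; Python performs the same step implicitly when a module is first imported):

```python
import os
import py_compile
import tempfile

# Write a trivial module, then compile it to bytecode explicitly.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "w") as f:
    f.write("x = 1 + 1\n")

# py_compile.compile returns the path of the generated bytecode file.
pyc_path = py_compile.compile(path, cfile=path + "c")  # path + "c" -> .pyc
```

After this, the `.pyc` file exists on disk and can be loaded without re-parsing the source.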
471,279 | 2009-01-22T23:25:00.000 | 2 | 0 | 0 | 0 | python,model-view-controller,user-interface,architecture,wxpython | 471,297 | 3 | false | 0 | 1 | If you've looked at MVC you're probably moving in the right direction. MVC, MVP, Passive View, Supervising Controller. Those are all different ways, each with their own pros and cons, of accomplishing what you're after. I find that Passive View is the "ideal", but it causes you to introduce far too many widgets into your GUI interfaces (i.e. IInterface). In general I find that Supervising Controller is a good compromise. | 2 | 9 | 0 | This is going to be a generic question.
I am struggling with designing a GUI application, especially with handling the interactions between its different parts.
I don't know how I should deal with shared state. On one hand, shared state is bad, and things should be as explicit as possible. On the other hand, not having shared state introduces unwanted coupling between components.
An example:
I want my application to be extendable in an Emacs/Vim sort of way, via scripts. Clearly, some sort of shared state needs to be modified, so that the GUI will use it. My initial plan was having a global "session" that is accessible from everywhere, but I'm not so sure about it.
One tricky use case is key bindings. I want the user to be able to specify custom keybindings from a script. Each keybinding maps to an arbitrary command, that receives the session as the only argument.
Now, the editor component captures keypresses. It has to have access to the keymappings, which are per-session, so it needs access to the session. Is coupling the editor to the session a good idea? Other components will also need to access the keybindings, so the session now becomes shared and can be a singleton...
Is there any good reading about designing GUI applications that goes beyond MVC?
This is Python and wxPython, FWIW.
[EDIT]: Added concrete usecase. | Organising a GUI application | 0.132549 | 0 | 0 | 805 |
471,279 | 2009-01-22T23:25:00.000 | 1 | 0 | 0 | 0 | python,model-view-controller,user-interface,architecture,wxpython | 471,307 | 3 | false | 0 | 1 | In MVC, the Model stuff is the shared state of the information.
The Control stuff is the shared state of the GUI control settings and responses to mouse-clicks and what-not.
Your scripting angle can
1) Update the Model objects. This is good. The Control can be "Observers" of the model objects and the View be updated to reflect the observed changes.
2) Update the Control objects. This is not so good, but... The Control objects can then make appropriate changes to the Model and/or View.
I'm not sure what the problem is with MVC. Could you provide a more detailed design example with specific issues or concerns? | 2 | 9 | 0 | This is going to be a generic question.
I am struggling with designing a GUI application, especially with handling the interactions between its different parts.
I don't know how I should deal with shared state. On one hand, shared state is bad, and things should be as explicit as possible. On the other hand, not having shared state introduces unwanted coupling between components.
An example:
I want my application to be extendable in an Emacs/Vim sort of way, via scripts. Clearly, some sort of shared state needs to be modified, so that the GUI will use it. My initial plan was having a global "session" that is accessible from everywhere, but I'm not so sure about it.
One tricky use case is key bindings. I want the user to be able to specify custom keybindings from a script. Each keybinding maps to an arbitrary command, that receives the session as the only argument.
Now, the editor component captures keypresses. It has to have access to the keymappings, which are per-session, so it needs access to the session. Is coupling the editor to the session a good idea? Other components will also need to access the keybindings, so the session now becomes shared and can be a singleton...
Is there any good reading about designing GUI applications that goes beyond MVC?
This is Python and wxPython, FWIW.
[EDIT]: Added concrete usecase. | Organising a GUI application | 0.066568 | 0 | 0 | 805 |
471,546 | 2009-01-23T01:36:00.000 | 37 | 0 | 1 | 0 | python | 471,561 | 4 | true | 0 | 0 | You cannot override the and, or, and not boolean operators. | 3 | 39 | 0 | I tried overriding __and__, but that is for the & operator, not and - the one that I want. Can I override and? | Any way to override the and operator in Python? | 1.2 | 0 | 0 | 10,809 |
471,546 | 2009-01-23T01:36:00.000 | 3 | 0 | 1 | 0 | python | 471,559 | 4 | false | 0 | 0 | Not really. There's no special method name for the short-circuit logic operators. | 3 | 39 | 0 | I tried overriding __and__, but that is for the & operator, not and - the one that I want. Can I override and? | Any way to override the and operator in Python? | 0.148885 | 0 | 0 | 10,809 |
471,546 | 2009-01-23T01:36:00.000 | 47 | 0 | 1 | 0 | python | 471,567 | 4 | false | 0 | 0 | No, you can't override and and or. With the behavior these have in Python (i.e. short-circuiting), they are more like control-flow tools than operators, and overriding them would be more like overriding if than + or -.
You can influence the truth value of your objects (i.e. whether they evaluate as true or false) by overriding __nonzero__ (or __bool__ in Python 3). | 3 | 39 | 0 | I tried overriding __and__, but that is for the & operator, not and - the one that I want. Can I override and? | Any way to override the and operator in Python? | 1 | 0 | 0 | 10,809 |
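What you can control is illustrated below: overriding `__bool__` (Python 3; `__nonzero__` in Python 2) changes how your object is truth-tested inside `and`/`or`, while the operators' short-circuit behavior itself stays fixed:

```python
class Flag:
    def __init__(self, value):
        self.value = value
    def __bool__(self):          # __nonzero__ in Python 2
        return bool(self.value)

a, b = Flag(False), Flag(True)
# `and` still short-circuits and returns one of the operands unchanged;
# only the truth test applied to them is ours to define.
assert (a and b) is a   # a is falsy, so `and` returns it immediately
assert (b and a) is a   # b is truthy, so `and` returns the second operand
```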
471,660 | 2009-01-23T02:21:00.000 | 2 | 0 | 1 | 1 | python,twisted,multi-user | 474,353 | 2 | true | 0 | 0 | I think that B is problematic. The thread would only run on one CPU, and even if it runs a process, the thread is still running. A may be better.
It is best to try and measure both in terms of time and see which one is faster and which one scales well. However, I'll reiterate that I highly doubt that B will scale well. | 1 | 2 | 0 | In Python, if I want my server to scale well CPU-wise, I obviously need to spawn multiple processes. I was wondering which is better (using Twisted):
A) The manager process (the one who holds the actual socket connections) puts received packets into a shared queue (the one from the multiprocessing module), and worker processes pull the packets out of the queue, process them and send the results back to the client.
B) The manager process (the one who holds the actual socket connections) launches a deferred thread and then calls the apply() function on the process pool. Once the result returns from the worker process, the manager sends the result back to the client.
In both implementations, the worker processes use thread pools so they can work on more than one packet at once (since there will be a lot of database querying). | Python/Twisted multiuser server - what is more efficient? | 1.2 | 0 | 0 | 1,148 |
471,712 | 2009-01-23T02:49:00.000 | 7 | 1 | 0 | 0 | python,ironpython,ironpython-studio | 471,725 | 3 | false | 0 | 1 | The way you describe things, it sounds like your company is switching to Python simply for the sake of Python. Is there some specific reason you want to use Python? Is a more dynamic language necessary? Is functional programming going to help you at all? If you've got a perfectly good working set of tools in C#, why bother switching?
If you're set on switching, you may want to consider starting with standard Python unless you're specifically tied to the .NET libraries. You can write cross-platform GUIs using a number of different frameworks like wxPython, pyQt, etc. That said, Visual Studio has a far superior GUI designer to just about any of the tools out there for creating Python windowed layouts. | 2 | 16 | 0 | We are ready in our company to move everything to Python instead of C#. We are a consulting company and we usually write small projects in C#; we don't do huge projects, and our work is based more on complex mathematical models than on complex software structures. So we believe IronPython is a good platform for us because it provides standard GUI functionality on Windows and access to all of the .Net libraries.
I know IronPython Studio is not complete, and in fact I had a hard time adding my references, but I was wondering if someone could list some of the pros and cons of this migration for us, considering Python code is easier for our clients to read and we usually deliver a proof-of-concept prototype instead of fully functional code; our clients usually go ahead and implement the application themselves | Pros and cons of IronPython and IronPython Studio | 1 | 0 | 0 | 5,128 |
471,712 | 2009-01-23T02:49:00.000 | 18 | 1 | 0 | 0 | python,ironpython,ironpython-studio | 472,355 | 3 | true | 0 | 1 | My company, Resolver Systems, develops what is probably the biggest application written in IronPython yet. (It's called Resolver One, and it's a Pythonic spreadsheet). We are also hosting the Ironclad project (to run CPython extensions under IronPython) and that is going well (we plan to release a beta of Resolver One & numpy soon).
The reason we chose IronPython was the .NET integration - our clients want 100% integration on Windows and the easiest way to do that right now is .NET.
We design our GUI (without behaviour) in Visual Studio, compile it into a DLL and subclass it from IronPython to add behaviour.
We have found that IronPython is faster at some cases and slower at some others. However, the IronPython team is very responsive, whenever we report a regression they fix it and usually backport it to the bugfix release. If you worry about performance, you can always implement a critical part in C# (we haven't had to do that yet).
If you have experience with C#, then IronPython will be natural for you, and easier than C#, especially for prototypes.
Regarding IronPython Studio, we don't use it. Each of us has his editor of choice (TextPad, Emacs, Vim & Wing), and everything works fine. | 2 | 16 | 0 | We are ready in our company to move everything to Python instead of C#. We are a consulting company and we usually write small projects in C#; we don't do huge projects, and our work is based more on complex mathematical models than on complex software structures. So we believe IronPython is a good platform for us because it provides standard GUI functionality on Windows and access to all of the .Net libraries.
I know IronPython Studio is not complete, and in fact I had a hard time adding my references, but I was wondering if someone could list some of the pros and cons of this migration for us, considering Python code is easier for our clients to read and we usually deliver a proof-of-concept prototype instead of fully functional code; our clients usually go ahead and implement the application themselves | Pros and cons of IronPython and IronPython Studio | 1.2 | 0 | 0 | 5,128 |
473,498 | 2009-01-23T16:16:00.000 | 1 | 0 | 1 | 0 | python,python-imaging-library,dpi | 473,514 | 5 | false | 0 | 0 | Printers have various resolutions in which they print. If you select a print resolution of 200 DPI for instance (or if it's set as default in the printer driver), then a 200 pixel image should be one inch in size. | 4 | 5 | 0 | Using Python's Imaging Library I want to create a PNG file.
I would like it if when printing this image, without any scaling, it would always print at a known and consistent 'size' on the printed page.
Is the resolution encoded in the image?
If so, how do I specify it?
And even if it is, does this have any relevance when it goes to the printer? | When printing an image, what determines how large it will appear on a page? | 0.039979 | 0 | 0 | 4,781 |
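The relationship the answers describe is simple arithmetic: printed size in inches equals pixels divided by DPI, assuming the printer honours the DPI embedded in the image. A quick sketch:

```python
def printed_size_inches(width_px, height_px, dpi):
    """Physical print size when the printer honours the stated DPI."""
    return (width_px / dpi, height_px / dpi)

# A 600x400 pixel image tagged at 200 DPI prints at 3 x 2 inches.
assert printed_size_inches(600, 400, 200) == (3.0, 2.0)
```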
473,498 | 2009-01-23T16:16:00.000 | 0 | 0 | 1 | 0 | python,python-imaging-library,dpi | 473,539 | 5 | false | 0 | 0 | Much is going to depend on the software you're using to print. If you're placing the image in a Word document, it will scale according to the DPI, up to the width of your page. If you're putting it on a web page, the DPI will not matter at all. | 4 | 5 | 0 | Using Python's Imaging Library I want to create a PNG file.
I would like it if when printing this image, without any scaling, it would always print at a known and consistent 'size' on the printed page.
Is the resolution encoded in the image?
If so, how do I specify it?
And even if it is, does this have any relevance when it goes to the printer? | When printing an image, what determines how large it will appear on a page? | 0 | 0 | 0 | 4,781 |
473,498 | 2009-01-23T16:16:00.000 | 1 | 0 | 1 | 0 | python,python-imaging-library,dpi | 473,556 | 5 | false | 0 | 0 | Both image print size and resolution are relevant to printing an image of a specific scale and quality. Bear in mind that if the image is then included with a desktop publishing workspace (Word, InDesign) or even a web page, the image is then subject to any specified resolution in the parent document -- this won't necessarily alter the relative scale of the image in the case of desktop publishing programs but will alter image quality.
And yes, all images have a resolution property, which answers half your question - I don't know Python... | 4 | 5 | 0 | Using Python's Imaging Library I want to create a PNG file.
I would like it if when printing this image, without any scaling, it would always print at a known and consistent 'size' on the printed page.
Is the resolution encoded in the image?
If so, how do I specify it?
And even if it is, does this have any relevance when it goes to the printer? | When printing an image, what determines how large it will appear on a page? | 0.039979 | 0 | 0 | 4,781 |
473,498 | 2009-01-23T16:16:00.000 | 3 | 0 | 1 | 0 | python,python-imaging-library,dpi | 21,163,772 | 5 | false | 0 | 0 | I found a very simple way to get dpi information into the png:
im.save('myfile.png', dpi=[600, 600])
Unfortunately I did not find this documented anywhere and had to dig into the PIL source code. | 4 | 5 | 0 | Using Python's Imaging Library I want to create a PNG file.
I would like it if when printing this image, without any scaling, it would always print at a known and consistent 'size' on the printed page.
Is the resolution encoded in the image?
If so, how do I specify it?
And even if it is, does this have any relevance when it goes to the printer? | When printing an image, what determines how large it will appear on a page? | 0.119427 | 0 | 0 | 4,781 |
473,973 | 2009-01-23T18:34:00.000 | 4 | 0 | 1 | 0 | python,arrays,random,shuffle | 40,674,024 | 11 | false | 0 | 0 | In addition to the previous replies, I would like to introduce another function.
numpy.random.shuffle, like random.shuffle, performs in-place shuffling. However, if you want to return a shuffled array, numpy.random.permutation is the function to use. | 1 | 325 | 1 | What's the easiest way to shuffle an array with python? | Shuffle an array with python, randomize array item order with python | 0.072599 | 0 | 0 | 285,289 |
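If numpy isn't available, the standard library offers both flavours as well: `random.shuffle` shuffles in place, while `random.sample` can be used to get a shuffled copy without touching the original:

```python
import random

items = [1, 2, 3, 4, 5]
shuffled_copy = random.sample(items, len(items))  # new shuffled list; original untouched
random.shuffle(items)                             # shuffles the list in place
```

Both results contain exactly the original elements, just in a random order.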
474,034 | 2009-01-23T18:53:00.000 | 8 | 0 | 0 | 0 | python,gtk,pygtk,widget | 474,134 | 2 | true | 0 | 1 | An endless number of widgets in a column: Sounds like a GtkVBox.
Vertical scrollbar: Put your VBox in a GtkScrolledWindow.
Horizontal stretching: This requires setting the appropriate properties for the VBox, ScrolledWindow, and your other widgets. At least in Glade the defaults seem to mostly handle this (You will probably want to change the scrollbar policy of the ScrolledWindow).
Now for the trick. If you just do what I've listed above, the contents of the VBox will try to resize vertically as well as horizontally, and you won't get your scrollbar. The solution is to place your VBox in a GtkViewport.
So the final hierarchy is ScrolledWindow( Viewport( VBox( widgets ) ) ). | 1 | 4 | 0 | I'm working with PyGTK, trying to come up with a combination of widgets that will do the following:
Let me add an endless number of widgets in a column
Provide a vertical scrollbar to get to the ones that run off the bottom
Make the widgets' width adjust to fill available horizontal space when the window is resized
Thanks - I'm new to GTK. | Which GTK widget combination to use for scrollable column of widgets? | 1.2 | 0 | 0 | 4,354 |
474,261 | 2009-01-23T19:55:00.000 | 7 | 0 | 0 | 0 | python,sqlite,pysqlite,python-db-api | 474,296 | 3 | true | 0 | 0 | That's because parameters can only be passed to VALUES. The table name can't be parametrized.
Also, you have quotes around a parametrized argument in the second query. Remove the quotes; escaping is handled by the underlying library automatically for you. | 1 | 1 | 0 | I think I am being a bonehead, maybe not importing the right package, but when I do...
from pysqlite2 import dbapi2 as sqlite
import types
import re
import sys
...
def create_asgn(self):
    stmt = "CREATE TABLE ? (login CHAR(8) PRIMARY KEY NOT NULL, grade INTEGER NOT NULL)"
    stmt2 = "insert into asgn values ('?', ?)"
    self.cursor.execute(stmt, (sys.argv[2],))
    self.cursor.execute(stmt2, [sys.argv[2], sys.argv[3]])
...
I get the error pysqlite2.dbapi2.OperationalError: near "?": syntax error
This makes very little sense to me, as the docs show that pysqlite is qmark parametrized. I am new to python and db-api though, help me out! THANKS | Python pysqlite not accepting my qmark parameterization | 1.2 | 1 | 0 | 1,629 |
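A working version of the fix from the accepted answer, shown here with the standard-library `sqlite3` module (same qmark style as pysqlite): the table name has to be interpolated into the SQL string yourself, and only values may be `?` parameters, with no quotes around them. The table name and row values below are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

table = "asgn"  # hypothetical table name; validate it before interpolating!
# Table names cannot be parametrized -- build the DDL string directly.
cur.execute(
    "CREATE TABLE %s (login CHAR(8) PRIMARY KEY NOT NULL, "
    "grade INTEGER NOT NULL)" % table
)
# Values CAN be parametrized; note there are no quotes around the ?s.
cur.execute("INSERT INTO asgn VALUES (?, ?)", ("jsmith", 95))
conn.commit()

row = cur.execute("SELECT grade FROM asgn WHERE login = ?", ("jsmith",)).fetchone()
```

The `row` fetched back is `(95,)`, confirming both statements executed without the "near ?" syntax error.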
475,216 | 2009-01-24T00:40:00.000 | 13 | 1 | 0 | 0 | python,security,reverse-engineering | 475,246 | 5 | true | 0 | 0 | Security through obscurity never works. If you must use a proprietary license, enforce it through the law, not half-baked obfuscation attempts.
If you're worried about them learning your security (e.g. cryptography) algorithm, the same applies. Real, useful, security algorithms (like AES) are secure even though the algorithm is fully known. | 2 | 8 | 0 | If there is truly a 'best' way, what is the best way to ship a python app and ensure people can't (easily) reverse engineer your algorithms/security/work in general?
If there isn't a 'best' way, what are the different options available?
Background:
I love coding in Python and would love to release more apps with it. One thing that I wonder about is the possibility of people circumventing any licensing code I put in, or being able to just rip off my entire source base. I've heard of Py2Exe and similar applications, but I'm curious if there are 'preferred' ways of doing it, or if this problem is just a fact of life. | Python Applications: Can You Secure Your Code Somehow? | 1.2 | 0 | 0 | 12,980 |
475,216 | 2009-01-24T00:40:00.000 | 8 | 1 | 0 | 0 | python,security,reverse-engineering | 475,394 | 5 | false | 0 | 0 | Even if you use a compiled language like C# or Java, people can perform reverse engineering if they are motivated and technically competent. Obfuscation is not a reliable protection against this.
You can add prohibition against reverse-engineering to your end-user license agreement for your software. Most proprietary companies do this. But that doesn't prevent violation, it only gives you legal recourse.
The best solution is to offer products and services in which the user's access to read your code does not harm your ability to sell your product or service. Base your business on service provided, or subscription to periodic updates to data, rather than the code itself.
Example: Slashdot actually makes their code for their website available. Does this harm their ability to run their website? No.
Another remedy is to set your price point such that the effort to pirate your code is more costly than simply buying legitimate licenses to use your product. Joel Spolsky has made a recommendation to this effect in his articles and podcasts. | 2 | 8 | 0 | If there is truly a 'best' way, what is the best way to ship a python app and ensure people can't (easily) reverse engineer your algorithms/security/work in general?
If there isn't a 'best' way, what are the different options available?
Background:
I love coding in Python and would love to release more apps with it. One thing that I wonder about is the possibility of people circumventing any licensing code I put in, or being able to just rip off my entire source base. I've heard of Py2Exe and similar applications, but I'm curious if there are 'preferred' ways of doing it, or if this problem is just a fact of life. | Python Applications: Can You Secure Your Code Somehow? | 1 | 0 | 0 | 12,980 |
475,302 | 2009-01-24T01:38:00.000 | 1 | 0 | 0 | 0 | python,postgresql | 476,089 | 3 | false | 0 | 0 | I was in the exact same situation as you and went with PL/Python after giving up on PL/SQL after a while. It was a good decision, looking back. Some things that bit me where unicode issues (client encoding, byte sequence) and specific postgres data types (bytea). | 2 | 3 | 0 | I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually.
I would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager.
I now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead.
Now, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how? | PostgreSQL procedural languages: to choose? | 0.066568 | 1 | 0 | 2,801 |
475,302 | 2009-01-24T01:38:00.000 | 2 | 0 | 0 | 0 | python,postgresql | 475,939 | 3 | false | 0 | 0 | Why can't you run your Python on the database server? That has the fewest complexities -- you can run the program you already have. | 2 | 3 | 0 | I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client side script in python to manage a million queries when this cannot be done productively (efficiently and effectively) manually.
I would run my python script on my 32bit laptop and have it communicate with a $6000 64bit server running PostgreSQL; I would hence have an extra 2.10 Ghz, 3 GB of RAM, psyco and a multithreaded SQL query manager.
I now realize that it is time for me to level up. I need to learn to server-side script using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead.
Now, I really do not feel like researching all the PLs. Knowing that I already know python, and that I am looking for the means between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how? | PostgreSQL procedural languages: to choose? | 0.132549 | 1 | 0 | 2,801 |
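Since PL/Python comes up in the answers, a hedged sketch of what one of the asker's client-side query loops might become as a server-side function. The `links` table and its `src` column are invented for illustration, and `plpythonu` is the 2009-era language name (modern installs use `plpython3u`):

```python
# DDL string to run once from the client; the function body itself
# executes inside PostgreSQL and talks to it through the plpy module.
CREATE_FN = """
CREATE OR REPLACE FUNCTION count_links(page_id integer) RETURNS bigint AS $$
    rows = plpy.execute("SELECT count(*) AS n FROM links WHERE src = %d"
                        % page_id)
    return rows[0]["n"]
$$ LANGUAGE plpythonu;
"""
# After cursor.execute(CREATE_FN), "SELECT count_links(42)" runs entirely
# server-side -- no per-row network round trips.
```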
476,586 | 2009-01-24T20:10:00.000 | 4 | 0 | 1 | 0 | python,metaprogramming,design-patterns,factory,metaclass | 476,743 | 5 | true | 0 | 0 | The class system in Smalltalk is an interesting one to study. In Smalltalk, everything is an object and every object has a class. This doesn't imply that the hierarchy goes to infinity. If I remember correctly, it goes something like:
5 -> Integer -> Integer class -> Metaclass -> Metaclass class -> Metaclass -> ... (it loops)
Where '->' denotes "is an instance of". | 3 | 15 | 0 | I recently discovered metaclasses in python.
Basically a metaclass in python is a class that creates a class. There are many useful reasons why you would want to do this - any kind of class initialisation for example. Registering classes on factories, complex validation of attributes, altering how inheritance works, etc. All of this becomes not only possible but simple.
But in python, metaclasses are also plain classes. So, I started wondering if the abstraction could usefully go higher, and it seems to me that it can and that:
a metaclass corresponds to or implements a role in a pattern (as in GOF pattern languages).
a meta-metaclass is the pattern itself (if we allow it to create tuples of classes representing abstract roles, rather than just a single class)
a meta-meta-metaclass is a pattern factory, which corresponds to the GOF pattern groupings, e.g. Creational, Structural, Behavioural. A factory where you could describe a case of a certain type of problem and it would give you a set of classes that solved it.
a meta-meta-meta-metaclass (as far as I could go), is a pattern factory factory, a factory to which you could perhaps describe the type of your problem and it would give you a pattern factory to ask.
I have found some stuff about this online, but mostly not very useful. One problem is that different languages define metaclasses slightly differently.
Has anyone else used metaclasses like this in python/elsewhere, or seen this used in the wild, or thought about it? What are the analogues in other languages? E.g. in C++ how deep can the template recursion go?
I'd very much like to research it further. | Is anyone using meta-meta-classes / meta-meta-meta-classes in Python/ other languages? | 1.2 | 0 | 0 | 1,972 |
476,586 | 2009-01-24T20:10:00.000 | 8 | 0 | 1 | 0 | python,metaprogramming,design-patterns,factory,metaclass | 476,633 | 5 | false | 0 | 0 | To answer your question: no.
Feel free to research it further.
Note, however, that you've conflated design patterns (which are just ideas) with code (which is an implementation.)
Good code often reflects a number of interlocking design patterns. There's no easy way for formalize this. The best you can do is a nice picture, well-written docstrings, and method names that reflect the various design patterns.
Also note that a meta-class is a class. That's a loop. There's no higher level of abstraction. At that point, it's just intent. The idea of meta-meta-class doesn't mean much -- it's a meta-class for meta-classes, which is silly but technically possible. It's all just a class, however.
Edit
"Are classes that create metaclasses really so silly? How does their utility suddenly run out?"
A class that creates a class is fine. That's pretty much it. The fact that the target class is a meta class or an abstract superclass or a concrete class doesn't matter. Metaclasses make classes. They might make other metaclasses, which is weird, but they're still just metaclasses making classes.
The utility "suddenly" runs out because there's no actual thing you need (or can even write) in a metaclass that makes another metaclass. It isn't that it "suddenly" becomes silly. It's that there's nothing useful there.
As I said, feel free to research it. For example, actually write a metaclass that builds another metaclass. Have fun. There might be something useful there.
The point of OO is to write class definitions that model real-world entities. As such, a metaclass is sometimes handy to define cross-cutting aspects of several related classes. (It's a way to do some Aspect-Oriented Programming.) That's all a metaclass can really do; it's a place to hold a few functions, like __new__(), that aren't proper parts of the class itself. | 3 | 15 | 0 | I recently discovered metaclasses in python.
Basically a metaclass in python is a class that creates a class. There are many useful reasons why you would want to do this - any kind of class initialisation for example. Registering classes on factories, complex validation of attributes, altering how inheritance works, etc. All of this becomes not only possible but simple.
But in python, metaclasses are also plain classes. So, I started wondering if the abstraction could usefully go higher, and it seems to me that it can and that:
a metaclass corresponds to or implements a role in a pattern (as in GOF pattern languages).
a meta-metaclass is the pattern itself (if we allow it to create tuples of classes representing abstract roles, rather than just a single class)
a meta-meta-metaclass is a pattern factory, which corresponds to the GOF pattern groupings, e.g. Creational, Structural, Behavioural. A factory where you could describe a case of a certain type of problem and it would give you a set of classes that solved it.
a meta-meta-meta-metaclass (as far as I could go), is a pattern factory factory, a factory to which you could perhaps describe the type of your problem and it would give you a pattern factory to ask.
I have found some stuff about this online, but mostly not very useful. One problem is that different languages define metaclasses slightly differently.
Has anyone else used metaclasses like this in python/elsewhere, or seen this used in the wild, or thought about it? What are the analogues in other languages? E.g. in C++ how deep can the template recursion go?
I'd very much like to research it further. | Is anyone using meta-meta-classes / meta-meta-meta-classes in Python/ other languages? | 1 | 0 | 0 | 1,972 |
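The first rung the question describes, registering classes on factories, can be sketched with a minimal metaclass (Python 3 syntax; in Python 2 you would set a `__metaclass__` attribute instead):

```python
registry = {}

class RegisteredMeta(type):
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        registry[name] = cls  # cross-cutting behaviour for every class made
        return cls

class Plugin(metaclass=RegisteredMeta):
    pass

class JsonPlugin(Plugin):  # subclasses inherit the metaclass, so they
    pass                   # register themselves with no extra code

print(sorted(registry))    # ['JsonPlugin', 'Plugin']
```

Going one level up would mean a metaclass whose instances are metaclasses like `RegisteredMeta`; as the answers note, it is technically possible but rarely buys anything.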
476,586 | 2009-01-24T20:10:00.000 | 8 | 0 | 1 | 0 | python,metaprogramming,design-patterns,factory,metaclass | 2,090,276 | 5 | false | 0 | 0 | During the History of Programming Languages conference in 2007, Simon Peyton Jones commented that Haskell allows meta programming using Type Classes, but that it's really turtles all the way down. You can meta-meta-meta-meta etc program in Haskell, but that he's never heard of anyone using more than 3 levels of indirection.
Guy Steele pointed out that its the same thing in Lisp and Scheme. You can do meta-programming using backticks and evals (you can think of a backtick as a Python lambda, kinda), but he's never seen more than 3 backticks used.
Presumably they have seen more code than you or I ever have, so it's only a slight exaggeration to say that no-one has ever gone beyond 3 levels of meta.
If you think about it, most people don't ever use meta-programming, and two levels is pretty hard to wrap your head around. I would guess that three is nearly impossible, and that the last guy to try four ended up in an asylum. | 3 | 15 | 0 | I recently discovered metaclasses in python.
Basically a metaclass in python is a class that creates a class. There are many useful reasons why you would want to do this - any kind of class initialisation for example. Registering classes on factories, complex validation of attributes, altering how inheritance works, etc. All of this becomes not only possible but simple.
But in python, metaclasses are also plain classes. So, I started wondering if the abstraction could usefully go higher, and it seems to me that it can and that:
a metaclass corresponds to or implements a role in a pattern (as in GOF pattern languages).
a meta-metaclass is the pattern itself (if we allow it to create tuples of classes representing abstract roles, rather than just a single class)
a meta-meta-metaclass is a pattern factory, which corresponds to the GOF pattern groupings, e.g. Creational, Structural, Behavioural. A factory where you could describe a case of a certain type of problem and it would give you a set of classes that solved it.
a meta-meta-meta-metaclass (as far as I could go), is a pattern factory factory, a factory to which you could perhaps describe the type of your problem and it would give you a pattern factory to ask.
I have found some stuff about this online, but mostly not very useful. One problem is that different languages define metaclasses slightly differently.
Has anyone else used metaclasses like this in python/elsewhere, or seen this used in the wild, or thought about it? What are the analogues in other languages? E.g. in C++ how deep can the template recursion go?
I'd very much like to research it further. | Is anyone using meta-meta-classes / meta-meta-meta-classes in Python/ other languages? | 1 | 0 | 0 | 1,972 |
476,659 | 2009-01-24T21:09:00.000 | 1 | 0 | 1 | 0 | python,windows-xp,cygwin,python-2.6 | 476,809 | 3 | false | 0 | 0 | cygwin is effectively a Unix subkernel. Setup and installed in its default manner it won't interrupt or change any existing Windows XP functionality. However, you'll have to start the cygwin equivalent of the command prompt before you can use its functionality.
With that said, some of the functionality you're talking about is available in Windows. Piping definitely is. For instance:
netstat -ano | findstr :1433
is a command line I use to make sure my SQL Server is listening on the default port. The output of netstat is being piped to findstr so I only have to see any lines containing :1433. | 1 | 2 | 0 | New to python (and programming). What exactly do I need from Cygwin? I'm running python 2.6 on winxp. Can I safely download the complete Cygwin? It just seems like a huge bundle of stuff.
Well, I keep running into modules and functionality (i.e. piping output) which suggest downloading various cygwin components. Will cygwin change or modify any other os functionality or have any other side effects? | Cygwin and Python 2.6 | 0.066568 | 0 | 0 | 5,753 |
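For the piping specifically, Python's stdlib can often do it without Cygwin. A sketch of a two-stage pipeline in the spirit of the `netstat | findstr` example above, assuming a Unix-like `printf` and `grep` are on the PATH (on plain Windows you would swap in native commands):

```python
import subprocess

# printf emits three lines; grep keeps the ones containing "b"
p1 = subprocess.Popen(["printf", "a\nb\nab\n"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["grep", "b"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()            # let p1 receive SIGPIPE if p2 exits early
out, _ = p2.communicate()
print(out.decode().split())  # ['b', 'ab']
```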
476,968 | 2009-01-25T00:41:00.000 | 6 | 0 | 0 | 0 | java,python,jython | 477,533 | 6 | false | 1 | 0 | Wrap your Java code in a container (Servlet / EJB).
That way you don't lose time in VM startup, and you move toward a more service-oriented design.
For the wrapping you can use Jython (this only makes sense if you are familiar with Python).
Choose a communication protocol that both Python and Java can use:
json (see www.json.org)
rmi (Python: JPype)
REST
SOAP (only for the brave)
Choose something you or your partners are familiar with! | 1 | 41 | 0 | I have a python app and a java app. The python app generates input for the java app and invokes it on the command line.
I'm sure there must be a more elegant solution to this; just like using JNI to invoke C code from Java.
Any pointers?
(FYI I'm v. new to Python)
Clarification (at the cost of a long question: apologies)
The py app (which I don't own) takes user input in the form of a number of configuration files. It then interprets these and farms work off to a number of (hidden) tools via a plugin mechanism. I'm looking to add support for the functionality provided by the legacy Java app.
So it doesn't make sense to call the python app from the java app and I can't run the py app in a jython environment (on the JVM).
Since there is no obvious mechanism for this I think the simple CL invocation is the best solution. | Using a java library from python | 1 | 0 | 0 | 60,373 |
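A hedged sketch of that command-line bridge with JSON (one of the protocols suggested above) as the interchange format; `legacy-tool.jar` is a placeholder for the real Java app:

```python
import json
import subprocess

def call_java_tool(payload, cmd=("java", "-jar", "legacy-tool.jar")):
    """Send payload to the Java app as JSON on stdin, parse its JSON reply."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(json.dumps(payload).encode())
    return json.loads(out)
```

The same function shape survives if the Java side is later wrapped in a servlet; only the transport underneath changes.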
477,061 | 2009-01-25T02:19:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,unicode | 477,104 | 4 | false | 0 | 0 | In the general case, it's probably not possible to compare unicode strings. The problem is that there are several ways to compose the same characters. A simple example is accented roman characters. Although there are codepoints for basically all of the commonly used accented characters, it is also correct to compose them from unaccented base letters and a non-spacing accent. This issue is more significant in many non-roman alphabets. | 1 | 31 | 0 | I work in Python and would like to read user input (from command line) in Unicode format, ie a Unicode equivalent of raw_input?
Also, I would like to test Unicode strings for equality and it looks like a standard == does not work. | How to read Unicode input and compare Unicode strings in Python? | 0.049958 | 0 | 0 | 37,652 |
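The composition problem described in the answer is easy to demonstrate with the stdlib `unicodedata` module: "é" as one precomposed codepoint and as "e" plus a combining accent compare unequal until both are normalized:

```python
import unicodedata

composed = "\u00e9"     # é, a single precomposed codepoint
decomposed = "e\u0301"  # e followed by COMBINING ACUTE ACCENT

print(composed == decomposed)                    # False
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))  # True
```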
477,335 | 2009-01-25T07:40:00.000 | -1 | 1 | 1 | 0 | python,sage | 477,372 | 3 | false | 0 | 0 | I would use C++: since the competition lasts three days, I would have time to write C++ code, and it's a lot faster than Python, which would be my choice if it were a one-day competition. So I would probably use C++ with OpenGL and SDL for the models. I would first write the simulations in C++, and if I had time at the end I would try to implement them in a shader if possible. | 2 | 0 | 0 | I will participate in a modeling competition, which lasts three days.
I need a language which is fast and designed for modeling, such as for 2D/3D models.
I have considered these languages:
Python
Sage
Which languages would you use? | Most suitable language(s) for simulations in modeling? | -0.066568 | 0 | 0 | 185 |
477,335 | 2009-01-25T07:40:00.000 | 4 | 1 | 1 | 0 | python,sage | 477,384 | 3 | true | 0 | 0 | You should use the language that you know best and that has good-enough tools for the task at hand. Depending on when the competition is, you may have no time to learn a new language/environment. | 2 | 0 | 0 | I will participate in a modeling competition, which lasts three days.
I need a language which is fast and designed for modeling, such as for 2D/3D models.
I have considered these languages:
Python
Sage
Which languages would you use? | Most suitable language(s) for simulations in modeling? | 1.2 | 0 | 0 | 185 |
478,359 | 2009-01-25T21:39:00.000 | 7 | 0 | 0 | 1 | python,linux,chroot | 478,396 | 2 | true | 0 | 0 | Yes there are pitfalls. Security wise:
If you run as root, there are always ways to break out. So first chroot(), then PERMANENTLY drop privileges to an other user.
Put nothing which isn't absolutely required into the chroot tree. Especially no suid/sgid files, named pipes, unix domain sockets and device nodes.
Python-wise, your whole module-loading setup gets screwed up. Python is simply not made for such scenarios. If your application is moderately complex you will run into module-loading issues.
I think much more important than chrooting is running as a non-privileged user and simply using the filesystem permissions to keep that user from reading anything of importance.
I am worried about people breaking out of the folder that I'm using to serve up the web-site. The most obvious attack to filter out is requests for documents like /../../etc/passwd. However, I'm worried that there might be clever ways to climb the directory tree that I'm not aware of, which my filter consequently won't catch.
I'm considering using os.chroot so that the root directory is the web-site itself. Is this a safe way of protecting against these jailbreaking attacks? Are there any potential pitfalls to doing this that will hurt me down the road? | Python and os.chroot | 1.2 | 0 | 0 | 5,674
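A minimal sketch of the chroot-then-drop order the accepted answer insists on. The process must start as root for `os.chroot` to succeed; the jail path and the "www" account below are placeholders:

```python
import os
import pwd

def enter_jail(jail_dir, username):
    """Chroot into jail_dir, then PERMANENTLY drop root privileges."""
    user = pwd.getpwnam(username)
    os.chroot(jail_dir)     # needs root, so it happens first
    os.chdir("/")           # don't keep a working directory outside the jail
    os.setgid(user.pw_gid)  # drop the group first...
    os.setuid(user.pw_uid)  # ...then the user; after this, no way back to root
```

Dropping the group before the user matters: once `setuid` runs, the process no longer has the privilege to change its group.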
478,458 | 2009-01-25T22:48:00.000 | 10 | 0 | 1 | 0 | python,regex | 478,470 | 11 | false | 0 | 0 | There is a limit because it would take too much memory to store the complete state machine efficiently. I'd say that if you have more than 100 groups in your re, something is wrong either in the re itself or in the way you are using them. Maybe you need to split the input and work on smaller chunks or something. | 4 | 22 | 0 | Is there any way to beat the 100-group limit for regular expressions in Python? Also, could someone explain why there is a limit. | Python regular expressions with more than 100 groups? | 1 | 0 | 0 | 9,757 |
478,458 | 2009-01-25T22:48:00.000 | 1 | 0 | 1 | 0 | python,regex | 56,148,806 | 11 | false | 0 | 0 | I've found that Python 3 doesn't have this limitation, whereas the same code ran in latest 2.7 displays this error. | 4 | 22 | 0 | Is there any way to beat the 100-group limit for regular expressions in Python? Also, could someone explain why there is a limit. | Python regular expressions with more than 100 groups? | 0.01818 | 0 | 0 | 9,757 |
478,458 | 2009-01-25T22:48:00.000 | -2 | 0 | 1 | 0 | python,regex | 13,458,403 | 11 | false | 0 | 0 | It's very ease to resolve this error:
Open the re class and you'll see this constant _MAXCACHE = 100.
Change the value to 1000, for example, and do a test. | 4 | 22 | 0 | Is there any way to beat the 100-group limit for regular expressions in Python? Also, could someone explain why there is a limit. | Python regular expressions with more than 100 groups? | -0.036348 | 0 | 0 | 9,757 |
478,458 | 2009-01-25T22:48:00.000 | -1 | 0 | 1 | 0 | python,regex | 478,484 | 11 | false | 0 | 0 | I would say you could reduce the number of groups by using non-grouping parentheses, but whatever it is that you're doing seems like you want all these groupings. | 4 | 22 | 0 | Is there any way to beat the 100-group limit for regular expressions in Python? Also, could someone explain why there is a limit. | Python regular expressions with more than 100 groups? | -0.01818 | 0 | 0 | 9,757 |
480,178 | 2009-01-26T15:34:00.000 | 1 | 0 | 1 | 0 | python,decorator | 482,209 | 4 | false | 0 | 0 | Decorators aren't even required to return a function. I've used @atexit.register before. | 2 | 75 | 0 | I know of @staticmethod, @classmethod, and @property, but only through scattered documentation. What are all the function decorators that are built into Python? Is that in the docs? Is there an up-to-date list maintained somewhere? | Python - what are all the built-in decorators? | 0.049958 | 0 | 0 | 35,745 |
480,178 | 2009-01-26T15:34:00.000 | -4 | 0 | 1 | 0 | python,decorator | 482,035 | 4 | false | 0 | 0 | There is no such thing as a list of all decorators. There's no list of all functions. There's no list of all classes.
Decorators are a handy tool for defining a common aspect across functions, methods, or classes. There are the built-in decorators. Plus there are any number of cool and useless decorators. In the same way there are any number of cool and useless classes. | 2 | 75 | 0 | I know of @staticmethod, @classmethod, and @property, but only through scattered documentation. What are all the function decorators that are built into Python? Is that in the docs? Is there an up-to-date list maintained somewhere? | Python - what are all the built-in decorators? | -1 | 0 | 0 | 35,745 |
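For reference, a quick illustration of the three decorators the question names (`@property`, `@classmethod`, `@staticmethod`):

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def area(self):       # read like an attribute: c.area, no parentheses
        return 3.14159 * self._radius ** 2

    @classmethod
    def unit(cls):        # receives the class -- an alternate constructor
        return cls(1)

    @staticmethod
    def describe():       # receives neither the instance nor the class
        return "a circle"

c = Circle.unit()
print(c.area)             # 3.14159
print(Circle.describe())  # a circle
```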