Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,957,787 | 2009-12-24T10:37:00.000 | 2 | 0 | 1 | 0 | php,phpdoc,docbook,restructuredtext,python-sphinx | 2,035,342 | 2 | true | 0 | 0 | You can convert ReST to DocBook using pandoc. | 1 | 0 | 0 | I need a documentation system for a PHP project and I wanted it to be able to integrate external documentation (use cases, project scope etc.) with the documentation generated from code comments. It seems that phpDocumentor has exactly the right feature set, but external documentation must be written in DocBook which is too complex for our team.
If it were in python, sphinx would be just about perfect for this job (ReST is definitely simpler than docbook). Is there any way I can integrate external ReST documentation with the docs extracted from phpdoc? Should I just separate the external documentation (eg. use ReST for external and phpdoc for internal)? Or do you have a better suggestion for managing the external documentation? | External documentation for PHP, no DocBook | 1.2 | 1 | 0 | 853 |
1,959,811 | 2009-12-24T21:26:00.000 | 1 | 0 | 1 | 1 | python,windows,py2exe | 1,962,069 | 4 | true | 0 | 0 | I did not find the cause of the problem, but using python 2.5 with py2exe on the same script worked fine on the server.
I guess there is something wrong with py2exe under 2.6. | 1 | 5 | 0 | A simple python script needs to run on a windows server with no python installed.
I used py2exe, which generated a healthy dist subdirectory, with script.exe that runs fine on the local machine.
However, when I run it on the server (Windows Server 2003 R2), it produces this:
The system cannot execute the specified program.
and ERRORLEVEL is 9020.
Any ideas? | Windows Server cannot execute a py2exe-generated app | 1.2 | 0 | 0 | 3,390 |
1,960,155 | 2009-12-25T00:25:00.000 | 0 | 1 | 0 | 0 | python,mysql,unit-testing,ubuntu | 1,960,164 | 2 | false | 0 | 0 | You can try the Blackhole and Memory table types in MySQL. | 2 | 2 | 0 | If I want to be able to test my application against an empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user which refers to an empty (not saved anywhere, or saved to /tmp) MySQL database?
My application is in Python, and I'm using unittest on Ubuntu 9.10. | Start a "throwaway" MySQL session for testing code? | 0 | 1 | 0 | 287 |
1,960,155 | 2009-12-25T00:25:00.000 | 1 | 1 | 0 | 0 | python,mysql,unit-testing,ubuntu | 1,960,160 | 2 | true | 0 | 0 | --datadir for just the data or --basedir | 2 | 2 | 0 | If I want to be able to test my application against an empty MySQL database each time my application's testsuite is run, how can I start up a server as a non-root user which refers to an empty (not saved anywhere, or saved to /tmp) MySQL database?
My application is in Python, and I'm using unittest on Ubuntu 9.10. | Start a "throwaway" MySQL session for testing code? | 1.2 | 1 | 0 | 287 |
1,960,516 | 2009-12-25T05:00:00.000 | 1 | 0 | 1 | 0 | python,json,floating-point,decimal | 69,778,023 | 18 | false | 0 | 0 | If someone is still looking for the answer, it is most probably because you have a 'NaN' in your data that you are trying to encode, since NaN is considered a float by Python. | 1 | 310 | 0 | I have a Decimal('3.9') as part of an object, and wish to encode this to a JSON string which should look like {'x': 3.9}. I don't care about precision on the client side, so a float is fine.
Is there a good way to serialize this? JSONDecoder doesn't accept Decimal objects, and converting to a float beforehand yields {'x': 3.8999999999999999} which is wrong, and will be a big waste of bandwidth. | Python JSON serialize a Decimal object | 0.011111 | 0 | 0 | 309,796 |
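One common way to handle the question above is to subclass `json.JSONEncoder` and convert `Decimal` values yourself — a sketch, given that the asker says float precision loss is acceptable (`DecimalEncoder` is a name introduced here, not part of the stdlib):

```python
import json
from decimal import Decimal

class DecimalEncoder(json.JSONEncoder):
    """Illustrative encoder: render Decimal values as plain JSON numbers."""
    def default(self, o):
        if isinstance(o, Decimal):
            return float(o)  # precision loss is fine per the question
        return super().default(o)

print(json.dumps({'x': Decimal('3.9')}, cls=DecimalEncoder))  # {"x": 3.9}
```

In Python 3, `repr(float('3.9'))` is the shortest round-tripping string, so the output is the compact `{"x": 3.9}` rather than the long form the asker worried about.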
1,961,013 | 2009-12-25T11:23:00.000 | 19 | 0 | 0 | 0 | python,mongodb,couchdb,database,nosql | 3,007,620 | 4 | true | 0 | 0 | Since a nosql database can contain huge amounts of data you cannot migrate it in the regular rdbms sense. Actually you can't do it for an rdbms either once your data passes some size threshold. It is impractical to bring your site down for a day to add a field to an existing table, and so with an rdbms you end up doing ugly patches like adding new tables just for the field and doing joins to get to the data.
In the nosql world you can do several things.
As others suggested, you can write your code so that it will handle different 'versions' of the possible schema. This is usually simpler than it looks. Many kinds of schema changes are trivial to code around. For example, if you want to add a new field to the schema, you just add it to all new records and it will be empty on all the old records (you will not get "field doesn't exist" errors or anything ;). If you need a 'default' value for the field in the old records, it is trivially done in code.
Another option, and actually the only sane option going forward with non-trivial schema changes like field renames and structural changes, is to store schema_version in EACH record, and to have code to migrate data from any version to the next on READ. I.e. if your current schema version is 10 and you read a record from the database with the version of 7, then your db layer should call migrate_8, migrate_9, and migrate_10. This way the data that is accessed will be gradually migrated to the new version. And if it is not accessed, then who cares which version it is ;) | 4 | 27 | 0 | I'm looking for a way to automate schema migration for databases like MongoDB or CouchDB.
Preferably, this instrument should be written in python, but any other language is ok. | Are there any tools for schema migration for NoSQL databases? | 1.2 | 1 | 0 | 5,990 |
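The migrate-on-read scheme from the accepted answer can be sketched in a few lines of Python — the field names and version numbers below are made up for illustration:

```python
SCHEMA_VERSION = 3

def migrate_2(doc):   # v1 -> v2: add a field with a default value
    doc.setdefault("status", "active")
    doc["schema_version"] = 2
    return doc

def migrate_3(doc):   # v2 -> v3: rename a field
    doc["full_name"] = doc.pop("name", "")
    doc["schema_version"] = 3
    return doc

MIGRATIONS = {2: migrate_2, 3: migrate_3}

def load(doc):
    """Run every pending migration in order, as the answer describes."""
    for target in range(doc.get("schema_version", 1) + 1, SCHEMA_VERSION + 1):
        doc = MIGRATIONS[target](doc)
    return doc

record = load({"schema_version": 1, "name": "Ada"})
```

On write-back the record would be stored at the current version, so the data gradually converges without a big-bang migration.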
1,961,013 | 2009-12-25T11:23:00.000 | 2 | 0 | 0 | 0 | python,mongodb,couchdb,database,nosql | 1,961,090 | 4 | false | 0 | 0 | One of the supposed benefits of these databases is that they are schemaless, and therefore don't need schema migration tools. Instead, you write your data handling code to deal with the variety of data stored in the db. | 4 | 27 | 0 | I'm looking for a way to automate schema migration for databases like MongoDB or CouchDB.
Preferably, this instrument should be written in python, but any other language is ok. | Are there any tools for schema migration for NoSQL databases? | 0.099668 | 1 | 0 | 5,990 |
1,961,013 | 2009-12-25T11:23:00.000 | 2 | 0 | 0 | 0 | python,mongodb,couchdb,database,nosql | 1,966,375 | 4 | false | 0 | 0 | If your data are sufficiently big, you will probably find that you cannot EVER migrate the data, or that it is not beneficial to do so. This means that when you do a schema change, the code needs to continue to be backwards compatible with the old formats forever.
Of course if your data "age" and eventually expire anyway, this can do schema migration for you - simply change the format for newly added data, then wait for all data in the old format to expire - you can then retire the backward-compatibility code. | 4 | 27 | 0 | I'm looking for a way to automate schema migration for databases like MongoDB or CouchDB.
Preferably, this instrument should be written in python, but any other language is ok. | Are there any tools for schema migration for NoSQL databases? | 0.099668 | 1 | 0 | 5,990 |
1,961,013 | 2009-12-25T11:23:00.000 | 1 | 0 | 0 | 0 | python,mongodb,couchdb,database,nosql | 3,007,685 | 4 | false | 0 | 0 | When a project has a need for schema migration in regards to a NoSQL database, it makes me think that you are still thinking in a relational database manner while using a NoSQL database.
If anybody is going to start working with NoSQL databases, you need to realize that most of the 'rules' for a RDBMS (i.e. MySQL) need to go out the window too. Things like strict schemas, normalization, using many relationships between objects. NoSQL exists to solve problems that don't need all the extra 'features' provided by a RDBMS.
I would urge you to write your code in a manner that doesn't expect or need a hard schema for your NoSQL database - you should support an old schema and convert a document record on the fly when you access it, if you really want more schema fields on that record.
Please keep in mind that NoSQL storage works best when you think and design differently compared to when using a RDBMS. | 4 | 27 | 0 | I'm looking for a way to automate schema migration for databases like MongoDB or CouchDB.
Preferably, this instrument should be written in python, but any other language is ok. | Are there any tools for schema migration for NoSQL databases? | 0.049958 | 1 | 0 | 5,990 |
1,961,158 | 2009-12-25T13:20:00.000 | 2 | 0 | 1 | 0 | c++,python,exception | 1,961,310 | 5 | false | 0 | 0 | I don't have my copy of Bjarne Stroustrup's "Design & Evolution" handy, but I believe he wrote in there about some experience with resumable exceptions. They found that they made things considerably harder to get correct. After all, if an unexpected error happens in some line, your exception handler then has to patch the problem up sufficiently to allow execution to resume without knowing the context. This may be possible for an out-of-memory error (although such errors are frequently a result of runaway memory allocation, and adding some more memory won't really fix anything), but not for exceptions in general.
So, in C++ and all languages I'm familiar with, execution resumes with the catch, and doesn't automatically go back to the place that threw the exception. | 5 | 5 | 0 | In general, where does program execution resume after an exception has been thrown and caught? Does it resume following the line of code where the exception was thrown, or does it resume following where it's caught? Also, is this behavior consistent across most programming languages? | Where does execution resume following an exception? | 0.07983 | 0 | 0 | 7,214 |
1,961,158 | 2009-12-25T13:20:00.000 | 4 | 0 | 1 | 0 | c++,python,exception | 1,961,180 | 5 | true | 0 | 0 | The execution resumes where the exception is caught, that is, at the beginning of the catch block which specifically addresses the current exception type. The catch block is executed and the other catch blocks are ignored (think of multiple catch blocks as a switch statement). In some languages, a finally block may also be executed after the catch. Then the program proceeds with the next instruction following the whole try ... catch ... finally ....
You should note that if an exception is not caught in a block, the exception is propagated to the caller of the current function, and up the call stack until a catch processes the exception. In this case, you can think of function calls like a macro: insert the code of each function where it is called, and you will clearly see the nesting of all the try .. catch ... finally ... blocks.
If there is no handler for an exception, the program generally crashes. (Some languages may be different on this point.)
The behavior of the execution flow is consistent across every language I know. The only difference lies in the try ... catch ... finally ... construct: the finally does not exist in every language, some languages do not allow a finally and a catch in the same block (you have to nest two try blocks to use both), and some languages allow catching everything (the catch (...) in C++) while some don't. | 5 | 5 | 0 | In general, where does program execution resume after an exception has been thrown and caught? Does it resume following the line of code where the exception was thrown, or does it resume following where it's caught? Also, is this behavior consistent across most programming languages? | Where does execution resume following an exception? | 1.2 | 0 | 0 | 7,214 |
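A small Python illustration of the control flow the answers describe — the statement after the raise never runs, and execution continues after the handler, not back at the throw site:

```python
def demo():
    steps = []
    try:
        steps.append("before raise")
        raise ValueError("boom")
        steps.append("never reached")  # skipped: raise transfers control away
    except ValueError:
        steps.append("inside handler")
    finally:
        steps.append("finally block")
    steps.append("after try/except")
    return steps

print(demo())
```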
1,961,158 | 2009-12-25T13:20:00.000 | 2 | 0 | 1 | 0 | c++,python,exception | 1,961,177 | 5 | false | 0 | 0 | Execution continues in the catch block (where the exception was caught).
This is consistent across languages that use exceptions.
The important point to note (especially in C++)
Between the throw and the catch point the stack is unwound in an orderly manner so that all objects created on the stack are correctly destroyed (in the expected order). This has resulted in the technique known as RAII. | 5 | 5 | 0 | In general, where does program execution resume after an exception has been thrown and caught? Does it resume following the line of code where the exception was thrown, or does it resume following where it's caught? Also, is this behavior consistent across most programming languages? | Where does execution resume following an exception? | 0.07983 | 0 | 0 | 7,214 |
1,961,158 | 2009-12-25T13:20:00.000 | 1 | 0 | 1 | 0 | c++,python,exception | 1,961,165 | 5 | false | 0 | 0 | It resumes where the exception is caught. Otherwise, what would be the point of writing the exception clause? | 5 | 5 | 0 | In general, where does program execution resume after an exception has been thrown and caught? Does it resume following the line of code where the exception was thrown, or does it resume following where it's caught? Also, is this behavior consistent across most programming languages? | Where does execution resume following an exception? | 0.039979 | 0 | 0 | 7,214 |
1,961,158 | 2009-12-25T13:20:00.000 | 7 | 0 | 1 | 0 | c++,python,exception | 1,961,176 | 5 | false | 0 | 0 | The code inside the catch block is executed and the original execution continues right after the catch block. | 5 | 5 | 0 | In general, where does program execution resume after an exception has been thrown and caught? Does it resume following the line of code where the exception was thrown, or does it resume following where it's caught? Also, is this behavior consistent across most programming languages? | Where does execution resume following an exception? | 1 | 0 | 0 | 7,214 |
1,961,394 | 2009-12-25T15:59:00.000 | 2 | 0 | 1 | 0 | python,floating-point | 1,961,401 | 5 | false | 0 | 0 | Because you're doing an integer division. If you do -22.0/10 instead, you'll get the correct result. | 2 | 3 | 0 | Why Does -22/10 return -3 in python. Any pointers regarding this will be helpful for me. | Floating Point Concepts in Python | 0.07983 | 0 | 0 | 1,143 |
1,961,394 | 2009-12-25T15:59:00.000 | 2 | 0 | 1 | 0 | python,floating-point | 1,961,702 | 5 | false | 0 | 0 | This happens because integer division returns the number which, when multiplied by the divisor, gives the largest possible multiple of the divisor that is no larger than the number you divided.
This is exactly why 22/10 gives 2: 10*2=20, which is the largest integer multiple of 10 not bigger than 22.
When this goes to the negative, your operation becomes -22/10. Your result is -3. Applying the same logic as in the previous case, we see that 10*-3=-30, which is the largest integer multiple of 10 not bigger than -22.
This is why you get a slightly unexpected answer when dealing with negative numbers.
Hope that helps | 2 | 3 | 0 | Why Does -22/10 return -3 in python. Any pointers regarding this will be helpful for me. | Floating Point Concepts in Python | 0.07983 | 0 | 0 | 1,143 |
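Worth noting that in today's Python 3 the question's `/` is true division, and `//` is the flooring integer division the answers describe (the Python 2 `-22/10 == -3` behavior). A quick check:

```python
print(-22 // 10)         # -3: floor division rounds toward negative infinity
print(-22 / 10)          # -2.2: true division (Python 3 semantics)
print(10 * (-22 // 10))  # -30: the largest multiple of 10 not bigger than -22
```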
1,962,130 | 2009-12-25T22:47:00.000 | 1 | 0 | 0 | 0 | asp.net,python,sqlite,networking,udp | 1,977,499 | 5 | true | 0 | 0 | This sounds like a premature optimization (apologies if you've already done the profiling). What I would suggest is go ahead and write the system in the simplest, cleanest way, but put a bit of abstraction around the database bits so they can easily be swapped out. Then profile it and find your bottleneck.
If it turns out it is the database, optimize the database in the usual way (indexes, query optimizations, etc...). If it's still too slow, most databases support an in-memory table format. Or you can mount a RAM disk and mount individual tables or the whole database on it. | 2 | 0 | 0 | Python --> SQLite --> ASP.NET C#
I am looking for an in memory database application that does not have to write the data it receives to disc. Basically, I'll be having a Python server which receives gaming UDP data and translates the data and stores it in the memory database engine.
I want to stay away from writing to disc as it takes too long. The data is not important, if something goes wrong, it simply flushes and fills up with the next wave of data sent by players.
Next, another ASP.NET server must be able to connect to this in memory database via TCP/IP at regular intervals, say once every second, or 10 seconds. It has to pull this data, and this will in turn update on a website that displays "live" game data.
I'm looking at SQlite, and wondering, is this the right tool for the job, anyone have any suggestions?
Thanks!!! | In memory database with socket capability | 1.2 | 1 | 0 | 309 |
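SQLite does satisfy the "never touches disc" requirement via its in-memory mode, with one caveat for the setup above: a `:memory:` database lives inside a single process, so the Python server itself would have to expose the data over TCP — the ASP.NET side cannot open the database directly. A minimal sketch with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # nothing is ever written to disc
conn.execute("CREATE TABLE scores (player TEXT, kills INTEGER)")
conn.execute("INSERT INTO scores VALUES (?, ?)", ("alice", 7))  # sample row
row = conn.execute("SELECT kills FROM scores WHERE player = ?",
                   ("alice",)).fetchone()
print(row)  # (7,)
```

When the process exits (or the connection closes), the whole database vanishes — which matches the "flush and refill" behavior the asker wants.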
1,962,130 | 2009-12-25T22:47:00.000 | 0 | 0 | 0 | 0 | asp.net,python,sqlite,networking,udp | 1,962,162 | 5 | false | 0 | 0 | The application of SQLite depends on your data complexity.
If you need to perform complex queries on relational data, then it might be a viable option. If your data is flat (i.e. not relational) and processed as a whole, then some python-internal data structures might be applicable. | 2 | 0 | 0 | Python --> SQLite --> ASP.NET C#
I am looking for an in memory database application that does not have to write the data it receives to disc. Basically, I'll be having a Python server which receives gaming UDP data and translates the data and stores it in the memory database engine.
I want to stay away from writing to disc as it takes too long. The data is not important, if something goes wrong, it simply flushes and fills up with the next wave of data sent by players.
Next, another ASP.NET server must be able to connect to this in memory database via TCP/IP at regular intervals, say once every second, or 10 seconds. It has to pull this data, and this will in turn update on a website that displays "live" game data.
I'm looking at SQlite, and wondering, is this the right tool for the job, anyone have any suggestions?
Thanks!!! | In memory database with socket capability | 0 | 1 | 0 | 309 |
1,962,273 | 2009-12-25T23:56:00.000 | 1 | 0 | 0 | 0 | python,qt,pyqt | 1,966,523 | 5 | true | 0 | 1 | The PyQt documentation is exactly as provided on the website, and as
included in the installer. It is not integrated with Assistant (it will be
in a future version). If you want to use Assistant then you can use the Qt
documentation instead (a lot of people do) and translate between C++ and
Python as you read it. | 2 | 7 | 0 | I have installed PyQt GPL v4.6.2 for Python v3.1 and Qt by Nokia v4.6.0 (OpenSource), but the documentation in PyQt is not coming up. Example docs are all blank, too.
Would anyone mind writing a step-by-step guide on what links to visit and what procedures must be executed in order to get text to come up for the PyQt documentation?
Edit: The programs are running on Windows, and the documentation is not coming up in PyQt GPL v4.6.2 for Python v3.1 > Examples > PyQt Examples and Demos and PyQt GPL v4.6.2 for Python v3.1 > Assistant. What needs to be done to let both programs access the docs? | PyQt documentation | 1.2 | 0 | 0 | 16,124 |
1,962,273 | 2009-12-25T23:56:00.000 | 1 | 0 | 0 | 0 | python,qt,pyqt | 8,174,151 | 5 | false | 0 | 1 | If you installed the Qt documentation, you should have an app named Assistant. This is a simple-minded browser for a local copy of the Qt doc as found at doc.qt.nokia.com. It is written for C++ but the mental translation to Python is not difficult, and it is nicely formatted and richly cross-linked. I keep Assistant running all the time I'm coding in PyQt4 and find it very helpful.
The PyQt doc, as given at www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/classes.html, is merely this same Nokia text with its formatting and many of the internal links stripped out and edited to python class and function syntax. | 2 | 7 | 0 | I have installed PyQt GPL v4.6.2 for Python v3.1 and Qt by Nokia v4.6.0 (OpenSource), but the documentation in PyQt is not coming up. Example docs are all blank, too.
Would anyone mind writing a step-by-step guide on what links to visit and what procedures must be executed in order to get text to come up for the PyQt documentation?
Edit: The programs are running on Windows, and the documentation is not coming up in PyQt GPL v4.6.2 for Python v3.1 > Examples > PyQt Examples and Demos and PyQt GPL v4.6.2 for Python v3.1 > Assistant. What needs to be done to let both programs access the docs? | PyQt documentation | 0.039979 | 0 | 0 | 16,124 |
1,962,447 | 2009-12-26T02:05:00.000 | 4 | 0 | 0 | 0 | python | 1,964,328 | 4 | false | 1 | 0 | File / New Project / enter your project name.
In the Project Browser, create a package named "source"
Right-click the source package, "Code Engineering", "Import Source Directory".
Pick the directory containing your module(s) as the "Root Directory"
Set "Source Type" to Python
Enable "Recursively Process Subdirectories"
Select "Package Per File"
Click "OK". | 2 | 2 | 0 | Please let me know how to create a UML diagram along with its equivalent documentation for the source code (.py format) using Enterprise Architect 7.5
Please help me find the solution, I have read the solution for the question on this website related to my topic but in vain | python source code conversion to uml diagram with Sparx Systems Enterprise Architect | 0.197375 | 0 | 0 | 4,824 |
1,962,447 | 2009-12-26T02:05:00.000 | 1 | 0 | 0 | 0 | python | 45,345,997 | 4 | false | 1 | 0 | Go to project browser
Create a model
Right-click model > Add > Add View > Class
Right-click class > Code Engineering > Import Source Directory...
Check "one package per folder"
The last one ensures you'll have an interesting diagram full of classes. | 2 | 2 | 0 | Please let me know how to create a UML diagram along with its equivalent documentation for the source code (.py format) using Enterprise Architect 7.5
Please help me find the solution, I have read the solution for the question on this website related to my topic but in vain | python source code conversion to uml diagram with Sparx Systems Enterprise Architect | 0.049958 | 0 | 0 | 4,824 |
1,962,592 | 2009-12-26T04:02:00.000 | 1 | 0 | 0 | 0 | python,wxpython | 1,962,595 | 3 | false | 0 | 1 | You can take a look at the wxPython examples, they also include code samples for almost all of the widgets supported by wxPython. If you use Windows they can be found in the Start Menu folder of WxPython. | 1 | 4 | 0 | I'm starting to learn both Python and wxPython and as part of the app I'm doing, I need to have a simple browser on the left pane of my app. I'm wondering how do I do it? Or at least point me to the right direction that'll help me more on how to do one. Thanks in advance!
EDIT:
a sort of side question, how much of wxPython do I need to learn? Should I use tools like wxGlade? | How do I make a simple file browser in wxPython? | 0.066568 | 0 | 0 | 6,299 |
1,963,353 | 2009-12-26T13:15:00.000 | 0 | 0 | 1 | 0 | python,unicode,encoding,utf-8 | 1,963,489 | 6 | false | 0 | 0 | Have you thought about writing your own converter? It wouldn't be hard to write something that would go through a file and replace \N{A umlaut} with \N{LATIN SMALL LETTER A WITH DIAERESIS} and all the rest. | 1 | 4 | 0 | Are there short Unicode u"\N{...}" names for Latin1 characters in Python ?
\N{A umlaut} etc. would be nice,
\N{LATIN SMALL LETTER A WITH DIAERESIS} etc. is just too long to type every time.
(Added:) I use an English keyboard, but occasionally need German letters, as in "Löwenbräu Weißbier".
Yes one can cut-paste them singly, L cutpaste ö wenbr cutpaste ä ...
but that breaks the flow; I was hoping for a keyboard-only way. | short Unicode \N{} names for Latin-1 characters in Python? | 0 | 0 | 0 | 2,649 |
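Python only recognizes the full official names inside `\N{...}`, but `unicodedata` makes it easy to build your own short aliases — the `ALIASES` table below is my own invention, not a stdlib feature:

```python
import unicodedata

# \N{} requires the full official name:
assert "\N{LATIN SMALL LETTER A WITH DIAERESIS}" == "\u00e4"

# Roll your own short names on top of unicodedata.lookup:
ALIASES = {"a umlaut": "LATIN SMALL LETTER A WITH DIAERESIS",
           "o umlaut": "LATIN SMALL LETTER O WITH DIAERESIS"}

def u(short_name):
    """Resolve a short alias (or a full official name) to its character."""
    return unicodedata.lookup(ALIASES.get(short_name.lower(), short_name))

print(u("a umlaut"))  # ä
```

This keeps the typing down to whatever short names you care to define, while staying keyboard-only.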
1,963,453 | 2009-12-26T14:05:00.000 | 3 | 1 | 0 | 0 | python,c,import,python-c-api,python-embedding | 1,963,510 | 5 | false | 0 | 1 | Even if you implement a module in Python, the user would have to import it. This is the way Python works, and it's actually a good thing - it's one of the great pluses of Python - the namespace/module system is robust, easy to use and simple to understand.
For academic exercises only, you could of course add your new functionality to Python itself, by creating a custom interpreter. You could even create new keywords this way. But for any practical purpose, this isn't recommended. | 2 | 1 | 0 | I'm trying to extend the Python interpreter with a few C functions I wrote. From reading the docs, to expose those functions the user has to import the module encompassing the functions.
Is it possible to pre-load or pre-import the module via the C API so that the user doesn't have to type import <mymodule>? Or even better, from <mymodule> import <function>?
Edit: I can do PyRun_SimpleString("from mymodule import myfunction") just after Py_Initialize(); - I was just wondering if there is another way of doing this..?
Edit 2: In other words, I have an application written in C which embeds a Python interpreter. That application provides some functionality which I want to expose to the users so they can write simple Python scripts for the app. All I want is to remove the need of writing from mymodule import myfunction1, myfunction2 because, since it is a very specialized app and the script won't work without the app anyway, it doesn't make sense to require to import ... all the time. | Extending Python: pre-load my C module | 0.119427 | 0 | 0 | 697 |
1,963,453 | 2009-12-26T14:05:00.000 | 0 | 1 | 0 | 0 | python,c,import,python-c-api,python-embedding | 1,963,505 | 5 | false | 0 | 1 | Nope. You could add it to the Python interpreter itself, but that would mean creating a custom Python version, which, I guess, is not what you want.
That import <mymodule> is not just for loading the module, it's also for making this module visible in the (main|current) namespace. Being able to do that, w/o hacking the actual Python interpreter, would run against "Explicit is better than implicit" very strongly. | 1 | 1 | 0 | I'm trying to extend the Python interpreter with a few C functions I wrote. From reading the docs, to expose those functions the user has to import the module encompassing the functions.
Is it possible to pre-load or pre-import the module via the C API so that the user doesn't have to type import <mymodule>? Or even better, from <mymodule> import <function>?
Edit: I can do PyRun_SimpleString("from mymodule import myfunction") just after Py_Initialize(); - I was just wondering if there is another way of doing this..?
Edit 2: In other words, I have an application written in C which embeds a Python interpreter. That application provides some functionality which I want to expose to the users so they can write simple Python scripts for the app. All I want is to remove the need of writing from mymodule import myfunction1, myfunction2 because, since it is a very specialized app and the script won't work without the app anyway, it doesn't make sense to require to import ... all the time. | Extending Python: pre-load my C module | 0 | 0 | 0 | 697 |
1,964,126 | 2009-12-26T19:23:00.000 | 3 | 0 | 1 | 0 | python,exception | 1,964,131 | 6 | false | 0 | 0 | I would make a specific one. You can catch it and deal with that specific exception since it is a special circumstance that you created :) | 2 | 12 | 0 | Suppose in python you have a routine that accepts three named parameters (as **kwargs), but any two out of these three must be filled in. If only one is filled in, it's an error. If all three are, it's an error. What kind of error would you raise? RuntimeError, a specifically created exception, or other? | What exception to raise if wrong number of arguments passed in to **kwargs? | 0.099668 | 0 | 0 | 10,631 |
1,964,126 | 2009-12-26T19:23:00.000 | 0 | 0 | 1 | 0 | python,exception | 1,964,172 | 6 | false | 0 | 0 | I would use a ValueError, or a subclass thereof: "Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError."
Passing 3 or 1 values when exactly 2 are required would technically be an inappropriate value if you consider all of the arguments a single tuple... At least in my opinion! :) | 2 | 12 | 0 | Suppose in python you have a routine that accepts three named parameters (as **kwargs), but any two out of these three must be filled in. If only one is filled in, it's an error. If all three are, it's an error. What kind of error would you raise? RuntimeError, a specifically created exception, or other? | What exception to raise if wrong number of arguments passed in to **kwargs? | 0 | 0 | 0 | 10,631 |
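A sketch combining both answers — TypeError for unknown names, ValueError (or a custom subclass) for the wrong count; the parameter names are invented for the example:

```python
def make_rect(**kwargs):
    """Accept exactly two of width/height/diagonal (names are illustrative)."""
    allowed = ("width", "height", "diagonal")
    unknown = set(kwargs) - set(allowed)
    if unknown:
        raise TypeError("unexpected keyword(s): %s" % ", ".join(sorted(unknown)))
    given = [name for name in allowed if name in kwargs]
    if len(given) != 2:
        raise ValueError("exactly two of %s required, got %d"
                         % ("/".join(allowed), len(given)))
    return dict(kwargs)

print(make_rect(width=3, height=4))  # {'width': 3, 'height': 4}
```

Using TypeError for bad names mirrors what Python itself raises for unexpected keyword arguments, while the count check is a value-level constraint, hence ValueError.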
1,964,583 | 2009-12-26T23:08:00.000 | 1 | 0 | 0 | 0 | python,xpath | 1,964,631 | 2 | false | 0 | 0 | Using XML to store data is probably not optimal, as you experience here. Editing XML is extremely costly.
One way of doing the editing is parsing the xml into a tree, and then inserting stuff into that three, and then rebuilding the xml file.
Editing an xml file in place is also possible, but then you need some kind of search mechanism that finds the location you need to edit or insert into, and then write to the file from that point. Remember to also read the remaining data, because it will be overwritten. This is fine for inserting new tags or data, but editing existing data makes it even more complicated.
My own rule is to not use XML for storage, but to present data. So the storage facility, or some kind of middle man, needs to form xml files from the data it has. | 1 | 2 | 0 | Is it possible to do in place edit of XML document using xpath ?
I'd prefer any python solution but Java would be fine too. | edit in place using xpath | 0.099668 | 0 | 1 | 375 |
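For the "parse into a tree, edit, rebuild" approach the answer suggests, the stdlib's `xml.etree.ElementTree` supports a limited XPath subset (full XPath 1.0 needs a third-party library such as lxml). A sketch with made-up element names:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("<catalog><book id='1'><price>10</price></book></catalog>")

# ElementTree understands a limited XPath subset, including this predicate:
price = doc.find(".//book[@id='1']/price")
price.text = "12"

print(ET.tostring(doc, encoding="unicode"))
```

The "in place" part is then just writing the serialized tree back over the original file; true byte-level in-place editing would need the seek-and-rewrite approach described above.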
1,965,213 | 2009-12-27T04:56:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 1,965,219 | 4 | false | 0 | 0 | You need to fetch completely separate parts of the file on each thread. Calculate the chunk start and end positions based on the number of threads. Each chunk must have no overlap obviously.
For example, if target file was 3000 bytes long and you want to fetch using three thread:
Thread 1: fetches bytes 1 to 1000
Thread 2: fetches bytes 1001 to 2000
Thread 3: fetches bytes 2001 to 3000
You would pre-allocate an empty file of the original size, and write back to the respective positions within the file. | 1 | 0 | 0 | I'm creating a python script which accepts a path to a remote file and an n number of threads. The file's size will be divided by the number of threads, when each thread completes I want them to append the fetch data to a local file.
How do I manage it so that the threads append to the local file in the order in which they were generated, so that the bytes don't get scrambled?
Also, what if I'm to download several files simultaneously? | File downloading using python with threads | 0.049958 | 0 | 1 | 3,862 |
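One way to sidestep the ordering problem in the question entirely: pre-allocate a buffer (or file) of the full size and have each thread write its chunk at its own byte offset, so no append ordering is ever needed. The `fetch` function below is a stand-in for an HTTP request with a `Range: bytes=start-end` header:

```python
import threading

def fetch(source, start, end):
    # Stand-in for a ranged HTTP request; real code would use urllib
    # with a "Range: bytes=%d-%d" header instead of slicing local bytes.
    return source[start:end]

def download(source, n_threads):
    size = len(source)
    result = bytearray(size)              # pre-allocated: order never matters
    chunk = (size + n_threads - 1) // n_threads

    def worker(start):
        end = min(start + chunk, size)
        result[start:end] = fetch(source, start, end)  # disjoint slices

    threads = [threading.Thread(target=worker, args=(i * chunk,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return bytes(result)

print(download(b"0123456789abcdef", 4))  # b'0123456789abcdef'
```

For several files at once, the same pattern applies per file — each file gets its own pre-allocated buffer, so the downloads cannot interfere with each other.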
1,966,591 | 2009-12-27T18:04:00.000 | 7 | 0 | 1 | 0 | python,iterator | 1,967,844 | 17 | false | 0 | 0 | You can tee the iterator using itertools.tee, and check for StopIteration on the teed iterator. | 1 | 216 | 0 | Haven't Python iterators got a has_next method? | has_next in Python iterators? | 1 | 0 | 0 | 192,536 |
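A sketch of the tee approach from the answer — note that peeking consumes an item, so the caller must continue from the returned copy rather than the original iterator:

```python
import itertools

def has_next(iterator):
    """Return (bool, iterator). Use the returned iterator afterwards,
    because peeking consumes an item from the one passed in."""
    iterator, probe = itertools.tee(iterator)
    try:
        next(probe)
    except StopIteration:
        return False, iterator
    return True, iterator

ok, it = has_next(iter([1, 2]))
print(ok, list(it))  # True [1, 2]
```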
1,967,040 | 2009-12-27T21:04:00.000 | 2 | 0 | 1 | 0 | python,audio | 1,967,145 | 4 | false | 0 | 0 | you could use any library that produces MIDI output, in case of .net I'd recommend
the one created by Stephen Toub from Microsoft(can't find from where i got it, but google for it.) | 1 | 7 | 0 | I am confused because there are a lot of programms. But i am looking something like this. I will type a melody like "a4 c3 h3 a2" etc. and then i want to hear this. Does anybody know what i am looking for?
thanks in advance | How can i create a melody? Is there any sound-module? | 0.099668 | 0 | 0 | 6,317 |
1,967,888 | 2009-12-28T03:53:00.000 | -1 | 0 | 0 | 0 | python,wxpython | 1,967,976 | 7 | false | 0 | 1 | If you're used to a more command line approach, this would be a bad idea. Responding to user input is a completely different paradigm, and you're not likely to get it right the first time.
If you're just talking about the difference between wxPython and another GUI, don't worry about it. | 7 | 12 | 0 | Is it better to do it all at once? I'm very new to wxPython and I'm thinking it would be better to write the program in a way familiar to me, then apply the wxPython gui to it after I'm satisfied with the overall design of the app. Any advice? | Is it a bad idea to design and develop a python applications backend and then once finished try to apply a GUI to it? | -0.028564 | 0 | 0 | 724 |
1,967,888 | 2009-12-28T03:53:00.000 | 0 | 0 | 0 | 0 | python,wxpython | 1,970,144 | 7 | false | 0 | 1 | Since you are new to GUI programming, your approach is perfectly valid. It will likely result in a less than optimal UI, but that's OK for now. And in fact, there are some very successful multi-million dollar commercial projects that are built this way.
Arguably a better approach is to first design the UI, since that is the most important part. After that is complete, you can then create a back-end that can support that UI. This approach still results in separate front- and back-ends but puts the emphasis on the needs of the user, where it should be. | 7 | 12 | 0 | Is it better to do it all at once? I'm very new to wxPython and I'm thinking it would be better to write the program in a way familiar to me, then apply the wxPython gui to it after I'm satisfied with the overall design of the app. Any advice? | Is it a bad idea to design and develop a python applications backend and then once finished try to apply a GUI to it? | 0 | 0 | 0 | 724
1,967,888 | 2009-12-28T03:53:00.000 | -1 | 0 | 0 | 0 | python,wxpython | 1,967,968 | 7 | false | 0 | 1 | What level of interactivity do you need? If you need rich feedback and interaction, then you need an OO program model; then you can add the GUI on top of the objects.
If you just have filters and functions (no real feedback, or just a results window), then a library or component model would be better.
Either way, you are better off coding your logic separate to the GUI, so you can test it more easily. | 7 | 12 | 0 | Is it better to do it all at once? I'm very new to wxPython and I'm thinking it would be better to write the program in a way familiar to me, then apply the wxPython gui to it after I'm satisfied with the overall design of the app. Any advice? | Is it a bad idea to design and develop a python applications backend and then once finished try to apply a GUI to it? | -0.028564 | 0 | 0 | 724 |
1,967,888 | 2009-12-28T03:53:00.000 | 1 | 0 | 0 | 0 | python,wxpython | 1,968,045 | 7 | false | 0 | 1 | Separation of the user interface from the engine code is the unixy way to do it and there's a lot of merit to doing it that way. It results in modular re-usable programs and code that can play nicely with other programs and fit into a larger tool chain.
Having said that, such an approach tends to discount the value of creating a really usable UI experience. It's very difficult and rare for a program's internal model to match the user model when you design your program's functionality first and then the user interface later. As a result, you need to impedance-match the two sides after creating them independently. This results in either creating a compromise in usability (your ui becomes nothing more than a front-end to the command line switches your program takes) or a large glue layer between the UI and the core program which tends to be messy and buggy.
If your program is primarily designed to be run through a user interface interactively with a user, then it probably makes sense to design the user interface in parallel with your actual functionality.
So:
it would be better to write the program in a way familiar to me, then apply the wxPython gui to it after I'm satisfied with the overall design of the app
If your UI is the main means of operating your program, then that UI is part of the program design, not something to be painted over the program when it's done. | 7 | 12 | 0 | Is it better to do it all at once? I'm very new to wxPython and I'm thinking it would be better to write the program in a way familiar to me, then apply the wxPython gui to it after I'm satisfied with the overall design of the app. Any advice? | Is it a bad idea to design and develop a python applications backend and then once finished try to apply a GUI to it? | 0.028564 | 0 | 0 | 724
1,967,888 | 2009-12-28T03:53:00.000 | 2 | 0 | 0 | 0 | python,wxpython | 1,967,904 | 7 | false | 0 | 1 | That depends on the problem domain. An image processing tool would be rather difficult to implement without reference to a GUI. For most apps, though, I would argue strongly in favour of separating the two parts. It is much, much easier to develop, test and evolve a UI-free back-end. The gains will vastly outweigh the cost of defining a clean API between the front and back end. In fact, the process of defining the API will yield a better design overall. | 7 | 12 | 0 | Is it better to do it all at once? I'm very new to wxPython and I'm thinking it would be better to write the program in a way familiar to me, then apply the wxPython gui to it after I'm satisfied with the overall design of the app. Any advice? | Is it a bad idea to design and develop a python applications backend and then once finished try to apply a GUI to it? | 0.057081 | 0 | 0 | 724 |
1,967,888 | 2009-12-28T03:53:00.000 | 0 | 0 | 0 | 0 | python,wxpython | 1,967,905 | 7 | false | 0 | 1 | IMHO, that would be the better idea. Keeping the underlying business logic untied from the UI is a better approach, because we can then worry more about the underlying logic than get bogged down in the interface.
At the same time, it is also important to have some basic design for your interface, so that you have an idea about what kinds of inputs and outputs are involved, and can make the underlying logic support a wide range of inputs/outputs, or simply a wide range of interfaces.
1,967,888 | 2009-12-28T03:53:00.000 | 16 | 0 | 0 | 0 | python,wxpython | 1,967,900 | 7 | true | 0 | 1 | This is a viable approach. In fact, some programmers use it for the advantages it brings:
Modular non-GUI code can then be tied in with different GUIs, not just a single library
It can also be used for a command-line application (or a batch interface to a GUI one)
It can be reused for a web application
And most importantly: it can make unit-testing of the code easier.
However keep in mind that it requires some careful design. You'll want your "logic code" to be free from GUI constraints, and sometimes it is difficult (especially when the code relies on GUI idioms like an event loop). | 7 | 12 | 0 | Is it better to do it all at once? I'm very new to wxPython and I'm thinking it would be better to write the program in a way familiar to me, then apply the wxPython gui to it after I'm satisfied with the overall design of the app. Any advice? | Is it a bad idea to design and develop a python applications backend and then once finished try to apply a GUI to it? | 1.2 | 0 | 0 | 724 |
1,968,343 | 2009-12-28T07:16:00.000 | 1 | 0 | 0 | 0 | python,plone,zope,archetypes | 1,977,227 | 2 | true | 1 | 0 | Yes, IObjectEditedEvent (a direct subclass of IObjectModifiedEvent) is emitted when an Archetypes content object is being changed.
However, the event itself will not tell you if a new file was uploaded. It should be possible however, to obtain the request (context.REQUEST should give you the current request through acquisition, for example) and see if there is a file object there matching the field. If so, the user uploaded a new file for that field and the FileField will have been updated. | 1 | 2 | 0 | I have an AT content type in Plone. It has a number of fields, including a file field. When the user edits an object of this type, how can I tell if a new file was uploaded?
For that matter, how can I tell if any of the fields have been changed?
I am currently using subscribers to hook into the IObjectEditedEvent to do some work after the object changes - can I do these things here? | How can I tell if a field has changed value in an AT object in plone? | 1.2 | 0 | 0 | 231
1,969,472 | 2009-12-28T13:30:00.000 | 6 | 1 | 0 | 0 | .net,performance,ironpython,ironruby | 1,972,630 | 1 | true | 0 | 0 | IronPython has had more time to focus on performance improvements, but IronRuby has made significant performance improvements as of late. However, we rarely pin IronRuby up against IronPython. While people may comment here that one or the other is faster, and certain special cases/examples may even be uses to prove this, there is no exhaustive comparison available today. | 1 | 4 | 0 | We're aiming to implement a scripting mechanism, using DLR's Microsoft.Scripting and hosting assembly.
Now, does anyone know about any performance difference between IronRuby 1.0 and IronPython 2.6?
To my understanding they have different compilers, but IronPython seems more mature and tested, but if anyone has documentation or knowledge on this issue, that would be appreciated. | Performance comparison between IronRuby and IronPython | 1.2 | 0 | 0 | 753 |
1,969,490 | 2009-12-28T13:33:00.000 | 1 | 0 | 1 | 0 | algorithm,python | 1,969,504 | 5 | false | 0 | 0 | if string in [x.name for x in list_of_x] | 1 | 1 | 0 | I have a list with objects of x type. Those objects have an attribute name.
I want to find out if a string matches any of those object names. If I had a list with the object names, I would just do if string in list, so I was wondering, given the current situation, if there is a way to do it without having to loop over the list explicitly. | Shortest way to find if a string matchs an object's attribute value in a list of objects of that type in Python | 0.039979 | 0 | 0 | 167
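As a hedged variation on the list-comprehension answer, any() with a generator expression stops at the first match instead of building the whole list of names first; the Item class here is just a stand-in for the asker's type:

```python
class Item:
    """Stand-in for the asker's object type with a name attribute."""
    def __init__(self, name):
        self.name = name

def name_matches(string, objects):
    # any() short-circuits on the first object whose name matches.
    return any(obj.name == string for obj in objects)
```

Either way a loop happens internally; the generator version just avoids materializing the intermediate list.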
1,972,172 | 2009-12-28T23:46:00.000 | 1 | 0 | 0 | 0 | python,algorithm,interpolation | 1,972,198 | 3 | false | 0 | 0 | Why not try quadlinear interpolation?
Extend trilinear interpolation by another dimension. As long as a linear interpolation model fits your data, it should work.
I would like to find a set of x, y, z points which correspond to an iso-energy surface found by interpolating between the known points.
The spatial mesh has constant spacing and surrounds the iso-energy surface entirely; however, it does not occupy a cubic space (the mesh occupies a roughly cylindrical space).
Speed is not crucial, I can leave this number crunching for a while. Although I'm coding in Python and NumPy, I can write portions of the code in FORTRAN. I can also wrap existing C/C++/FORTRAN libraries for use in the scripts, if such libraries exist.
All examples and algorithms that I have so far found online (and in Numerical Recipes) stop short of 4D data. | Interpolating a scalar field in a 3D space | 0.066568 | 0 | 0 | 5,443 |
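As a small illustration of the trilinear building block the answer above proposes extending to 4D, here is a hedged pure-Python sketch for one grid cell: c is a 2x2x2 nested list of corner values, and (x, y, z) are fractional coordinates within the cell:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def trilinear(c, x, y, z):
    """Trilinear interpolation inside one cell; c[i][j][k] holds the
    corner value at (x=i, y=j, z=k) for i, j, k in {0, 1}."""
    c00 = lerp(c[0][0][0], c[1][0][0], x)   # interpolate along x, at (y=0, z=0)
    c10 = lerp(c[0][1][0], c[1][1][0], x)   # at (y=1, z=0)
    c01 = lerp(c[0][0][1], c[1][0][1], x)   # at (y=0, z=1)
    c11 = lerp(c[0][1][1], c[1][1][1], x)   # at (y=1, z=1)
    c0 = lerp(c00, c10, y)                  # collapse y, at z=0
    c1 = lerp(c01, c11, y)                  # at z=1
    return lerp(c0, c1, z)                  # collapse z
```

Adding the fourth (energy) axis means one more layer of pairwise lerps over a 2x2x2x2 block of corners.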
1,972,172 | 2009-12-28T23:46:00.000 | 2 | 0 | 0 | 0 | python,algorithm,interpolation | 1,973,347 | 3 | false | 0 | 0 | Since you have a spatial mesh with constant spacing, you can identify all neighbors on opposite sides of the isosurface. Choose some form of interpolation (q.v. Reed Copsey's answer) and do root-finding along the line between each such neighbor. | 2 | 8 | 1 | I have a 3D space (x, y, z) with an additional parameter at each point (energy), giving 4 dimensions of data in total.
I would like to find a set of x, y, z points which correspond to an iso-energy surface found by interpolating between the known points.
The spatial mesh has constant spacing and surrounds the iso-energy surface entirely; however, it does not occupy a cubic space (the mesh occupies a roughly cylindrical space).
Speed is not crucial, I can leave this number crunching for a while. Although I'm coding in Python and NumPy, I can write portions of the code in FORTRAN. I can also wrap existing C/C++/FORTRAN libraries for use in the scripts, if such libraries exist.
All examples and algorithms that I have so far found online (and in Numerical Recipes) stop short of 4D data. | Interpolating a scalar field in a 3D space | 0.132549 | 0 | 0 | 5,443 |
1,972,672 | 2009-12-29T02:49:00.000 | 3 | 0 | 1 | 0 | python,python-3.x,sorting,key,python-2.x | 1,972,691 | 3 | false | 0 | 0 | Besides key=, the sort method of lists in Python 2.x could alternatively take a cmp= argument (not a good idea, it's been removed in Python 3); with either or none of these two, you can always pass reverse=True to have the sort go downwards (instead of upwards as is the default, and which you can also request explicitly with reverse=False if you're really keen to do that for some reason). I have no idea what that value argument you're mentioning is supposed to do. | 1 | 16 | 0 | Is there any other argument than key, for example: value? | What arguments does Python sort() function have? | 0.197375 | 0 | 0 | 35,819 |
1,975,769 | 2009-12-29T17:19:00.000 | 0 | 0 | 0 | 0 | python,django | 1,976,174 | 3 | false | 1 | 0 | Your question is a little too generic.
The general way of doing it involves:
Extend templates of the reusable apps
Pass the new template name to the view (Reusable apps should accepts that argument)
Also pass extra_context to the reusable-generic-view
Use your own view to create an extra_context and return the reusable view from your view. | 1 | 1 | 0 | Is it good practice to treat individual app views as blocks of HTML that can be pieced together to form a larger site? If not, what is the best way to reuse app views from project to project, assuming each one uses a different set of templates? | Piecing together Django views | 0 | 0 | 0 | 141
1,977,521 | 2009-12-29T23:15:00.000 | 1 | 0 | 0 | 0 | python,audio,pyglet | 1,980,577 | 2 | true | 1 | 0 | It doesn't appear that pyglet has support for setting a stop time. Your options are:
Poll the current time and stop playback when you've reached your desired endpoint. This may not be precise enough for you.
Or, use a sound file library to extract the portion you want into a temporary sound file, then use pyglet to play that sound file in its entirety. Python has built-in support for .wav files (the "wave" module), or you could shell out to a command-line tool like "sox". | 1 | 1 | 0 | How can I use the pyglet API for sound to play subsets of a sound file e.g. from 1 second in to 3.5seconds of a 6 second sound clip?
I can load a sound file and play it, and can seek to the start of the desired interval, but I am wondering how to stop playback at the point indicated. | Play Subset of audio file using Pyglet | 1.2 | 0 | 0 | 1,156
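A hedged sketch of the second option using only the stdlib wave module; the paths are placeholders, and the resulting temporary file can then be handed to pyglet to play in full:

```python
import wave

def extract_clip(src_path, dst_path, start_sec, end_sec):
    """Copy the [start_sec, end_sec] slice of a .wav file into a new file."""
    with wave.open(src_path, "rb") as src:
        rate = src.getframerate()
        src.setpos(int(start_sec * rate))                        # jump to the start frame
        frames = src.readframes(int((end_sec - start_sec) * rate))
        with wave.open(dst_path, "wb") as dst:
            dst.setparams(src.getparams())                       # same channels/width/rate
            dst.writeframes(frames)                              # frame count fixed on close
```

This keeps pyglet's playback code unchanged; only the file it loads is pre-cut.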
1,977,571 | 2009-12-29T23:30:00.000 | 7 | 1 | 0 | 0 | python,paramiko | 1,978,007 | 1 | true | 0 | 0 | No, paramiko has no support for telnet or ftp -- you're indeed better off using a higher-level abstraction and implementing it twice, with paramiko and without it (with the ftplib and telnetlib modules of the Python standard library). | 1 | 6 | 0 | I'm looking at existing python code that heavily uses Paramiko to do SSH and FTP. I need to allow the same code to work with some hosts that do not support a secure connection and over which I have no control.
Is there a quick and easy way to do it via Paramiko, or do I need to step back, create some abstraction that supports both paramiko and Python's FTP libraries, and refactor the code to use this abstraction? | Does Paramiko support non-secure telnet and ftp instead of just SSH and SFTP? | 1.2 | 0 | 1 | 7,051 |
1,978,139 | 2009-12-30T02:44:00.000 | 0 | 1 | 1 | 0 | java,python,optimization,jython | 6,481,974 | 3 | false | 0 | 0 | I know this is an old question, but I'm just putting this for completeness.
You can use the -J-server flag to launch Jython in Java server mode, which can help speed up the hot loops. (The JVM will look to optimize aggressively, but this might slow down the startup time.) | 3 | 2 | 0 | Are there any ways to optimize Jython without resorting to profiling or significantly changing the code?
Specifically, are there any flags that can be passed to the compiler, or code hints in tight loops? | Jython Optimizations | 0 | 0 | 0 | 707
1,978,139 | 2009-12-30T02:44:00.000 | 1 | 1 | 1 | 0 | java,python,optimization,jython | 1,990,415 | 3 | true | 0 | 0 | Jython compiler does not offer lots of optimization choices. However, since the Java virtual machine (java) and perhaps compiler (javac) are getting invoked in the back end or at runtime, you should take a look at them.
Java has different runtime switches to use depending on whether you are going to launch it as a server process, a client process, etc. You can also tell it how much memory to allocate. | 3 | 2 | 0 | Are there any ways to optimize Jython without resorting to profiling or significantly changing the code?
Specifically, are there any flags that can be passed to the compiler, or code hints in tight loops? | Jython Optimizations | 1.2 | 0 | 0 | 707
1,978,139 | 2009-12-30T02:44:00.000 | 6 | 1 | 1 | 0 | java,python,optimization,jython | 1,978,207 | 3 | false | 0 | 0 | No flags, no code hints. You can optimize by tweaking your code much as you would for any other Python implementation (hoisting, etc.), but profiling helps by telling you where it's worth your while to expend such effort -- so, sure, you can optimize "without resorting to profiling" (and the code changes to do so may well be deemed to be not significant), but you're unlikely to guess right about where your time and energy are best spent, while profiling helps you determine exactly that. | 1 | 2 | 0 | Are there any ways to optimize Jython without resorting to profiling or significantly changing the code?
Specifically, are there any flags that can be passed to the compiler, or code hints in tight loops? | Jython Optimizations | 1 | 0 | 0 | 707
1,978,188 | 2009-12-30T03:01:00.000 | 5 | 0 | 0 | 0 | python,directory,web2py | 1,987,564 | 3 | false | 1 | 0 | In any multi-threaded Python program (and not only Python) you should not use os.chdir and you should not change sys.path when you have more than one thread running. It is not safe because it affects other threads. Moreover you should not sys.path.append() in a loop because it may explode.
All web frameworks are multi-threaded and requests are executed in a loop. Some web frameworks do not allow you to install/un-install applications without restarting the web server and therefore IF os.chdir/sys.path.append are only executed at startup then there is no problem.
In web2py we want to be able to install/uninstall applications without restarting the web server. We want apps to be very dynamical (for example define models based on information provided with the http request). We want each app to have its own models folder and we want complete separation between apps so that if two apps need to different versions of the same module, they do not conflict with each other, so we provide APIs to do so (request.folder, local_import).
You can still use the normal os.chdir and sys.path.append but you should do it outside threads (and this is not a web2py specific issue). You can use import anywhere you like as you would in any other Python program.
I strongly suggest moving this discussion to the web2py mailing list. | 1 | 3 | 0 | Well, I want to use Web2Py because it's pretty nice.
I just need to change the working directory to the directory where all my modules/libraries/apps are so I can use them. I want to be able to import my real program when I use the web2py interface/applications. I need to do this instead of putting all my apps and stuff inside the Web2Py folder. I'm trying to give my program a web frontend without putting the program in the Web2Py folder. Sorry if this is hard to understand. | Web2Py Working Directory | 1.2 | 0 | 0 | 1,740
1,978,426 | 2009-12-30T04:27:00.000 | 6 | 0 | 0 | 0 | python,web2py | 1,980,510 | 1 | false | 1 | 0 | In web2py your models and controllers are executed, not imported. They are executed every time a request arrives. If you press the button [compile] in admin, they will be bytecode compiled and some other optimizations are performs.
If your app (in models and controllers) does "import somemodule", then the import statement is executed at every request but "somemodule" is actually imported only the first time it is executed, as you asked. | 1 | 4 | 0 | I'm using Web2Py and i want to import my program simply once per session... not everytime the page is loaded. is this possible ? such as "import Client" being used on the page but only import it once per session.. | Web2py Import Once per Session | 1 | 0 | 1 | 953 |
1,978,791 | 2009-12-30T06:56:00.000 | 0 | 0 | 0 | 0 | python,cherrypy | 1,978,818 | 5 | false | 0 | 0 | Why don't you use open-source build tools (continuous integration tools) like Cruise? Most of them come with a web server/XML interface and sometimes with fancy reports as well.
Embed a web server to let the user get some static files, but the traffic is not very high.
The user can configure the tool using HTTP. I don't want a GUI page; I just need an RPC interface, like XML-RPC or something similar.
Besides the web server, the tool needs some background jobs to do, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here. | Python web server? | 0 | 0 | 1 | 1,002
1,978,791 | 2009-12-30T06:56:00.000 | -3 | 0 | 0 | 0 | python,cherrypy | 1,979,792 | 5 | false | 0 | 0 | This sounds like a fun project. So, why don't you write your own HTTP server? It's not so complicated after all; HTTP is a well-known and easy-to-implement protocol, and you'll gain a lot of new knowledge!
Check the documentation or manual pages (whatever you prefer) of socket(), bind(), listen(), accept() and so on.
Embed a web server to let the user get some static files, but the traffic is not very high.
The user can configure the tool using HTTP. I don't want a GUI page; I just need an RPC interface, like XML-RPC or something similar.
Besides the web server, the tool needs some background jobs to do, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here. | Python web server? | -0.119427 | 0 | 1 | 1,002
1,978,791 | 2009-12-30T06:56:00.000 | 1 | 0 | 0 | 0 | python,cherrypy | 1,979,714 | 5 | false | 0 | 0 | Use the WSGI Reference Implementation wsgiref already provided with Python
Use REST protocols with JSON (not XML-RPC). It's simpler and faster than XML.
Background jobs are started with subprocess. | 4 | 0 | 0 | I want to develop a tool for my project using python. The requirements are:
Embed a web server to let the user get some static files, but the traffic is not very high.
The user can configure the tool using HTTP. I don't want a GUI page; I just need an RPC interface, like XML-RPC or something similar.
Besides the web server, the tool needs some background jobs to do, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here. | Python web server? | 0.039979 | 0 | 1 | 1,002
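A minimal hedged sketch of the wsgiref-plus-JSON suggestion from the answer above; the /status path and payload here are made up for illustration:

```python
import json
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Tiny WSGI app exposing a JSON RPC-ish endpoint."""
    if environ.get("PATH_INFO") == "/status":
        body = json.dumps({"ok": True}).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

def serve(port=8000):
    # serve_forever() blocks the calling thread; run it in a background
    # thread if the tool also has other background jobs to do.
    make_server("", port, app).serve_forever()
```

Static files could be handled by adding more paths, or by putting wsgiref behind a fuller framework later.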
1,978,791 | 2009-12-30T06:56:00.000 | 3 | 0 | 0 | 0 | python,cherrypy | 1,979,101 | 5 | true | 0 | 0 | What about the internal Python web server?
Just type "python web server" into Google and host the first result... | 4 | 0 | 0 | I want to develop a tool for my project using Python. The requirements are:
Embed a web server to let the user get some static files, but the traffic is not very high.
The user can configure the tool using HTTP. I don't want a GUI page; I just need an RPC interface, like XML-RPC or something similar.
Besides the web server, the tool needs some background jobs to do, so these jobs need to run alongside the web server.
So, which Python web server is the best choice? I am looking at CherryPy; if you have another recommendation, please write it here. | Python web server? | 1.2 | 0 | 1 | 1,002
1,980,454 | 2009-12-30T14:24:00.000 | 0 | 0 | 1 | 1 | python,windows-installer,mysql | 2,179,175 | 1 | false | 0 | 0 | did you use an egg?
if so, python might not be able to find it.
import os,sys
os.environ['PYTHON_EGG_CACHE'] = 'C:/temp'
sys.path.append('C:/path/to/MySQLdb.egg') | 1 | 0 | 0 | I'm trying to install the module MySQLdb on a Windows Vista 64 (AMD) machine.
I've installed Python in a different folder than the one suggested by the Python installer.
When I try to run the .exe MySQLdb installer, it can't find Python 2.5 and it halts the installation.
Is there any way to supply the installer with the correct Python location (even though the registry and path are right)? | Problem installing MySQLdb on windows - Can't find python | 0 | 0 | 0 | 431
1,980,479 | 2009-12-30T14:33:00.000 | 2 | 0 | 1 | 0 | python,multithreading,process,locking,multiprocessing | 1,980,508 | 3 | false | 0 | 0 | multiprocessing and threading packages have slightly different aims, though both are concurrency related. threading coordinates threads within one process, while multiprocessing provide thread-like interface for coordinating multiple processes.
If your application doesn't spawn new processes which require data synchronization, multiprocessing is a bit more heavy weight, and threading package should be better suited. | 3 | 15 | 0 | If a software project supports a version of Python that multiprocessing has been backported to, is there any reason to use threading.Lock over multiprocessing.Lock? Would a multiprocessing lock not be thread safe as well?
For that matter, is there a reason to use any synchronization primitives from threading that are also in multiprocessing? | Is there any reason to use threading.Lock over multiprocessing.Lock? | 0.132549 | 0 | 0 | 5,708 |
1,980,479 | 2009-12-30T14:33:00.000 | 20 | 0 | 1 | 0 | python,multithreading,process,locking,multiprocessing | 1,980,929 | 3 | true | 0 | 0 | The threading module's synchronization primitive are lighter and faster than multiprocessing, due to the lack of dealing with shared semaphores, etc. If you are using threads; use threading's locks. Processes should use multiprocessing's locks. | 3 | 15 | 0 | If a software project supports a version of Python that multiprocessing has been backported to, is there any reason to use threading.Lock over multiprocessing.Lock? Would a multiprocessing lock not be thread safe as well?
For that matter, is there a reason to use any synchronization primitives from threading that are also in multiprocessing? | Is there any reason to use threading.Lock over multiprocessing.Lock? | 1.2 | 0 | 0 | 5,708 |
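A small sketch of the rule of thumb above: threads within one process share a threading.Lock (a multiprocessing.Lock would play the same role between processes):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    """Increment the shared counter n times, holding the lock each time."""
    global counter
    for _ in range(n):
        with lock:           # makes the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock held around each increment, the final count is deterministic regardless of thread scheduling.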
1,980,479 | 2009-12-30T14:33:00.000 | 3 | 0 | 1 | 0 | python,multithreading,process,locking,multiprocessing | 1,980,503 | 3 | false | 0 | 0 | I would expect the multi-threading synchronization primitives to be quite faster as they can use shared memory area easily. But I suppose you will have to perform speed test to be sure of it. Also, you might have side-effects that are quite unwanted (and unspecified in the doc).
For example, a process-wise lock could very well block all threads of the process. And if it doesn't, releasing a lock might not wake up the threads of the process.
In short, if you want your code to work for sure, you should use the thread-synchronization primitives if you are using threads and the process-synchronization primitives if you are using processes. Otherwise, it might work on your platform only, or even just with your specific version of Python. | 3 | 15 | 0 | If a software project supports a version of Python that multiprocessing has been backported to, is there any reason to use threading.Lock over multiprocessing.Lock? Would a multiprocessing lock not be thread safe as well?
For that matter, is there a reason to use any synchronization primitives from threading that are also in multiprocessing? | Is there any reason to use threading.Lock over multiprocessing.Lock? | 0.197375 | 0 | 0 | 5,708 |
1,981,208 | 2009-12-30T16:59:00.000 | 1 | 1 | 1 | 0 | python,attributes | 1,981,378 | 8 | false | 0 | 0 | I think your friend has misplaced his frustration in the language; his real problem is a lack of debugging techniques. Teach him how to break down a program into small pieces and examine the output. Like a manual unit test, this way any inconsistency is found and any assumption is proven or discarded. | 5 | 4 | 0 | A friend was "burned" when starting to learn Python, and now sees the language as perhaps fatally flawed.
He was using a library and changed the value of an object's attribute (the class being in the library), but he used the wrong abbreviation for the attribute name. It took him "forever" to figure out what was wrong. His objection to Python is thus that it allows one to accidentally add attributes to an object.
Unit tests don't provide a solution to this. One doesn't write unit tests against an API being used. One may have a mock for the class, but the mock could have the same typo or incorrect assumption about the attribute name.
It's possible to use __setattr__() to guard against this, but (as far as I know), no one does.
The only thing I've been able to tell my friend is that after several years of writing Python code full-time, I don't recall ever being burned by this. What else can I tell him? | Protection from accidentally misnaming object attributes in Python? | 0.024995 | 0 | 0 | 521 |
1,981,208 | 2009-12-30T16:59:00.000 | 0 | 1 | 1 | 0 | python,attributes | 1,981,269 | 8 | false | 0 | 0 | I had a similar bad experience with Python when I first started ... took me 3 months to get over it. Having a tool which warns would be nice back then ... | 5 | 4 | 0 | A friend was "burned" when starting to learn Python, and now sees the language as perhaps fatally flawed.
He was using a library and changed the value of an object's attribute (the class being in the library), but he used the wrong abbreviation for the attribute name. It took him "forever" to figure out what was wrong. His objection to Python is thus that it allows one to accidentally add attributes to an object.
Unit tests don't provide a solution to this. One doesn't write unit tests against an API being used. One may have a mock for the class, but the mock could have the same typo or incorrect assumption about the attribute name.
It's possible to use __setattr__() to guard against this, but (as far as I know), no one does.
The only thing I've been able to tell my friend is that after several years of writing Python code full-time, I don't recall ever being burned by this. What else can I tell him? | Protection from accidentally misnaming object attributes in Python? | 0 | 0 | 0 | 521 |
1,981,208 | 2009-12-30T16:59:00.000 | 5 | 1 | 1 | 0 | python,attributes | 1,981,255 | 8 | false | 0 | 0 | If the possibility to make mistakes is enough for him to consider a language "fatally flawed", I don't think you can convince him otherwise. The more you can do with a language, the more you can do wrong with the language. It's a caveat of flexibility—but that's true for any language. | 5 | 4 | 0 | A friend was "burned" when starting to learn Python, and now sees the language as perhaps fatally flawed.
He was using a library and changed the value of an object's attribute (the class being in the library), but he used the wrong abbreviation for the attribute name. It took him "forever" to figure out what was wrong. His objection to Python is thus that it allows one to accidentally add attributes to an object.
Unit tests don't provide a solution to this. One doesn't write unit tests against an API being used. One may have a mock for the class, but the mock could have the same typo or incorrect assumption about the attribute name.
It's possible to use __setattr__() to guard against this, but (as far as I know), no one does.
The only thing I've been able to tell my friend is that after several years of writing Python code full-time, I don't recall ever being burned by this. What else can I tell him? | Protection from accidentally misnaming object attributes in Python? | 0.124353 | 0 | 0 | 521 |
1,981,208 | 2009-12-30T16:59:00.000 | 12 | 1 | 1 | 0 | python,attributes | 1,981,279 | 8 | true | 0 | 0 | "Changed the value of an object's attribute" can lead to problems. This is pretty well known; now you know it too. That doesn't indict the language. It simply means you've learned an important lesson in dynamic-language programming.
Unit testing absolutely discovers this. You are not forced to mock all library classes. Some folks say it's only a unit test when it's tested in complete isolation. This is silly. You have to trust the library modules -- it's a feature of your architecture. Rather than mock them, just use them. (It is important to write mocks for your own newly-developed libraries. It's also important to mock libraries that make expensive API calls.)
In most cases, you can (and should) test your classes with the real library modules. This will find the misspelled attribute name.
Also, now that you know that attributes are dynamic, it's really easy to verify that the attribute exists. How?
Use interactive Python to explore the classes before writing too much code.
Remember, Python is not Java and it's not C. You can execute Python interactively and determine immediately if you've spelled something wrong. Writing a lot of code without doing any interactive confirmation is -- simply -- the wrong way to use Python.
A little interactive exploration will find misspelled attribute names.
Finally -- for your own classes -- you can wrap updatable attributes as properties. This makes it easier to debug any misspelled attribute names. Again, you know to check for this. You can use interactive development to confirm the attribute names.
Fussing around with __setattr__ creates problems. In some cases, we actually need to add attributes to an object. Why? It's simpler than creating a whole subclass for one special case where we have to maintain more state information.
Other things you can say:
I was burned by a C program that absolutely could not be made to work because of ______. [Insert any known C-language problem you want here. No array bounds checking, for example] Does that make C fatally flawed?
I was burned by a DBA who changed a column name and all the SQL broke. It's painful to unit test all of it. Does that make the relational database fatally flawed?
I was burned by a sys admin who changed a directory's permissions and my application broke. It was nearly impossible to find. Does that make the OS fatally flawed?
I was burned by a COBOL program where someone changed the copybook, forgot to recompile the program, and we couldn't debug it because the source looked perfect. COBOL, however, actually is fatally flawed, so this isn't a good example. | 5 | 4 | 0 | A friend was "burned" when starting to learn Python, and now sees the language as perhaps fatally flawed.
He was using a library and changed the value of an object's attribute (the class being in the library), but he used the wrong abbreviation for the attribute name. It took him "forever" to figure out what was wrong. His objection to Python is thus that it allows one to accidentally add attributes to an object.
Unit tests don't provide a solution to this. One doesn't write unit tests against an API being used. One may have a mock for the class, but the mock could have the same typo or incorrect assumption about the attribute name.
It's possible to use __setattr__() to guard against this, but (as far as I know), no one does.
The only thing I've been able to tell my friend is that after several years of writing Python code full-time, I don't recall ever being burned by this. What else can I tell him? | Protection from accidentally misnaming object attributes in Python? | 1.2 | 0 | 0 | 521 |
1,981,208 | 2009-12-30T16:59:00.000 | 1 | 1 | 1 | 0 | python,attributes | 1,981,288 | 8 | false | 0 | 0 | He's effectively ruling out an entire class of programming languages -- dynamically-typed languages -- because of one hard lesson learned. He can use only statically-typed languages if he wishes and still have a very productive career as a programmer, but he is certainly going to have deep frustrations with them as well. Will he then conclude that they are fatally-flawed? | 5 | 4 | 0 | A friend was "burned" when starting to learn Python, and now sees the language as perhaps fatally flawed.
He was using a library and changed the value of an object's attribute (the class being in the library), but he used the wrong abbreviation for the attribute name. It took him "forever" to figure out what was wrong. His objection to Python is thus that it allows one to accidentally add attributes to an object.
Unit tests don't provide a solution to this. One doesn't write unit tests against an API being used. One may have a mock for the class, but the mock could have the same typo or incorrect assumption about the attribute name.
It's possible to use __setattr__() to guard against this, but (as far as I know), no one does.
The only thing I've been able to tell my friend is that after several years of writing Python code full-time, I don't recall ever being burned by this. What else can I tell him? | Protection from accidentally misnaming object attributes in Python? | 0.024995 | 0 | 0 | 521 |
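For your own classes, the guard the question alludes to is easy to build: __slots__ (or a custom __setattr__) makes assignment to a misspelled attribute name fail loudly instead of silently creating a new attribute. A minimal sketch:

```python
class Point:
    # __slots__ fixes the set of allowed attribute names; assigning to
    # anything else raises AttributeError instead of silently creating it.
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
p.x = 10  # fine
try:
    p.xx = 10  # typo -- would silently succeed on a normal class
except AttributeError as exc:
    print("caught:", exc)
```

Note this only protects your own classes; it does nothing for a third-party library class, which is the scenario in the question.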
1,982,277 | 2009-12-30T20:24:00.000 | 1 | 0 | 0 | 0 | python,django,spam-prevention | 1,983,179 | 2 | false | 1 | 0 | I can't think of any critically sensitive data that could be submitted by anonymous users. If the data is really sensitive (like you mentioned patient records), it is probably submitted by known and registered user so you should do manual approval of new users and protect the registration part from spammers. | 1 | 4 | 0 | I'm curious if anyone out there knows of something perhaps like Akismet, but where content doesn't have to go off to a 3rd party server. In a situation with critically sensitive data (patient records for instance) I wouldn't necessarily want that information sent off to another server I don't have control over. I really like Akismet, it works great for the most part. However, I need something more like a local instance of Akismet that's private, and able to be updated semi-regularly. Even better if it works with Python since I need this to interface with Django applications. Should I just go the route of SpamBayes? | Spam Filtering Forms Without Akismet | 0.099668 | 0 | 0 | 394 |
1,982,442 | 2009-12-30T20:56:00.000 | -3 | 0 | 0 | 0 | python,ldap,python-3.x | 1,982,479 | 4 | true | 0 | 0 | This answer is no longer accurate; see below for other answers.
Sorry to break this to you, but I don't think there is a python-ldap for Python 3 (yet)...
That's the reason why we should keep active development at Python 2.6 for now (as long as most crucial dependencies (libs) are not ported to 3.0). | 1 | 12 | 0 | I am porting some Java code to Python and we would like to use Python 3 but I can't find LDAP module for Python 3 in Windows.
This is forcing us to use the 2.6 version, and it is bothersome as the rest of the code is already in 3.0 format. | Does Python 3 have LDAP module? | 1.2 | 0 | 1 | 16,151
1,982,788 | 2009-12-30T22:06:00.000 | 2 | 1 | 0 | 1 | c++,python,linux,pexpect | 1,982,873 | 2 | true | 0 | 0 | You could just use "expect". It is very lightweight and is made to do what you're describing. | 1 | 0 | 0 | Is there any way of writing a pexpect-like small program which can launch a process and pass the password to that process?
I don't want to install and use the pexpect Python library; I want to know the logic behind it so that I can build something similar using Linux system APIs. | writing pexpect like program in c++ on Linux | 1.2 | 0 | 0 | 639
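The logic behind pexpect and expect is a pseudo-terminal: programs that prompt for a password talk to their controlling tty, not to plain pipes, so you allocate a pty, attach the child process to its slave end, and read/write the master end yourself. A sketch of that mechanism in Python on Linux (`cat` is just a stand-in for the real prompting program; the same openpty/fork/exec calls are available from C):

```python
import os
import pty
import select
import subprocess

# Allocate a pseudo-terminal pair; the child sees the slave end
# as a real terminal, which is what password prompts read from.
master, slave = pty.openpty()

# `cat` stands in for a program that talks to its terminal
# (a real ssh/sudo would print a prompt here instead).
child = subprocess.Popen(["cat"], stdin=slave, stdout=slave,
                         stderr=slave, close_fds=True)
os.close(slave)  # the parent only needs the master end

os.write(master, b"secret\n")  # what pexpect's sendline() boils down to

# Collect output until the text comes back (tty echo is on by default).
data = b""
deadline = 5.0
while b"secret" not in data and deadline > 0:
    ready, _, _ = select.select([master], [], [], 0.5)
    deadline -= 0.5
    if ready:
        data += os.read(master, 1024)

print(data)
child.terminate()
child.wait()
os.close(master)
```

In C the equivalent building blocks are openpty()/forkpty() from libutil plus read()/write() on the master descriptor.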
1,983,126 | 2009-12-30T23:42:00.000 | 1 | 0 | 1 | 0 | python,regex | 1,983,142 | 6 | false | 0 | 0 | The regex below captures everything between the $ characters non-greedily
\$(.*?)\$ | 1 | 2 | 0 | The problem:
I need to extract strings that are between $ characters from a block of text, but I'm a total n00b when it comes to regular expressions.
For instance from this text:
Li Europan lingues $es membres$ del sam familie. Lor $separat existentie es un$ myth.
i would like to get an array consisting of:
{'es membres', 'separat existentie es un'}
A little snippet in Python would be great. | Regex for getting content between $ chars from a text | 0.033321 | 0 | 0 | 1,555 |
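Putting the non-greedy pattern from the answer to work with re.findall:

```python
import re

text = ("Li Europan lingues $es membres$ del sam familie. "
        "Lor $separat existentie es un$ myth.")

# Non-greedy: match the shortest run between a pair of $ delimiters;
# the capture group keeps only the content, not the $ characters.
matches = re.findall(r"\$(.*?)\$", text)
print(matches)  # ['es membres', 'separat existentie es un']
```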
1,983,177 | 2009-12-30T23:55:00.000 | 5 | 1 | 1 | 0 | python,comments,memory-management,docstring | 1,983,193 | 5 | false | 0 | 0 | They are getting read from the file (when the file is compiled to pyc or when the pyc is loaded -- they must be available under object.__doc__) but no --> this will not significantly impact performance under any reasonable circumstances, or are you really writing multi-megabyte doc-strings? | 3 | 15 | 0 | Are Python docstrings and comments stored in memory when a module is loaded?
I've wondered if this is true, because I usually document my code well; may this affect memory usage?
Usually every Python object has a __doc__ attribute. Are those docstrings read from the file, or processed in some other way?
I've done searches here in the forums, Google and Mailing-Lists, but I haven't found any relevant information.
Do you know better? | Are Python docstrings and comments stored in memory when a module is loaded? | 0.197375 | 0 | 0 | 3,713 |
1,983,177 | 2009-12-30T23:55:00.000 | 12 | 1 | 1 | 0 | python,comments,memory-management,docstring | 1,983,203 | 5 | false | 0 | 0 | Yes the docstrings are read from the file, but that shouldn't stop you writing them. Never ever compromise readability of code for performance until you have done a performance test and found that the thing you are worried about is in fact the bottleneck in your program that is causing a problem. I would think that it is extremely unlikely that a docstring will cause any measurable performance impact in any real world situation. | 3 | 15 | 0 | Are Python docstrings and comments stored in memory when a module is loaded?
I've wondered if this is true, because I usually document my code well; may this affect memory usage?
Usually every Python object has a __doc__ attribute. Are those docstrings read from the file, or processed in some other way?
I've done searches here in the forums, Google and Mailing-Lists, but I haven't found any relevant information.
Do you know better? | Are Python docstrings and comments stored in memory when a module is loaded? | 1 | 0 | 0 | 3,713 |
1,983,177 | 2009-12-30T23:55:00.000 | 1 | 1 | 1 | 0 | python,comments,memory-management,docstring | 1,983,288 | 5 | false | 0 | 0 | Do Python docstrings and comments are
stored in memory when module is
loaded?
Docstrings are compiled into the .pyc file, and are loaded into memory. Comments are discarded during compilation and have no impact on anything except the insignificant extra time taken to ignore them during compilation (which happens once only after any change to a .py file, except for the main script which is re-compiled every time it is run).
Also note that these strings are preserved only if they are the first thing in the module, class definition, or function definition. You can include additional strings pretty much anywhere, but they will be discarded during compilation just as comments are. | 3 | 15 | 0 | Are Python docstrings and comments stored in memory when a module is loaded?
I've wondered if this is true, because I usually document my code well; may this affect memory usage?
Usually every Python object has a __doc__ attribute. Are those docstrings read from the file, or processed in some other way?
I've done searches here in the forums, Google and Mailing-Lists, but I haven't found any relevant information.
Do you know better? | Are Python docstrings and comments stored in memory when a module is loaded? | 0.039979 | 0 | 0 | 3,713 |
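Both claims are easy to check from the interpreter: the docstring survives compilation and is reachable through __doc__, while a comment leaves no trace on the function object (and running Python with -OO strips docstrings too, if the memory ever matters):

```python
def greet(name):
    """Return a greeting for *name*."""
    # This comment is discarded at compile time.
    return "Hello, " + name

# The docstring is stored in the compiled code and loaded with the module...
print(greet.__doc__)  # Return a greeting for *name*.

# ...while nothing of the comment survives to runtime: the function's
# constants hold the string literals it uses, but no comment text.
print(greet.__code__.co_consts)
```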
1,983,282 | 2009-12-31T00:21:00.000 | 1 | 0 | 0 | 0 | java,.net,python,portlet | 1,985,241 | 3 | false | 1 | 0 | Why would I want to use java portlets above tomcat and gwt?
These technologies are not directly comparable. Coming from regular web page development, Portlets seem like a very restrictive technology. But then the value of Portal servers is largely the control they give to administrators and users - the fact that this makes your life more difficult is irrelevant.
Would portlets make it less- or un- necessary for me to use jsp and jsf?
You can write directly to the output, just like you would in a Servlet. You probably still want a view technology (that will have to support portlets). | 1 | 2 | 0 | Why would I want to use java portlets above tomcat and gwt?
Would portlets make it less- or un- necessary for me to use jsp and jsf?
Has Jboss been part of the portlet evolution culture? Does Jboss satisfy the portlet jsrs?
What portlet implementation/brand would run on gae java and gae python?
Are portlet specs due to peer pressure from php cms culture?
What are the equivalent of portlet and portlet jsr in .net? | Please discuss what are and why use portlets | 0.066568 | 0 | 0 | 4,765 |
1,984,445 | 2009-12-31T07:52:00.000 | 0 | 0 | 0 | 1 | java,python,interface,interaction | 1,984,457 | 6 | false | 0 | 0 | Expose one of the two as a service of some kind, web service maybe. Another option is to port the python code to Jython | 2 | 3 | 0 | I have a Python application which I can't edit; it's a black box from my point of view. The Python application knows how to process text and return processed text.
I have another application written in Java which knows how to collect unprocessed texts.
Current state, the python app works in batch mode every x minutes.
I want to make the Python processing part of the flow: the Java app collects text and requests the Python app to process it and return the processed text.
What do you think is the simplest solution for this?
Thanks,
Rod | Interaction between Java App and Python App | 0 | 0 | 0 | 6,004 |
1,984,445 | 2009-12-31T07:52:00.000 | 0 | 0 | 0 | 1 | java,python,interface,interaction | 1,984,650 | 6 | false | 0 | 0 | An option is making the Python application work as a server, listening for requests via sockets (TCP). | 2 | 3 | 0 | I have a Python application which I can't edit; it's a black box from my point of view. The Python application knows how to process text and return processed text.
I have another application written in Java which knows how to collect unprocessed texts.
Current state, the python app works in batch mode every x minutes.
I want to make the Python processing part of the flow: the Java app collects text and requests the Python app to process it and return the processed text.
What do you think is the simplest solution for this?
Thanks,
Rod | Interaction between Java App and Python App | 0 | 0 | 0 | 6,004 |
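A sketch of the socket approach on the Python side; the Java app would simply open a TCP connection and exchange bytes. The upper() call stands in for the real text processing, and the one-connection design is purely illustrative:

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Accept one connection, process one chunk of text, send it back."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    actual_port = srv.getsockname()[1]  # port=0 lets the OS pick a free port

    def handler():
        conn, _ = srv.accept()
        with conn:
            raw = conn.recv(4096).decode("utf-8")
            processed = raw.upper()  # stand-in for the real processing
            conn.sendall(processed.encode("utf-8"))
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return actual_port

# Demo client: this is the role the Java app would play.
port = serve_once()
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"some unprocessed text")
reply = cli.recv(4096)
cli.close()
print(reply)  # b'SOME UNPROCESSED TEXT'
```

A real deployment would loop over accept(), frame messages (e.g. length-prefixed or newline-delimited), and handle partial reads, but the wire-level idea is the same.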
1,985,383 | 2009-12-31T13:23:00.000 | 2 | 0 | 0 | 0 | python,django,django-models | 1,985,415 | 8 | false | 1 | 0 | You need to drop your tables before you can recreate them with syncdb.
If you want to preserve your existing data, then you need to unload your database,
drop your tables, run syncdb to build a new database, then reload your old data into your new tables.
There are tools that help with this. However, in many cases, it's just as easy to do it manually. | 1 | 70 | 0 | I've already defined a model and created its associated database via manager.py syncdb. Now that I've added some fields to the model, I tried syncdb again, but no output appears. Upon trying to access these new fields from my templates, I get a "No Such Column" exception, leading me to believe that syncdb didn't actually update the database. What's the right command here? | update django database to reflect changes in existing models | 0.049958 | 0 | 0 | 85,597 |
1,986,060 | 2009-12-31T16:38:00.000 | 0 | 0 | 0 | 0 | python,django | 1,986,323 | 4 | false | 1 | 0 | __init__.py will be called every time the app is imported. So if you're using mod_wsgi with Apache for instance with the prefork method, then every new process created is effectively 'starting' the project thus importing __init__.py. It sounds like your best method would be to create a new management command, and then cron that up to run every so often if that's an option. Either that, or run that management command before starting the server. You could write up a quick script that runs that management command and then starts the server for instance. | 2 | 3 | 0 | I want to perform some one-time operations, such as starting a background thread and populating a cache every 30 minutes, as an initialization action when the Django server is started, so it will not block users from visiting the website. Where should I place all this code in Django?
Putting them into the settings.py file does not work. It seems it will cause a circular dependency.
Putting them into the __init__.py file does not work. The Django server calls it many times (what is the reason?) | Where should I place the one-time operation in the Django framework? | 0 | 0 | 0 | 781
1,986,060 | 2009-12-31T16:38:00.000 | 4 | 0 | 0 | 0 | python,django | 1,986,926 | 4 | false | 1 | 0 | We put one-time startup scripts in the top-level urls.py. This is often where your admin bindings go -- they're one-time startup, also.
Some folks like to put these things in settings.py but that seems to conflate settings (which don't do much) with the rest of the site's code (which does stuff). | 2 | 3 | 0 | I want to perform some one-time operations, such as starting a background thread and populating a cache every 30 minutes, as an initialization action when the Django server is started, so it will not block users from visiting the website. Where should I place all this code in Django?
Putting them into the settings.py file does not work. It seems it will cause a circular dependency.
Putting them into the __init__.py file does not work. The Django server calls it many times (what is the reason?) | Where should I place the one-time operation in the Django framework? | 0.197375 | 0 | 0 | 781
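Wherever the hook ends up (urls.py, __init__.py, ...), a module-level run-once guard keeps repeated imports within one process from re-running the startup work; a framework-free sketch (each prefork worker process still runs it once, as the first answer explains):

```python
import threading

_init_lock = threading.Lock()
_initialized = False
startup_runs = 0  # counter only for demonstration

def run_startup_once():
    """Idempotent startup hook: safe to call on every import or request."""
    global _initialized, startup_runs
    with _init_lock:
        if _initialized:
            return
        _initialized = True
        # ... start the background thread, warm the cache, etc. ...
        startup_runs += 1

# Simulate the hook being triggered by several imports/requests.
for _ in range(5):
    run_startup_once()
print(startup_runs)  # 1
```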
1,986,712 | 2009-12-31T19:10:00.000 | 14 | 1 | 1 | 0 | python,data-structures | 1,986,739 | 6 | false | 0 | 0 | For some simple data structures (eg. a stack), you can just use the builtin list to get your job done. With more complex structures (eg. a bloom filter), you'll have to implement them yourself using the primitives the language supports.
You should use the builtins if they serve your purpose really since they're debugged and optimised by a horde of people for a long time. Doing it from scratch by yourself will probably produce an inferior data structure.
If however, you need something that's not available as a primitive or if the primitive doesn't perform well enough, you'll have to implement your own type.
The details like pointer management etc. are just implementation talk and don't really limit the capabilities of the language itself. | 4 | 16 | 0 | All the books I've read on data structures so far seem to use C/C++, and make heavy use of the "manual" pointer control that they offer. Since Python hides that sort of memory management and garbage collection from the user is it even possible to implement efficient data structures in this language, and is there any reason to do so instead of using the built-ins? | Data Structures in Python | 1 | 0 | 0 | 22,283 |
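As an illustration of building a more complex structure from primitives, the bloom filter mentioned above needs nothing beyond hashing and bitwise operations on Python's arbitrary-precision int; a toy sketch (the size and hashing scheme are arbitrary choices):

```python
import hashlib

class BloomFilter:
    """Tiny bloom filter using a Python int as the bit array."""

    def __init__(self, size_bits=1024, hashes=3):
        self.size = size_bits
        self.hashes = hashes
        self.bits = 0  # arbitrary-precision int doubles as a bit array

    def _positions(self, item):
        # Derive several independent bit positions from one hash function.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # No false negatives; false positives possible by design.
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("spam")
bf.add("eggs")
print(bf.might_contain("spam"))   # True
print(bf.might_contain("bacon"))  # almost certainly False
```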
1,986,712 | 2009-12-31T19:10:00.000 | 2 | 1 | 1 | 0 | python,data-structures | 1,987,211 | 6 | false | 0 | 0 | With Python you have access to a vast assortment of library modules written and debugged by other people. Odds are very good that somewhere out there, there is a module that does at least part of what you want, and there is a decent chance that it is implemented in C for performance.
For example, if you need to do matrix math, you can use NumPy, which was written in C and Fortran.
Python is slow enough that you won't be happy if you try to write some sort of really compute-intensive code (example, a Fast Fourier Transform) in native Python. On the other hand, you can get a C-coded Fourier Transform as part of SciPy, and just use it.
I have never had a situation where I wanted to solve a problem in Python and said "darn, I just can't express the data structure I need."
If you are a pioneer, and you are doing something in Python for which there just isn't any library module out there, then you can try writing it in pure Python. If it is fast enough, you are done. If it is too slow, you can profile it, figure out where the slow parts are, and rewrite them in C using the Python C API. I have never needed to do this yet. | 4 | 16 | 0 | All the books I've read on data structures so far seem to use C/C++, and make heavy use of the "manual" pointer control that they offer. Since Python hides that sort of memory management and garbage collection from the user is it even possible to implement efficient data structures in this language, and is there any reason to do so instead of using the built-ins? | Data Structures in Python | 0.066568 | 0 | 0 | 22,283 |
1,986,712 | 2009-12-31T19:10:00.000 | 0 | 1 | 1 | 0 | python,data-structures | 1,986,761 | 6 | false | 0 | 0 | It's not possible to implement something like a C++ vector in Python, since you don't have array primitives the way C/C++ do. However, anything more complicated can be implemented (efficiently) on top of it, including, but not limited to: linked lists, hash tables, multisets, bloom filters, etc. | 4 | 16 | 0 | All the books I've read on data structures so far seem to use C/C++, and make heavy use of the "manual" pointer control that they offer. Since Python hides that sort of memory management and garbage collection from the user is it even possible to implement efficient data structures in this language, and is there any reason to do so instead of using the built-ins? | Data Structures in Python | 0 | 0 | 0 | 22,283 |
1,986,712 | 2009-12-31T19:10:00.000 | 10 | 1 | 1 | 0 | python,data-structures | 1,986,749 | 6 | false | 0 | 0 | C/C++ data structure books are only attempting to teach you the underlying principles behind the various structures - they are generally not advising you to actually go out and re-invent the wheel by building your own library of stacks and lists.
Whether you're using Python, C++, C#, Java, whatever, you should always look to the built in data structures first. They will generally be implemented using the same system primitives you would have to use doing it yourself, but with the advantage of having been tried and tested.
Only when the provided data structures do not allow you to accomplish what you need, and there isn't an alternative and reliable library available to you, should you be looking at building something from scratch (or extending what's provided). | 4 | 16 | 0 | All the books I've read on data structures so far seem to use C/C++, and make heavy use of the "manual" pointer control that they offer. Since Python hides that sort of memory management and garbage collection from the user is it even possible to implement efficient data structures in this language, and is there any reason to do so instead of using the built-ins? | Data Structures in Python | 1 | 0 | 0 | 22,283 |
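As an example of leaning on the built-ins first, a stack needs no custom structure at all: list.append and list.pop give amortized O(1) push/pop at the end, with no pointer bookkeeping.

```python
# push/pop/peek on the built-in list -- no custom class required.
stack = []
stack.append(1)       # push
stack.append(2)
stack.append(3)
top = stack[-1]       # peek
popped = stack.pop()  # pop
print(top, popped, stack)  # 3 3 [1, 2]
```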
1,989,251 | 2010-01-01T18:31:00.000 | 1 | 0 | 1 | 0 | python,file,list,memory-management,32bit-64bit | 1,989,320 | 9 | false | 0 | 0 | You might want to consider a different kind of structure: not a list, but figuring out how to do (your task) with a generator or a custom iterator. | 3 | 13 | 0 | If I have a list (or array, dictionary...) in Python that could exceed the available memory address space (32-bit Python), what are the options and their relative speeds? (other than not making a list that large)
The list could exceed the memory, but I have no way of knowing beforehand. Once it starts exceeding 75%, I would like to no longer keep the list in memory (or the new items, anyway); is there a way to convert to a file-based approach mid-stream?
What are the best (speed in and out) file storage options?
I just need to store a simple list of numbers: no need for random Nth-element access, just append/pop-type operations. | Alternatives to keeping large lists in memory (python) | 0.022219 | 0 | 0 | 21,695
1,989,251 | 2010-01-01T18:31:00.000 | 8 | 0 | 1 | 0 | python,file,list,memory-management,32bit-64bit | 1,989,278 | 9 | false | 0 | 0 | There are probably dozens of ways to store your list data in a file instead of in memory. How you choose to do it will depend entirely on what sort of operations you need to perform on the data. Do you need random access to the Nth element? Do you need to iterate over all elements? Will you be searching for elements that match certain criteria? What form do the list elements take? Will you only be inserting at the end of the list, or also in the middle? Is there metadata you can keep in memory with the bulk of the items on disk? And so on and so on.
One possibility is to structure your data relationally, and store it in a SQLite database. | 3 | 13 | 0 | If I have a list (or array, dictionary...) in Python that could exceed the available memory address space (32-bit Python), what are the options and their relative speeds? (other than not making a list that large)
The list could exceed the memory, but I have no way of knowing beforehand. Once it starts exceeding 75%, I would like to no longer keep the list in memory (or the new items, anyway); is there a way to convert to a file-based approach mid-stream?
What are the best (speed in and out) file storage options?
I just need to store a simple list of numbers: no need for random Nth-element access, just append/pop-type operations. | Alternatives to keeping large lists in memory (python) | 1 | 0 | 0 | 21,695
1,989,251 | 2010-01-01T18:31:00.000 | 6 | 0 | 1 | 0 | python,file,list,memory-management,32bit-64bit | 1,989,292 | 9 | false | 0 | 0 | The answer is very much "it depends".
What are you storing in the lists? Strings? Integers? Objects?
How often is the list written to compared with being read? Are items only appended on the end, or can entries be modified or inserted in the middle?
If you are only appending to the end then writing to a flat file may be the simplest thing that could possibly work.
If you are storing objects of variable size such as strings then maybe keep an in-memory index of the start of each string, so you can read it quickly.
If you want dictionary behaviour then look at the db modules - dbm, gdbm, bsddb, etc.
If you want random access writing then maybe a SQL database may be better.
Whatever you do, going to disk is going to be orders of magnitude slower than in-memory, but without knowing how the data is going to be used it is impossible to be more specific.
edit:
From your updated requirements I would go with a flat file and keep an in-memory buffer of the last N elements. | 3 | 13 | 0 | If I have a list (or array, dictionary...) in Python that could exceed the available memory address space (32-bit Python), what are the options and their relative speeds? (other than not making a list that large)
The list could exceed the memory, but I have no way of knowing beforehand. Once it starts exceeding 75%, I would like to no longer keep the list in memory (or the new items, anyway); is there a way to convert to a file-based approach mid-stream?
What are the best (speed in and out) file storage options?
I just need to store a simple list of numbers: no need for random Nth-element access, just append/pop-type operations. | Alternatives to keeping large lists in memory (python) | 1 | 0 | 0 | 21,695
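For the stated append/pop-only workload of plain numbers, a flat file of fixed-size binary records is about as fast as file storage gets: append writes at the end, pop seeks back, reads, and truncates. A sketch (the class name and the 8-byte double encoding are illustrative choices, not from the answers above):

```python
import os
import struct
import tempfile

class DiskNumberStack:
    """Append/pop a list of numbers stored as fixed-size records on disk."""

    RECORD = struct.Struct("<d")  # one 8-byte little-endian double per entry

    def __init__(self, path):
        self.f = open(path, "w+b")

    def append(self, value):
        self.f.seek(0, os.SEEK_END)
        self.f.write(self.RECORD.pack(value))

    def pop(self):
        end = self.f.seek(0, os.SEEK_END)
        if end < self.RECORD.size:
            raise IndexError("pop from empty stack")
        self.f.seek(end - self.RECORD.size)
        (value,) = self.RECORD.unpack(self.f.read(self.RECORD.size))
        self.f.truncate(end - self.RECORD.size)  # drop the popped record
        return value

path = os.path.join(tempfile.gettempdir(), "numbers.bin")
stack = DiskNumberStack(path)
for n in (1.0, 2.5, 7.0):
    stack.append(n)
print(stack.pop())  # 7.0
print(stack.pop())  # 2.5
```

To get the "spill over at 75% memory" behaviour, you could keep appending to an in-memory list and switch new writes to a structure like this once the threshold is hit, buffering the most recent N elements in memory as the edited answer suggests.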
1,990,502 | 2010-01-02T03:19:00.000 | 0 | 0 | 0 | 0 | python,django,login,signals | 1,991,497 | 7 | false | 1 | 0 | Rough idea - you could use middleware for this. This middleware could process requests and fire signal when relevant URL is requested. It could also process responses and fire signal when given action actually succeded. | 2 | 84 | 0 | In my Django app, I need to start running a few periodic background jobs when a user logs in and stop running them when the user logs out, so I am looking for an elegant way to
get notified of a user login/logout
query user login status
From my perspective, the ideal solution would be
a signal sent by each django.contrib.auth.views.login and ... views.logout
a method django.contrib.auth.models.User.is_logged_in(), analogous to ... User.is_active() or ... User.is_authenticated()
Django 1.1.1 does not have that and I am reluctant to patch the source and add it (not sure how to do that, anyway).
As a temporary solution, I have added an is_logged_in boolean field to the UserProfile model which is cleared by default, is set the first time the user hits the landing page (defined by LOGIN_REDIRECT_URL = '/') and is queried in subsequent requests. I added it to UserProfile, so I don't have to derive from and customize the builtin User model for that purpose only.
I don't like this solution. If the user explicitly clicks the logout button, I can clear the flag, but most of the time, users just leave the page or close the browser; clearing the flag in these cases does not seem straightforward to me. Besides (that's rather data model clarity nitpicking, though), is_logged_in does not belong in the UserProfile, but in the User model.
Can anyone think of alternate approaches? | Django: signal when user logs in? | 0 | 0 | 0 | 38,642
1,990,502 | 2010-01-02T03:19:00.000 | 1 | 0 | 0 | 0 | python,django,login,signals | 1,991,512 | 7 | false | 1 | 0 | The only reliable way (that also detects when the user has closed the browser) is to update some last_request field every time the user loads a page.
You could also have a periodic AJAX request that pings the server every x minutes if the user has a page open.
Then have a single background job that gets a list of recent users, create jobs for them, and clear the jobs for users not present in that list. | 2 | 84 | 0 | In my Django app, I need to start running a few periodic background jobs when a user logs in and stop running them when the user logs out, so I am looking for an elegant way to
get notified of a user login/logout
query user login status
From my perspective, the ideal solution would be
a signal sent by each django.contrib.auth.views.login and ... views.logout
a method django.contrib.auth.models.User.is_logged_in(), analogous to ... User.is_active() or ... User.is_authenticated()
Django 1.1.1 does not have that and I am reluctant to patch the source and add it (not sure how to do that, anyway).
As a temporary solution, I have added an is_logged_in boolean field to the UserProfile model which is cleared by default, is set the first time the user hits the landing page (defined by LOGIN_REDIRECT_URL = '/') and is queried in subsequent requests. I added it to UserProfile, so I don't have to derive from and customize the builtin User model for that purpose only.
I don't like this solution. If the user explicitly clicks the logout button, I can clear the flag, but most of the time, users just leave the page or close the browser; clearing the flag in these cases does not seem straightforward to me. Besides (that's rather data model clarity nitpicking, though), is_logged_in does not belong in the UserProfile, but in the User model.
Can anyone think of alternate approaches? | Django: signal when user logs in? | 0.028564 | 0 | 0 | 38,642
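The middleware/last_request idea from the answers can be sketched without any Django machinery: stamp each user on every request and treat them as logged in while the stamp is fresh (the 30-minute window below is an arbitrary choice):

```python
import time

SESSION_WINDOW = 30 * 60  # seconds; arbitrary "still logged in" window

last_seen = {}  # user id -> timestamp of their last request

def touch(user_id, now=None):
    """Call this from middleware on every authenticated request."""
    last_seen[user_id] = time.time() if now is None else now

def is_logged_in(user_id, now=None):
    """A user counts as logged in if seen within the window."""
    now = time.time() if now is None else now
    ts = last_seen.get(user_id)
    return ts is not None and now - ts < SESSION_WINDOW

touch("alice", now=1000.0)
print(is_logged_in("alice", now=1000.0 + 60))    # True
print(is_logged_in("alice", now=1000.0 + 3600))  # False: window expired
print(is_logged_in("bob", now=1000.0))           # False: never seen
```

This sidesteps the browser-close problem entirely: logout is just the absence of recent requests. The background-job scheduler can periodically scan last_seen, start jobs for fresh users, and stop jobs for stale ones.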
1,991,065 | 2010-01-02T09:14:00.000 | 10 | 1 | 0 | 0 | c++,python,c,perl | 1,991,076 | 4 | true | 1 | 0 | All of these languages can do basically any task any other one of them can do, as they are all Turing complete.
PHP works as a server-side scripting language, but you can also use Perl, Python, Ruby, Haskell, Lisp, Java, C, C++, assembly, or pretty much any other language that can access standard input and standard output for CGI communication with web content.
PHP is widely used because a) it's easy to learn a little and go, and b) the rather tedious CGI protocols are skipped, as the language handles them for you, so you can just plug your PHP script into an HTML page and not have to know how your program reads the information at all. This makes web programming easier for PHP, but the PHP interpreter is written in C, which does all the heavy lifting, so logically if PHP can do server-side scripting, so can C. Since most other languages are written in C, they too can do server-side scripting. (And since C compiles down to assembly, assembly can do it too, and so can any language that compiles down to assembly. Which is all of them not already covered.) | 1 | 0 | 0 | Please bear with me experts i'm a newbie in web dev.
With html,css can take care of webpages..
javascript,ajax for some dynamic content..
php for server side scripting,accessing databases,sending emails,doing all other stuf...
What role do these programming languages play?
Can they do any other important task which cannot be done by PHP? | Role of C,C++,python,perl in Web development | 1.2 | 0 | 0 | 719 |
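The CGI mechanism described in the answer (the server hands the request to a program, which writes headers, a blank line, and a body to standard output) can be sketched in a few lines of Python. The render_response helper and its HTML are made up for illustration; any language that can read environment variables and write to stdout could do the same.

```python
import os

def render_response(environ):
    """Build a minimal CGI-style response string from request metadata."""
    method = environ.get("REQUEST_METHOD", "GET")
    query = environ.get("QUERY_STRING", "")
    body = f"<html><body>Method: {method}, Query: {query}</body></html>"
    # A CGI script must emit headers, then a blank line, then the body.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    # A real CGI program would write this to stdout for the web server to relay.
    print(render_response(os.environ))
```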
1,991,743 | 2010-01-02T14:29:00.000 | 2 | 1 | 0 | 0 | python,django,apache,mod-wsgi | 1,991,778 | 1 | true | 1 | 0 | I think the problem is related to the permissions of that file. Check that the user running WSGI (the Apache user, usually) is capable of reading and writing everything in the libs folder and especially capable of reading the file pywapi.py. | 1 | 1 | 0 | I have a simple setup with my Python libraries in /domains/somedomain.com/libs/ and all my tests run fine. I start WSGI with DJANGO_SETTINGS_MODULE set to "somedomain.settings", where somedomain is a package in libs/
Suddenly, when adding pywapi.py into libs/, I can't import it when hitting the site. But if I add 'import pywapi' to my WSGI script, it fails when hit by Apache, but succeeds if I just run it. The WSGI script itself is actually adding libs/ to the path, so I know it should be there when running. The path is absolute, too, so any change in CWD shouldn't be causing this.
I can't think of anything else and I've been tinkering with it half of my otherwise productive morning. | How does Django + mod_wsgi affect the python path? | 1.2 | 0 | 0 | 927 |
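For reference, the usual way a WSGI script makes a private libs/ directory importable is to insert its absolute path at the front of sys.path before importing anything from it. The add_lib_dir helper below is a hypothetical sketch; the /domains/somedomain.com/libs path comes from the question, and whether this fixes the failure still depends on the file permissions the answer points at.

```python
import sys

def add_lib_dir(path, syspath=None):
    """Prepend a library directory to the import path exactly once."""
    syspath = sys.path if syspath is None else syspath
    if path not in syspath:
        syspath.insert(0, path)
    return syspath

# In the WSGI script itself this would be:
# add_lib_dir('/domains/somedomain.com/libs')
```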
1,993,060 | 2010-01-02T22:14:00.000 | 0 | 1 | 0 | 0 | python,authentication,file-upload,automation | 1,993,139 | 3 | false | 0 | 0 | You mention they do not offer FTP, but I went to their site and found the following:
How to upload with FTP?
Host: ftp.hotfile.com
User: your hotfile username
Pass: your hotfile password
You can upload and make folders, but can't rename or move files.
Try it. If it works, using FTP from within Python will be a very simple task. | 1 | 2 | 0 | I want to upload a file from my computer to a file hoster like hotfile.com via a Python script, because Hotfile only offers a web-based upload service (no FTP).
I need Python first to log in with my username and password and after that to upload the file. When the file transfer is over, I need the Download and Delete links (which are generated right after the upload has finished).
Is this even possible? If so, can anybody tell me what the script would look like, or even give me hints on how to build it?
Thanks | Upload file to a website via Python script | 0 | 0 | 1 | 7,457 |
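If the FTP access quoted in the answer really works, the upload side is a short ftplib script. The hostname comes from the quoted instructions; the credentials and the upload_file helper are placeholders, and the service itself may no longer exist, so treat this as a sketch rather than a working integration.

```python
import os
from ftplib import FTP

def upload_file(host, user, password, local_path):
    """Upload local_path to the server's current directory via FTP."""
    remote_name = os.path.basename(local_path)
    ftp = FTP(host)
    try:
        ftp.login(user, password)
        with open(local_path, "rb") as fh:
            ftp.storbinary(f"STOR {remote_name}", fh)
    finally:
        ftp.quit()
    return remote_name

# Hypothetical usage (credentials are placeholders):
# upload_file("ftp.hotfile.com", "myuser", "mypassword", "/tmp/archive.zip")
```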
1,993,079 | 2010-01-02T22:19:00.000 | 4 | 0 | 1 | 0 | python,multithreading | 1,993,105 | 11 | false | 0 | 0 | I've run into this situation before. Just make a pool of Tasks, and spawn a fixed number of threads that run an endless loop which grabs a Task from the pool, runs it, and repeats. Essentially you're implementing your own thread abstraction and using the OS threads to implement it.
This does have drawbacks, the major one being that if your Tasks block for long periods of time they can prevent the execution of other Tasks. But it does let you create an unbounded number of Tasks, limited only by memory. | 6 | 3 | 0 | The scenario: We have a Python script that checks thousands of proxies simultaneously.
The program uses threads, one per proxy, to speed up the process. When it reaches the 1007th thread, the script crashes because of the thread limit.
My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
What will your solution be, friends?
Thanks for the answers. | A programming strategy to bypass the os thread limit? | 0.072599 | 0 | 0 | 2,695 |
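The fixed-pool idea from the answer (N worker threads looping over a shared task queue) can be written with only the standard library. The check_proxy stub and the pool size of 4 are illustrative stand-ins for the real proxy check.

```python
import queue
import threading

def run_pool(tasks, worker_fn, num_workers=4):
    """Process all tasks with a fixed number of threads instead of one per task."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task = q.get_nowait()
            except queue.Empty:
                return  # queue drained, thread exits
            result = worker_fn(task)
            with lock:  # protect the shared results list
                results.append(result)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

def check_proxy(proxy):  # stand-in for the real proxy check
    return (proxy, "ok")
```

With this structure, thousands of proxies never need more than num_workers OS threads, so the 1007-thread limit is never approached.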
1,993,079 | 2010-01-02T22:19:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 1,993,107 | 11 | false | 0 | 0 | Be careful to minimize the default thread stack size. At least on Linux, the default limit puts severe restrictions on the number of created threads. Linux allocates a chunk of the process virtual address space to the stack (usually 10MB). 300 threads x 10MB stack allocation = 3GB of virtual address space dedicated to stack, and on a 32-bit system you have a 3GB limit. You can probably get away with much less. | 6 | 3 | 0 | The scenario: We have a Python script that checks thousands of proxies simultaneously.
The program uses threads, one per proxy, to speed up the process. When it reaches the 1007th thread, the script crashes because of the thread limit.
My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
What will your solution be, friends?
Thanks for the answers. | A programming strategy to bypass the os thread limit? | 0.01818 | 0 | 0 | 2,695 |
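The stack-size point can be demonstrated with threading.stack_size, which sets the stack allocation for threads created afterwards. The 256 KiB figure below is an arbitrary choice; some platforms reject values below a minimum (commonly 32 KiB), so shrinking further needs care.

```python
import threading

def spawn_with_small_stack(fn, stack_bytes=256 * 1024):
    """Run fn in a thread created with a reduced stack allocation."""
    old = threading.stack_size(stack_bytes)  # applies to threads started after this
    try:
        result = []
        t = threading.Thread(target=lambda: result.append(fn()))
        t.start()
        t.join()
    finally:
        threading.stack_size(old)  # restore the previous setting
    return result[0]
```

Smaller per-thread stacks mean the same virtual address space budget accommodates many more threads, which is exactly the constraint the answer describes.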
1,993,079 | 2010-01-02T22:19:00.000 | 1 | 0 | 1 | 0 | python,multithreading | 1,993,311 | 11 | false | 0 | 0 | As mentioned in another thread, why do you spawn off a new thread for each single operation? This is a classical producer - consumer problem, isn't it? Depending a bit on how you look at it, the proxy checkers might be comsumers or producers.
Anyway, the solution is to make a "queue" of "tasks" to process, and have the threads check in a loop whether there are any more tasks to perform in the queue; if there aren't, wait a predefined interval and check again.
You should protect your queue with some locking mechanism, e.g. semaphores, to prevent race conditions.
It's really not that difficult, but it requires a bit of thinking to get it right. Good luck! | 6 | 3 | 0 | The scenario: We have a Python script that checks thousands of proxies simultaneously.
The program uses threads, 1 per proxy, to speed the process. When it reaches the 1007 thread, the script crashes because of the thread limit.
My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
What will your solution be, friends?
Thanks for the answers. | A programming strategy to bypass the os thread limit? | 0.01818 | 0 | 0 | 2,695 |
1,993,079 | 2010-01-02T22:19:00.000 | 2 | 0 | 1 | 0 | python,multithreading | 1,993,114 | 11 | false | 0 | 0 | My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
The standard way is to have each thread get the next task in a loop instead of dying after processing just one. This way you don't have to keep track of the number of threads, since you just fire a fixed number of them. As a bonus, you save on thread creation/destruction. | 6 | 3 | 0 | The scenario: We have a Python script that checks thousands of proxies simultaneously.
The program uses threads, one per proxy, to speed up the process. When it reaches the 1007th thread, the script crashes because of the thread limit.
My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
What will your solution be, friends?
Thanks for the answers. | A programming strategy to bypass the os thread limit? | 0.036348 | 0 | 0 | 2,695 |
1,993,079 | 2010-01-02T22:19:00.000 | 2 | 0 | 1 | 0 | python,multithreading | 1,993,101 | 11 | false | 0 | 0 | Using different processes, and pipes to transfer data. Using threads in Python is pretty lame. From what I heard, they don't actually run in parallel, even if you have a multi-core processor... But maybe that was fixed in Python 3. | 6 | 3 | 0 | The scenario: We have a Python script that checks thousands of proxies simultaneously.
The program uses threads, one per proxy, to speed up the process. When it reaches the 1007th thread, the script crashes because of the thread limit.
My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
What will your solution be, friends?
Thanks for the answers. | A programming strategy to bypass the os thread limit? | 0.036348 | 0 | 0 | 2,695 |
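The processes-and-pipes suggestion can be illustrated with the multiprocessing module (available since Python 2.6). This sketch assumes a Unix-like fork start method; the proxy-check logic is a stand-in for the real work.

```python
from multiprocessing import Pipe, Process

def check_proxies(proxies, conn):
    """Child process: check each proxy and send the results back through the pipe."""
    results = [(p, "ok") for p in proxies]  # stand-in for real proxy checks
    conn.send(results)
    conn.close()

def run_in_child(proxies):
    parent_conn, child_conn = Pipe()
    p = Process(target=check_proxies, args=(proxies, child_conn))
    p.start()
    results = parent_conn.recv()  # blocks until the child sends its batch
    p.join()
    return results
```

Because the work happens in a separate process, the checks run truly in parallel on multiple cores, sidestepping the GIL concern the answer raises.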
1,993,079 | 2010-01-02T22:19:00.000 | 3 | 0 | 1 | 0 | python,multithreading | 1,993,093 | 11 | false | 0 | 0 | Does Python have any sort of asynchronous IO functionality? That would be the preferred answer IMO; spawning an extra thread for each outbound connection isn't as neat as having a single thread which is effectively event-driven. | 6 | 3 | 0 | The scenario: We have a Python script that checks thousands of proxies simultaneously.
The program uses threads, one per proxy, to speed up the process. When it reaches the 1007th thread, the script crashes because of the thread limit.
My solution is: A global variable that gets incremented when a thread spawns and decrements when a thread finishes. The function which spawns the threads monitors the variable so that the limit is not reached.
What will your solution be, friends?
Thanks for the answers. | A programming strategy to bypass the os thread limit? | 0.054491 | 0 | 0 | 2,695 |
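For what it's worth, modern Python's answer to the asynchronous-IO question is asyncio (added in 3.4, long after this thread): a single thread can multiplex thousands of checks. Everything below, including the fake check_proxy coroutine and the concurrency limit, is illustrative.

```python
import asyncio

async def check_proxy(proxy):
    await asyncio.sleep(0)  # stands in for a real non-blocking network probe
    return (proxy, "ok")

async def check_all(proxies, limit=100):
    sem = asyncio.Semaphore(limit)  # cap the number of in-flight checks

    async def bounded(p):
        async with sem:
            return await check_proxy(p)

    # gather preserves input order in its results
    return await asyncio.gather(*(bounded(p) for p in proxies))
```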
1,993,238 | 2010-01-02T23:21:00.000 | 2 | 0 | 1 | 0 | python,function,recover | 1,996,024 | 1 | true | 0 | 0 | Er, trying to "recover" from a segfault or access violation is quite dangerous. There is a reason you get these in the first place, and it's that your program has tried to do something which it shouldn't have tried to do; therefore it has hit a bug or an unforeseen condition.
There is no provision in the Python interpreter to roll back to a "safe point" in cases such as those mentioned. Even finalizing and reinitializing the interpreter might still leave some static data in an inconsistent state.
If you told us why you are trying to do this we might be able to suggest an alternative. | 1 | 1 | 0 | I have a C++ app that embeds the Python interpreter. There are points in the code where the interpreter may get interrupted and I need to make sure the interpreter is in a 'safe' state to execute new code. I would just call Py_Finalize and re-initialize everything except I have a bunch of PyObject * references that I need to stay valid. Is there a function to do this or is it even necessary? When I mentioned the interpreter being interrupted above, I meant by a seg. fault or access violation which my app tries to recover from. | How to reset Python interpreter to a 'safe' state? | 1.2 | 0 | 0 | 1,091 |
1,994,355 | 2010-01-03T08:45:00.000 | 11 | 0 | 1 | 1 | python,perl,parsing | 1,994,373 | 9 | false | 0 | 0 | In the end, it really depends on how much semantic structure you want to identify, whether your logs fit common patterns, and what you want to do with the parsed data.
If you can use regular expressions to find what you need, you have tons of options. Perl is a popular language and has very convenient native RE facilities. I personally feel a lot more comfortable with Python and find that the little added hassle for doing REs is not significant.
If you want to do something smarter than RE matching, or want to have a lot of logic, you may be more comfortable with Python or even with Java/C++/etc. For instance, it is easy to read line-by-line in Python and then apply various predicate functions and reactions to matches, which is great if you have a ruleset you would like to apply. | 2 | 16 | 0 | I use grep to parse through my trading apps logs, but it's limited in the sense that I need to visually trawl through the output to see what happened etc.
I'm wondering if Perl is a better option? Any good resources to learn log and string parsing with Perl?
I'd also believe that Python would be good for this. Perl vs Python vs 'grep on linux'? | What's the best tool to parse log files? | 1 | 0 | 0 | 32,524 |
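The predicate-and-reaction pattern from the first answer takes only a few lines of Python: compile the patterns once, stream the log line by line, and dispatch on matches. The sample rules and log lines for a hypothetical trading log are invented.

```python
import re

def parse_log(lines, rules):
    """Apply (compiled_regex, handler) rules to each line; collect handler output."""
    hits = []
    for line in lines:
        for pattern, handler in rules:
            m = pattern.search(line)
            if m:
                hits.append(handler(m))
    return hits

# Invented rules for a hypothetical trading-app log:
rules = [
    (re.compile(r"ERROR\s+(?P<msg>.+)"), lambda m: ("error", m.group("msg"))),
    (re.compile(r"FILL\s+(?P<qty>\d+)@(?P<px>[\d.]+)"),
     lambda m: ("fill", int(m.group("qty")), float(m.group("px")))),
]
```

Unlike a plain grep, each rule can carry real logic in its handler, which is the "more programming power" the answers are pointing at.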
1,994,355 | 2010-01-03T08:45:00.000 | 1 | 0 | 1 | 1 | python,perl,parsing | 1,995,141 | 9 | false | 0 | 0 | On Linux, you can use just the shell (bash, ksh, etc.) to parse log files if they are not too big. The other tools to go for are usually grep and awk; for more programming power, awk is usually used, so if you have big files to parse, try awk.
Of course, Perl, Python, or practically any other language with file-reading and string-manipulation capabilities can be used as well.
I'm wondering if Perl is a better option? Any good resources to learn log and string parsing with Perl?
I'd also believe that Python would be good for this. Perl vs Python vs 'grep on linux'? | What's the best tool to parse log files? | 0.022219 | 0 | 0 | 32,524 |