Q_Id (int64: 2.93k to 49.7M) | CreationDate (stringlengths: 23) | Users Score (int64: -10 to 437) | Other (int64: 0 to 1) | Python Basics and Environment (int64: 0 to 1) | System Administration and DevOps (int64: 0 to 1) | DISCREPANCY (int64: 0 to 1) | Tags (stringlengths: 6 to 90) | ERRORS (int64: 0 to 1) | A_Id (int64: 2.98k to 72.5M) | API_CHANGE (int64: 0 to 1) | AnswerCount (int64: 1 to 42) | REVIEW (int64: 0 to 1) | is_accepted (bool: 2 classes) | Web Development (int64: 0 to 1) | GUI and Desktop Applications (int64: 0 to 1) | Answer (stringlengths: 15 to 5.1k) | Available Count (int64: 1 to 17) | Q_Score (int64: 0 to 3.67k) | Data Science and Machine Learning (int64: 0 to 1) | DOCUMENTATION (int64: 0 to 1) | Question (stringlengths: 25 to 6.53k) | Title (stringlengths: 11 to 148) | CONCEPTUAL (int64: 0 to 1) | Score (float64: -1 to 1.2) | API_USAGE (int64: 1 to 1) | Database and SQL (int64: 0 to 1) | Networking and APIs (int64: 0 to 1) | ViewCount (int64: 15 to 3.72M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11,515,944 | 2012-07-17T04:17:00.000 | 9 | 0 | 1 | 0 | 0 | python,multithreading,multiprocessing | 0 | 48,592,969 | 0 | 7 | 0 | false | 0 | 0 | With "from queue import Queue" you get a thread-safe queue, not one that can be shared between processes; for inter-process communication, multiprocessing should be used. Therefore, it should look like "from multiprocessing import Queue" | 1 | 136 | 0 | 0 | I'm having much trouble trying to understand just how the multiprocessing queue works in Python and how to implement it. Let's say I have two Python modules that access data from a shared file; call these two modules a writer and a reader. My plan is to have both the reader and the writer put requests into two separate multiprocessing queues, and then have a third process pop these requests in a loop and execute them.
My main problem is that I really don't know how to implement multiprocessing.Queue correctly: you can't just instantiate the object in each process, since that would create separate queues. How do you make sure that all processes work with a shared queue (or, in this case, queues)? | How to use multiprocessing queue in Python? | 0 | 1 | 1 | 0 | 0 | 221,619 |
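A minimal sketch of the pattern the answer above implies: create the Queue once in the parent process and hand it to each child as an argument, so every process shares the same queue instead of instantiating its own.

```python
from multiprocessing import Process, Queue

def writer(q):
    # Children receive the queue created in the parent; multiprocessing
    # shares it, so all workers see the same underlying queue.
    for i in range(5):
        q.put("request %d" % i)
    q.put(None)  # sentinel meaning "no more work"

def reader(q):
    while True:
        item = q.get()  # blocks until a request is available
        if item is None:
            break
        print("handling", item)

if __name__ == "__main__":
    q = Queue()  # create ONE queue in the parent...
    p1 = Process(target=writer, args=(q,))  # ...and pass it to every child
    p2 = Process(target=reader, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```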
11,517,743 | 2012-07-17T07:21:00.000 | 1 | 0 | 0 | 1 | 0 | python,process,applescript | 0 | 11,518,020 | 0 | 2 | 0 | false | 0 | 0 | I realized GUI AppleScript was the answer in this scenario. With it I could tell the PROCESS to get every window, and that worked. However, I'm leaving this up because I'd like to know other ways; I'm sure this GUI workaround won't work for everything. | 1 | 0 | 0 | 0 | I'd like to know how to have a program wait for another program to finish a task. I'm not sure what I'd search for for that...
Also, I'm using a mac.
I'd like to use Python or perhaps even AppleScript (I could just run the Python via osascript if the solution is for AppleScript anyway)
Basically this program "MPEGstreamclip" converts videos, and it opens what appears to be 2 new windows while it's converting. One window is a conversion progress bar, and the other window is a preview of the conversion. (Not sure if these actually count as windows)
(Also, MPEGstreamclip does not have an AppleScript dictionary, so as far as I know, it can't watch for the existence of certain window names)
But basically I want my program to listen for when MPEGstreamclip is done, and then run its tasks.
If it helps, when the conversion is done, the mpegstreamclip icon in the dock bounces once. I'm not sure what that means but I'd think you could use that to trigger something couldn't you?
Thanks! | Waiting for a program to finish its task | 1 | 0.099668 | 1 | 0 | 0 | 220 |
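A hedged sketch of the GUI-scripting idea from the accepted answer: poll the process's window count via System Events (invoked from Python with osascript) and proceed once the extra progress/preview windows disappear. The process name "MPEG Streamclip" is an assumption; adjust it to whatever System Events reports, and note GUI scripting (assistive access) must be enabled.

```python
import subprocess
import time

# AppleScript asking System Events how many windows the process has.
COUNT_WINDOWS = '''
tell application "System Events"
    count windows of process "MPEG Streamclip"
end tell
'''

def window_count():
    out = subprocess.check_output(["osascript", "-e", COUNT_WINDOWS])
    return int(out.decode().strip())

# Wait until the converter drops back to a single window (no progress bar).
while window_count() > 1:
    time.sleep(2)
print("Conversion appears to be finished; running follow-up tasks.")
```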
11,528,327 | 2012-07-17T18:07:00.000 | 1 | 0 | 1 | 1 | 0 | python,virtualenv | 0 | 11,528,399 | 0 | 3 | 0 | false | 0 | 0 | You could try pip install -U python from within the virtual environment, though I'm not sure what it would break.
You could also change the symlinks that point to the old Python, but not sure what side effects that would have.
I would recommend the safest path, that is to first pip freeze > installed.txt and then recreate your virtualenv with your new Python and pip install -r installed.txt in it. | 1 | 2 | 0 | 0 | I had installed virtualenv on the system with python2.6.
I upgraded the system python to 2.7, but the virtualenv still has affinity for python2.6.
I tried easy_install --upgrade virtualenv, but that didn't change anything.
Does anyone know how to update the system installed virtualenv to use the new python2.7 on the system? | How to upgrade virtualenv to use a new system python? | 0 | 0.066568 | 1 | 0 | 0 | 2,650 |
11,530,196 | 2012-07-17T20:16:00.000 | 2 | 0 | 0 | 0 | 0 | python,sqlalchemy,flask-sqlalchemy | 0 | 68,064,416 | 0 | 10 | 0 | false | 1 | 0 | result = ModelName.query.add_columns(ModelName.colname1, ModelName.colname2) | 1 | 183 | 0 | 0 | How do I specify the columns that I want in my query using a model (it selects all columns by default)? I know how to do this with the SQLAlchemy session: session.query(self.col1), but how do I do it with models? I can't do SomeModel.query(). Is there a way? | Flask SQLAlchemy query, specify column names | 0 | 0.039979 | 1 | 1 | 0 | 191,024 |
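Alongside add_columns, SQLAlchemy's query API also offers with_entities for selecting specific columns through a model. A minimal sketch, where the User model and its columns are placeholders:

```python
# Assuming a typical Flask-SQLAlchemy setup with a User model.
# with_entities() restricts the SELECT to the listed columns and
# returns plain tuples instead of full model instances.
rows = User.query.with_entities(User.id, User.name).all()

# Equivalent via the session, as in plain SQLAlchemy:
rows = db.session.query(User.id, User.name).filter_by(active=True).all()
```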
11,545,759 | 2012-07-18T16:11:00.000 | 10 | 1 | 1 | 0 | 0 | python,django,testing,tdd,bootstrapping | 0 | 11,546,149 | 0 | 2 | 0 | false | 1 | 0 | Yes. One of the examples Kent Beck works through in his book "Test Driven Development: By Example" is a test runner. | 2 | 2 | 0 | 0 | I am currently writing a new test runner for Django and I'd like to know if it's possible to TDD my test runner using my own test runner. Kinda like compiler bootstrapping where a compiler compiles itself.
Assuming it's possible, how can it be done? | Is it possible to TDD when writing a test runner? | 1 | 1 | 1 | 0 | 0 | 123 |
11,545,759 | 2012-07-18T16:11:00.000 | 4 | 1 | 1 | 0 | 0 | python,django,testing,tdd,bootstrapping | 0 | 11,546,191 | 0 | 2 | 0 | true | 1 | 0 | Bootstrapping is a cool technique, but it does have a circular-definition problem. How can you write tests with a framework that doesn't exist yet?
Bootstrapping compilers can get around this problem in several ways, but it's my understanding that usually the first implementation isn't bootstrapped. Later bootstraps would be rewrites that then use the original compiler to compile themselves.
So use an existing framework to write it the first time out. Then, once you have a stable release, you can re-write the tests using your own test-runner. | 2 | 2 | 0 | 0 | I am currently writing a new test runner for Django and I'd like to know if it's possible to TDD my test runner using my own test runner. Kinda like compiler bootstrapping where a compiler compiles itself.
Assuming it's possible, how can it be done? | Is it possible to TDD when writing a test runner? | 1 | 1.2 | 1 | 0 | 0 | 123 |
11,567,357 | 2012-07-19T18:48:00.000 | 3 | 0 | 0 | 0 | 0 | python,mysql | 0 | 11,567,806 | 0 | 2 | 0 | true | 0 | 0 | A quite simple and straightforward solution is just to poll the latest auto-increment id from your table and compare it with what you saw at the previous poll. If it is greater -- you have new data. This is called 'active polling'; it's simple to implement and will suffice if you don't do it too often. You have to store the last id value somewhere in your GUI. Note that this stored value will reset when you restart your GUI application -- be sure to think about what to do at the start of the GUI. Probably you will only need to track insertions that occur while the GUI is running -- then, at GUI startup, you just poll and store the current id value, and afterwards poll periodically and react to changes. | 1 | 1 | 0 | 0 | I am creating a GUI that is dependent on information from a MySQL table; what I want to be able to do is display a message every time the table is updated with new data. I am not sure how to do this or even if it is possible. I have code that retrieves the newest MySQL update, but I don't know how to show a message every time new data comes into the table. Thanks! | Scanning MySQL table for updates Python | 0 | 1.2 | 1 | 1 | 0 | 1,004 |
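A minimal sketch of that active-polling loop, assuming the MySQLdb driver (any DB-API 2.0 driver works the same way) and an AUTO_INCREMENT column named id on a table named messages -- all placeholder names:

```python
import time
import MySQLdb  # any DB-API 2.0 driver exposes the same pattern

conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="chat")
last_seen = None

while True:
    cur = conn.cursor()
    cur.execute("SELECT MAX(id) FROM messages")
    (latest,) = cur.fetchone()
    cur.close()
    if last_seen is None:
        last_seen = latest            # first poll: just record the current id
    elif latest is not None and latest > last_seen:
        print("New rows arrived!")    # here the GUI would show its message
        last_seen = latest
    time.sleep(5)                     # poll interval; tune to taste
```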
11,575,398 | 2012-07-20T08:11:00.000 | 10 | 1 | 1 | 0 | 0 | python,django,git,version-control | 0 | 11,575,518 | 0 | 17 | 1 | false | 1 | 0 | I suggest using configuration files for that and to not version them.
You can however version examples of the files.
I don't see any problem with sharing development settings; by definition they should contain no valuable data. | 6 | 140 | 0 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | How can I save my secret keys and password securely in my version control system? | 1 | 1 | 1 | 0 | 0 | 36,273 |
11,575,398 | 2012-07-20T08:11:00.000 | 4 | 1 | 1 | 0 | 0 | python,django,git,version-control | 0 | 11,576,543 | 0 | 17 | 1 | false | 1 | 0 | EDIT: I assume you want to keep track of your previous passwords versions - say, for a script that would prevent password reusing etc.
I think GnuPG is the best way to go - it's already used in one git-related project (git-annex) to encrypt repository contents stored on cloud services. GnuPG (GNU Privacy Guard) provides very strong key-based encryption.
You keep a key on your local machine.
You add 'mypassword' to ignored files.
On pre-commit hook you encrypt the mypassword file into the mypassword.gpg file tracked by git and add it to the commit.
On post-merge hook you just decrypt mypassword.gpg into mypassword.
Note that in GPG's default public-key mode each encryption uses a fresh random session key, so even an unchanged 'mypassword' produces different ciphertext; to avoid redundant commits, have the hook re-encrypt only when the plaintext has actually changed (e.g., by comparing a stored checksum). The slightest modification of mypassword results in radically different ciphertext, so the mypassword.gpg in the staging area will differ from the one in the repository and be added to the commit. Even if an attacker gets hold of your GPG key, he still needs to brute-force the passphrase. If the attacker gets access to a remote repository with ciphertext, he can compare a bunch of ciphertexts, but their number won't be sufficient to give him any non-negligible advantage.
Later on you can use .gitattributes to provide on-the-fly decryption for a quick git diff of your password.
Also you can have separate keys for different types of passwords etc. | 6 | 140 | 0 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | How can I save my secret keys and password securely in my version control system? | 1 | 0.047024 | 1 | 0 | 0 | 36,273 |
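A hedged sketch of the hook idea in the answer above, written as a Python pre-commit hook that encrypts the ignored mypassword file into the tracked mypassword.gpg via the gpg CLI. The key id and file names are placeholders, and a real setup also needs the matching post-merge hook that decrypts.

```python
#!/usr/bin/env python
# .git/hooks/pre-commit -- must be executable (chmod +x)
import subprocess
import sys

SECRET = "mypassword"          # ignored by git
CIPHERTEXT = "mypassword.gpg"  # tracked by git
KEY_ID = "you@example.com"     # placeholder GPG recipient

# Encrypt the secret file; --yes overwrites the old ciphertext.
rc = subprocess.call([
    "gpg", "--yes", "--recipient", KEY_ID,
    "--output", CIPHERTEXT, "--encrypt", SECRET,
])
if rc != 0:
    sys.exit("gpg encryption failed; aborting commit")

# Stage the refreshed ciphertext so it rides along with this commit.
subprocess.call(["git", "add", CIPHERTEXT])
```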
11,575,398 | 2012-07-20T08:11:00.000 | 2 | 1 | 1 | 0 | 0 | python,django,git,version-control | 0 | 11,666,554 | 0 | 17 | 1 | false | 1 | 0 | Encrypt the passwords file, using for example GPG. Add the keys on your local machine and on your server. Decrypt the file and put it outside your repo folders.
I use a passwords.conf located in my home folder. On every deploy this file gets updated. | 6 | 140 | 0 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | How can I save my secret keys and password securely in my version control system? | 1 | 0.023525 | 1 | 0 | 0 | 36,273 |
11,575,398 | 2012-07-20T08:11:00.000 | 3 | 1 | 1 | 0 | 0 | python,django,git,version-control | 0 | 11,689,937 | 0 | 17 | 1 | false | 1 | 0 | Provide a way to override the config
This is the best way to manage a set of sane defaults for the config you check in without requiring the config to be complete or to contain things like hostnames and credentials. There are a few ways to override default configs.
Environment variables (as others have already mentioned) are one way of doing it.
The best way is to look for an external config file that overrides the default config values. This allows you to manage the external configs via a configuration management system like Chef, Puppet or Cfengine. Configuration management is the standard answer for the management of configs separate from the codebase so you don't have to do a release to update the config on a single host or a group of hosts.
FYI: Encrypting creds is not always a best practice, especially in a place with limited resources. It may be the case that encrypting creds will gain you no additional risk mitigation and simply add an unnecessary layer of complexity. Make sure you do the proper analysis before making a decision. | 6 | 140 | 0 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | How can I save my secret keys and password securely in my version control system? | 1 | 0.035279 | 1 | 0 | 0 | 36,273 |
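A minimal sketch of the override pattern this answer describes, for a Django-style settings module: safe defaults live in version control, while environment variables and an optional untracked local file (deployed by Chef/Puppet/etc.) override them. The file and variable names are placeholders.

```python
# settings.py -- checked in, contains only safe defaults
import os

DATABASE_HOST = "localhost"
DATABASE_PASSWORD = ""  # never a real secret here

# 1. Environment variables win over defaults.
DATABASE_HOST = os.environ.get("APP_DB_HOST", DATABASE_HOST)
DATABASE_PASSWORD = os.environ.get("APP_DB_PASSWORD", DATABASE_PASSWORD)

# 2. An untracked local_settings.py, managed by your configuration
#    management system, may override anything above.
try:
    from local_settings import *  # noqa: F401,F403 -- not in version control
except ImportError:
    pass
```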
11,575,398 | 2012-07-20T08:11:00.000 | 2 | 1 | 1 | 0 | 0 | python,django,git,version-control | 0 | 49,701,069 | 0 | 17 | 1 | false | 1 | 0 | This is what I do:
Keep all secrets as env vars in $HOME/.secrets (go-r perms) that $HOME/.bashrc sources (this way if you open .bashrc in front of someone, they won't see the secrets)
Configuration files are stored in VCS as templates, such as config.properties stored as config.properties.tmpl
The template files contain a placeholder for the secret, such as:
my.password=##MY_PASSWORD##
On application deployment, a script is run that transforms the template file into the target file, replacing placeholders with the values of environment variables, such as changing ##MY_PASSWORD## to the value of $MY_PASSWORD. | 6 | 140 | 0 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | How can I save my secret keys and password securely in my version control system? | 1 | 0.023525 | 1 | 0 | 0 | 36,273 |
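A small sketch of the deploy-time substitution described in the answer above, replacing every ##NAME## placeholder in a .tmpl file with the value of the environment variable NAME (file names are placeholders):

```python
import os
import re

def render_template(src="config.properties.tmpl", dst="config.properties"):
    text = open(src).read()

    # Replace every ##VAR## with the value of the environment variable VAR.
    def substitute(match):
        name = match.group(1)
        try:
            return os.environ[name]
        except KeyError:
            raise SystemExit("missing env var for placeholder: " + name)

    open(dst, "w").write(re.sub(r"##([A-Z_]+)##", substitute, text))

render_template()
```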
11,575,398 | 2012-07-20T08:11:00.000 | 0 | 1 | 1 | 0 | 0 | python,django,git,version-control | 0 | 11,713,674 | 0 | 17 | 1 | false | 1 | 0 | You could use EncFS if your system provides that. Thus you could keep your encrypted data as a subfolder of your repository, while providing your application a decrypted view to the data mounted aside. As the encryption is transparent, no special operations are needed on pull or push.
It would, however, need to mount the EncFS folders, which could be done by your application based on a password stored elsewhere outside the versioned folders (e.g., in environment variables). | 6 | 140 | 0 | 0 | I keep important settings like the hostnames and ports of development and production servers in my version control system. But I know that it's bad practice to keep secrets (like private keys and database passwords) in a VCS repository.
But passwords--like any other setting--seem like they should be versioned. So what is the proper way to keep passwords version controlled?
I imagine it would involve keeping the secrets in their own "secrets settings" file and having that file encrypted and version controlled. But what technologies? And how to do this properly? Is there a better way entirely to go about it?
I ask the question generally, but in my specific instance I would like to store secret keys and passwords for a Django/Python site using git and github.
Also, an ideal solution would do something magical when I push/pull with git--e.g., if the encrypted passwords file changes a script is run which asks for a password and decrypts it into place.
EDIT: For clarity, I am asking about where to store production secrets. | How can I save my secret keys and password securely in my version control system? | 1 | 0 | 1 | 0 | 0 | 36,273 |
11,585,494 | 2012-07-20T19:04:00.000 | 0 | 0 | 0 | 0 | 0 | mysql,python-2.7 | 0 | 11,585,571 | 0 | 2 | 0 | false | 0 | 0 | Without doing something like replicating database A onto the same server as database B and then doing the JOIN, this would not be possible. | 2 | 0 | 0 | 0 | Database A resides on server server1, while database B resides on server server2.
Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).
In such a case, is it possible to perform a join between a table that is in database A and a table that is in database B?
If so, how do I go about it, programmatically? | MySQL Joins Between Databases On Different Servers Using Python? | 1 | 0 | 1 | 1 | 0 | 214 |
11,585,494 | 2012-07-20T19:04:00.000 | 0 | 0 | 0 | 0 | 0 | mysql,python-2.7 | 0 | 11,585,697 | 0 | 2 | 0 | false | 0 | 0 | I don't know python, so I'm going to assume that when you do a query it comes back to python as an array of rows.
You could query table A and, after applying whatever filters you can, return that result to the application. Do the same for table B. Create a third array, loop through A, and if there is a joining row in B, add that joined row to the third array. In the end the third array would hold the equivalent of a join of the two tables. It's not going to be very efficient, but it might work okay for small recordsets. | 2 | 0 | 0 | 0 | Database A resides on server server1, while database B resides on server server2.
Both servers {A, B} are physically close to each other, but are on different machines and have different connection parameters (different username, different password etc).
In such a case, is it possible to perform a join between a table that is in database A and a table that is in database B?
If so, how do I go about it, programmatically? | MySQL Joins Between Databases On Different Servers Using Python? | 1 | 0 | 1 | 1 | 0 | 214 |
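A hedged sketch of that application-level join: query each server separately, index one result set by the join key in a dict, then merge -- essentially a hash join in Python. Connection parameters, table names, and column layouts are placeholders.

```python
import MySQLdb

conn_a = MySQLdb.connect(host="server1", user="u1", passwd="p1", db="A")
conn_b = MySQLdb.connect(host="server2", user="u2", passwd="p2", db="B")

cur_a = conn_a.cursor()
cur_a.execute("SELECT id, name FROM table_a")
rows_a = cur_a.fetchall()

cur_b = conn_b.cursor()
cur_b.execute("SELECT a_id, amount FROM table_b")
# Index B's rows by the join key for O(1) lookups (the hash join part).
b_by_key = {}
for a_id, amount in cur_b.fetchall():
    b_by_key.setdefault(a_id, []).append(amount)

joined = [
    (row_id, name, amount)
    for (row_id, name) in rows_a
    for amount in b_by_key.get(row_id, [])
]
```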
11,590,082 | 2012-07-21T06:48:00.000 | 2 | 0 | 0 | 0 | 0 | python,sqlite,timezone,dst | 0 | 21,014,456 | 0 | 2 | 0 | false | 0 | 0 | You can try this; I am in Taiwan, so I add 8 hours:
datetime('now','+8 hours') | 1 | 1 | 0 | 0 | I am using Python and sqlite3 to handle a website. I need all timestamps to be in local time, and I need daylight saving to be accounted for. The ideal method to do this would be to use SQLite to set a global datetime('now') to be +10 hours.
If I can work out how to change sqlite's 'now' with a command, then I was going to use a cronjob to adjust it (I would happily go with an easier method if anyone has one, but cronjob isn't too hard) | sqlite timezone now | 0 | 0.197375 | 1 | 1 | 0 | 3,357 |
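A small sketch of both variants from Python's sqlite3; note that SQLite also has a 'localtime' modifier, which applies the OS's local zone (including DST) and may be preferable to cron-adjusting a fixed '+10 hours' offset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Fixed offset, as in the answer above:
cur.execute("SELECT datetime('now', '+10 hours')")
print(cur.fetchone()[0])

# OS local time, DST-aware -- often what you actually want:
cur.execute("SELECT datetime('now', 'localtime')")
print(cur.fetchone()[0])
```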
11,592,433 | 2012-07-21T13:05:00.000 | 6 | 0 | 0 | 1 | 0 | python,terminal,osx-lion,infinite-loop | 0 | 11,592,469 | 0 | 1 | 0 | true | 0 | 0 | CTRL + Z suspends the process and sends it to the background; CTRL + C is what kills it. However, I am talking about Linux here, and the Mac might behave somewhat differently. | 1 | 1 | 0 | 0 | I am testing a log parser that runs an infinite loop (on purpose) with a cool-down of 3 seconds every recurrence.
Eventually I will link all the data to a GUI front-end so I can call a stop to the loop when the user is ready with parsing.
The (small) problem now is, when testing the output in the Terminal (in OSX) when I do CTRL + Z to cancel the process my activity monitor keeps showing the process as active (probably because of the loop?).
So the question: how can I make such a key press actually stop the whole process in the Terminal (without extra non-native libraries, if possible)? When I quit the Terminal, all Python processes get killed, but I would like to know how to do it while the Terminal is still running :). | keyboard interrupt doesn't stop my interpreter | 1 | 1.2 | 1 | 0 | 0 | 6,153 |
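A minimal sketch of handling CTRL+C cleanly in such a polling loop -- catch KeyboardInterrupt so the process actually exits instead of lingering (the parsing function is a placeholder):

```python
import time

def parse_new_log_lines():
    pass  # placeholder for the actual log-parsing work

try:
    while True:          # intentional infinite loop
        parse_new_log_lines()
        time.sleep(3)    # 3-second cool-down between passes
except KeyboardInterrupt:
    # CTRL+C raises KeyboardInterrupt in the main thread;
    # clean up here, then fall through and exit normally.
    print("Interrupted; shutting down.")
```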
11,594,044 | 2012-07-21T16:47:00.000 | 0 | 0 | 0 | 0 | 1 | python,wxpython | 0 | 11,613,711 | 0 | 1 | 0 | true | 0 | 1 | Not to my knowledge. Just keep track of the currently selected item's placement in the list and update it by incrementing or decrementing it. | 1 | 0 | 0 | 0 | I am making a music player in python/wxpython. I have a listbox with all the songs, and the music player plays the selected song. I am now trying to make a "next button" to select the next item in the listbox and play it. How would I go about doing that? Is there something like GetNextString() or something along those lines? | how to select next item of wxlistbox in wxpython? | 0 | 1.2 | 1 | 0 | 0 | 58 |
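A hedged sketch of the increment approach from the answer above, using wx.ListBox's GetSelection/SetSelection methods; the widget wiring and the play_song hook are hypothetical:

```python
import wx

def on_next(event):
    # Advance the selection by one; wrapping around is optional.
    index = listbox.GetSelection()
    if index == wx.NOT_FOUND:
        index = -1                      # nothing selected yet
    if index + 1 < listbox.GetCount():
        listbox.SetSelection(index + 1)
        play_song(listbox.GetString(index + 1))  # hypothetical player hook

# Assumed wiring elsewhere:
#   listbox = wx.ListBox(panel, choices=songs)
#   next_button.Bind(wx.EVT_BUTTON, on_next)
```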
11,609,943 | 2012-07-23T09:37:00.000 | 0 | 0 | 0 | 0 | 1 | python,django,forms,model | 0 | 11,610,267 | 0 | 2 | 0 | false | 1 | 0 | Generally, models represent business entities which may be stored in some persistent storage (usually a relational DB). Forms are used to render HTML forms which may retrieve data from users.
Django supports creating forms on the basis of models (using the ModelForm class). Forms may be used to fetch data which should be saved in persistent storage, but that's not the only case - one may use forms just to get data to be searched for in persistent storage or passed to an external service, to feed application counters, test web browser engines, render some text on the basis of data entered by the user (e.g. "Hello USERNAME"), log users in, etc.
Calling save() on a model instance should guarantee that data will be saved in persistent storage if and only if the data is valid - that provides a consistent mechanism of validating data before saving it, regardless of whether the business entity is saved after a user clicks a "Save me" button on a web page or a user executes the save() method of a model instance in the Django interactive shell. | 1 | 2 | 0 | 0 | I'm currently working on a model that has already been built, and I need to add some validation management (accessing two fields and checking data, nothing too dramatic).
I was wondering about the exact difference between models and forms from a validation point of view, and whether I would be able to just add a clean method raising errors, as in a form view, in a model view?
For extra knowledge, why are those two things separated?
And finally, what would you do? There are already some methods written for the model, and I don't know yet whether I would rewrite it to morph it into a form and simply add the clean() method; I also don't exactly know how they work.
Oh, and everything is in the admin interface; I haven't yet worked a lot on it since I started Django not so long ago.
Thanks in advance, | Difference between Model and Form validation | 0 | 0 | 1 | 0 | 0 | 1,664 |
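For the validation part specifically, Django models do support a clean() method that raises ValidationError, and the admin runs it via full_clean() before saving. A minimal sketch with placeholder model and field names:

```python
from django.core.exceptions import ValidationError
from django.db import models

class Booking(models.Model):
    start = models.DateField()
    end = models.DateField()

    def clean(self):
        # Cross-field validation: runs when the admin (or any code
        # calling full_clean()) validates the instance.
        if self.end < self.start:
            raise ValidationError("end date must not precede start date")
```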
11,611,351 | 2012-07-23T11:10:00.000 | 0 | 1 | 1 | 0 | 0 | python,module,daemon | 0 | 11,614,012 | 0 | 1 | 1 | true | 0 | 0 | You could perhaps consider creating a pool of Python daemon processes.
Their purpose would be to serve one request and to die afterwards.
You would have to write a pool-manager that ensures that there are always X daemon processes waiting for an incoming request. (X being the number of waiting daemon processes: depending on the required workload). The pool-manager would have to observe the pool of daemon processes and start new instances every time a process was finished. | 1 | 0 | 0 | 0 | I am working on a web service that requires user input python code to be executed on my server (we have checks for code injection). I have to import a rather large module so I would like to make sure that I am not starting up python and importing the module from scratch each time something runs (it takes about 4-6s).
To do this I was planning to create a Python (3.2) daemon that imports the user input code as a module, executes it, and then deletes/garbage-collects that module. I need to make sure that that module is completely gone from RAM, since this process will continue until the server is restarted. I have read a bunch of things that say this is a very difficult thing to do in Python.
What is the best way to do this? Would it be better to use exec to define a function with the user input code (for variable scoping) and then execute that function and somehow remove the function? Or is there a better way to do this process that I have missed? | User Input Python Script Executing Daemon | 0 | 1.2 | 1 | 0 | 0 | 477 |
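A rough sketch of the pool-manager idea from the accepted answer, using the standard multiprocessing module: keep X pre-warmed workers that each pay the heavy import once, serve a single job, and exit -- so the user module's memory dies with the process -- while the manager tops the pool back up. The heavy_module import and job format are hypothetical.

```python
import multiprocessing as mp
import time

POOL_SIZE = 4  # X: number of pre-warmed workers

def worker(job_queue):
    import heavy_module  # hypothetical slow import, paid once per worker
    code = job_queue.get()                     # wait for exactly one job...
    exec(code, {"heavy_module": heavy_module})
    # ...then return, so the process (and the user module) dies here.

if __name__ == "__main__":
    jobs = mp.Queue()
    pool = [mp.Process(target=worker, args=(jobs,)) for _ in range(POOL_SIZE)]
    for p in pool:
        p.start()
    while True:
        # Replace workers that have finished their single request.
        pool = [p for p in pool if p.is_alive()]
        while len(pool) < POOL_SIZE:
            p = mp.Process(target=worker, args=(jobs,))
            p.start()
            pool.append(p)
        time.sleep(0.5)
```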
11,627,846 | 2012-07-24T09:23:00.000 | 1 | 0 | 1 | 1 | 1 | python,registry,cmd | 0 | 11,628,345 | 0 | 5 | 0 | false | 0 | 0 | Either right-click your script and remove Program->Close on exit checkbox in its properties, or use cmd /k as part of its calling line.
Think twice before introducing artificial delays or a need to press a key - this will make your script mostly unusable in any unattended/piped calls. | 2 | 6 | 0 | 0 | I have a python script which I have made droppable using a registry key, but it does not seem to work. The cmd.exe window just flashes by; can I somehow make the window stay up, or save the output?
EDIT: the problem was that it passed the whole path, not just the filename. | Keep cmd.exe open | 0 | 0.039979 | 1 | 0 | 0 | 5,043 |
11,627,846 | 2012-07-24T09:23:00.000 | 1 | 0 | 1 | 1 | 1 | python,registry,cmd | 0 | 11,628,145 | 0 | 5 | 0 | false | 0 | 0 | Another possible option is to create a basic Tkinter GUI with a text area and a close button. Then run your script with subprocess or equivalent and have the GUI take its stdout, executing it with pythonw.exe so that no CMD prompt appears in the first place.
This keeps it purely within the Python stdlib and also means you could use the GUI for setting options or entering parameters...
Just an idea. | 2 | 6 | 0 | 0 | I have a python script which I have made droppable using a registry key, but it does not seem to work. The cmd.exe window just flashes by; can I somehow make the window stay up, or save the output?
EDIT: the problem was that it passed the whole path, not just the filename. | Keep cmd.exe open | 0 | 0.039979 | 1 | 0 | 0 | 5,043 |
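A minimal sketch of that Tkinter idea: a window that runs the script via subprocess, captures its stdout, and shows it in a text area (the script path is a placeholder):

```python
import subprocess
import tkinter as tk

SCRIPT = ["python", "my_script.py"]  # placeholder command

root = tk.Tk()
root.title("Script output")
text = tk.Text(root, width=80, height=24)
text.pack(fill="both", expand=True)
tk.Button(root, text="Close", command=root.destroy).pack()

# Run the script and capture everything it prints.
result = subprocess.run(SCRIPT, capture_output=True, text=True)
text.insert("end", result.stdout)
if result.stderr:
    text.insert("end", "\n--- stderr ---\n" + result.stderr)

root.mainloop()
```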
11,638,579 | 2012-07-24T20:10:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 12,689,109 | 0 | 3 | 0 | false | 0 | 0 | If some IDE interprets '####' as a half-baked string plus a comment, change the IDE! | 1 | 2 | 0 | 0 | I want to have a string containing 4 pound signs. How can I accomplish this in Python without commenting the string out due to the pound signs? | Using POUND SIGN as a string in PYTHON? | 0 | 0.066568 | 1 | 0 | 0 | 17,557 |
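For the record, a # inside a quoted string is never treated as a comment by Python itself:

```python
s = "####"      # the hash marks are part of the string
print(s)        # -> ####
print(len(s))   # -> 4
```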
11,647,810 | 2012-07-25T10:33:00.000 | 1 | 1 | 0 | 1 | 0 | python,debugging | 0 | 11,648,687 | 0 | 1 | 0 | false | 0 | 0 | You can compile a debug-enabled version python in your home folder without having root access and develop the C extension against that version. | 1 | 1 | 0 | 0 | How do I debug a Python extension written in C? I found some links that said we need to get the Python debug built, but how do we do that if we don't have root access? I have Python 2.7 installed. | How do I debug a Python extension written in C? | 0 | 0.197375 | 1 | 0 | 0 | 262 |
11,661,053 | 2012-07-26T01:22:00.000 | 0 | 0 | 1 | 1 | 0 | python,python-idle | 1 | 11,661,081 | 0 | 3 | 0 | false | 0 | 0 | python.exe is Python, the python interpreter specifically. | 1 | 0 | 0 | 0 | I am running Windows 7 currently, and I remember when using Linux at the school computers I was able to type "gedit &" into the terminal for example to open up the gedit text editor. I was wondering whether there is a similar process to open IDLE, and for that matter a Python program/script by typing it into the "terminal-equivalent." I'm a complete newbie, so I may be off-base a bit...anyways, so there is this terminal-like program called python.exe, and it seems like it should be able to open Python-related software (like IDLE), and I was wondering 1) what python.exe is for, 2) whether it can be treated like a Linux terminal, and 3) how to do stuff in it. I've tried various commands and I get a syntax error for virtually everything. Much appreciated! | How to start Python IDLE from python.exe window if possible, and if not, what is python.exe even used for? | 0 | 0 | 1 | 0 | 0 | 2,121 |
11,668,748 | 2012-07-26T11:55:00.000 | 5 | 0 | 0 | 0 | 0 | python,chess | 0 | 11,668,833 | 0 | 2 | 0 | false | 0 | 0 | Generally the unmake approach is used more as it avoids unnecessary copying of the board. This is especially true if you want to keep your pieces in two different forms in your Position class; as a traditional 8x8 array and as a set of 64-bit unsigned integers (bitboards).
You need to create a class to store your UnmakeMoveInfo which holds:
The from/to of the move.
The previous position hash.
The 'halfmove clock'.
The en-passant mask.
The captured piece.
The previous position flags (check, ep_move, castling rights).
So that you have all the info required to unmake the move. | 1 | 4 | 0 | 0 | I'm at the start of creating a chess engine. When I created a function that checks if a move is legal, first I had to make the move and then check if the move had put my king in check and then unmake it.
After giving some thought on how to make a function that unmakes a move, I decided it's much simpler to just copy the board and make the hypothetical move on the copied board, so it doesn't change the original board structure at all.
But I'm worried this might be a bad idea, because when I get to the AI part I will have to copy the board completely, and that might slow down my engine. Will it? Could you please share your thoughts about this, since I don't know much about algorithm complexity and that kind of stuff.
Thank you. | Unmake move vs copy board in chess programming | 0 | 0.462117 | 1 | 0 | 0 | 1,935 |
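A toy sketch of the make/unmake pattern the answer describes: store just enough state to restore the position instead of copying the whole board. The field set is simplified from the list in the answer, and the board/state representations are placeholders.

```python
class UnmakeInfo:
    """Everything needed to restore the position after a trial move."""
    def __init__(self, frm, to, captured, castling, ep_square, halfmove):
        self.frm, self.to = frm, to
        self.captured = captured
        self.castling = castling
        self.ep_square = ep_square
        self.halfmove = halfmove

def make_move(board, frm, to, state):
    info = UnmakeInfo(frm, to, board[to], state["castling"],
                      state["ep_square"], state["halfmove"])
    board[to], board[frm] = board[frm], None   # move the piece
    return info

def unmake_move(board, state, info):
    board[info.frm] = board[info.to]           # put the piece back
    board[info.to] = info.captured             # restore any capture
    state["castling"] = info.castling
    state["ep_square"] = info.ep_square
    state["halfmove"] = info.halfmove
```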
11,688,397 | 2012-07-27T13:08:00.000 | 0 | 0 | 0 | 0 | 0 | php,python,node.js,asynchronous,real-time | 0 | 11,688,432 | 0 | 4 | 0 | false | 1 | 0 | You could use a poll, a long-poll, or, if you want, a push system. Easiest would be a poll. However, all solutions require client-side coding.
Performance impact depends on your solution. Easiest to implement would be a poll. A poll with a short interval effectively makes a request every, say, 100ms to simulate real time. A long-poll has less impact, but it keeps a lot of requests open for longer periods. | 1 | 8 | 0 | 0 | How to show continuous real time updates in the browser like the facebook ticker or the meetup.com home page does? In python, PHP, node.js - and what would be the performance impact on the server side?
Also, how could we achieve the same updates if the page is cached by a CDN like Akamai? | How to show continuous real time updates like facebook ticker, meetup.com home page does? | 0 | 0 | 1 | 0 | 1 | 5,085 |
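A minimal long-poll sketch on the server side, with Flask assumed purely for illustration: the request blocks until a new ticker item arrives or a timeout passes, and the client re-requests immediately after each response. Such an endpoint must bypass any CDN cache, since each response differs per client and moment.

```python
import queue
from flask import Flask, jsonify

app = Flask(__name__)
updates = queue.Queue()  # producers put ticker items here

@app.route("/poll")
def poll():
    try:
        # Block up to 25 seconds waiting for the next update (long-poll).
        item = updates.get(timeout=25)
        return jsonify(update=item)
    except queue.Empty:
        return jsonify(update=None)  # timed out; client just polls again
```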
11,689,223 | 2012-07-27T13:53:00.000 | 0 | 0 | 1 | 1 | 0 | python,unicode,utf-8,filesystems,iso-8859-1 | 1 | 11,835,980 | 0 | 2 | 0 | false | 0 | 0 | Use character-encoding detection; the chardet module for Python works well for determining the actual encoding with some confidence. "As appropriate" -- you either know the encoding or you have to guess at it. If you guess wrong with chardet, at least you tried. | 1 | 0 | 0 | 0 | I have an application written in Python 2.7 that reads users' files from the hard drive using os.walk.
The application requires a UTF-8 system locale (we check the env variables before it starts) because we handle files with Unicode characters (audio files with the artist name in it for example), and want to make sure we can save these files with the correct file name to the filesystem.
Some of our users have UTF-8 locales (therefore a UTF-8 fs), but still somehow manage to have ISO-8859-1 files stored on their drive. This causes problems when our code tries to os.walk() these directories as Python throws an exception when trying to decode this sequence of ISO-8859-1 bytes using UTF-8.
So my question is, how do I get Python to ignore this file and move on to the next one instead of aborting the entire os.walk()? Should I just roll my own os.walk() function?
Edit: Until now we've been telling our users to use the convmv linux command to correct their filenames, however many users have various different types of encodings (8859-1, 8859-2, etc.), and using convmv requires the user to make an educated guess on what files have what encoding before they run convmv on each one individually. | Python, UTF-8 filesystem, iso-8859-1 files | 0 | 0 | 1 | 0 | 0 | 1,707 |
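A hedged sketch of skipping undecodable names instead of aborting: walk with byte paths so the names come back as raw bytes, and attempt the UTF-8 decode yourself, skipping files that fail (works the same way in Python 2.7, where str paths are bytes):

```python
# -*- coding: utf-8 -*-
import os

for dirpath, dirnames, filenames in os.walk(b'.'):  # byte paths in, bytes out
    for name in filenames:
        try:
            decoded = name.decode('utf-8')
        except UnicodeDecodeError:
            # Not valid UTF-8 (likely ISO-8859-1 leftovers): skip it
            # and move on instead of letting the walk blow up.
            print('skipping undecodable file: %r' % name)
            continue
        # ... process `decoded` normally ...
```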
11,693,047 | 2012-07-27T17:50:00.000 | 55 | 0 | 1 | 0 | 0 | c++,python,visual-studio-2010,swig | 0 | 11,732,619 | 0 | 1 | 0 | true | 0 | 0 | Step-by-step instructions. This assumes you have the source and are building a single DLL extension that links the source directly into it. I didn't go back through it after creating a working project, so I may have missed something. Comment on this post if you get stuck on a step. If you have an existing DLL and want to create a Python extension DLL that wraps it, this steps are slightly different. If you need help with that comment on this post and I will extend it.
Edit 8/19/2012: If starting with a C example, don't use -c++ in step 13 and use .c instead of .cxx for the wrap file extension in steps 14 and 19.
Start Visual Studio 2010
File, New, Project from Existing Code...
Select "Visual C++" project type and click Next.
Enter project file location where the .cpp/.h/.i files are.
For Project Name, choose the name used in %module statement in your .i file (case matters).
Select project type "Dynamically linked library (DLL) project" and click Next.
Add to Include search paths the path to the Python.h file, usually something like "C:\Python27\include" and click Next.
Click Finish.
Right-click the Project in Solution Explorer, Add, Existing Item..., and select your .i file.
Right-click the .i file, Properties, and select Configuration "All Configurations".
Change Item Type to "Custom Build Tool" and click Apply.
Select "Custom Build Tool" in Properties (it will appear after Apply above).
Enter Command Line of "swig -c++ -python -outdir $(Outdir) %(Identity)" (this assumes SWIG is in your path and redirects the generated .py file to the Debug or Release directory as needed).
In Outputs enter "%(Filename)_wrap.cxx;$(Outdir)%(Filename).py".
Click OK.
Right-click the .i file, and select Compile.
Right-click the project, Add, New Filter, name it "Generated Files".
Right-click "Generated Files", click Properties, and set "SCC Files" to "False" (if you use source-control, this prevents VS2010 trying to check in the generated files in this filter).
Right-click "Generated Files", Add, Existing Item and select the _wrap.cxx file that was generated by the compile.
Right-click the project, Properties.
Select Configuration "All Configurations".
Select Configuration Properties, Linker, General, Additional Library Directories and add the path to the python libraries, typically "C:\Python27\libs".
Select Configuration Properties, General and set TargetName to "_$(ProjectName)".
Set Target Extension to ".pyd".
Build the "Release" version of the project. You can't build the Debug version unless you build a debug version of Python itself.
Open a console, go to the Release directory of the project, run python, import your module, and call a function! | 1 | 22 | 0 | 0 | I've been trying for weeks to get Microsoft Visual Studio 2010 to create a DLL for me with SWIG. If you have already gone through this process, would you be so kind as to give a thoughtful step-by-step process explanation? I've looked everywhere online and have spent many many hours trying to do this; but all of the tutorials that I have found are outdated or badly explained.
I have succeeded in going through this process with Cygwin, but as some of you know, a Cygwin DLL is not very practical.
As a result, I have .i, .cpp, and .h files that I know can create a DLL together. I just need to know how to do this with Visual Studio C++ 2010. The language that I am targeting is Python. | How to create a DLL with SWIG from Visual Studio 2010 | 0 | 1.2 | 1 | 0 | 0 | 14,880 |
11,695,649 | 2012-07-27T21:06:00.000 | 1 | 0 | 0 | 0 | 0 | python,pyqt4,qthread,python-multithreading | 0 | 11,695,734 | 0 | 2 | 0 | false | 0 | 1 | This is a really bad plan. Split up the 'before thread action' and 'after thread action'. The 'after thread action' should be a slot fired by a QueuedConnection that the thread can signal.
Do not wait in GUI event handlers! | 2 | 1 | 0 | 0 | I have successfully outsourced an expensive routine in my PyQT4 GUI to a worker QThread to prevent the GUI from going unresponsive. However, I would like the GUI to wait until the worker thread is finished processing to continue executing its own code.
The solution that immediately comes to my mind is to have the thread emit a signal when complete (as I understand it, QThreads already do this), and then look for this signal in the main window before the rest of the code is executed. Is this hackish?
I know QThread provides the wait() function described here, but the usage is unclear to me. I think I want to call this on the main thread, but I'm not sure how to call that in my app...? | How do I tell my main GUI to wait on a worker thread? | 0 | 0.099668 | 1 | 0 | 0 | 330 |
11,695,649 | 2012-07-27T21:06:00.000 | 0 | 0 | 0 | 0 | 0 | python,pyqt4,qthread,python-multithreading | 0 | 11,695,743 | 0 | 2 | 0 | false | 0 | 1 | If the GUI thread called the wait() function on the worker thread object, it would not return from it until the worker thread returns from its main function. This is not what you want to do.
The solution you describe using signal and slots seems to make plenty of sense to me. Or you could just use a boolean. | 2 | 1 | 0 | 0 | I have successfully outsourced an expensive routine in my PyQT4 GUI to a worker QThread to prevent the GUI from going unresponsive. However, I would like the GUI to wait until the worker thread is finished processing to continue executing its own code.
The solution that immediately comes to my mind is to have the thread emit a signal when complete (as I understand it, QThreads already do this), and then look for this signal in the main window before the rest of the code is executed. Is this hackish?
I know QThread provides the wait() function described here, but the usage is unclear to me. I think I want to call this on the main thread, but I'm not sure how to call that in my app...? | How do I tell my main GUI to wait on a worker thread? | 0 | 0 | 1 | 0 | 0 | 330 |
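A minimal PyQt4 sketch of the signal/slot approach both answers point at: the worker's built-in finished signal fires when run() returns, and the GUI resumes in a slot instead of blocking on wait(). The class names and the do_expensive_routine call are placeholders.

```python
from PyQt4 import QtCore

class Worker(QtCore.QThread):
    def run(self):
        do_expensive_routine()  # hypothetical heavy work
        # QThread emits finished() automatically when run() returns.

class Window(QtCore.QObject):
    def start_work(self):
        self.worker = Worker()
        # Queued connection: the slot runs later, back in the GUI thread.
        self.worker.finished.connect(self.on_done,
                                     QtCore.Qt.QueuedConnection)
        self.worker.start()
        # ...return to the event loop; do NOT call self.worker.wait() here.

    def on_done(self):
        print("worker finished; safe to continue GUI work")
```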
11,697,289 | 2012-07-28T00:47:00.000 | 5 | 0 | 0 | 0 | 0 | python,try-catch,urllib2,urllib | 1 | 11,697,336 | 0 | 3 | 0 | true | 0 | 0 | There are only two exceptions you'll see, HTTPError (HTTP status codes) and URLError (everything that can go wrong), so it's not like it's overkill handling both of them. You can even just catch URLError if you don't care about status codes, since HTTPError is a subclass of it. | 1 | 5 | 0 | 0 | I'm working with urllib and urllib2 in python and am using them to retrieve images from urls.
Using something similar to:
try:
    response = urllib2.urlopen(urllib2.Request(url))
    f.write(response.read())
    f.close()
except Exception:  # which of the possible network errors should be caught here?
    print "Failed to retrieve " + url
Now what happens often is that the image does not load or is broken when using the site via a normal web browser; this is presumably because of high server load, or because the image does not exist or could not be retrieved by the server.
Whatever the reason may be, the image does not load, and a similar situation can also occur (and is likely to) when using the script. Since I do not know what error it might throw, how do I handle it?
I think mentioning all possible errors of the urllib2/urllib libraries in the except statement would be overkill, so I need a better way.
(I also might need to handle broken Wi-Fi, an unreachable server and the like at times, so there may be more errors.) | Python: trying to raise multi-purpose exception for multiple error types | 0 | 1.2 | 1 | 0 | 1 | 1,816 |
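Since HTTPError subclasses URLError, a single except clause covers both, as the answer notes -- a tiny Python 2 sketch:

```python
import urllib2

try:
    data = urllib2.urlopen(url).read()
except urllib2.URLError:   # also catches urllib2.HTTPError
    print "Failed to retrieve " + url
```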
11,705,835 | 2012-07-29T00:57:00.000 | 0 | 0 | 0 | 0 | 1 | python,regex,beautifulsoup,twill | 0 | 11,712,834 | 0 | 2 | 0 | false | 1 | 0 | I'd rather use CSS selectors or "real" regexps on the page source. Twill is, AFAIK, no longer being worked on. Have you tried BS or PyQuery with CSS selectors? | 1 | 3 | 0 | 1 | I'm currently using urllib2 and BeautifulSoup to open and parse HTML data. However, I've run into a problem with a site that uses JavaScript to load the images after the page has been rendered (I'm trying to find the image source for a certain image on the page).
I'm thinking Twill could be a solution, and am trying to open the page and use a regular expression with 'find' to return the html string I'm looking for. I'm having some trouble getting this to work though, and can't seem to find any documentation or examples on how to use regular expressions with Twill.
Any help or advice on how to do this or solve this problem in general would be much appreciated. | Using Regular Expression with Twill | 0 | 0 | 1 | 0 | 1 | 225 |
11,713,284 | 2012-07-29T21:53:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,image | 0 | 11,748,617 | 0 | 3 | 0 | true | 1 | 0 | You'll first need to parse the html content for img src urls with something like lxml or BeautifulSoup. Then, you can feed one of those img src urls into sorl-thumbnail or easy-thumbnails as Edmon suggests. | 1 | 2 | 0 | 0 | I am pretty new to Django, so I am creating a project to learn more about how it works. Right now I have a model that contains a URL field. I want to automatically generate a thumbnail from this URL field by taking an appropriate image from the website, like Facebook or Reddit does. I'm guessing that I should store this image in an image field. What would be a good way to select an ideal image from the website, and how can I accomplish this?
EDIT- I'm trying to take actual images from the website rather than a picture of the website | Need to create thumbnail from user submitted url like reddit/facebook | 0 | 1.2 | 1 | 0 | 0 | 1,630 |
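A rough sketch of the first half of that pipeline: fetch the page and collect candidate image URLs with BeautifulSoup (bs4 assumed here); picking the "best" one, e.g. by downloading each and comparing dimensions, is left to the application.

```python
import urllib2
from urlparse import urljoin
from bs4 import BeautifulSoup  # assuming the bs4 package

def candidate_images(page_url):
    html = urllib2.urlopen(page_url).read()
    soup = BeautifulSoup(html)
    urls = []
    for img in soup.find_all('img'):
        src = img.get('src')
        if src:
            urls.append(urljoin(page_url, src))  # resolve relative paths
    return urls
```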
11,716,677 | 2012-07-30T06:58:00.000 | 0 | 1 | 0 | 0 | 0 | python,testing,selenium | 0 | 11,718,057 | 0 | 1 | 0 | true | 0 | 0 | Yes, use different browser configurations in the hub, and use two or more programs to contact the grid with different browsers | 1 | 0 | 0 | 0 | Can each node of selenium grid run different python script/test?
- how to setup? | Can each node of selenium grid run different script/test? - how to setup? | 0 | 1.2 | 1 | 0 | 1 | 382 |
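A hedged sketch of the answer's idea: each separate test script points at the same grid hub but requests different desired capabilities, so each node picks up the browser it provides. The hub URL is a placeholder.

```python
from selenium import webdriver

# Script/test A -> routed to any node offering Firefox
driver = webdriver.Remote(
    command_executor="http://hub.example.com:4444/wd/hub",
    desired_capabilities=webdriver.DesiredCapabilities.FIREFOX,
)
driver.get("http://example.com")
driver.quit()
```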
11,719,619 | 2012-07-30T10:26:00.000 | 1 | 0 | 0 | 0 | 1 | python,user-interface,64-bit,pywinauto | 0 | 12,949,711 | 0 | 4 | 0 | false | 0 | 0 | Azurin, you can always use 32-bit Python + pywinauto on your x64 OS. If you really need 64-bit Python, you can also use py2exe to compile a test into an .exe and use it everywhere, even on OSes where Python is not installed. :) | 2 | 3 | 0 | 0 | I am using pywinauto for GUI automation for quite a long time already. Now I have to move to an x64 OS. There seems to be no official version for 64-bit OSes. There are clones which claim they support 64-bit, but in practice they don't.
On installation there are several assertions about wrong Windows structure sizes. If they are commented out, the lib manages to install, but some APIs don't work. E.g. win32functions.GetMenuItemInfo() returns Error 87: wrong parameter. This API depends on the struct MENUITEMINFOW (whose size initially didn't pass the assertion).
Does anybody know how to handle this situation?
Is there a pure pywinauto version to work without additional patches?
And finally, if no answer, is there a powerful Python lib you may suggest for gui automation? With a support of 64 bit?
Thanks in advance. | Looking for a way to use Pywinauto on Win x64 | 1 | 0.049958 | 1 | 0 | 0 | 2,889 |
11,719,619 | 2012-07-30T10:26:00.000 | -2 | 0 | 0 | 0 | 1 | python,user-interface,64-bit,pywinauto | 0 | 22,605,499 | 0 | 4 | 0 | false | 0 | 0 | In win32structure.py:
ensure HANDLE is c_void_p;
redefine other handles, like HBITMAP, to HANDLE;
ensure pointers are declared as pointers, not long. | 2 | 3 | 0 | 0 | I am using pywinauto for GUI automation for quite a long time already. Now I have to move to an x64 OS. There seems to be no official version for 64-bit OSes. There are clones which claim they support 64-bit, but in practice they don't.
On installation there are several assertions about wrong Windows structure sizes. If they are commented out, the lib manages to install, but some APIs don't work. E.g. win32functions.GetMenuItemInfo() returns Error 87: wrong parameter. This API depends on the struct MENUITEMINFOW (whose size initially didn't pass the assertion).
Does anybody know how to handle this situation?
Is there a pure pywinauto version to work without additional patches?
And finally, if no answer, is there a powerful Python lib you may suggest for gui automation? With a support of 64 bit?
Thanks in advance. | Looking for a way to use Pywinauto on Win x64 | 1 | -0.099668 | 1 | 0 | 0 | 2,889 |
11,725,192 | 2012-07-30T16:03:00.000 | 1 | 0 | 1 | 0 | 0 | python,multithreading,sockets,asynchronous | 0 | 11,725,633 | 0 | 1 | 0 | true | 0 | 0 | For a group chat application, the general approach will be:
Server side (accept process):
Create the socket, bind it to a well known port (and on appropriate interface) and listen
While (app_running)
Client_socket = accept (using serverSocket)
Spawn a new thread and pass this socket to the thread. That thread handles the client that just connected.
Continue, so that server can continue to accept more connections.
Server-side client mgmt Thread:
while app_running:
read the incoming message, and store to a queue or something.
continue
Server side (group chat processing):
For all connected clients:
check their queues. If any message present, send that to ALL the connected clients (including the client that sent this message -- serves as ACK sort of)
Client side:
create a socket
connect to server via IP-address, and port
do send/receive.
There can be lots of improvement on the above. Like the server could poll the sockets or use "select" operation on a group of sockets. That would make it efficient in the sense that having a separate thread for each connected client will be an overdose when there are many. (Think ~1MB per thread for stack).
PS: I haven't really used the asyncore module. But I am just guessing that you would notice some performance improvement when you have lots of connected clients and very little processing. | 1 | 2 | 0 | 0 | I am developing a group chat application to learn how to use sockets, threads (maybe), and the asyncore module (maybe).
My thought was to have a client-server architecture, so that when a client connects to the server, the server sends the client a list of other connections (other clients' user names and IP addresses), and then a person can connect to one or more people at a time and the server would set up a P2P connection between the clients. I have the socket part working, but the server can only handle one client connection at a time.
What would be the best, most common, practical way to go about handling multiple connections?
Do I create a new process/thread whenever a new connection comes into the server and then connect the different client connections together, or do I use the asyncore module, which, from what I understand, makes the server send the same data to multiple sockets (connections), with me just regulating where the data goes?
Any help/thoughts/advice would be appreciated. | Group chat application in python using threads or asyncore | 0 | 1.2 | 1 | 0 | 1 | 1,547 |
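A condensed sketch of the thread-per-client outline in the answer above: accept in a loop, spawn a handler thread per client, and broadcast every received message to all connected sockets, sender included (which acts as the ACK the outline mentions). Host and port are placeholders.

```python
import socket
import threading

clients = []                 # connected client sockets
lock = threading.Lock()

def handle(conn):
    while True:
        data = conn.recv(4096)
        if not data:
            break            # client disconnected
        with lock:           # broadcast to everyone, sender included
            for c in clients:
                c.sendall(data)
    with lock:
        clients.remove(conn)
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))
server.listen(5)
while True:
    conn, addr = server.accept()
    with lock:
        clients.append(conn)
    threading.Thread(target=handle, args=(conn,)).start()
```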
11,725,340 | 2012-07-30T16:11:00.000 | 5 | 0 | 1 | 0 | 0 | python,multithreading,orphan | 0 | 11,725,777 | 0 | 2 | 1 | true | 0 | 0 | If you can get a handle to the main thread, you can call is_alive() on it.
Alternatively, you can call threading.enumerate() to get a list of all currently living threads, and check to see if the main thread is in there.
Or if even that is impossible, then you might be able to check to see if the child thread is the only remaining non-daemon thread. | 2 | 6 | 0 | 0 | I have a program using a thread. When my program is closed, my thread is still running and that's normal. I would like to know how my thread can detect that the main program is terminated; by itself ONLY. How would I do that?
My thread is in an infinite loop and processes many objects in a Queue. I can't define my thread as a daemon, or else I could lose some data at the end of the main program. I don't want my main program to set a boolean value when it closes. | python: How to detect when my thread become orphan? | 0 | 1.2 | 1 | 0 | 0 | 2,406 |
11,725,340 | 2012-07-30T16:11:00.000 | 0 | 0 | 1 | 0 | 0 | python,multithreading,orphan | 0 | 11,725,681 | 0 | 2 | 1 | false | 0 | 0 | Would it work if your manager tracked how many open threads there were, then the children killed themselves when starved of input? So the parent would start pushing data on to the queue, and the workers would consume data from the queue. If a worker found nothing on the queue for a certain timeout period, it would kill itself. The main thread would then track how many workers were operating and periodically start new workers if the number of active workers were under a given threshold. | 2 | 6 | 0 | 0 | I have a program using a thread. When my program is closed, my thread is still running and that's normal. I would like to know how my thread can detect that the main program is terminated; by itself ONLY. How would I do that?
My thread is in an infinite loop and processes many objects in a Queue. I can't define my thread as a daemon, or else I could lose some data at the end of the main program. I don't want my main program to set a boolean value when it closes. | python: How to detect when my thread become orphan? | 0 | 0 | 1 | 0 | 0 | 2,406 |
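A small sketch of the enumerate/is_alive checks from the first answer: the worker polls whether the main thread is still alive (it is named 'MainThread' by default) and winds down when it isn't. The drain() helper is hypothetical.

```python
import threading
import time

def main_is_alive():
    # The main thread is named 'MainThread' by default.
    return any(t.name == "MainThread" and t.is_alive()
               for t in threading.enumerate())

def worker(work_queue):
    while True:
        drain(work_queue)             # hypothetical: process pending items
        if not main_is_alive():
            drain(work_queue)         # final pass so no data is lost
            return                    # the orphaned thread ends itself
        time.sleep(1)
```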
11,729,368 | 2012-07-30T20:51:00.000 | 2 | 0 | 0 | 0 | 0 | c++,python,build,sublimetext2 | 0 | 14,229,213 | 0 | 4 | 0 | false | 0 | 1 | On Windows (install MinGW and Python 2.7, and add them to the system path):
C++:
build: Ctrl+B
run: Ctrl+Shift+B
Python:
build and run: Ctrl+B
You may also try to study the .sublime-build files under Tools -> Build System -> New Build System. | 1 | 25 | 0 | 0 | I'm just beginning to learn programming (in C++ and Python), and by beginning I mean total beginning ("hello world" beginning...). Not wanting to use multiple IDEs, I would like to be able to code and build simple programs with my text editor, Sublime Text 2. Could someone indicate to me, with a step-by-step tutorial, how to implement C++ and Python compiling and executing capabilities in Sublime Text?
I've searched Sublime Text build systems on the site, but the answers are very specific and can't help a rookie like me (but they'll probably help me later).
Thanks | Build systems in Sublime Text | 0 | 0.099668 | 1 | 0 | 0 | 79,860 |
11,730,723 | 2012-07-30T22:36:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 11,730,883 | 0 | 3 | 0 | false | 0 | 0 | You should probably calculate the total dynamically. Also, you need some kind of way to store the money of each individual player. Right now, there is no way of knowing the distribution of money since you only have one total. | 1 | 1 | 0 | 0 | (Cards numbered 2-10 should be valued from 2-10, respectively. J,Q, and K should be 10, and A should be either 1 or 11, depending on the value of the hand).
How do I assign the deck these values? Also, the game needs to be 3 rounds. The way I did it is only one round. How do I make the game go three times, while keeping track of the players' wins/losses?
Could someone please explain how I can do this in a simple way? | Trouble giving values to deck of cards | 1 | 0 | 1 | 0 | 0 | 938 |
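A minimal sketch of the value assignment with a dict, treating aces as 11 and demoting them to 1 when the hand would bust; the round/score bookkeeping is sketched in the trailing comments:

```python
VALUES = {str(n): n for n in range(2, 11)}
VALUES.update({"J": 10, "Q": 10, "K": 10, "A": 11})

def hand_value(cards):
    total = sum(VALUES[c] for c in cards)
    aces = cards.count("A")
    while total > 21 and aces:   # demote aces from 11 to 1 as needed
        total -= 10
        aces -= 1
    return total

# Three rounds, tracking wins/losses per player in a dict:
#   record = {"player": {"wins": 0, "losses": 0}}
#   for round_number in range(3): play one round, then update record.
```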
11,755,474 | 2012-08-01T08:34:00.000 | 1 | 0 | 0 | 0 | 0 | javascript,python,html,forms,cgi | 0 | 11,755,603 | 0 | 1 | 0 | true | 1 | 0 | If every select should be the only value that's needed, then every select is basically a form on its own.
You could either remove all other selects when you activate a single select (which is prone to errors), or simply put every select in its own form instead of using one giant form. Otherwise all the data is going to be sent. | 1 | 0 | 0 | 0 | In a python cgi script I have many selects in a form (100 or so), and each select has 5 or 6 options to choose from. I don't want to have a separate submit button, so I am using onchange="submit();" to submit the form as soon as an option is selected from one of the many selects. When I read the form data with form.keys() the name of every select on the form is listed instead of just the one that was changed. This requires me to compare the value selected in each select with the starting value to find out which one changed, and this of course is very slow. How can I just get the new value of the one select that was changed? | many selects (dropdowns) on html form, how to get just the value of the select that was changed | 0 | 1.2 | 1 | 0 | 1 | 130 |
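With each select in its own form, as the answer suggests, the CGI side sees exactly one key. A minimal Python 2 sketch with the standard cgi module (field names are placeholders):

```python
#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()
print "Content-Type: text/plain\n"
# Only the changed select's form submits, so there is exactly one key --
# no comparison against starting values needed.
for name in form.keys():
    print "changed select:", name, "new value:", form.getfirst(name)
```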
11,756,115 | 2012-08-01T09:13:00.000 | 0 | 0 | 0 | 0 | 0 | python,django | 0 | 11,758,203 | 0 | 2 | 0 | false | 1 | 0 | I finally got the tests running; here's what I did:
disabled DATABASE_ROUTERS settings when running tests
kept the B alias in the DATABASES settings, but with its NAME set to the same database as A
appended B's INSTALLED_APPS entries that aren't present to A's INSTALLED_APPS | 1 | 0 | 0 | 0 | I have 2 sites: A and B. A relies on some tables from B so it has an entry in its DATABASES settings pointing to B together with some entries under its DATABASE_ROUTERS settings to route certain model access to B's database.
Now I'm trying to write a test on A, but just running manage.py test immediately fails because some of A's models rely on models covered by the tables coming from B, and B's database tables haven't been created yet.
So my question is, how do I tweak my TEST_RUNNER to first run syncdb on B against B's test db, so that when I run manage.py test on A it can find the tables from B that it relies on?
I hope that makes sense. | Django: writing test for multiple database site | 0 | 0 | 1 | 0 | 0 | 1,276 |
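A hedged sketch of what those three steps might look like in a dedicated test settings module; B_INSTALLED_APPS is a hypothetical name for the app list that normally lives in B's settings:

    # settings_test.py (assumption: selected via DJANGO_SETTINGS_MODULE when testing)
    from settings import *

    DATABASE_ROUTERS = []                           # step 1: no cross-db routing
    DATABASES['B'] = dict(DATABASES['default'])     # step 2: B resolves to A's test db
    for app in B_INSTALLED_APPS:                    # step 3: merge B's apps into A's
        if app not in INSTALLED_APPS:
            INSTALLED_APPS += (app,)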
11,761,889 | 2012-08-01T14:53:00.000 | 19 | 0 | 1 | 0 | 0 | python,image | 0 | 11,761,906 | 0 | 3 | 0 | true | 0 | 0 | Multiply the length of the data by 3/4, since base64 encoding turns every 3 bytes into 4 characters. If the result is within a few bytes of 4 MB, you'll also need to count the number of = padding characters at the end, since they carry no data. | 1 | 12 | 0 | 0 | I'm working on a python web service.
It calls another web service to change a profile's picture.
This web service can only accept pictures that are 4 MB or smaller.
I will put the checking in the first web service.
It uses PIL to check if the base64 string is a valid image.
However, how do I check if the base64 string will create a 4 MB or smaller image? | Get Image File Size From Base64 String | 0 | 1.2 | 1 | 0 | 0 | 12,113 |
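A sketch of the size check described in the answer, done with arithmetic on the base64 string (no decoding needed):

    def decoded_size(b64_data):
        # every 4 base64 characters encode 3 bytes; '=' padding carries no data
        return (len(b64_data) * 3) // 4 - b64_data.count('=')

    if decoded_size(picture_b64) > 4 * 1024 * 1024:   # picture_b64: the incoming string
        raise ValueError('image is larger than 4 MB')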
11,762,290 | 2012-08-01T15:15:00.000 | 3 | 0 | 0 | 0 | 0 | python,django,image-processing,heroku,django-imagekit | 0 | 11,834,793 | 0 | 1 | 0 | true | 1 | 0 | Try changing the image size with PIL from the console and see if memory usage is OK. Image resizing is a simple task; I don't believe you need a side application for it. Besides, consider splitting your task into three tasks (one per image size).
We are using django-imagekit to copy user-uploaded images into three predefined sizes, and save the four copies (3 processed plus 1 original) into our Amazon S3 bucket.
This operation is quickly causing us to go over our memory limit on our Heroku dynos. On the django-imagekit github page, I've seen a few suggestions for hacking the library to use less memory.
I see three options:
Try to hack django-imagekit, and deal with the ensuing update problems from using a modified third party library
Use a different imaging processing library
Do something different entirely -- resize the images in the browser, perhaps? Or use a third-party service? Or...?
I'm looking for advice on which of these routes to take. In particular, if you are familiar with django-imagekit, or if you know of / are using a different image processing library in a Django app, I'd love to hear your thoughts.
Thanks a lot!
Clay | Memory usage in django-imagekit is unacceptable -- ideas on fixes? | 0 | 1.2 | 1 | 0 | 0 | 353 |
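A minimal sketch of the console test the answer suggests, resizing one size at a time so only one copy of the image is in memory (paths and sizes are placeholders):

    from PIL import Image

    for size in [(100, 100), (400, 400), (800, 800)]:
        im = Image.open('upload.jpg')
        im.thumbnail(size, Image.ANTIALIAS)   # resizes in place, keeps aspect ratio
        im.save('upload_%dx%d.jpg' % size)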
11,762,629 | 2012-08-01T15:32:00.000 | 1 | 0 | 0 | 0 | 0 | javascript,python,ajax,django | 0 | 11,762,988 | 0 | 3 | 0 | false | 1 | 0 | Yes, it is possible. If you pass the id as a parameter to the view you use inside your app, like:
def example_view(request, id)
then in urls.py you can use something like this:
url(r'^example_view/(?P<id>\d+)/', 'App.views.example_view')
A URL like /example_view/8/ will then call the view with id set to 8, which you can use to fetch, for example, the 8th record of a specific table in your database.
I'm not quite sure how to do this. I know how to use an ajax call to call the data generated by python, but I'm unsure of how to communicate the initial id information to the django app. Is it possible to say, query a url like ./app/id (IE /app/8), and then use the url information to give python the info? How would I go about editing urls.py and views.py to do that?
Thanks, | Pass information from javascript to django app and back | 1 | 0.066568 | 1 | 0 | 0 | 10,912 |
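A hedged sketch of the view end, returning JSON for the ajax call; MyModel and its field are placeholders:

    import json
    from django.http import HttpResponse

    def example_view(request, id):
        obj = MyModel.objects.get(pk=id)       # look the record up by the URL's id
        return HttpResponse(json.dumps({'name': obj.name}),
                            content_type='application/json')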
11,764,579 | 2012-08-01T17:37:00.000 | 0 | 0 | 0 | 0 | 0 | python,google-app-engine,profile | 0 | 11,764,721 | 0 | 1 | 0 | true | 1 | 0 | I've used Django on most of my web apps, but the concept should be the same: I use ajax to send the data to the backend whenever the user hits submit (with the form returning false), so the user can keep editing. With ajax, you can send the data to different handlers on the backend. Also, using jQuery, you can set flags to track whether fields have changed, to avoid sending the ajax request in the first place. Ajax requests behave almost exactly like standard HTTP requests, but I believe the header indicates AJAX.
If you're looking at strictly backend code, then you will need multiple "if" statements on the backend, checking one field at a time to see if it has been changed. On the backend you should still be able to call other handlers (passing them the same request).
Would I need one huge handler to deal with that page? When I hit 'save' at the bottom of the page, how do I avoid overwriting data that hasn't been modified? Currently, say I have 5 profile variables, they map to 5 handlers, and 5 separate pages that contain their own respective form.
Thanks. | Submitting Multiple Forms At The Same Time (Edit Profile Page) | 0 | 1.2 | 1 | 0 | 0 | 146 |
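A rough sketch of the strictly-backend route from the answer, as one webapp2-style handler; the field list and the lookup helper are assumptions:

    def post(self):
        profile = get_current_profile()              # hypothetical lookup
        for field in ('name', 'username', 'picture'):
            value = self.request.get(field)
            if value:                                # only overwrite fields that were sent
                setattr(profile, field, value)
        profile.put()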
11,764,777 | 2012-08-01T17:51:00.000 | 1 | 1 | 0 | 0 | 0 | javascript,python,ajax,bash,terminal | 0 | 11,764,917 | 0 | 1 | 0 | true | 0 | 0 | Once you have logged in as a non-root user you can just su to the root user | 1 | 0 | 0 | 0 | I am trying to use Ajaxterm and I remember that when I used it for the first time about a year ago, there was something about logging in as root.
Can anyone tell me how to enable root login or point me to a guide? Many different google searches have returned no results.
P.S. My question is NOT whether or not I should login as root, but how to login as root. | Login as root in Ajaxterm | 0 | 1.2 | 1 | 0 | 0 | 249 |
11,765,123 | 2012-08-01T18:14:00.000 | 1 | 0 | 0 | 0 | 1 | python,django | 0 | 11,765,289 | 1 | 2 | 0 | true | 1 | 0 | You can have two copies of your settings.py file, one for production and one for development. Whatever you need to be the default, name it as settings.py
Just set DJANGO_SETTINGS_MODULE to the python path for the file that you would like to use.
So, if your settings files are myproject/settings.py, myproject/settings_dev.py; you can then do:
$ DJANGO_SETTINGS_MODULE=settings_dev python manage.py shell
From the myproject directory. | 1 | 1 | 0 | 0 | I'm new to Python and Django and have over the past few weeks managed to set up my first deployment - a very basic site with user authentication and a few pages, which I hope to fill with content in the next couple of weeks.
I have managed to find the answer to probably 40+ questions I have encountered so far by searching Google / StackOverflow / Django docs etc., but now I have one I can't seem to find a good answer to (perhaps because I don't know how best to search for it): when I develop on my local machine I need my settings.py file to point to the remote database ('HOST': 'www.mysite.com',) but when I deploy to a shared hosting service provider they require the use of localhost ('HOST': '', in settings.py).
Since I host my code on GitHub and want to mirror it to the server, is there a way to resolve this so I don't have to make a manual edit to settings.py each time after uploading changes to the server? | How to use a different database host in development vs deployment? | 0 | 1.2 | 1 | 0 | 0 | 120 |
11,769,471 | 2012-08-02T00:30:00.000 | 3 | 0 | 0 | 0 | 0 | python,rpy2 | 0 | 52,399,670 | 0 | 4 | 0 | false | 0 | 0 | In the latest version of rpy2, you can simply do this in a direct way:
import numpy as np
array = np.array(vector_R)
Thanks in advance! | rpy2: Convert FloatVector or Matrix back to a Python array or list? | 0 | 0.148885 | 1 | 0 | 0 | 5,269 |
11,770,312 | 2012-08-02T02:36:00.000 | 0 | 0 | 0 | 0 | 1 | python,html,post,bottle | 0 | 11,770,627 | 0 | 1 | 0 | true | 1 | 0 | You could add a hidden input field to each form on the page with a specific value. On the server side, check the value of this field to detect which form the post request came from. | 1 | 0 | 0 | 0 | The issue I'm running into is: how do I know which element of my page made a POST request? I have multiple elements that can make a POST request on the page, but how do I get the values from the element that created the request? It seems like this should be fairly trivial, but I have come up with nothing, and quite a few Google searches have turned up nothing either.
Is there any way to do this using Bottle?
I had an idea to add a route for an SQL page (with authentication, of course) to provide the action for the form, and to use the template to render the id in the action, but I was thinking there had to be a better way to do this without routing another page. | Distinguishing POST requests from possible poster elements | 0 | 1.2 | 1 | 0 | 1 | 53
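A minimal bottle sketch of the hidden-field idea from the answer; the field and form names are placeholders:

    from bottle import post, request

    @post('/submit')
    def submit():
        form_id = request.forms.get('form_id')   # hidden <input name="form_id">
        if form_id == 'profile':
            pass   # handle the profile form's other values here
        elif form_id == 'settings':
            pass   # handle the settings form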
11,786,318 | 2012-08-02T21:49:00.000 | 1 | 0 | 0 | 0 | 0 | gstreamer,python-gstreamer | 0 | 11,802,443 | 0 | 1 | 0 | true | 0 | 0 | You can use appsrc. You can pass chunks of your data to appsrc as needed. | 1 | 0 | 0 | 0 | The HTTP file and its contents are already downloaded and are present in memory. I just have to pass on the content to a decoder in gstreamer and play the content. However, I am not able to find the connecting link between the two.
After reading the documentation, I understood that gstreamer uses souphttpsrc for downloading and parsing of http files. But, in my case, I have my own parser as well as my own file downloader to do the same. It takes the url and returns the data in parts to be used by the decoder. I am not sure how to bypass souphttpsrc and use my parser instead, or how to link it to the decoder.
Please let me know if anyone knows how things can be done. | How to hook custom file parser to Gstreamer Decoder? | 0 | 1.2 | 1 | 0 | 1 | 241 |
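A very rough pygst 0.10-era sketch of the appsrc route; the pipeline string, the chunking, and the sink are assumptions rather than a tested recipe:

    import gst

    pipeline = gst.parse_launch('appsrc name=src ! decodebin2 ! autovideosink')
    src = pipeline.get_by_name('src')
    pipeline.set_state(gst.STATE_PLAYING)
    for chunk in downloaded_chunks:              # bytes your own downloader produced
        src.emit('push-buffer', gst.Buffer(chunk))
    src.emit('end-of-stream')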
11,792,531 | 2012-08-03T08:58:00.000 | 1 | 0 | 0 | 1 | 0 | python,ibm-mq,twisted,pymqi | 0 | 11,794,331 | 0 | 1 | 0 | true | 0 | 0 | If you're going to use this functionality a lot, then having a native Twisted implementation is probably worth the effort. A wrapper based on deferToThread will be less work, but it will also be harder to test and debug, perform less well, and have problems on certain platforms where Python threads don't work extremely well (eg FreeBSD).
The approach to take for a native Twisted implementation is probably to implement a protocol that can speak to MQ servers and give it a rich API for interacting with channels, queues, queue managers, etc, and then build a layer on top of that which abstracts the actual network connection away from the application (as I believe mqi/pymqi largely do). | 1 | 0 | 0 | 0 | I'm trying to work out how to approach building a "machine" to send and receive messages to WebSphere MQ, via Twisted. I want it to be as generic as possible, so I can reuse it for many different situations that interface with MQ.
I've used Twisted before, but many years ago now and I'm trying to resurrect the knowledge I once had...
The specific problem I'm having is how to implement the MQ IO using Twisted. There's a pymqi Python library that interfaces with MQ, and it provides all the interfaces I need. The MQ calls I need to implement are:
initiate a connection to a specific MQ server/port/channel/queue-manager/queue combination
take content and post it as a message to the desired queue
poll a queue and return the content of the next message in the queue
send a request to a queue manager to find the number of messages currently in a queue
All of these involve blocking calls to MQ.
As I'm intending to reuse the Twisted/MQ interface many times across a range of projects, should I be looking to implement the MQ IO as a Twisted protocol, as a Twisted transport, or just call the pymqi methods via deferToThread() calls? I realise this is a very broad question with possibly no definitive answer; I'm really after advice from those who may have encountered similar challenges before (i.e. working with queueing interfaces that will always block) and found a way that works well. | Using WebSphere MQ with Twisted | 1 | 1.2 | 1 | 0 | 0 | 245 |
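A small sketch of the deferToThread wrapper option weighed above; the pymqi call is a placeholder:

    from twisted.internet import threads

    def blocking_get(queue):
        return queue.get()                  # blocking pymqi MQGET

    d = threads.deferToThread(blocking_get, my_queue)
    d.addCallback(handle_message)           # fires in the reactor thread with the result
    d.addErrback(log_failure)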
11,793,895 | 2012-08-03T10:27:00.000 | 1 | 0 | 1 | 0 | 0 | python,vim,omnicomplete | 0 | 11,794,082 | 0 | 1 | 0 | true | 0 | 0 | Your settings won't be lost if you recompile vim: recompilation will simply create a new vim executable. If you are using a common Linux distribution, though, you might not need to compile anything: Archlinux, for example, bundles "vim compiled with +python" in the gvim package. Your distro might do something similar. | 1 | 0 | 0 | 0 | When I try to use omnicomplete in a .py file vim says that I need to compile vim with +python support. I already have a bunch of plugins downloaded in my vimfiles with pathogen so how do I recompile vim 7.3 with +python support without losing my settings? Thanks | recompile vim with +python | 0 | 1.2 | 1 | 0 | 0 | 1,308 |
11,809,438 | 2012-08-04T14:39:00.000 | 0 | 0 | 1 | 0 | 0 | python,concurrency,process,monitor,sharing | 0 | 11,809,544 | 0 | 2 | 0 | false | 0 | 0 | Shared memory between processes is usually a poor idea; when calling os.fork(), the operating system marks all of the memory used by the parent and inherited by the child as copy-on-write; if either process attempts to modify a page, it is instead copied to a new location that is not shared between the two processes.
This means that your usual threading primitives (locks, condition variables, etc.) are not usable for communicating across process boundaries.
There are two ways to resolve this. The preferred way is to use a pipe and serialize communication on both ends. Brian Cain's answer, using multiprocessing.Queue, works in this exact way. Because pipes do not have any shared state, and use a robust IPC mechanism provided by the kernel, it's unlikely that you will end up with processes in an inconsistent state.
The other option is to allocate some memory in a special way so that the OS will allow you to use shared memory. The most natural way to do that is with mmap. cPython won't use shared memory for native python objects though, so you would still need to sort out how you will use this shared region. A reasonable library for this is numpy, which can map the untyped binary memory region into useful arrays of some sort. Shared memory is much harder to work with in terms of managing concurrency, though, since there's no simple way for one process to know how another process is accessing the shared region. The only time this approach makes much sense is when a small number of processes need to share a large volume of data, since shared memory can avoid copying the data through pipes.
I am a computer science student and I am working on a concurrent program project in Python.
We should use monitors, a class with its methods and data (such as condition variables). An instance (object) of this class monitor should be shared accross all processes we have (created by os.fork o by multiprocessing module) but we don't know how to do. It is simpler with threads because they already share memory but we MUST use processes. Is there any way to make this object (monitor) shareable accross all processes?
Hoping I'm not saying nonsenses...thanks a lot to everyone for tour attention.
Waiting answers.
Lorenzo | Monitor concurrency (sharing object across processes) in Python | 0 | 0 | 1 | 0 | 0 | 619 |
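A minimal sketch of cross-process communication with multiprocessing.Queue, the pipe-based route the answer recommends:

    from multiprocessing import Process, Queue

    def worker(q):
        while True:
            item = q.get()              # blocks until the parent sends something
            if item is None:
                break
            print(item)

    if __name__ == '__main__':
        q = Queue()
        p = Process(target=worker, args=(q,))
        p.start()
        q.put('hello from the parent')
        q.put(None)                     # sentinel: tell the worker to stop
        p.join()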
11,809,856 | 2012-08-04T15:38:00.000 | 6 | 0 | 0 | 0 | 0 | python,graph,networkx,igraph | 0 | 11,810,286 | 0 | 5 | 0 | false | 0 | 0 | A very simple way to approach (and solve entirely) this problem is to use the adjacency matrix A of the graph. The (i,j)-th element of A^L is the number of walks of length L between nodes i and j. So if you sum these over all j, keeping i fixed at n, you get all walks of length L emanating from node n.
This will, unfortunately, also count the cyclic walks. These, happily, can be found from the element A^L(n,n), so just subtract that.
So your final answer is: Σ_j A^L(n,j) - A^L(n,n).
Word of caution: say you're looking for paths of length 5 from node 1: this calculation will also count walks with small cycles inside, like 1-2-3-2-4, whose length is 5 or 4 depending on how you choose to see it, so be careful about that.
Do you have any idea on how to approach this?
By now, I my graph is a networkx.Graph instance, but I do not really care if e.g. igraph is recommended.
Thanks a lot! | All paths of length L from node n using python | 0 | 1 | 1 | 0 | 1 | 6,172 |
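A sketch of the matrix-power count with numpy and networkx; as the caution above says, this counts walks of length L, including those that revisit nodes:

    import numpy as np
    import networkx as nx

    G = nx.gnp_random_graph(10, 0.3)          # placeholder graph
    A = nx.to_numpy_matrix(G)
    L, n = 5, 0
    AL = np.linalg.matrix_power(A, L)
    count = AL[n].sum() - AL[n, n]            # walks of length L from n, minus closed ones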
11,813,555 | 2012-08-05T02:36:00.000 | 0 | 0 | 1 | 0 | 0 | python,pdf | 0 | 44,475,810 | 0 | 6 | 0 | false | 0 | 0 | A simpler approach would be to use the csv package to write the two columns to a .csv file, then read it into a spreadsheet & print to pdf. Not 100% python but maybe 90% less work ... | 1 | 23 | 0 | 0 | I'm looking for a way to output a VERY simple pdf file from Python. Basically it will consist of two columns of words, one in Russian (so utf-8 characters) and the other in English.
I've been googling for about an hour, and the packages I've found are either massive overkill (and still don't provide useful examples) such as ReportLab, or seem to assume that the only thing anyone would ever do with pdfs is concatenate several of them together (PyPdf, pdfrw).
Maybe I'm just missing something obvious, but all the ones I've seen seem to launch into some massive discussion about taking 17 pdf files and converting them into a 60 foot wide poster with 23 panes (slight exaggeration maybe), and leave me wondering how to get the "Hello World" program working. Any help would be appreciated. | How do I create a simple pdf file in python? | 0 | 0 | 1 | 0 | 0 | 14,227 |
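A sketch of the csv route from the answer; on Python 2 the unicode Russian strings need encoding to UTF-8 before writing (the sample pairs are placeholders):

    # -*- coding: utf-8 -*-
    import csv

    pairs = [(u'привет', u'hello'), (u'мир', u'world')]
    with open('words.csv', 'wb') as f:
        writer = csv.writer(f)
        for ru, en in pairs:
            writer.writerow([ru.encode('utf-8'), en.encode('utf-8')])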
11,822,111 | 2012-08-06T03:10:00.000 | 2 | 0 | 1 | 0 | 0 | python,file | 0 | 11,822,131 | 0 | 1 | 0 | false | 0 | 0 | Opening a file in a+ positions the pointer at the end of the file; truncation from there results in no change to the file. | 1 | 0 | 0 | 0 | With a python file object, how do I remove bytes from the current seek position to the end?
f = open(filename, "a+")
truncate_pos = f.tell()
f.truncate(truncate_pos)
this doesn't seem to work; how can I do it? | python file object: how to remove bytes from the current seek position to the end | 0 | 0.379949 | 1 | 0 | 0 | 138
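A sketch of a mode that makes this work: open in r+ so the position is controllable, seek, then truncate:

    f = open(filename, 'r+')     # read/write, without append's end-of-file behaviour
    f.seek(truncate_pos)         # where the file should now end
    f.truncate()                 # drop everything from here to the end
    f.close()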
11,823,586 | 2012-08-06T06:42:00.000 | 0 | 0 | 1 | 0 | 0 | python,django,multiprocessing | 0 | 11,823,901 | 0 | 2 | 0 | false | 1 | 0 | You can try celery, as it's Django-friendly.
But to be honest I'm not fond of it (bugs :)
We are going to switch to Gearman.
Writing your own job producers and consumers (workers) is a kind of fun!
A Django-based user interface, which allows each user to set up certain tasks
Worker processes (one per user), which, when started by the user, perform the tasks in the background without freezing the UI.
Since any object I create in a view is not persistent, I have no way of keeping track of worker processes. I'm not even sure how to approach this task. Any ideas? | Python multiprocessing and Django - I'm confused | 0 | 0 | 1 | 0 | 0 | 685
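A hedged celery-era sketch of a background task kicked off from a view; the decorator import varies by celery version, and the body is a placeholder:

    # tasks.py
    from celery.task import task      # celery 2.x-style import

    @task
    def run_user_tasks(user_id):
        # long-running work runs in the worker process, not the web process
        ...

    # from a view: run_user_tasks.delay(request.user.id)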
11,824,589 | 2012-08-06T08:06:00.000 | 4 | 0 | 1 | 1 | 0 | python,visual-studio-2010,ptvs | 0 | 11,826,101 | 0 | 4 | 0 | true | 0 | 0 | I found that if :
main.py is set as Startup File,
in the Properties of the project -> Debug tab -> Interpreter Path field, I put the path C:...\env\Scripts\python.exe (i.e. the python executable of the virtualenv)
It works ! | 1 | 6 | 0 | 0 | I don't know how to run the activate.bat in a Python Tools for Visual Studio Project. I have a directory environment in my project with my virtualenv. But, I don't know how I can run ./env/Scripts/activate.bat before the project run my main python script. | How to run a python script with Python Tools for Visual Studio in a virtualenv? | 0 | 1.2 | 1 | 0 | 0 | 30,422 |
11,837,330 | 2012-08-06T23:24:00.000 | 2 | 0 | 0 | 1 | 0 | python,buildbot | 0 | 12,306,901 | 0 | 1 | 0 | true | 0 | 0 | If you don't care about the name of the directory, just the name of the builder, you can set the builddir attribute of the builder to whatever it currently is, then name your builder however you want.
The data stored in the builder directory is in pickles. Looking at the code, I think the only data that could cause issues is the builder name. If you don't care about non-build events, you could probably just delete the builder file from each directory. Otherwise, rewriting the pickle with the updated builder name should work.
For instance I have several windows slaves which all might build: "Windows 2008+ DEBUG" but I want to rename this build to: "Windows 2008R2+ DEBUG".
How do I set compare_attr (if that's even what I need to do) so that all of the logs/etc... are included from the previous builds in the new one.
Can I manually rename the directories and expect everything to work? Experimentation has told me that will not work but maybe I can write a command to change certain things? | rename a build in buildbot | 0 | 1.2 | 1 | 0 | 0 | 812 |
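A hedged master.cfg sketch of the builddir trick from the answer (buildbot 0.8-era API; slave names and the factory are placeholders):

    from buildbot.config import BuilderConfig

    c['builders'].append(BuilderConfig(
        name='Windows 2008R2+ DEBUG',       # the new display name
        builddir='Windows 2008+ DEBUG',     # the old directory keeps its history
        slavenames=['win-slave'],
        factory=build_factory,
    ))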
11,848,650 | 2012-08-07T14:55:00.000 | 4 | 0 | 1 | 1 | 1 | python,ubuntu,compilation,nautilus | 0 | 11,848,690 | 0 | 3 | 0 | false | 0 | 0 | You should make the .py files executable and click on them. The .pyc files cannot be run directly. | 2 | 6 | 0 | 0 | Sorry, for the vague question, don't know actually how to ask this nor the rightful terminologies for it.
How to run a python script/bytecode/.pyc (any compiled python code) without going through the terminal. Basically on Nautilus: "on double click of the python script, it'll run" or "on select then [Enter], it'll run!". That's my goal at least.
When i check the "Allow executing of file as a program" then press [enter] on the file. It gives me this message:
Could not display "/home/ghelo/Music/arrange.pyc".
There is no application installed for Python bytecode files.
Do you want to search for an application to open this file?
Using Ubuntu 12.04, by the way, and it has to be python 2; one of the packages doesn't work on python 3. If there's a difference between how to do it on the two versions, include it, if it's not too much to ask. Thank you.
I know it doesn't matter, but it's a script for auto-renaming & arranging my music files. Guide me accordingly, stupid idiot here. :)
11,848,650 | 2012-08-07T14:55:00.000 | 1 | 0 | 1 | 1 | 1 | python,ubuntu,compilation,nautilus | 0 | 12,015,527 | 0 | 3 | 0 | true | 0 | 0 | Adding " #!/usr/bin/env python " at the top of the .py file works! Hmm, although don't appreciate the pop-up, but nevermind. :P
From PHPUG:
You do not invoke the pyc file. It's the .py file that's invoked. Python is an interpreted language.
A simpler way to make a python exectuable (explained):
1) Add #!/usr/bin/env python at the top of your python executable file (eg. main.py) (it uses the default python - eg. if using arch, that's py3 instead of py2. You can explicitly tell it to run python2/python3 by replacing python with it's version: ex. python2.7)
2) Write the code. If the script is directly invoked, __name__ variable becomes equal to the string '__main__' thus the idiom: if __name__ == '__main__':. You can add all the logic that relates to your script being directly invoked in this if-block. This keeps your executable importable.
3) Make it executable 'chmod +x main.py'
4) Call the script: ./main.py args args | 2 | 6 | 0 | 0 | Sorry, for the vague question, don't know actually how to ask this nor the rightful terminologies for it.
How to run a python script/bytecode/.pyc (any compiled python code) without going through the terminal. Basically on Nautilus: "on double click of the python script, it'll run" or "on select then [Enter], it'll run!". That's my goal at least.
When i check the "Allow executing of file as a program" then press [enter] on the file. It gives me this message:
Could not display "/home/ghelo/Music/arrange.pyc".
There is no application installed for Python bytecode files.
Do you want to search for an application to open this file?
Using Ubuntu 12.04, by the way, and it has to be python 2; one of the packages doesn't work on python 3. If there's a difference between how to do it on the two versions, include it, if it's not too much to ask. Thank you.
I know it doesn't matter, but it's a script for auto-renaming & arranging my music files. Guide me accordingly, stupid idiot here. :) | How to run Python script with one icon click? | 0 | 1.2 | 1 | 0 | 0 | 15,947
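A minimal sketch of steps 1-2 of the accepted answer put together; the print line is a placeholder for the real logic:

    #!/usr/bin/env python
    # step 1: the shebang lets the file run directly once it is chmod +x

    def main():
        print('arranging music files...')   # placeholder for the real logic

    if __name__ == '__main__':              # step 2: only runs on direct invocation
        main()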
11,854,528 | 2012-08-07T21:41:00.000 | 0 | 0 | 0 | 0 | 0 | python,opencv | 0 | 25,125,143 | 0 | 1 | 0 | false | 0 | 0 | I do something similar with C++ in OpenCV. There are a couple of ways to go about this.
I use TCP/IP protocols, to make sure I don't have packet loss.
Next, to test quality, I send and receive a video file (that I recorded from the camera) instead of streaming "new" video from the camera. Now I can check the quality by checking the bytes received in every frame.
This may not be optimal, but it is a starting point. | 1 | 0 | 0 | 0 | I am new to python and programming in general. I was wondering if there is a way to validate that you are getting a video feed and not just a black screen from an incoming call. I have automated a script in Python that makes a call and answers the call, but one of the issues we are testing is how often we get a black screen instead of the video call. I have been reading up on OpenCV and played around with it some, but am not getting anywhere near the results I am looking for. Is there another way in python to detect video? If so, I would greatly appreciate being pointed in the right direction.
Thanks | Checking the quality of an Incoming Video with python | 1 | 0 | 1 | 0 | 0 | 1,107 |
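As a complementary Python-side check, a hedged sketch of flagging a black frame by its mean pixel intensity (the capture source and the threshold are guesses to tune):

    import cv2

    cap = cv2.VideoCapture(0)           # or the stream source of the call
    ret, frame = cap.read()
    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if gray.mean() < 10:            # nearly all-black frame
            print('black screen detected')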
11,868,963 | 2012-08-08T16:27:00.000 | 0 | 0 | 1 | 0 | 1 | python,python-imaging-library,tiff | 0 | 11,877,012 | 0 | 1 | 0 | true | 0 | 1 | It appears that TiffImagePlugin does not easily allow me to hook in additional decompressors. Replacing TiffImageFile._decoder with a dictionary of decoders might work, but you would have to examine and test each release of PIL to ensure it hasn't broken. This level of maintenance is just as bad as a custom PIL. I appreciate the design of tifffile.py for using a dictionary of decoders. It made it very easy.
Final solution? I couldn't hook my code into PIL. I had to use PIL.Image.fromarray() to use my decompressed images. | 1 | 1 | 0 | 0 | I wrote a pure python TIFF G4 decompressor for use with tifffile.py. I know there are ways to add libtiff to a custom PIL, but I never could get that working very well in a mixed virtualenv. I want to manipulate the image in PIL. I am looking for pointers on hooking my decompressor into stock PIL's TiffImagePlugin.py.
Any ideas? | Using TIFF G4 image in PIL | 0 | 1.2 | 1 | 0 | 0 | 973 |
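A small sketch of the fromarray workaround the answer settles on; the custom G4 decoder is the author's own code, stubbed here:

    import numpy as np
    from PIL import Image

    pixels = my_g4_decode(raw_tiff_strip)     # hypothetical pure-python decoder
    arr = np.asarray(pixels, dtype=np.uint8)
    im = Image.fromarray(arr)                 # now a normal PIL image
    im.save('page.png')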
11,876,545 | 2012-08-09T03:52:00.000 | 1 | 0 | 1 | 0 | 1 | windows,python-2.7 | 1 | 11,876,572 | 0 | 3 | 0 | true | 0 | 0 | Run your program from a Windows command prompt. That will not automatically close when the program finishes.
If you run your program by double-clicking on the .py file icon, then Windows will close the window when your program finishes (whether it was successful or not). | 1 | 1 | 0 | 0 | I'm making an application in python on Windows. When I run it in the console, it stops, shows an error, and closes. I can't see the error because it's too fast and I can't read it. I'm editing the code with IDLE (the program that came with python when I installed it), and when I run it with the python shell, there are no errors. I would run it from IDLE, but the console has more features.
I don't know why this is happening. I need your help. | how to make my console in python not to close? | 0 | 1.2 | 1 | 0 | 0 | 7,808 |
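Beyond running from a command prompt, a common hedged trick is to pause the script itself so a double-clicked window stays open long enough to read the traceback (main is a hypothetical entry point):

    import traceback

    try:
        main()                     # hypothetical entry point of your program
    except Exception:
        traceback.print_exc()      # show the error instead of letting it vanish
    raw_input('Press Enter to exit...')   # Python 2; use input() on Python 3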
11,894,210 | 2012-08-10T01:18:00.000 | 3 | 1 | 0 | 0 | 0 | python,pyramid,production | 0 | 11,898,284 | 0 | 1 | 0 | true | 1 | 0 | Well, the big difference between python setup.py develop and python setup.py install is that install puts the package in your site-packages directory, while develop installs an egg-link that points to the source directory for development.
So yes, you can technically use either method. But depending on how you built your project, installing into site-packages might be a bad idea.
Why? File uploads, or anything your app might generate such as dynamic files: if your app doesn't use config files to find where to save its files, an installed app may try to write files into your site-packages directory.
In other words, you have to make sure that all files and directories that may be generated can be located using config files.
Once all dynamic directories are pointed out in the configs, installing is good.
All you'll have to do is create a folder with a production.ini file and run pserve production.ini.
That way the code can be saved anywhere on your machine, and you can also use uWSGI or any other WSGI server you like.
I think installing the code isn't a bad thing, and keeping data apart from the application is a good thing.
It has some advantages for deployment, I guess. | 1 | 3 | 0 | 0 | So as I near the production phase of my web project, I've been wondering how exactly to deploy a pyramid app. In the docs, it says to use ../bin/python setup.py develop to put the app in development mode. Is there another mode designed for production, or do I just use ../bin/python setup.py install? | Preparing a pyramid app for production | 0 | 1.2 | 1 | 0 | 0 | 2,476
11,894,333 | 2012-08-10T01:37:00.000 | 1 | 1 | 1 | 0 | 0 | python,memory,virtualenv,web-frameworks | 0 | 12,218,779 | 0 | 3 | 0 | false | 1 | 0 | It depends on how you're going to run the application in your environment. There are many different ways to run Python web apps. Recently popular methods seem to be Gunicorn and uWSGI. So you'd be best off running the application as you would in your environment, and you could simply use a process monitor to see how much memory and CPU are being used by the process running your application. | 1 | 1 | 0 | 0 | I'm creating an app in several different python web frameworks to see which has the better balance of being comfortable for me to program in and performance. Is there a way of reporting the memory usage of a particular app that is being run in virtualenv?
If not, how can I find the average, maximum and minimum memory usage of my web framework apps? | Testing memory usage of python frameworks in Virtualenv | 0 | 0.066568 | 1 | 0 | 0 | 1,578 |
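One hedged way to do the measuring from Python itself is psutil (method names below follow psutil 2.x; older releases used get_-prefixed names):

    import psutil

    p = psutil.Process(worker_pid)       # pid of the gunicorn/uwsgi worker
    print(p.memory_info().rss)           # resident memory, in bytes
    print(p.cpu_percent(interval=1.0))   # CPU usage sampled over one second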
11,898,451 | 2012-08-10T09:03:00.000 | 0 | 0 | 1 | 0 | 0 | python,loops,cron,infinite-loop | 0 | 11,898,609 | 0 | 3 | 0 | false | 0 | 0 | I guess one way to work around the issue is having a script for one loop run, that would:
check no other instance of the script is running
look into the queue and process everything found there
Now you can run this script from cron every minute between 8 a.m. and 8 p.m. The only downside is that new items may take some time to get processed.
If I execute it from a cron job, then I'm assuming it'll just run until it's finished, so how do I stop it?
Also, if I run it from a browser and just call the file, I'm assuming stopping the page from loading would halt it, correct?
Here's the scenario: I have one python script that gathers info from pages and puts it into a queue. Then I want to have another python script in an infinite loop that just checks for new items in the queue. Let's say I want the infinite loop to begin at 8am and end at 8pm. How do I accomplish this? | Stop python script in infinite loop | 0 | 0 | 1 | 0 | 0 | 10,068
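A small sketch of the single-run worker from the answer; a lock file prevents overlapping runs, and cron supplies the 8-to-8 schedule (e.g. * 8-19 * * *):

    import os
    import sys

    LOCK = '/tmp/queue_worker.lock'

    if os.path.exists(LOCK):
        sys.exit(0)             # a previous run is still busy
    open(LOCK, 'w').close()
    try:
        process_queue()         # hypothetical: drain everything queued right now
    finally:
        os.remove(LOCK)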
11,903,310 | 2012-08-10T14:02:00.000 | 3 | 1 | 0 | 0 | 0 | python,unit-testing,tdd | 0 | 11,903,386 | 0 | 1 | 0 | true | 0 | 0 | I'm a bit rusty on Python, but this is how I would approach it:
Grab the image in a manual test. Compute a checksum (MD5, perhaps). Then the automated tests recompute the MD5 of the generated file and compare it with the one recorded during the manual test.
Hope this helps.
How do I do unit testing for this situation?
I can:
Test that the pixel array I'm passing to the external library is what I expect it to be.
Test that the external library functions I call give me the expected return values.
Manually verify that the image looks like I'm expecting (by opening the image and eyeballing it).
I can't:
Test that the image file is correct. To do that I'd have to either generate an image to compare to (but how do I generate that 'trustworthy' image?), or write a unit-testable image-writing module (so I wouldn't need to bother with the external library at all).
Is this enough to provide coverage for my code? Is testing the interface between my code and the external library sufficient, leaving me to trust that the output of the external library (the image file) is correct through manual eyeballing?
How do you write unit tests to ensure that the external libraries you use do what you expect them to? | Unittest binary file output | 0 | 1.2 | 1 | 0 | 0 | 641 |
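A sketch of the checksum comparison from the answer; the known-good digest would be captured once from a manually eyeballed image (the value below and write_image are placeholders):

    import hashlib

    def md5_of(path):
        with open(path, 'rb') as f:
            return hashlib.md5(f.read()).hexdigest()

    KNOWN_GOOD = 'digest-from-a-verified-run'

    def test_image_output():
        write_image('out.png', pixels)           # hypothetical code under test
        assert md5_of('out.png') == KNOWN_GOOD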
11,926,023 | 2012-08-12T21:28:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 11,926,214 | 0 | 3 | 0 | false | 0 | 0 | The TSV file has already lost all the type information.
If the pickle module had been used to write out the file, you would have been able to unpickle it easily; however, it looks like you just get to read the damaged file, so pickle is no use to you here.
The best you can do is attempt to convert each field to int and handle the exception if it fails | 1 | 0 | 0 | 0 | I have a TSV file that consists of integers along with some false data that could be anything such as floats or characters etc.
The idea is to read the contents of the file and find out which ones are bad (containing data other than integers)
Each line can be read using the readline method once the file has been opened for reading. Of course, the readline() method returns each line as a string and not its constituent data types. My understanding is that I could use the pickle module somehow to ensure that I retain the original data types, by representing each value as its serialized version via the dump and load methods.
The question is, how do I do this?
Reading each line and pickling it would not help, since readline by default returns a string. Thereby, upon pickling, I'm really just pickling a string into a serialized python object representation, and unpickling would only return it as a string. Thus the actual data in the line, such as integers or chars, are being represented as strings regardless.
So I assume the question is: how do I pickle things the right way, OR how do I process each line of a file while ensuring that its data types are maintained? | Read a file of mixed items and retain their data types | 0 | 0 | 1 | 0 | 0 | 374
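A sketch of the try/except approach from the answer: flag any line containing a field that will not parse as an int:

    def find_bad_lines(path):
        bad = []
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                for field in line.rstrip('\n').split('\t'):
                    try:
                        int(field)
                    except ValueError:
                        bad.append(lineno)
                        break
        return bad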
11,929,073 | 2012-08-13T06:20:00.000 | 1 | 0 | 0 | 0 | 0 | python,openerp,accounting | 0 | 11,929,906 | 0 | 2 | 0 | true | 1 | 0 | You can override the "pay_and_reconcile" function to write to the account field; this function is called at payment time.
These three functions are called when validating an invoice:
action_date_assign()
action_move_create()
action_number()
You can override any one of them, or you can add your own function in the workflow for the "open" activity. | 1 | 1 | 0 | 0 | I'm new to OpenERP and python and I need some help saving an amount into a particular account.
I have created a field in the invoice form that calculates a specific amount based on some code and displays that amount in the field. What I want to do is associate an account with this field, so that when the invoice is validated and/or paid, this amount is saved into an account and later on I can see it in the journal entries and/or chart of accounts. Any idea how to do that? | OpenERP: saving a field value (amount) into an account | 0 | 1.2 | 1 | 0 | 0 | 226
11,970,079 | 2012-08-15T13:23:00.000 | 1 | 0 | 0 | 1 | 0 | php,python,api,sync,icloud | 0 | 11,973,429 | 0 | 4 | 0 | false | 1 | 0 | To the best of my knowledge, there is no way to interface with iCloud directly; it can only be done through an iOS or Mac OS app, and by calling the correct iCloud Objective-C APIs with UI/NSDocument classes. Since you are not using Cocoa, let alone Objective-C, you will most likely not be able to do this. I may be wrong of course, as I haven't conducted an in-depth search into this. | 2 | 4 | 0 | 0 | I'm building a custom CRM web-based system and have integrated synchronization of contacts and reminders with Google apps, and I need to do the same with Apple iCloud. Is there any way to do it? I haven't found any official API for this purpose. The CRM is written in PHP, but I'm able to use python for this purpose as well. | Website sync of contacts and reminders with iCloud | 0 | 0.049958 | 1 | 0 | 0 | 1,657
11,970,079 | 2012-08-15T13:23:00.000 | 0 | 0 | 0 | 1 | 0 | php,python,api,sync,icloud | 0 | 12,255,882 | 0 | 4 | 0 | false | 1 | 0 | I would recommend that you sync using the Google Contacts API. Then you can tell iPhone users to use that instead of iCloud. | 2 | 4 | 0 | 0 | I'm building a custom CRM web-based system and have integrated synchronization of contacts and reminders with Google apps, and I need to do the same with Apple iCloud. Is there any way to do it? I haven't found any official API for this purpose. The CRM is written in PHP, but I'm able to use python for this purpose as well. | Website sync of contacts and reminders with iCloud | 0 | 0 | 1 | 0 | 0 | 1,657
11,970,246 | 2012-08-15T13:35:00.000 | 2 | 0 | 0 | 0 | 0 | python,wxpython | 0 | 11,975,689 | 0 | 2 | 0 | false | 0 | 1 | If you're talking about doing this stuff inside a wxPython program, then it's all pretty simple. There's a PopupMenu widget for the first one and an AcceleratorTable for the second one. If you want to catch mouse and keyboard events outside of a wxPython program, then you have to go very low-level and hook into the OS itself, which means that there really isn't any good way to do it cross-platform. You'll probably want to look at ctypes and similar libraries for that sort of thing. | 1 | 0 | 0 | 0 | I am thinking of writing a python program that runs in the background and can inspect the user's GUI events.
My requirements are very simple:
1) When the user right-clicks the mouse, it can show an option; and when this option is chosen, my program should know about the event.
2) When the user selects a file and presses some predefined key combination, my program should know about the event.
What should I do? Is this a GUI program? I am also thinking that this program may be a daemon on the machine that can inspect the user's GUI events, but I am not sure how to do this.
Thanks. | Background python program inspect GUI events | 0 | 0.197375 | 1 | 0 | 0 | 367 |
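A minimal sketch of the in-app case from the answer, using wx.EVT_CONTEXT_MENU and PopupMenu; the menu text is a placeholder:

    import wx

    class Frame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None, title='demo')
            self.Bind(wx.EVT_CONTEXT_MENU, self.on_context)

        def on_context(self, event):
            menu = wx.Menu()
            item = menu.Append(wx.ID_ANY, 'My option')
            self.Bind(wx.EVT_MENU, self.on_option, item)
            self.PopupMenu(menu)     # shows at the mouse position
            menu.Destroy()

        def on_option(self, event):
            print('option chosen')

    app = wx.App(False)
    Frame().Show()
    app.MainLoop()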
11,988,636 | 2012-08-16T13:49:00.000 | 2 | 0 | 0 | 1 | 1 | python,django,pycharm | 0 | 38,212,424 | 0 | 4 | 0 | false | 0 | 0 | To give PyCharm permissions, one has to run it as Administrator (Windows) or using sudo on OSX/Linux: sudo /Applications/PyCharm.app/Contents/MacOS/pycharm. Note that this truly runs PyCharm as a new user, so you'll have to register the app again and set up your customizations again if you have any (i.e. theme, server profiles, etc.)
I need to use PayPal, which requires me to use port 80.
But using Mac OS X 10.8 I can't have it working because of permission issues.
I've already tried running PyCharm with SUDO command.
Does anyone know how to run Pycharm using port 80, or any other solution?
Thanks. | How to run PyCharm using port 80 | 0 | 0.099668 | 1 | 0 | 0 | 8,103 |
11,988,636 | 2012-08-16T13:49:00.000 | 1 | 0 | 0 | 1 | 1 | python,django,pycharm | 0 | 22,296,578 | 0 | 4 | 0 | false | 0 | 0 | For anyone looking for the answer to this question, check your PyCharm Run/Debug Configurations: Run -> Edit Configurations -> Port | 1 | 8 | 0 | 0 | I can't run my PyCharm IDE using port 80.
I need to use PayPal, which requires me to use port 80.
But using Mac OS X 10.8 I can't have it working because of permission issues.
I've already tried running PyCharm with SUDO command.
Does anyone know how to run Pycharm using port 80, or any other solution?
Thanks. | How to run PyCharm using port 80 | 0 | 0.049958 | 1 | 0 | 0 | 8,103 |
11,989,408 | 2012-08-16T14:29:00.000 | 3 | 0 | 0 | 0 | 1 | python,mongodb,pymongo | 0 | 11,989,459 | 0 | 1 | 0 | true | 0 | 0 | You can use one pymongo connection across different modules. Open it in a separate module and import it into other modules on demand. After the program finishes working, you can close it. This is the best option.
About your other questions:
You can leave it like this (all connections will be closed when the script finishes execution), but leaving something unclosed is bad form.
You can open/close a connection for each operation (but establishing a connection is a time-expensive operation).
That's what I'd advise (see the first paragraph of this answer).
I think this point can be merged with 3.
However, when the program exits, the main module is the only one that 'knows' about the exiting, and closes its connection to MongoDB. The other modules do not know this and have no chance of closing their connections. Since I have little experience with databases, I wonder if there are any problems leaving connections open when exiting.
Should I:
Leave it like this?
Instead open the connection before and close it after each operation?
Change my application structure completely?
Solve this in a different way? | When to disconnect from mongodb | 1 | 1.2 | 1 | 1 | 0 | 1,207 |
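A sketch of the one-shared-connection pattern from the answer; the class name follows pre-2.4 pymongo (later versions call it MongoClient):

    # db.py -- every other module does: from db import db
    from pymongo import Connection

    connection = Connection('localhost', 27017)
    db = connection['mydb']

    def close():
        connection.disconnect()     # call once from the main module on exit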
11,994,515 | 2012-08-16T19:54:00.000 | 2 | 0 | 0 | 0 | 0 | python,numpy,scipy | 0 | 11,995,122 | 0 | 2 | 0 | true | 0 | 0 | You might want to consider doing this in Cython, instead of as a C extension module. Cython is smart, and lets you do things in a pretty pythonic way, even though it at the same time lets you use C datatypes and python datatypes.
Have you checked out the array module? It allows you to store lots of scalar, homogeneous types in a single collection.
If you're truly "logging" these, and not just returning them to CPython, you might try opening a file and fprintf'ing them.
BTW, realloc might be your friend here, whether you go with a C extension module or Cython. | 2 | 2 | 0 | 0 | I'm using python to set up a computationally intense simulation, then running it in a custom built C-extension and finally processing the results in python. During the simulation, I want to store a fixed-length number of floats (C doubles converted to PyFloatObjects) representing my variables at every time step, but I don't know how many time steps there will be in advance. Once the simulation is done, I need to pass back the results to python in a form where the data logged for each individual variable is available as a list-like object (for example a (wrapper around a) continuous array, piece-wise continuous array or column in a matrix with a fixed stride).
At the moment I'm creating a dictionary mapping the name of each variable to a list containing PyFloatObject objects. This format is perfect for working with in the post-processing stage but I have a feeling the creation stage could be a lot faster.
Time is quite crucial since the simulation is a computationally heavy task already. I expect that a combination of A. buying lots of memory and B. setting up your experiment wisely will allow the entire log to fit in the RAM. However, with my current dict-of-lists solution keeping every variable's log in a continuous section of memory would require a lot of copying and overhead.
My question is: What is a clever, low-level way of quickly logging gigabytes of doubles in memory with minimal space/time overhead, that still translates to a neat python data structure?
Clarification: when I say "logging", I mean storing until after the simulation. Once that's done a post-processing phase begins and in most cases I'll only store the resulting graphs. So I don't actually need to store the numbers on disk.
Update: In the end, I changed my approach a little and added the log (as a dict mapping variable names to sequence types) to the function parameters. This allows you to pass in objects such as lists or array.arrays or anything that has an append method. This adds a little time overhead because I'm using the PyObject_CallMethodObjArgs function to call the Append method instead of PyList_Append or similar. Using arrays allows you to reduce the memory load, which appears to be the best I can do short of writing my own expanding storage type. Thanks everyone! | Logging an unknown number of floats in a python C extension | 0 | 1.2 | 1 | 0 | 0 | 186 |
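A sketch of the array-module option from the answer: one contiguous, appendable buffer of raw C doubles per variable, with no per-value PyFloatObjects:

    from array import array

    log = dict((name, array('d')) for name in ('x', 'y', 'z'))
    log['x'].append(3.14)       # stored as a raw C double
    # post-processing can slice these like lists, or wrap them with numpy,
    # e.g. numpy.frombuffer(log['x'], dtype=float)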
11,994,515 | 2012-08-16T19:54:00.000 | 1 | 0 | 0 | 0 | 0 | python,numpy,scipy | 0 | 11,995,857 | 0 | 2 | 0 | false | 0 | 0 | This is going to be more a huge dump of ideas rather than a consistent answer, because it sounds like that's what you're looking for. If not, I apologize.
The main thing you're trying to avoid here is storing billions of PyFloatObjects in memory. There are a few ways around that, but they all revolve on storing billions of plain C doubles instead, and finding some way to expose them to Python as if they were sequences of PyFloatObjects.
To make Python (or someone else's module) do the work, you can use a numpy array, a standard library array, a simple hand-made wrapper on top of the struct module, or ctypes. (It's a bit odd to use ctypes to deal with an extension module, but there's nothing stopping you from doing it.) If you're using struct or ctypes, you can even go beyond the limits of your memory by creating a huge file and mmapping windows of it in as needed.
To make your C module do the work, instead of actually returning a list, return a custom object that meets the sequence protocol, so when someone calls, say, foo.__getitem__(i) you convert the underlying array[i] to a PyFloatObject on the fly.
Another advantage of mmap is that, if you're creating the arrays iteratively, you can create them by just streaming to a file, and then use them by mmapping the resulting file back as a block of memory.
Otherwise, you need to handle the allocations. If you're using the standard array, it takes care of auto-expanding as needed, but otherwise, you're doing it yourself. The code to do a realloc and copy if necessary isn't that difficult, and there's lots of sample code online, but you do have to write it. Or you may want to consider building a strided container that you can expose to Python as if it were contiguous even though it isn't. (You can do this directly via the complex buffer protocol, but personally I've always found that harder than writing my own sequence implementation.) If you can use C++, vector is an auto-expanding array, and deque is a strided container (and if you've got the SGI STL rope, it may be an even better strided container for the kind of thing you're doing).
As the other answer pointed out, Cython can help for some of this. Not so much for the "exposing lots of floats to Python" part; you can just move pieces of the Python part into Cython, where they'll get compiled into C. If you're lucky, all of the code that needs to deal with the lots of floats will work within the subset of Python that Cython implements, and the only things you'll need to expose to actual interpreted code are higher-level drivers (if even that). | 2 | 2 | 0 | 0 | I'm using python to set up a computationally intense simulation, then running it in a custom built C-extension and finally processing the results in python. During the simulation, I want to store a fixed-length number of floats (C doubles converted to PyFloatObjects) representing my variables at every time step, but I don't know how many time steps there will be in advance. Once the simulation is done, I need to pass back the results to python in a form where the data logged for each individual variable is available as a list-like object (for example a (wrapper around a) continuous array, piece-wise continuous array or column in a matrix with a fixed stride).
At the moment I'm creating a dictionary mapping the name of each variable to a list containing PyFloatObject objects. This format is perfect for working with in the post-processing stage but I have a feeling the creation stage could be a lot faster.
Time is quite crucial since the simulation is a computationally heavy task already. I expect that a combination of A. buying lots of memory and B. setting up your experiment wisely will allow the entire log to fit in the RAM. However, with my current dict-of-lists solution keeping every variable's log in a continuous section of memory would require a lot of copying and overhead.
My question is: What is a clever, low-level way of quickly logging gigabytes of doubles in memory with minimal space/time overhead, that still translates to a neat python data structure?
Clarification: when I say "logging", I mean storing until after the simulation. Once that's done a post-processing phase begins and in most cases I'll only store the resulting graphs. So I don't actually need to store the numbers on disk.
Update: In the end, I changed my approach a little and added the log (as a dict mapping variable names to sequence types) to the function parameters. This allows you to pass in objects such as lists or array.arrays or anything that has an append method. This adds a little time overhead because I'm using the PyObject_CallMethodObjArgs function to call the Append method instead of PyList_Append or similar. Using arrays allows you to reduce the memory load, which appears to be the best I can do short of writing my own expanding storage type. Thanks everyone! | Logging an unknown number of floats in a python C extension | 0 | 0.099668 | 1 | 0 | 0 | 186 |
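A sketch of the mmap idea above: stream raw doubles to a file during the run, then map the file back as a lazy, list-like numpy array for post-processing (simulate is a hypothetical float generator):

    import numpy as np

    with open('x.log', 'wb') as f:
        for v in simulate():                    # hypothetical float generator
            f.write(np.float64(v).tostring())   # 8 raw bytes per sample

    data = np.memmap('x.log', dtype=np.float64, mode='r')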
11,996,987 | 2012-08-16T23:43:00.000 | 2 | 0 | 1 | 0 | 0 | python,artificial-intelligence,pygame,game-engine,game-physics | 0 | 11,997,071 | 0 | 3 | 0 | false | 0 | 0 | Basically it's:
Default behavior: random walk
If the player is within X distance: melee attack
If the player is within Y distance: charge the player
If the player is within Z distance: cast a spell
If the player is outside range and the mob has aggro: move toward the player
That's the extent of most AI... at least game AI.
It's too CPU-intensive to do things like neural networks and machine learning for game mobs.
You may want to look at fuzzy logic AI... that's largely what I described above, but more than one rule can apply simultaneously. | 1 | 0 | 0 | 0 | I am an amateur programmer looking to develop a game. I've decided to use Python and pygame. (I know, there are better options out there, but I really don't know C++ or Java that well.) The issue I'm having is that I really have no idea how to create a decent AI. I'm talking about the sort of AI that has monsters move this way at this point, use a bow and arrow at that point, and use a long-range magic attack at another point (yes, it's a top-down 2-D fantasy game). I really don't understand how it makes those decisions and how you program it to make those decisions. I've looked around everywhere, and either the resource gets so technical that I can't understand it at all, or it gives me no information whatsoever. I'm hoping someone here can give me some clear suggestions, or at least point me to some decent resources. Right now my bots just sort of wander randomly around the screen... | How to develop an AI script | 0 | 0.132549 | 1 | 0 | 0 | 3,243
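The distance-banded rules above, as a short sketch; the thresholds and entity methods are placeholders:

    import math

    def ai_step(mob, player):
        d = math.hypot(player.x - mob.x, player.y - mob.y)
        if d < MELEE_RANGE:              # hypothetical tuned constants
            mob.melee(player)
        elif d < CHARGE_RANGE:
            mob.charge(player)
        elif d < SPELL_RANGE:
            mob.cast_spell(player)
        elif mob.has_aggro:
            mob.move_toward(player)
        else:
            mob.random_walk()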
11,999,147 | 2012-08-17T02:48:00.000 | 2 | 0 | 0 | 0 | 0 | python,numpy,scipy,scikit-learn | 1 | 12,011,024 | 0 | 3 | 0 | true | 0 | 0 | So far I discovered that most classifiers, like linear regressors, will automatically convert complex numbers to just the real part.
kNN and RadiusNN regressors, however, work well - since they do a weighted average of the neighbor labels and so handle complex numbers gracefully.
Using a multi-target classifier is another option, however I do not want to decouple the x and y directions since that may lead to unstable solutions as Colonel Panic mentions, when both results come out close to 0.
I will try other classifiers with complex targets and update the results here. | 3 | 7 | 1 | 1 | I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.
I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms.
Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets? | Is it possible to use complex numbers as target labels in scikit learn? | 1 | 1.2 | 1 | 0 | 0 | 2,406 |
11,999,147 | 2012-08-17T02:48:00.000 | 1 | 0 | 0 | 0 | 0 | python,numpy,scipy,scikit-learn | 1 | 12,003,586 | 0 | 3 | 0 | false | 0 | 0 | Good question. How about transforming the angles into a pair of labels, viz. x and y co-ordinates? These are continuous functions of the angle (cos and sin). You can then combine the results from separate x and y classifiers into an angle: θ = sign(x) * arctan(y/x). However, that result will be unstable if both classifiers return numbers near zero. | 3 | 7 | 1 | 1 | I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.
I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms.
Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets? | Is it possible to use complex numbers as target labels in scikit learn? | 1 | 0.066568 | 1 | 0 | 0 | 2,406 |
11,999,147 | 2012-08-17T02:48:00.000 | 4 | 0 | 0 | 0 | 0 | python,numpy,scipy,scikit-learn | 1 | 12,004,759 | 0 | 3 | 0 | false | 0 | 0 | Several regressors support multidimensional regression targets. Just view the complex numbers as 2d points. | 3 | 7 | 1 | 1 | I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating.
I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms.
Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets? | Is it possible to use complex numbers as target labels in scikit learn? | 1 | 0.26052 | 1 | 0 | 0 | 2,406 |
12,000,219 | 2012-08-17T05:25:00.000 | 1 | 0 | 1 | 0 | 1 | python,eclipse-plugin,pydev | 0 | 12,003,951 | 0 | 5 | 0 | false | 0 | 0 | As for me, all I do with Eclipse for working with Python:
Install pydev
Set tabs to be replaced by spaces
Set tab length to 4
you can make tabs and spaces visual by displaying non-printable symbols.
Hopefully, this is what you meant. | 1 | 10 | 0 | 0 | Recently, I use Eclipse to edit my python code. But lacking indentation guides, I feel not very well. So how to add the auto indentation guides for Eclipse? Is there certain plugin?
What's more, I have tried the EditBox. But, you know, that is not very natural under some themes............... | Does Eclipse have indentation guides? | 0 | 0.039979 | 1 | 0 | 0 | 12,501 |
12,005,515 | 2012-08-17T12:09:00.000 | 3 | 0 | 1 | 0 | 0 | python | 1 | 12,005,547 | 0 | 2 | 0 | false | 0 | 0 | Instead of using threads, use different processes and use some sort of IPC to communicate between each process. | 2 | 0 | 0 | 0 | I have implemented tool to extract the data from clear quest server using python. I need to do lot of searches in clearquest so I have implemented it using threading.
To do that i try to open individual clearquest session for each thread. When I try to run this I am getting Run Time error and none of the clearquest session opened correctly.
I did bit of research on internet and found that it's because of Global Interpretor Lock in python. I would like to know how to overcome this GIL...Any idea would be much appreciated | python how to overcome global interpretor lock | 0 | 0.291313 | 1 | 0 | 0 | 229 |
12,005,515 | 2012-08-17T12:09:00.000 | 2 | 0 | 1 | 0 | 0 | python | 1 | 12,005,614 | 0 | 2 | 0 | false | 0 | 0 | I don't think you'll have RuntimeErrors because of the GIL. Can you paste the traceback? If you have some critical parts of the code that are not re entrant, you'll have to isolate them using some concurrency primitives.
The main issue with the GIL is that it will forcibly serialise computation. The result is reduced throughput and scaling. | 2 | 0 | 0 | 0 | I have implemented tool to extract the data from clear quest server using python. I need to do lot of searches in clearquest so I have implemented it using threading.
To do that i try to open individual clearquest session for each thread. When I try to run this I am getting Run Time error and none of the clearquest session opened correctly.
I did bit of research on internet and found that it's because of Global Interpretor Lock in python. I would like to know how to overcome this GIL...Any idea would be much appreciated | python how to overcome global interpretor lock | 0 | 0.197375 | 1 | 0 | 0 | 229 |
12,014,203 | 2012-08-17T23:03:00.000 | 0 | 0 | 1 | 0 | 0 | python,sockets,nat | 0 | 12,020,661 | 0 | 3 | 0 | false | 0 | 0 | Redis, could work but not the exact same functionality. | 1 | 6 | 0 | 0 | I want to send and receive messages between two Python programs using sockets. I can do this using the private IPs when the computers are connected to the same router, but how do I do it when there are 2 NATs separating them?
Thanks (my first SO question) | How do I communicate between 2 Python programs using sockets that are on separate NATs? | 0 | 0 | 1 | 0 | 1 | 668 |
12,016,443 | 2012-08-18T06:32:00.000 | 1 | 0 | 1 | 0 | 0 | python,pygame,game-engine,pyglet | 0 | 12,016,497 | 0 | 3 | 0 | false | 0 | 1 | Pygame should suffice for what you want to do. Pygame is stable and if you look around the websites you will find games which have been coded in pygame. What type of game are you looking to implement? | 1 | 1 | 0 | 0 | I'm looking to make a 2d side scrolling game in Python, however I'm not sure what library to use. I know of PyGame (Hasn't been updated in 3 years), Pyglet, and PyOpenGL. My problem is I can't find any actually shipped games that were made with Python, let alone these libraries - so I don't know how well they perform in real world situations, or even if any of them are suitable for use in an actual game and not a competition.
Can anyone please shed some light on these libraries? Is PyGame still used effectively? Is Pyglet worth while? Have either of them been used to make a game? Are there any other libraries I'm forgetting?
Honestly I'm not even sure I want to use Python, it seems too slow, unproven (For games written solely in Python), etc.... I have not found any game that was made primarily in Python, for sure, that has been sold. If I don't end up going with Python, what would a second/better choice be? Java? LUA? | Making a 2d game in Python - which library should be used? | 0 | 0.066568 | 1 | 0 | 0 | 6,320 |
12,023,402 | 2012-08-19T00:55:00.000 | 0 | 1 | 0 | 0 | 0 | python,selenium,hyperlink | 0 | 12,023,574 | 0 | 2 | 0 | false | 0 | 0 | In the future you need to pastebin a representative snippet of your code, and certainly a traceback. I'm going to assume that when you say "the code does not compile" that you mean that you get an exception telling you you haven't declared an encoding.
You need a line at the top of your file that looks like # -*- coding: utf-8 -*- or whatever encoding the literals you've put in your file are in. | 1 | 1 | 0 | 0 | I want to find a link by its text but it's written in non-English characters (Hebrew to be precise, if that matters). The "find_element_by_link_text('link_text')" method would have otherwise suited my needs, but here it fails. Any idea how I can do that? Thanks. | Selenium in Python: how to click non-English link? | 0 | 0 | 1 | 0 | 1 | 258 |
12,034,390 | 2012-08-20T08:25:00.000 | 1 | 0 | 0 | 0 | 0 | python,django,mongodb,database-migration,django-postgresql | 0 | 15,858,338 | 0 | 2 | 0 | false | 1 | 0 | Whether the migration is easy or hard depends on a very large number of things including how many different versions of data structures you have to accommodate. In general you will find it a lot easier if you approach this in stages:
Ensure that all the Mongo data is consistent in structure with your RDBMS model and that the data structure versions are all the same.
Move your data. Expect that problems will be found and you will have to go back to step 1.
The primary problems you can expect are data validation problems because you are moving from a less structured data platform to a more structured one.
Depending on what you are doing regarding MapReduce you may have some work there as well. | 1 | 2 | 0 | 0 | Could any one shed some light on how to migrate my MongoDB to PostgreSQL? What tools do I need, what about handling primary keys and foreign key relationships, etc?
I had MongoDB set up with Django, but would like to convert it back to PostgreSQL. | From MongoDB to PostgreSQL - Django | 0 | 0.099668 | 1 | 1 | 0 | 1,475 |
12,036,620 | 2012-08-20T11:11:00.000 | 0 | 1 | 0 | 1 | 0 | python,jenkins | 0 | 70,767,921 | 0 | 4 | 0 | false | 0 | 0 | I came across this as a noob and found the accepted answer is missing something if you're running python scripts through a Windows batch shell in Jenkins.
In this case, Jenkins will only fail if the very last command in the shell fails. So your python command may fail but if there is another line after it which changes directory or something then Jenkins will believe the shell was successful.
The solution is to check the error level after the python line:
if %ERRORLEVEL% NEQ 0 (exit)
This will cause the shell to exit immediately if the python line fails, causing Jenkins to be marked as a fail because the last line on the shell failed. | 1 | 8 | 0 | 0 | This question might sound weird, but how do I make a job fail?
I have a python script that compiles few files using scons, and which is running as a jenkins job. The script tests if the compiler can build x64 or x86 binaries, I want the job to fail if it fails to do one of these.
For instance: if I'm running my script on a 64-bit system and it fails to compile a 64-bit. Is there something I can do in the script that might cause to fail? | Making a job fail in jenkins | 0 | 0 | 1 | 0 | 0 | 15,709 |
12,046,760 | 2012-08-20T23:51:00.000 | 1 | 0 | 0 | 0 | 0 | python,concurrency,sqlite | 0 | 12,047,988 | 0 | 2 | 0 | false | 0 | 0 | generally, it is safe if there is only one program writing the sqlite db at one time.
(If not, it will raise exception like "database is locked." while two write operations want to write at the same time.)
By the way, it is no way to guarantee the program will never have errors. using Try ... catch to handle exception will make the program much safer. | 1 | 3 | 0 | 0 | I have two programs: the first only write to sqlite db, and the second only read. May I be sure that there are never be some errors? Or how to avoid from it (in python)? | sqlite3: safe multitask read & write - how to? | 0 | 0.099668 | 1 | 1 | 0 | 383 |
12,053,633 | 2012-08-21T11:14:00.000 | 1 | 0 | 1 | 0 | 0 | python,date,weekday | 0 | 12,053,730 | 0 | 4 | 0 | false | 0 | 0 | in datetime module you can do something like this: a = date.today() - timedelta(days=1)
and then a.weekday(). Where monday is 0 and sunday is 6. | 1 | 21 | 0 | 0 | In Python, given a date, how do I find the preceding weekday? (Weekdays are Mon to Fri. I don't care about holidays) | Previous weekday in Python | 0 | 0.049958 | 1 | 0 | 0 | 20,747 |
12,072,506 | 2012-08-22T11:53:00.000 | 0 | 0 | 0 | 0 | 0 | python,c,networking | 0 | 12,075,452 | 0 | 4 | 0 | false | 0 | 0 | Using pcap you cannot stop the packets, if you are under windows you must go down to the driver level... but you can stop only packets that your machine send.
A solution is act as a pipe to the destination machine: You need two network interfaces (without address possibly), when you get a packet that you does not found interesting on the source network card you simply send it on the destination network card. If the packet is interesting you does not send it, so you act as a filter. I have done it for multimedia performance test (adding jitter, noise, etc.. to video streaming) | 1 | 0 | 0 | 0 | This is the problem I'm trying to solve,
I want to write an application that will read outbound http request packets on the same machine's network card. This would then be able to extract the GET url from it.On basis of this information, I want to be able to stop the packet, or redirect it , or let it pass.
However I want my application to be running in promiscuous mode (like wireshark does), and yet be able to eat up (stop) the outbound packet.
I have searched around a bit on this..
libpcap / pcap.h allows to me read packets at the network card, however I haven't yet been able to figure out a way to stop these packets or inject new ones into the network.
Certain stuff like twisted or scapy in python, allows me set up a server that is listening on some local port, I can then configure my browser to connect to it, using proxy configurations. This app can then do the stuff.. but my main purpose of being promiscuous is defeated here..
Any help on how I could achieve this would be greatly appreciated .. | Stop packets at the network card | 0 | 0 | 1 | 0 | 1 | 1,109 |
12,094,148 | 2012-08-23T14:38:00.000 | 4 | 0 | 0 | 0 | 0 | python,sip,decode,pcap | 0 | 12,708,904 | 0 | 3 | 0 | true | 0 | 0 | Finally I did this with help of pyshark from the sharktools (http://www.mit.edu/~armenb/sharktools/). In order to sniff IP packages I used scapy instead of libpcap. | 1 | 3 | 0 | 0 | I need python script that can sniff and decode SIP messages in order to check their correctness.
As a base for this script I use python-libpcap library for packets sniffing. I can catch UDP packet and extract SIP payload from it, but I don't know how to decode it. Does python has any libraries for packets decoding? I've found only dpkt, but as I understood it can't decode SIP.
If there are no such libraries how can I do this stuff by hands?
Thank you in advance! | Decode SIP messages with Python | 0 | 1.2 | 1 | 0 | 1 | 5,300 |
12,098,358 | 2012-08-23T19:01:00.000 | 0 | 0 | 0 | 1 | 0 | python,google-app-engine | 0 | 12,116,756 | 0 | 1 | 0 | false | 1 | 0 | I am trying to be pretty general here as I don't know whether you are using the default users service or not and I don't know how you are uniquely linking your SessionSupplemental entities to users or whether you even have a way to identify users at this point. I am also assuming you are using some version of webapp as that is the standard request handling library on App Engine. Let me know a bit more and I can update the answer to be more specific.
Subclass the default RequestHandler in webapp with a new class (such as MyRequestHandler).
In your subclass override the initialize() method.
In your new initialize() method get the current user from your session system (or the users service or whatever you are using). Test to see if a SessionSupplemental entity already exists for this user and if not create a new one.
For all your other request handlers you now want to subclass MyRequestHandler (instead of the default RequestHandler).
Whenever a request happens webapp will automatically call the initialize() method.
This is going to cost you a read for every request and also a write for every request by a new user. If you use the ndb library (instead of db) then a lot of the requests will just hit memcache instead of the datastore.
Now if you are just starting creating a new AppEngine app I would recommend using the Python27 runtime and webapp2 and trying to leverage as much of the webapp2 Auth module as you can so you don't have to write so much session stuff yourself. Also, ndb can be much nicer than the default db library. | 1 | 0 | 0 | 1 | I am a newbie to Google App Engine and Python.
I want to create an entry in a SessionSupplemental table (Kind) anytime a new user accesses the site (regardless of what page they access initially).
How can I do this?
I can imagine that there is a list of standard event triggers in GAE; where would I find these documented? I can also imagine that there are a lot of system/application attributes; where can I find these documented and how to use them?
Thanks. | How can I trigger a function anytime there is a new session in a GAE/Python Application? | 0 | 0 | 1 | 0 | 0 | 412 |
12,098,554 | 2012-08-23T19:17:00.000 | 0 | 0 | 0 | 0 | 0 | python,c,api | 1 | 12,098,669 | 0 | 1 | 0 | false | 0 | 1 | Try this: create a 'template' PyTypeObject, and use struct copying (or memcpy) to clone the basic template. Then you can fill it in with the requisite field definitions after that. This solves (2), since you only have to declare the full PyTypeObject once.
For your first point, you just set the static variable from your module init instead of doing it in the static variable declaration. So, it won't be set until your module actually initializes.
If you plan on doing this often, it may be worth looking at Boost::Python, which simplifies the process of generating CPython wrappers from C++ classes. | 1 | 0 | 0 | 0 | I am using the Python C API and trying to create a function that will allocate new instances of PyTypeObjects to use in several C++ classes. The idea is that each class would have a pointer to a PyTypeObject that would get instantiated with this factory. The pointers must be static.
However, I'm having issues with this approach.
In the class that contains the pointer to the PyTypeObject, I get the "undefined reference" linker error when I try to set that static variable equal to the result of the factory function (which is in another class but is static). I suppose this makes sense because the function wouldn't happen until runtime but I don't know another way to do this.
I don't know how to set the PyTypeObject fields dynamically because the first field is always a macro: PyObject_VAR_HEAD.
Hope this makes sense. Basically, I'm trying to make it so several classes don't have to redefine PyTypeObject statically, but can instead instantiate their PyTypeObject variables from a factory function. | Static factory for PyTypeObject | 0 | 0 | 1 | 0 | 0 | 461 |
12,102,061 | 2012-08-24T01:15:00.000 | 1 | 0 | 1 | 1 | 0 | python,windows-7 | 0 | 12,102,132 | 0 | 1 | 0 | false | 0 | 0 | The IDLE context menu plug-in is registered when you install Python and points to the version of IDLE supplied with the Python installed. (IDLE itself has significant code changes between Python 2 and 3 because it's written in Python and the language changed a lot.) To change it, simply re-install the version of Python you wish the IDLE context menu to invoke. | 1 | 5 | 0 | 0 | I installed python 3.2 and later installed python 2.7. Somehow the IDLE, which I open it by right-click on python file -> Edit with IDLE, are using python 2.7 instead of python 3.2.
It seems that python 2.7 was set as default with IDLE. Even if I changed the PATH environment variable in windows advance setting back to python 3.2, the default python shell is still 2.7. I am sure that there was no more python 2.7 in the path.
Later I have to uninstall python 2.7 and reinstall python 3.2. | how to set python IDLE's default python? | 0 | 0.197375 | 1 | 0 | 0 | 1,564 |
Subsets and Splits