Q_Id: int64 (337 to 49.3M)
CreationDate: string, lengths 23 to 23
Users Score: int64 (-42 to 1.15k)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
Tags: string, lengths 6 to 105
A_Id: int64 (518 to 72.5M)
AnswerCount: int64 (1 to 64)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string, lengths 6 to 11.6k
Available Count: int64 (1 to 31)
Q_Score: int64 (0 to 6.79k)
Data Science and Machine Learning: int64 (0 to 1)
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64 (-1 to 1.2)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (8 to 6.81M)
11,946,144
2012-08-14T05:17:00.000
0
0
0
0
python,django,migration
11,947,415
4
false
1
0
If you're using virtualenv, just create a new virtualenv with the versions of django/python that you want, workon this virtualenv and run your testsuite against it. BTW, you might want to be careful with the word 'migration' whilst in a Django context. Migration normally refers to model migrations with South when you're making changes to the tables in the database.
3
1
0
We developed a project with Django 1.2 and Python 2.4. Now we want to migrate the project to the latest versions (Django 1.4 and Python 2.7). I am very new to migration; can anyone please advise on this? What things do I need to take care of? Do we need to rewrite all the code again?
Migrate django1.2 project to django1.4
0
0
0
168
11,946,144
2012-08-14T05:17:00.000
2
0
0
0
python,django,migration
11,955,904
4
true
1
0
This is what we are doing (we're upgrading ~60 kloc from Django 0.97 to 1.4):
- create an upgrade branch of your code
- create a virtualenv for working on the upgrade
- download the "next" version of Django (if you prefer small steps), or the Django version you want to end up with, and place it into your own version control system (VCS)
- check out Django from your VCS to the root of your virtualenv
- repeat until done: run your testsuite (and coverage), fix any problems
- add a comment in your root __init__.py file indicating which Django version your code works with (this will save you a lot of time one day :-)
- merge your trunk out to your upgrade branch (to get all the changes that have happened while you were working on the upgrade), run your testsuite, fix any problems, then check in the merge
- finally: reintegrate your upgrade branch back into trunk
Now you've upgraded your code (you'll still have to plan the deployment of the upgrade, but that's another question). PS: we store Django in our VCS so we can keep track of any changes we need to make to Django itself (especially needed if you don't want to go to 1.4, but still might need one or two fixes from that version).
3
1
0
We developed a project with Django 1.2 and Python 2.4. Now we want to migrate the project to the latest versions (Django 1.4 and Python 2.7). I am very new to migration; can anyone please advise on this? What things do I need to take care of? Do we need to rewrite all the code again?
Migrate django1.2 project to django1.4
1.2
0
0
168
11,946,144
2012-08-14T05:17:00.000
-1
0
0
0
python,django,migration
11,946,791
4
false
1
0
Python does not guarantee backward compatibility between versions; expect some issues when migrating from 2.4 to 2.7.
3
1
0
We developed a project with Django 1.2 and Python 2.4. Now we want to migrate the project to the latest versions (Django 1.4 and Python 2.7). I am very new to migration; can anyone please advise on this? What things do I need to take care of? Do we need to rewrite all the code again?
Migrate django1.2 project to django1.4
-0.049958
0
0
168
11,949,240
2012-08-14T09:15:00.000
1
0
0
0
python,django,dynamic,heroku
11,950,901
3
false
1
0
It's not built into the platform but should be pretty easy to implement via scheduler and using your API token.
1
15
0
Is there a way to use the Heroku scheduler to start and stop web dynos for specific periods of the day? Like say during business hours 2 dynos and at night only 1 dyno? I really would like to avoid putting the normal user/pass credentials into the app itself, so I'm looking for a secure way to do this (apart from doing it manually each day for each app). Using the "heroku ps:scale web=2" directly would naturally be nice but as far as I know this is not supported. Thanks for any feedback in advance...
schedule number of web dynos by time of day
0.066568
0
0
1,533
11,949,777
2012-08-14T09:51:00.000
2
0
1
0
javascript,python
11,953,490
1
true
0
0
If you like Python, I would use Django or even Ruby on Rails. Both are great MVC (Model, View, Controller) frameworks which have manageable learning curves. I suggest Ruby on Rails because I was able to transition into it from Python and I really enjoyed its conventions and ease of use. Check them out.
1
2
0
I have written a local search engine in Python, which I feel was a good idea. It requires constant little changes and Python appears to be always readable when I go back. And it is good with regular expressions too. But now the engine is in demand online. Should I stick with python? Is there a good module/library (I know urllib superficially, but I mean something more specialized) for wrapping a local search engine (as simple as a method taking the string/query) with a method that can communicate with Javascript and keep/sort/order the incoming queries?
Python search engine going online
1.2
0
1
208
11,962,749
2012-08-15T00:41:00.000
2
0
0
0
python,django,django-templates
11,962,907
1
true
1
0
It's a good idea to have all pages extend a base template. So you can have one template (e.g. base.html) that contains the basic structure of your site (headers, footers, boilerplate). Then you can extend this template for each page of your site. (i.e. {% extends 'base.html' %}). Following this structure, you should be able to put your search form in the base template and have it appear on all pages.
1
0
0
I'm creating a site like craigslist and need to implement a search feature where customers can search for a key term and see results. For example, searching for "lamp" will create a result page with all the posts that are related to lamps. I'm using Haystack / Solr to search the contents. However, at the moment, users have to go to a specific search page where they can then narrow their results. How do I implement it in such a way that the search bar can appear in my header on every page? I'm using Django.
How to implement Haystack across all pages in website?
1.2
0
0
138
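A minimal sketch of the base-template pattern the answer above describes (the file name and form URL are assumptions; point the form at your Haystack search view):

```html
<!-- base.html: shared site structure; every page that starts with
     {% extends 'base.html' %} gets this search form for free -->
<form action="/search/" method="get">
  <input type="text" name="q" placeholder="Search">
  <button type="submit">Search</button>
</form>
{% block content %}{% endblock %}
```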
11,963,020
2012-08-15T01:33:00.000
0
0
1
0
python,mp3
11,963,096
2
false
0
0
Python has the wave module, whose Wave_read object has a function named readframes(n). It returns a bytes object of raw audio frames (these encode the loudness/amplitude of the sound wave at each sample time). You can compare the frame data of two files this way (note that wave reads WAV data, so MP3s would need decoding first), but you need to take care of bit depth and number of channels, as the stream output depends on them: one byte per frame for an 8-bit mono signal, two for 8-bit stereo, etc.
1
0
0
I am looking for a Python library that is able to extract the actual audio data from an MP3 (the actual voices/sounds we listen to). I want to be able to use the data to compare with another MP3 file without the bitrate/encoding affecting the process. How do I go about it?
python - Extract data from mp3 file
0
0
1
1,041
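The readframes() comparison the answer above describes can be sketched as follows. Note the wave module reads WAV data, so MP3s would need decoding to WAV first; the example writes a tiny WAV of its own so it is self-contained (the file name is illustrative):

```python
import os
import tempfile
import wave

def frames(path, n=1024):
    """Read up to n frames of raw PCM from a WAV file."""
    with wave.open(path, "rb") as w:
        # One frame = sampwidth * nchannels bytes.
        return w.getnchannels(), w.getsampwidth(), w.readframes(n)

# Build a tiny mono 8-bit WAV so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "tone.wav")
with wave.open(path, "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(1)       # 8-bit: one byte per frame
    w.setframerate(8000)
    w.writeframes(bytes(range(0, 200, 2)))   # 100 frames of a ramp

nch, width, data = frames(path)
print(nch, width, len(data))  # 1 1 100
```

Two files are only directly comparable frame-by-frame when their channel count and sample width match.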
11,963,323
2012-08-15T02:23:00.000
1
0
0
0
python,import,jira,libmagic
11,963,361
1
false
0
0
If it is asking for magic and libmagic it means libmagic is not installed, not just the python bindings for it. You need to install that via your package manager from your OS. Also, if you need help, we can't really help you without tracebacks, so if you're getting errors you need to include those.
1
0
0
I'm a Python newbie. Recently I have been working on a project with JIRA, and I need to access the JIRA API to retrieve some info about issues. But it always gives this warning: WARNING: Couldn't import magic library (is libmagic present?) Autodetection of avatar image content types will not work; for create_avatar methods, specify the 'contentType' parameter explicitly. In fact, when I try to install the magic package with easy_install or pip, it always fails. I then downloaded the libmagic and magic packages manually and copied them to the directory C:\Python27\Lib\site-packages, but when executing the statement jira = JIRA(), it still produces the warning mentioned above.
Python: Couldn't import magic library
0.197375
0
0
3,206
11,967,721
2012-08-15T10:26:00.000
2
0
1
0
python,sockets
11,967,911
2
false
0
0
You would get bad file descriptor if you close a socket and then try to read from/write to it. Broken pipe is when you try to write to a socket that has been closed on the other end.
2
0
0
Both of these exceptions are thrown when the socket is already closed, but I haven't understood the differences between them yet. Can someone help me? Thank you so much!
What's the differences between broken pipe vs bad file descriptor from closed socket error in python
0.197375
0
0
1,080
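The distinction the answer above draws can be reproduced in a few lines; a socketpair stands in for a real network connection:

```python
import errno
import socket

# "Bad file descriptor": we closed our own socket, then tried to use it.
s = socket.socket()
s.close()
try:
    s.send(b"ping")
except OSError as e:
    bad_fd = (e.errno == errno.EBADF)

# "Broken pipe": the *peer* closed, and we keep writing anyway.
a, b = socket.socketpair()
b.close()
broken = False
try:
    for _ in range(10):        # the first write may still be buffered
        a.send(b"ping")
except (BrokenPipeError, ConnectionResetError):
    broken = True
a.close()
print(bad_fd, broken)  # True True
```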
11,967,721
2012-08-15T10:26:00.000
1
0
1
0
python,sockets
11,968,243
2
false
0
0
"Both of these exceptions are thrown when the socket is already closed": No. Broken pipe, connection reset, etc., occur when the peer has closed the connection. Bad file descriptor, socket closed, etc., occur when you have already closed the socket yourself.
2
0
0
Both of these exceptions are thrown when the socket is already closed, but I haven't understood the differences between them yet. Can someone help me? Thank you so much!
What's the differences between broken pipe vs bad file descriptor from closed socket error in python
0.099668
0
0
1,080
11,967,847
2012-08-15T10:35:00.000
6
0
1
0
python,installation,boa-constructor
12,807,485
2
true
0
0
I think Boa Constructor is no longer maintained; the last news posted on its site dates from 2006. So it may not recognize recent Python versions.
2
0
0
On the install screen for Boa Constructor, it says Python 2.2 and 3.1 were found in the registry, however I use 2.7 as my main version. How can I get it to recognise it?
Boa Constructor can't find Python 2.7
1.2
0
0
633
11,967,847
2012-08-15T10:35:00.000
0
0
1
0
python,installation,boa-constructor
40,295,492
2
false
0
0
It depends on the Python version; I tested on Windows. Boa Constructor can detect Python 2.5.4, 2.6.6 and 3.4.4, but it cannot detect Python 3.4.3 or 2.7.12. I am still doing trial and error with other versions beginning with 2.7.x, but I think finding the correct 2.7.x version for Boa Constructor will take a lot of time. From my experience, if you want Boa Constructor working well on Windows you need these versions, which I downloaded and tested and which do not conflict with each other: python-2.5.4.msi; boa-constructor-0.6.1.src.win32.exe; wxPython2.8-win32-unicode-2.8.10.1-py25.exe (optional, as GUI); MySQL-python-1.2.2.win32-py2.5.exe (optional, as MySQL connector); xlutils-1.4.1.win32.exe, xlwt-0.7.2.win32.exe and xlrd-0.7.9.win32.exe (optional, for MS Excel access).
2
0
0
On the install screen for Boa Constructor, it says Python 2.2 and 3.1 were found in the registry, however I use 2.7 as my main version. How can I get it to recognise it?
Boa Constructor can't find Python 2.7
0
0
0
633
11,970,079
2012-08-15T13:23:00.000
1
0
0
1
php,python,api,sync,icloud
11,973,429
4
false
1
0
To the best of my knowledge, there is no way to interface with iCloud directly; it can only be done through an iOS or Mac OS app, and by calling the correct iCloud Objective-C APIs with UI/NSDocument classes. Since you are not using Cocoa, let alone Objective-C, you will most likely not be able to do this. I may be wrong of course, as I haven't conducted an in-depth search into this.
2
4
0
I'm building a custom web-based CRM system and have integrated synchronization of contacts and reminders with Google Apps; I need to do the same with Apple iCloud. Is there any way to do it? I haven't found any official API for this purpose. The CRM is written in PHP, but I'm able to use Python for this purpose as well.
Website sync of contacts and reminders with iCloud
0.049958
0
0
1,657
11,970,079
2012-08-15T13:23:00.000
0
0
0
1
php,python,api,sync,icloud
12,255,882
4
false
1
0
I would recommend that you sync using the google contacts api. Then, you can tell iPhone people to use that instead of iCloud.
2
4
0
I'm building a custom web-based CRM system and have integrated synchronization of contacts and reminders with Google Apps; I need to do the same with Apple iCloud. Is there any way to do it? I haven't found any official API for this purpose. The CRM is written in PHP, but I'm able to use Python for this purpose as well.
Website sync of contacts and reminders with iCloud
0
0
0
1,657
11,970,246
2012-08-15T13:35:00.000
2
0
0
0
python,wxpython
11,975,689
2
false
0
1
If you're talking about doing this stuff inside of a wxPython program, then it's all pretty simple. There's a PopupMenu widget for the first one and an AcceratorTable for the second one. If you're wanting to catch mouse and keyboard events outside of a wxPython program, then you have to go very low-level and hook into the OS itself, which means that there really isn't any good way to do it cross-platform. You'll probably want to look at ctypes and similar libraries for that sort of thing.
1
0
0
I am thinking of writing a Python program that runs in the background and can inspect the user's GUI events. My requirements are very simple: 1) When the user right-clicks the mouse, it shows an option; when this option is chosen, my program should know about the event. 2) When the user selects a file and presses some predefined key combination, my program should know about the event. What should I do? Is this a GUI program? I am also thinking that this program could be a daemon on the machine that inspects the user's GUI events, but I am not sure how to do this. Thanks.
Background python program inspect GUI events
0.197375
0
0
367
11,972,592
2012-08-15T15:46:00.000
2
1
1
0
python
11,972,623
3
false
0
0
Get a project you are interested in and start hacking (i.e. extend it, fix small bugs you encounter). There are a lot of open-source projects out there you can check out. You need experience, and experience comes from failing; failing is a result of trying. That's your way to go. If you get stuck somewhere, always check back on SO or Google; that will aid you in fixing 99.9% of your issues.
1
0
0
If someone has now studied the basics of Python, what should he do after that? Are there specific books he must read? Or what, exactly? In other words, what is the pathway to mastering Python? Thanks
What do I do after studying the basics of Python?
0.132549
0
0
6,353
11,979,276
2012-08-16T00:51:00.000
1
0
0
0
python,mysql,mysql-python
11,979,334
2
true
0
0
Instead of logging out and logging back in, user 2 could simply commit their transaction. MySQL InnoDB tables use transactions, requiring a BEGIN before one or more SQL statements, and either COMMIT or ROLLBACK afterwards, resulting in all your updates/inserts/deletes either happening or not. But there's a "feature" that results in an automatic BEGIN if not explicitly issued, and an automatic COMMIT when the connection is closed. This is why you see the changes after the other user closes the connection. You should really get into the habit of explicitly beginning and committing your transactions, but there's also another way: set connection.autocommit = True, which will result in every sql update/insert/delete being wrapped in its own implicit transaction, resulting in the behavior you originally expected. Don't take what I said above to be entirely factually correct, but it suffices to explain the fundamentals of what's going on and how to control it.
1
2
0
I've searched and I can't seem to find anything. Here is the situation: t1 = table 1, t2 = table 2, v = view of table 1 and table 2 joined. 1) User 1 is logged into the database and does SELECT * FROM v; 2) User 2 is logged into the same database and does INSERT INTO t1 VALUES(1, 2, 3); 3) User 1 does another SELECT * FROM v; User 1 can't see the inserted row from User 2 until logging out and logging back in. Seems like views don't get synced across "sessions"? How can I make it so User 1 can see the INSERT? FYI, I'm using Python and mysqldb.
MySQL view doesn't update when underlaying table changes across different users
1.2
1
0
542
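MySQL specifics aside, the commit-visibility behaviour the answer describes can be demonstrated with Python's built-in sqlite3 as a stand-in; two connections play the two users, and the table name mirrors the question:

```python
import os
import sqlite3
import tempfile

# Two separate connections to the same database file stand in for the two users.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
user2 = sqlite3.connect(path)
user2.execute("CREATE TABLE t1 (a, b, c)")
user2.commit()
user1 = sqlite3.connect(path)

user2.execute("INSERT INTO t1 VALUES (1, 2, 3)")   # an implicit BEGIN happens here
before = user1.execute("SELECT COUNT(*) FROM t1").fetchone()[0]

user2.commit()                                     # commit instead of "logging out"
after = user1.execute("SELECT COUNT(*) FROM t1").fetchone()[0]
print(before, after)  # 0 1
```

Until user 2 commits, user 1 does not see the row; the commit, not the logout, is what publishes the change.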
11,979,316
2012-08-16T00:57:00.000
1
0
0
0
python,numpy,h5py
11,998,662
2
false
0
0
NumPy arrays are not designed to be resized. It's doable, but wasteful both in memory (you need to create a second array larger than your first one, then fill it with your data; that's two arrays you have to keep) and in time (creating the temporary array). You'd be better off starting with lists (or regular arrays, as suggested by @HYRY), then converting to ndarrays when you have a chunk big enough. The question is, when do you need to do the conversion?
1
3
1
I want to understand the effect of the resize() function on a numpy array vs. an h5py dataset. In my application, I am reading a text file line by line and then, after parsing the data, writing into an hdf5 file. What would be a good approach to implementing this? Should I add each new row into a numpy array and keep resizing (increasing the axis) the numpy array (eventually writing the complete numpy array into the h5py dataset), or should I just add each new row of data into the h5py dataset directly, thus resizing the h5py dataset in memory? How does the resize() function affect performance if we keep resizing after each row? Or should I resize after every 100 or 1000 rows? There can be around 200,000 lines in each dataset. Any help is appreciated.
efficient way to resize numpy or dataset?
0.099668
0
0
1,987
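A sketch of the list-then-convert approach the answer recommends; the parsing loop is a stand-in for reading the text file:

```python
import numpy as np

# Grow a Python list while parsing (cheap, amortized), then convert once.
# Repeatedly calling np.resize()/np.append() instead would reallocate and
# copy the whole array on every row.
rows = []
for i in range(1000):                  # stand-in for "for line in f:"
    rows.append([i, i * 2, i * 3])     # the parsed fields of one line

arr = np.array(rows)                   # single conversion at the end
print(arr.shape)  # (1000, 3)
```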
11,979,898
2012-08-16T02:26:00.000
2
0
1
0
google-app-engine,python-2.7,google-cloud-datastore,gae-search
11,983,057
1
true
1
0
It is true that the Search API's documents can include numeric data, and can easily be updated, but as you say, if you're doing a lot of updates, it could be non-optimal to be modifying the documents so frequently. One design you might consider would store the numeric data in Datastore entities, but make heavy use of a cache as well-- either memcache or a backend in-memory cache. Cross-reference the docs and their associated entities (that is, design the entities to include a field with the associated doc id, and the docs to include a field with the associated entity key). If your application domain is such that the doc id and the datastore entity key name can be the same string, then this is even more straightforward. Then, in the cache, index the numeric field information by doc id. This would let you efficiently fetch the associated numeric information for the docs retrieved by your queries. You'd of course need to manage the cache on updates to the datastore entities. This could work well as long as the size of your cache does not need to be prohibitively large. If your doc id and associated entity key name can be the same string, then I think you may be able to leverage ndb's caching support to do much of this.
1
1
0
I have an application which requires very flexible searching functionality. As part of this, users will need have the ability to do full-text searching of a number of text fields but also filter by a number of numeric fields which record data which is updated on a regular basis (at times more than once or twice a minute). This data is stored in an NDB datastore. I am currently using the Search API to create document objects and indexes to search the text-data and I am aware that I can also add numeric values to these documents for indexing. However, with the dynamic nature of these numeric fields I would be constantly updating (deleting and recreating) the documents for the search API index. Even if I allowed the search API to use the older data for a period it would still need to be updated a few times a day. To me, this doesn't seem like an efficient way to store this data for searching, particularly given the number of search queries will be considerably less than the number of updates to the data. Is there an effective way I can deal with this dynamic data that is more efficient than having to be constantly revising the search documents? My only thoughts on the idea is to implement a two-step process where the results of a full-text search are then either used in a query against the NDB datastore or manually filtered using Python. Neither seems ideal, but I'm out of ideas. Thanks in advance for any assistance.
Regularly updated data and the Search API
1.2
0
0
519
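A very rough sketch of the cross-referencing idea in the answer above. update_entity, search and the field names are hypothetical stand-ins for the datastore write path and the Search API query, and a plain dict plays the memcache/in-memory backend:

```python
# A plain dict plays the role of memcache / an in-memory backend cache;
# update_entity() and search() are hypothetical stand-ins for the real
# datastore write path and Search API query.

numeric_cache = {}

def update_entity(doc_id, **numbers):
    """Write path: refresh the cached numbers instead of rewriting the doc."""
    numeric_cache[doc_id] = numbers

def search(matching_doc_ids):
    """Join full-text hits with their live numeric data from the cache."""
    return [(doc_id, numeric_cache.get(doc_id, {}))
            for doc_id in matching_doc_ids]

update_entity("doc1", price=10, stock=3)
update_entity("doc1", price=12, stock=2)   # a frequent update stays cheap
print(search(["doc1"]))  # [('doc1', {'price': 12, 'stock': 2})]
```

The search documents only need rewriting when the text changes; numeric churn stays in the cache.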
11,982,638
2012-08-16T07:37:00.000
1
1
0
0
python,django,unit-testing,testing,django-testing
11,982,939
2
false
1
0
The way my company organises tests is to split them into two broad categories. Unit and functional. The unit tests live inside the Django test discovery. manage.py test will run them. The functional tests live outside of that directory. They are run either manually or by the CI. Buildbot in this case. They are still run with the unittest textrunner. We also have a subcategory of functional tests called stress tests. These are tests that can't be run in parallel because they are doing rough things to the servers. Like switching off the database and seeing what happens. The CI can then run each test type as a different step. Tests can be decorated with skipif. It's not a perfect solution but it is quite clear and easy to understand.
1
1
0
I have a series of tests in Django that are categorised into various "types", such as "unit", "functional", "slow", "performance", ... Currently I'm annotating them with a decorator that is used to only run tests of a certain type (similar to @skipIf(...)), but this doesn't seem like an optimal approach. I'm wondering if there is a better way to do separation of tests into types? I'm open to using different test runners, extending the existing django testing framework, building suites or even using another test framework if that doesn't sacrifice other benefits. The underlying reason for wanting to do this is to run an efficient build pipeline, and as such my priorities are to: ensure that my continuous integration runs check the unit tests first, possibly parallelise some test runs skip some classes of test altogether
How to separate test types using Django
0.099668
0
0
545
11,983,110
2012-08-16T08:14:00.000
1
0
0
0
python,testing,qwidget,pywinauto
12,048,666
1
true
0
1
Pywinauto uses standard windows API calls. Unfortunately many UI libraries (like Swing/QT/GTK) do not respond in a typical way to the API calls used - so unfortunately pywinauto often cannot get the control information. (P.s. I am the Author of pywinauto).
1
1
0
I work as a test engineer. I have to test an application (a softphone) whose GUI is built with QWidget. I'm using Python with pywinauto. I can click buttons and make calls. There is a QWidget object named statusLabel; at the beginning of the test, the text "Ready" is written on it. When I make a call, this text changes to "Calling..", "Call Established" and so on. I want to check the text of that widget. Do you have any idea?
find qwidget object text by using pywinauto
1.2
0
0
874
11,984,831
2012-08-16T10:01:00.000
0
0
0
0
python,qt,pyqt4
11,985,065
1
false
0
0
Connect the button's clicked() signal to a custom slot on the widget that paints your ellipse. Then in your custom slot, set the new colour, and call update() - this will trigger paintEvent(..) to be called when the event queue gets to the request.
1
0
0
I want to paint an ellipse when a button is clicked, but I can't connect the button's click to paintEvent. E.g., if the button is pressed the ellipse should be green; otherwise it should be red.
how to paint an ellipse by pressing the button
0
0
0
143
11,988,636
2012-08-16T13:49:00.000
2
0
0
1
python,django,pycharm
38,212,424
4
false
0
0
To give PyCharm permissions, one has to run it as Administrator (Windows) or using sudo on OS X/Linux: sudo /Applications/PyCharm.app/Contents/MacOS/pycharm. Note that this truly runs PyCharm as a new user, so you'll have to register the app again and set up your customizations again if you have any (i.e. theme, server profiles, etc.).
2
8
0
I can't run my PyCharm IDE using port 80. I need to use PayPal that requires me to use port 80. But using Mac OS X 10.8 I can't have it working because of permission issues. I've already tried running PyCharm with SUDO command. Does anyone know how to run Pycharm using port 80, or any other solution? Thanks.
How to run PyCharm using port 80
0.099668
0
0
8,103
11,988,636
2012-08-16T13:49:00.000
1
0
0
1
python,django,pycharm
22,296,578
4
false
0
0
For those looking for the answer to this question: check your PyCharm Run/Debug Configurations, via Run -> Edit Configurations -> Port.
2
8
0
I can't run my PyCharm IDE using port 80. I need to use PayPal that requires me to use port 80. But using Mac OS X 10.8 I can't have it working because of permission issues. I've already tried running PyCharm with SUDO command. Does anyone know how to run Pycharm using port 80, or any other solution? Thanks.
How to run PyCharm using port 80
0.049958
0
0
8,103
11,989,408
2012-08-16T14:29:00.000
3
0
0
0
python,mongodb,pymongo
11,989,459
1
true
0
0
You can use one pymongo connection across different modules: open it in a separate module and import it into the other modules on demand; after the program finishes working, close it. This is the best option. About your other questions: (1) You can leave it like this (all connections will be closed when the script finishes execution), but leaving something unclosed is bad form. (2) You could open and close the connection for each operation, but establishing a connection is a time-expensive operation, so what I'd advise is the approach in this answer's first paragraph. (4) I think this point can be merged with (3).
1
3
0
I am fairly new to databases and have just figured out how to use MongoDB in python2.7 on Ubuntu 12.04. An application I'm writing uses multiple python modules (imported into a main module) that connect to the database. Basically, each module starts by opening a connection to the DB, a connection which is then used for various operations. However, when the program exits, the main module is the only one that 'knows' about the exiting, and closes its connection to MongoDB. The other modules do not know this and have no chance of closing their connections. Since I have little experience with databases, I wonder if there are any problems leaving connections open when exiting. Should I: Leave it like this? Instead open the connection before and close it after each operation? Change my application structure completely? Solve this in a different way?
When to disconnect from mongodb
1.2
1
0
1,207
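The separate-module pattern the answer recommends might look like this sketch; FakeClient stands in for pymongo.MongoClient so the example runs without a server:

```python
# db.py -- sketch of a module that owns the one shared connection.
# FakeClient is a stand-in for pymongo.MongoClient; in a real project
# you would import and use the real class instead.

class FakeClient:
    def __init__(self, url):
        self.url = url
        self.closed = False
    def close(self):
        self.closed = True

_client = None

def get_client(url="mongodb://localhost:27017"):
    """Open the connection lazily; every importer gets the same object."""
    global _client
    if _client is None:
        _client = FakeClient(url)
    return _client

def close_client():
    """Call once from the main module when the program exits."""
    global _client
    if _client is not None:
        _client.close()
        _client = None

a = get_client()
b = get_client()
print(a is b)  # True: one connection shared by all modules
```

Each submodule just calls get_client(); only the main module calls close_client() on exit.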
11,991,114
2012-08-16T15:56:00.000
1
1
0
0
python,django,zodb
11,996,422
1
true
1
0
IOBucket is part of the persistence structure of a BTree; it exists to try and reduce conflict errors, and it does try and resolve conflicts where possible. That said, conflicts are not always avoidable, and you should restart your transaction. In Zope, for example, the whole request is re-run up to 5 times if a ConflictError is raised. Conflicts are ZODB's way of handling the (hopefully rare) occasion where two different requests tried to change the exact same data structure. Restarting your transaction means calling transaction.begin() and applying the same changes again. The .begin() will fetch any changes made by the other process and your commit will be based on the fresh data.
1
2
0
I run parallel write requests on my ZODB, which contains multiple BTree instances. When the server accesses the same objects inside such a BTree, I get a ConflictError for the IOBucket class. For all my Django base classes I have _p_resolveConflict set up, but I can't implement it for IOBucket because it is a C-based class. I did a deeper analysis, but I still don't understand why it complains about the IOBucket class and what it writes into it. Additionally, what would be the right strategy to resolve this? A thousand thanks for any help!
Conflict resolution in ZODB
1.2
0
0
565
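The restart-the-transaction advice can be sketched as a retry loop; ConflictError and commit_changes below are stand-ins for ZODB's transaction.begin()/commit() machinery:

```python
# ConflictError and commit_changes() are stand-ins for ZODB's
# transaction machinery; a real retry would also re-read the BTree
# after each abort so the commit is based on fresh data.

class ConflictError(Exception):
    pass

attempts = {"n": 0}

def commit_changes():
    attempts["n"] += 1
    if attempts["n"] < 3:              # simulate two writers colliding twice
        raise ConflictError()
    return "committed"

def with_retries(fn, retries=5):
    for _ in range(retries):
        try:
            return fn()                # transaction.begin() would go here
        except ConflictError:
            continue                   # abort and retry on fresh state
    raise RuntimeError("gave up after %d conflicts" % retries)

print(with_retries(commit_changes))  # committed
```

This mirrors what Zope does when it re-runs a request up to 5 times on ConflictError.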
11,991,368
2012-08-16T16:11:00.000
5
0
1
0
python,ipython,workspace
12,046,400
5
false
0
0
This isn't a direct answer, but may be useful to you anyway. At least on the system I'm on, Ctrl-a will position the cursor at the beginning of the line Ctrl-k will 'kill' the line (think cut) type whos Ctrl-y will 'yank' the line as it was back (think paste) These are emacs keybindings, BTW, and show in many places like Bash and anywhere that uses the readline library.
5
10
0
Is there a way to view a list of the IPython variables currently in the workspace without having to send the command 'whos'. I often find myself not remembering what variable names I want to use while typing a command. In IPython, I have to erase the current line I was typing and send a 'whos' statement to see which variables are currently available. Normally, in GUI based tools like MATLAB I would just look to the right at my Workspace Variable window.
Display IPython variables without entering using whos
0.197375
0
0
11,432
11,991,368
2012-08-16T16:11:00.000
1
0
1
0
python,ipython,workspace
11,991,442
5
false
0
0
Are you asking if you can access the variables of IPython from another instance of the shell? The way it is now, you have a single command window where you interact with the shell by issuing commands, so unlike MATLAB there is no other window for viewing additional information. I don't see how this would be possible unless you have another instance of IPython somehow accessing the information from your current shell, so the answer would seemingly be no. (If it is somehow possible to do this, I'll be just as happy as you to find out.)
5
10
0
Is there a way to view a list of the IPython variables currently in the workspace without having to send the command 'whos'. I often find myself not remembering what variable names I want to use while typing a command. In IPython, I have to erase the current line I was typing and send a 'whos' statement to see which variables are currently available. Normally, in GUI based tools like MATLAB I would just look to the right at my Workspace Variable window.
Display IPython variables without entering using whos
0.039979
0
0
11,432
11,991,368
2012-08-16T16:11:00.000
1
0
1
0
python,ipython,workspace
22,002,763
5
false
0
0
In iPython notebook, call the magic function, "%qtconsole", and a console will appear with the same kernel. Alternatively, in the Terminal, you can type, "ipython qtconsole --existing" to launch the most recent kernel in the qtconsole. If you know the name of the kernel (as shown in the terminal output when launched), then you can explicitly tell it like so, "ipython qtconsole --existing 87f7d2c0"
5
10
0
Is there a way to view a list of the IPython variables currently in the workspace without having to send the command 'whos'. I often find myself not remembering what variable names I want to use while typing a command. In IPython, I have to erase the current line I was typing and send a 'whos' statement to see which variables are currently available. Normally, in GUI based tools like MATLAB I would just look to the right at my Workspace Variable window.
Display IPython variables without entering using whos
0.039979
0
0
11,432
11,991,368
2012-08-16T16:11:00.000
4
0
1
0
python,ipython,workspace
11,991,872
5
false
0
0
You can have as many IPython frontends as you like on a single IPy kernel, so yes, if you wanted another front end you could do that, but it seems heavy-handed. Can you not use IPython Notebook?
5
10
0
Is there a way to view a list of the IPython variables currently in the workspace without having to send the command 'whos'. I often find myself not remembering what variable names I want to use while typing a command. In IPython, I have to erase the current line I was typing and send a 'whos' statement to see which variables are currently available. Normally, in GUI based tools like MATLAB I would just look to the right at my Workspace Variable window.
Display IPython variables without entering using whos
0.158649
0
0
11,432
11,991,368
2012-08-16T16:11:00.000
10
0
1
0
python,ipython,workspace
12,003,785
5
true
0
0
As others said, you can have as many frontends as you like on the same Ipython kernel, i.e 2 command windows for one kernel for example. If you are using the Qt console, shortcuts can get you close to what you want. Start a second tab with the same kernel with Ctrl+Shift+T. Then you just write your code on the first tab, and, when you need the output of whos, press Ctrl+PageDown to get to the other tab, and you can run whos without deleting your code in the first tab.
5
10
0
Is there a way to view a list of the IPython variables currently in the workspace without having to send the command 'whos'. I often find myself not remembering what variable names I want to use while typing a command. In IPython, I have to erase the current line I was typing and send a 'whos' statement to see which variables are currently available. Normally, in GUI based tools like MATLAB I would just look to the right at my Workspace Variable window.
Display IPython variables without entering using whos
1.2
0
0
11,432
11,992,275
2012-08-16T17:14:00.000
0
0
1
0
python,linux,multithreading,ctypes
11,992,419
3
false
0
0
What operating system are you using? With the exception of very old versions of Linux, every thread in a process shares the same PID (Process ID). The identifier returned by thread.start_new_thread() is internal to Python and is used to identify a particular thread of execution. For more information on Linux threading, see the pthreads man page.
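As a follow-up for readers on modern Python: the question predates it, but Python 3.8+ exposes exactly the value being asked about. This sketch shows that every thread shares one PID while threading.get_native_id() returns the per-thread kernel ID that top -H and ps -eLf display:

```python
import os
import threading

# os.getpid() is shared by every thread in the process, while
# threading.get_native_id() (Python 3.8+) returns the kernel thread ID
# that tools like `top -H` and `ps -eLf` show per thread.
def worker(out):
    out["tid"] = threading.get_native_id()
    out["pid"] = os.getpid()

info = {}
t = threading.Thread(target=worker, args=(info,))
t.start()
t.join()

print(info["pid"] == os.getpid())                # -> True: same process ID
print(info["tid"] != threading.get_native_id())  # -> True: distinct thread IDs
```

On Python 2 (as in the question) there is no portable equivalent; a ctypes call to the gettid syscall is the usual Linux-specific workaround.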
1
15
0
I am starting a bunch of different threads in my Python script. I want to keep track of the memory and CPU usage of each of these threads. I use top and ps -eLf for that. But it turns out that the identifier returned by thread.start_new_thread() is different from the thread PID displayed by top and other similar programs. Is there a way to obtain this PID from with in the Python script? This way, I can determine which PID belongs to which thread.
ID of a Python thread as reported by top
0
0
0
6,288
11,993,290
2012-08-16T18:30:00.000
7
0
0
0
python,fonts,tkinter
11,995,850
7
false
0
1
There is no way to load an external font file into Tkinter without resorting to platform-specific hacks. There's nothing built-in to Tkinter to support it.
1
22
0
I am making an interface in Tkinter and I need to have custom fonts. Not just, say, Helvetica at a certain size or whatever, but fonts other than what would normally be available on any given platform. This would be something that would be kept with the program as an image file or (preferably) Truetype font file or similar. I don't want to have to install the desired fonts on every machine that is going to use the program, I just want to carry them around with the program in the same directory. The tkFont module looks like it ought to do something like this, but I can't see where it would take a filename for a font not normally accessible to the system running the program. Thanks in advance for your help.
Truly custom font in Tkinter
1
0
0
20,715
11,994,515
2012-08-16T19:54:00.000
2
0
0
0
python,numpy,scipy
11,995,122
2
true
0
0
You might want to consider doing this in Cython, instead of as a C extension module. Cython is smart, and lets you do things in a pretty pythonic way, even though it at the same time lets you use C datatypes and python datatypes. Have you checked out the array module? It allows you to store lots of scalar, homogeneous types in a single collection. If you're truly "logging" these, and not just returning them to CPython, you might try opening a file and fprintf'ing them. BTW, realloc might be your friend here, whether you go with a C extension module or Cython.
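To illustrate the array-module suggestion above, here is a minimal sketch: a homogeneous, densely packed buffer of C doubles that grows as values are appended, instead of a list of boxed PyFloatObjects:

```python
from array import array

# 'd' means each element is stored as a raw C double (8 bytes of payload),
# rather than as a full PyFloatObject with per-object overhead.
log = array('d')
for step in range(5):
    log.append(step * 0.5)  # auto-expands as needed, like realloc

print(list(log))     # -> [0.0, 0.5, 1.0, 1.5, 2.0]
print(log.itemsize)  # -> 8 (bytes per stored double)
```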
2
2
0
I'm using python to set up a computationally intense simulation, then running it in a custom built C-extension and finally processing the results in python. During the simulation, I want to store a fixed-length number of floats (C doubles converted to PyFloatObjects) representing my variables at every time step, but I don't know how many time steps there will be in advance. Once the simulation is done, I need to pass back the results to python in a form where the data logged for each individual variable is available as a list-like object (for example a (wrapper around a) continuous array, piece-wise continuous array or column in a matrix with a fixed stride). At the moment I'm creating a dictionary mapping the name of each variable to a list containing PyFloatObject objects. This format is perfect for working with in the post-processing stage but I have a feeling the creation stage could be a lot faster. Time is quite crucial since the simulation is a computationally heavy task already. I expect that a combination of A. buying lots of memory and B. setting up your experiment wisely will allow the entire log to fit in the RAM. However, with my current dict-of-lists solution keeping every variable's log in a continuous section of memory would require a lot of copying and overhead. My question is: What is a clever, low-level way of quickly logging gigabytes of doubles in memory with minimal space/time overhead, that still translates to a neat python data structure? Clarification: when I say "logging", I mean storing until after the simulation. Once that's done a post-processing phase begins and in most cases I'll only store the resulting graphs. So I don't actually need to store the numbers on disk. Update: In the end, I changed my approach a little and added the log (as a dict mapping variable names to sequence types) to the function parameters. This allows you to pass in objects such as lists or array.arrays or anything that has an append method. 
This adds a little time overhead because I'm using the PyObject_CallMethodObjArgs function to call the Append method instead of PyList_Append or similar. Using arrays allows you to reduce the memory load, which appears to be the best I can do short of writing my own expanding storage type. Thanks everyone!
Logging an unknown number of floats in a python C extension
1.2
0
0
186
11,994,515
2012-08-16T19:54:00.000
1
0
0
0
python,numpy,scipy
11,995,857
2
false
0
0
This is going to be more of a huge dump of ideas than a consistent answer, because it sounds like that's what you're looking for. If not, I apologize. The main thing you're trying to avoid here is storing billions of PyFloatObjects in memory. There are a few ways around that, but they all revolve around storing billions of plain C doubles instead, and finding some way to expose them to Python as if they were sequences of PyFloatObjects. To make Python (or someone else's module) do the work, you can use a numpy array, a standard library array, a simple hand-made wrapper on top of the struct module, or ctypes. (It's a bit odd to use ctypes to deal with an extension module, but there's nothing stopping you from doing it.) If you're using struct or ctypes, you can even go beyond the limits of your memory by creating a huge file and mmapping windows into it as needed. To make your C module do the work, instead of actually returning a list, return a custom object that meets the sequence protocol, so when someone calls, say, foo.__getitem__(i) you convert _array[i] to a PyFloatObject on the fly. Another advantage of mmap is that, if you're creating the arrays iteratively, you can create them by just streaming to a file, and then use them by mmapping the resulting file back as a block of memory. Otherwise, you need to handle the allocations. If you're using the standard array, it takes care of auto-expanding as needed, but otherwise, you're doing it yourself. The code to do a realloc and copy if necessary isn't that difficult, and there's lots of sample code online, but you do have to write it. Or you may want to consider building a strided container that you can expose to Python as if it were contiguous even though it isn't. (You can do this directly via the buffer protocol, but personally I've always found that harder than writing my own sequence implementation.)
If you can use C++, vector is an auto-expanding array, and deque is a strided container (and if you've got the SGI STL rope, it may be an even better strided container for the kind of thing you're doing). As the other answer pointed out, Cython can help for some of this. Not so much for the "exposing lots of floats to Python" part; you can just move pieces of the Python part into Cython, where they'll get compiled into C. If you're lucky, all of the code that needs to deal with the lots of floats will work within the subset of Python that Cython implements, and the only things you'll need to expose to actual interpreted code are higher-level drivers (if even that).
2
2
0
I'm using python to set up a computationally intense simulation, then running it in a custom built C-extension and finally processing the results in python. During the simulation, I want to store a fixed-length number of floats (C doubles converted to PyFloatObjects) representing my variables at every time step, but I don't know how many time steps there will be in advance. Once the simulation is done, I need to pass back the results to python in a form where the data logged for each individual variable is available as a list-like object (for example a (wrapper around a) continuous array, piece-wise continuous array or column in a matrix with a fixed stride). At the moment I'm creating a dictionary mapping the name of each variable to a list containing PyFloatObject objects. This format is perfect for working with in the post-processing stage but I have a feeling the creation stage could be a lot faster. Time is quite crucial since the simulation is a computationally heavy task already. I expect that a combination of A. buying lots of memory and B. setting up your experiment wisely will allow the entire log to fit in the RAM. However, with my current dict-of-lists solution keeping every variable's log in a continuous section of memory would require a lot of copying and overhead. My question is: What is a clever, low-level way of quickly logging gigabytes of doubles in memory with minimal space/time overhead, that still translates to a neat python data structure? Clarification: when I say "logging", I mean storing until after the simulation. Once that's done a post-processing phase begins and in most cases I'll only store the resulting graphs. So I don't actually need to store the numbers on disk. Update: In the end, I changed my approach a little and added the log (as a dict mapping variable names to sequence types) to the function parameters. This allows you to pass in objects such as lists or array.arrays or anything that has an append method. 
This adds a little time overhead because I'm using the PyObject_CallMethodObjArgs function to call the Append method instead of PyList_Append or similar. Using arrays allows you to reduce the memory load, which appears to be the best I can do short of writing my own expanding storage type. Thanks everyone!
Logging an unknown number of floats in a python C extension
0.099668
0
0
186
11,996,807
2012-08-16T23:22:00.000
2
0
1
0
python,pip,artifactory
12,025,244
2
false
0
0
I didn't find any specific information about pip's wire protocol (frankly, I didn't search too much, just Googled it), but if it uses plain HTTP you should be just fine. Figure out the layout that pip uses for its artifacts, upload them that way, and it should work.
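For reference, pip can be pointed at an alternative package index through its configuration file. The repository URL below is a hypothetical example; the actual path depends on how the Artifactory instance's PyPI repository is laid out:

```ini
# ~/.pip/pip.conf -- hypothetical Artifactory-backed index
[global]
index-url = https://artifactory.example.com/api/pypi/pypi-local/simple
```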
1
19
0
Can I pull Python artifacts from Artifactory using Pip? I see that I can place artifacts in Artifactory using Python, but what if I want to pull artifacts from it using pip?
Can I use Artifactory with Python PIP?
0.197375
0
0
31,518
11,996,963
2012-08-16T23:40:00.000
1
0
0
0
python,django,sorting,django-models,django-forms
12,001,435
6
false
1
0
If you want to ensure your data is consistent, I'm not sure that capitalizing at the form/view level is the best way to go. What happens when you add a Product through the admin, where you're not using that form/save method? If you forget the capital, you're in for data inconsistency. You could instead use your model's save method, or even the pre_save signal that Django sends. This way, data is always treated the same, regardless of where it came from.
2
7
0
I have a ProductForm where users can add a Product to the database with information like title, price, and condition. How do I make it so that when the user submits the form, the first letter of the title field is automatically capitalized? For example, if a user types "excellent mattress" in the form, django saves it as "Excellent mattress" to the database. Just for reference, the reason I ask is because when I display all the product objects on a page, Django's sort feature by title is case-sensitive. As such, "Bravo", "awful", "Amazing" would be sorted as "Amazing", "Bravo", "awful" when as users, we know that is not alphabetical. Thanks for the help!
How to automatically capitalize field on form submission in Django?
0.033321
0
0
11,826
11,996,963
2012-08-16T23:40:00.000
0
0
0
0
python,django,sorting,django-models,django-forms
57,403,421
6
false
1
0
If you need the first letter of every word capitalized, use val.title() in Jeremy Lewis' answer. With val.capitalize(), "hello world" becomes "Hello world"; with val.title() you get "Hello World".
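The difference between the two string methods can be seen directly:

```python
# capitalize() uppercases only the first character of the whole string;
# title() uppercases the first letter of every word.
s = "excellent mattress"
print(s.capitalize())  # -> 'Excellent mattress'
print(s.title())       # -> 'Excellent Mattress'
```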
2
7
0
I have a ProductForm where users can add a Product to the database with information like title, price, and condition. How do I make it so that when the user submits the form, the first letter of the title field is automatically capitalized? For example, if a user types "excellent mattress" in the form, django saves it as "Excellent mattress" to the database. Just for reference, the reason I ask is because when I display all the product objects on a page, Django's sort feature by title is case-sensitive. As such, "Bravo", "awful", "Amazing" would be sorted as "Amazing", "Bravo", "awful" when as users, we know that is not alphabetical. Thanks for the help!
How to automatically capitalize field on form submission in Django?
0
0
0
11,826
11,996,987
2012-08-16T23:43:00.000
2
0
1
0
python,artificial-intelligence,pygame,game-engine,game-physics
11,997,071
3
false
0
0
Basically it's: default behavior: random walk; if the player is within X distance: melee attack; if the player is within Y distance: charge the player; if the player is within Z distance: cast a spell; if the player is outside range and the mob has aggro: move toward the player. That's the extent of most AI... at least game AI. It's too CPU-intensive to do things like neural networks and machine learning for game mobs. You may want to look at fuzzy logic AI: that's largely what I described above, but it can apply more than one rule simultaneously.
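The distance-banded rules above can be sketched in a few lines. The thresholds and action names here are made up for illustration:

```python
import math

# Hypothetical distance bands: melee < charge < spell range.
MELEE_RANGE, CHARGE_RANGE, SPELL_RANGE = 1.5, 5.0, 12.0

def choose_action(mob_pos, player_pos, has_aggro):
    """Pick a behavior based on distance to the player, nearest band first."""
    dist = math.hypot(player_pos[0] - mob_pos[0], player_pos[1] - mob_pos[1])
    if dist <= MELEE_RANGE:
        return "melee_attack"
    if dist <= CHARGE_RANGE:
        return "charge_player"
    if dist <= SPELL_RANGE:
        return "cast_spell"
    return "move_toward_player" if has_aggro else "random_walk"

print(choose_action((0, 0), (1, 1), False))  # -> 'melee_attack'
print(choose_action((0, 0), (20, 0), True))  # -> 'move_toward_player'
```

Calling this once per monster per game tick is the kind of cheap rule-based AI most 2-D games get by with.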
1
0
0
I am amateur Programmer looking to develop a game. I've decided to use Python and pygame. (I know, there are better options out there, but I really don't know C++ or java that well.) The issue I'm having is that I really have no idea how to create a decent AI. I'm talking about the sort of AI that has monsters move this way at this point, use a bow and arrow at that point, and use a long-range magic attack at another point (yes, its a top-down 2-d fantasy game). I really don't understand how it makes those decisions and how you program it to make those decisions. I've looked around everywhere, and either the resource gets so technical that I can't understand it at all, or it gives me no information whatsoever. I'm hoping someone here can give me some clear suggestions, or at least point me to some decent resources. Right now my bots just sort of wander randomly around the screen...
How to develop an AI script
0.132549
0
0
3,243
11,999,147
2012-08-17T02:48:00.000
2
0
0
0
python,numpy,scipy,scikit-learn
12,011,024
3
true
0
0
So far I discovered that most classifiers, like linear regressors, will automatically convert complex numbers to just the real part. kNN and RadiusNN regressors, however, work well - since they do a weighted average of the neighbor labels and so handle complex numbers gracefully. Using a multi-target classifier is another option, however I do not want to decouple the x and y directions since that may lead to unstable solutions as Colonel Panic mentions, when both results come out close to 0. I will try other classifiers with complex targets and update the results here.
3
7
1
I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating. I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms. Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets?
Is it possible to use complex numbers as target labels in scikit learn?
1.2
0
0
2,406
11,999,147
2012-08-17T02:48:00.000
1
0
0
0
python,numpy,scipy,scikit-learn
12,003,586
3
false
0
0
Good question. How about transforming angles into a pair of labels, viz. x and y coordinates? These are continuous functions of the angle (cos and sin). You can then combine the results from separate x and y classifiers back into an angle: theta = atan2(y, x). However, that result will be unstable if both classifiers return numbers near zero.
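The encode/regress/decode round trip suggested above looks like this (omitting the regressor itself, which just predicts the two components):

```python
import math

# Represent an angle as (cos, sin) targets; recover it with atan2,
# which handles all four quadrants and avoids the pi/-pi discontinuity.
def encode(theta):
    return math.cos(theta), math.sin(theta)

def decode(x, y):
    return math.atan2(y, x)

theta = 3.0  # near the pi/-pi wrap-around where raw angle targets misbehave
x, y = encode(theta)
print(abs(decode(x, y) - theta) < 1e-9)  # -> True
```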
3
7
1
I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating. I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms. Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets?
Is it possible to use complex numbers as target labels in scikit learn?
0.066568
0
0
2,406
11,999,147
2012-08-17T02:48:00.000
4
0
0
0
python,numpy,scipy,scikit-learn
12,004,759
3
false
0
0
Several regressors support multidimensional regression targets. Just view the complex numbers as 2d points.
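Concretely, the complex-to-2d mapping is lossless in both directions; the regressor choice is left out here, only the label transformation is shown:

```python
# Each complex label z maps to the point (z.real, z.imag) for a
# multi-output regressor, and predictions map back the same way.
targets = [1 + 0.01j, 1 - 0.01j, -0.5 + 0.2j]

as_points = [(z.real, z.imag) for z in targets]  # 2-d regression targets
back = [complex(x, y) for x, y in as_points]     # reassembled predictions

print(as_points[0])     # -> (1.0, 0.01)
print(back == targets)  # -> True
```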
3
7
1
I am trying to use sklearn to predict a variable that represents rotation. Because of the unfortunate jump from -pi to pi at the extremes of rotation, I think a much better method would be to use a complex number as the target. That way an error from 1+0.01j to 1-0.01j is not as devastating. I cannot find any documentation that describes whether sklearn supports complex numbers as targets to classifiers. In theory the distance metric should work just fine, so it should work for at least some regression algorithms. Can anyone suggest how I can get a regression algorithm to operate with complex numbers as targets?
Is it possible to use complex numbers as target labels in scikit learn?
0.26052
0
0
2,406
11,999,324
2012-08-17T03:19:00.000
2
0
1
0
python,database,web-services,web-applications,memcached
12,001,835
2
false
0
0
Yes and no. If you're exposing your resource through a local API protected by a lock, then depending on how that lock is set up, your implementation may already accomplish exactly what a distributed lock is trying to accomplish. But if you didn't develop the API yourself, you'll have to dig into its source to find out whether its locking is local or distributed. Honestly, a lock is a lock: it's attempting to do the same thing no matter what. The benefit of a distributed lock over a local one is that it already accounts for queueing, preventing clients from overwhelming expensive cache points.
1
5
0
Why do people need a distributed lock? When the shared resource is protected by it's local machine, does this mean that we do not need a distributed lock? I mean when the shared resource is exposed to others by using some kind of api or service, and this api or service is protected using it's local locks; then we do not need this kind of distributed lock; am I right?
What's a distributed lock and why use it?
0.197375
0
0
3,456
12,000,219
2012-08-17T05:25:00.000
1
0
1
0
python,eclipse-plugin,pydev
12,003,951
5
false
0
0
As for me, all I do with Eclipse for working with Python is: install PyDev, set tabs to be replaced by spaces, and set the tab length to 4. You can make tabs and spaces visible by displaying non-printable symbols. Hopefully, this is what you meant.
1
10
0
Recently, I use Eclipse to edit my python code. But lacking indentation guides, I feel not very well. So how to add the auto indentation guides for Eclipse? Is there certain plugin? What's more, I have tried the EditBox. But, you know, that is not very natural under some themes...............
Does Eclipse have indentation guides?
0.039979
0
0
12,501
12,000,436
2012-08-17T05:50:00.000
0
0
0
1
python,pdb
12,004,423
1
true
0
0
If you are using output redirection, the pdb prompt will be redirected as well.
1
0
0
I want to debug a python script which is invoked via os.system() from another python script. I tried calling pdb.set_trace from the invoked code but it doesn't work. I can't see the Python pdb prompt. Its sort of automation framework. My final python script where i want to put set_trace is like: python script1.py --invokes--> script2.py --invokes--> script3.py (Here, in script3.py my set_trace is ) I'm working on linux with python 2.4
Calling pdb.set_trace from script invoked from os.system
1.2
0
0
132
12,002,051
2012-08-17T08:06:00.000
2
0
1
0
python,ipad
15,655,046
6
false
0
0
I use Python 2.7 for IOS and download source python files through iFunBox into /var/mobile/Applications/Python 2.7 for IOS/Documents/User Scripts. Nevertheless I can't recommend this application as it's quite buggy and very slow when editing code.
3
16
0
Recently, I've found that the ipad can run python with a special python interpret. But editing the code on ipad is a terrible nightmare. So how can I push the python code which has been edited completely on PC into the ipad and run it?
How use python on ipad?
0.066568
0
0
77,138
12,002,051
2012-08-17T08:06:00.000
5
0
1
0
python,ipad
12,002,115
6
false
0
0
If you're just running command-line Python, then you can edit it and/or upload it to a server and then run it from an iPad ssh terminal client. I know that's not the same as pushing it onto the iPad, and it requires a server and an internet connection, but it's the easiest way. Otherwise, include a Python interpreter in your app and a UITextView that prints its outputs. Edit the code on your PC (or Mac, I guess), as a file in your Xcode project, and have the interpreter run that code and show the output in the UITextView.
3
16
0
Recently, I've found that the ipad can run python with a special python interpret. But editing the code on ipad is a terrible nightmare. So how can I push the python code which has been edited completely on PC into the ipad and run it?
How use python on ipad?
0.16514
0
0
77,138
12,002,051
2012-08-17T08:06:00.000
7
0
1
0
python,ipad
12,082,154
6
true
0
0
If you are using Python for IOS, the following should work, although I haven't yet tried it myself. Email the program to your own e-mail account as text. Then read the e-mail message on your iPad in any one of several e-mail applications. Cut and paste the text from the e-mail message into the python editor. Don't cut and paste the code into the interpreter. Then you can't save it, at least not in the current version of Python for IOS. Instead, click on the second icon on the bottom (I think that's the icon, my iPad is at home and I'm not home now), to open the editor. You can save files from the editor using the menu button on the upper right; there's a "save" menu item that allows saving the code to a file on the iPad. I'll be trying this tonight. Sorry for posting this before trying it, but I'm not sure I'll return to this question later. It 'should' work. (Famous last words!)
3
16
0
Recently, I've found that the ipad can run python with a special python interpret. But editing the code on ipad is a terrible nightmare. So how can I push the python code which has been edited completely on PC into the ipad and run it?
How use python on ipad?
1.2
0
0
77,138
12,002,442
2012-08-17T08:34:00.000
0
0
1
0
python,debugging,emacs
12,050,718
1
false
0
0
It turns out there are the functions [next/previous]-error, bound to M-g M-n and M-g M-p, that do this (for some reason unmentioned in the python-mode descriptions...). I'll keep the question open if somebody has a good solution to the second part: cycle only through the errors in the current file.
1
1
0
I'm using the current python-mode to edit my source file in top window and a python inferior shell in bottom window to see the outputs (using C-c C-c from the source file, the cursor stays in the top, source file window). Is there a way to navigate the traceback errors while still staying in the source code window? Also - is there a similar way to navigate errors just in the file that was actually sent (i.e. not those errors coming from called external files)? At the moment I either do M-g M-g to jump to line number, or switch to the python shell window and navigate there to the error I want to have a look at. This would be a tremendous efficiency boost! Thank you very much!
Navigating python traceback from source file buffer
0
0
0
193
12,005,515
2012-08-17T12:09:00.000
3
0
1
0
python
12,005,547
2
false
0
0
Instead of using threads, use different processes and use some sort of IPC to communicate between each process.
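A minimal sketch of the processes-plus-IPC suggestion, using the standard multiprocessing module: each worker runs in its own process (its own interpreter, so no shared GIL) and reports its result back over a Queue. The worker body is a stand-in for a real per-process task such as one ClearQuest query; the "fork" start method used here is Unix-only:

```python
import multiprocessing

ctx = multiprocessing.get_context("fork")  # Unix-only start method

def worker(task_id, queue):
    # Stand-in for real work, e.g. one ClearQuest search per process.
    queue.put((task_id, task_id * task_id))

queue = ctx.Queue()
procs = [ctx.Process(target=worker, args=(i, queue)) for i in range(4)]
for p in procs:
    p.start()
for p in procs:
    p.join()

results = dict(queue.get() for _ in procs)
print(results)  # -> {0: 0, 1: 1, 2: 4, 3: 9} (insertion order may vary)
```

Because each process has its own interpreter and its own GIL, the searches truly run in parallel; the Queue is the IPC channel carrying results back.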
2
0
0
I have implemented tool to extract the data from clear quest server using python. I need to do lot of searches in clearquest so I have implemented it using threading. To do that i try to open individual clearquest session for each thread. When I try to run this I am getting Run Time error and none of the clearquest session opened correctly. I did bit of research on internet and found that it's because of Global Interpretor Lock in python. I would like to know how to overcome this GIL...Any idea would be much appreciated
python how to overcome global interpretor lock
0.291313
0
0
229
12,005,515
2012-08-17T12:09:00.000
2
0
1
0
python
12,005,614
2
false
0
0
I don't think you'll get RuntimeErrors because of the GIL. Can you paste the traceback? If you have some critical parts of the code that are not re-entrant, you'll have to isolate them using concurrency primitives. The main issue with the GIL is that it forcibly serialises computation; the result is reduced throughput and scaling.
2
0
0
I have implemented tool to extract the data from clear quest server using python. I need to do lot of searches in clearquest so I have implemented it using threading. To do that i try to open individual clearquest session for each thread. When I try to run this I am getting Run Time error and none of the clearquest session opened correctly. I did bit of research on internet and found that it's because of Global Interpretor Lock in python. I would like to know how to overcome this GIL...Any idea would be much appreciated
python how to overcome global interpretor lock
0.197375
0
0
229
12,006,741
2012-08-17T13:26:00.000
2
0
0
0
python,user-interface,python-3.x,wxpython
12,008,375
1
true
0
1
You can try something like SetForegroundColour or SetBackgroundColour, but since the gauge wraps the native widget, I'm not sure that those will have any effect. Fortunately, there is a generic gauge widget called PyGauge: wx.lib.agw.pygauge. I don't think it's quite as pretty as the native one, but you can definitely change the color. There are a couple of examples of PyGauge in the wxPython demo package.
1
1
0
I am designing a software in Python using WxPython as GUI in Windows.I want to change the default colour of Gauge Progress bar in my application. Please help... Thanks in advance..
Wxpython How to change the colour of Gauge Progress bar in windows
1.2
0
0
1,303
12,014,042
2012-08-17T22:34:00.000
1
0
1
0
python,deep-copy
12,014,071
3
true
0
0
If there are no other objects referenced in graph (just simple fields), then copy.copy(graph) should make a copy, while copy.deepcopy(manager) should copy the manager and its graphs, assuming there is a list such as manager.graphs. But in general you are right, the copy module does not have this flexibility, and for slightly fancy situations you'd probably need to roll your own.
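The copy module does actually offer the needed hook: a class can define __deepcopy__ to control what gets copied and what gets shared. A minimal sketch of case (1), where a copied Graph keeps the same manager but deep-copies its own data (class names follow the question; the data payload is hypothetical):

```python
import copy

class Manager(object):
    def __init__(self):
        self.graphs = []

class Graph(object):
    def __init__(self, manager, data):
        self.manager = manager
        self.data = data
        manager.graphs.append(self)  # manager tracks the graphs it owns

    def __deepcopy__(self, memo):
        # Deep-copy the payload, but share the manager reference.
        return Graph(self.manager, copy.deepcopy(self.data, memo))

m = Manager()
g = Graph(m, {"nodes": [1, 2, 3]})
g2 = copy.deepcopy(g)

print(g2.manager is g.manager)  # -> True  (manager shared)
print(g2.data is g.data)        # -> False (payload copied)
```

Case (2), copying a manager together with all its graphs, could then be a method on Manager that builds a fresh Manager and re-creates each graph against it.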
1
2
1
Suppose I have two classes, say Manager and Graph, where each Graph has a reference to its manager, and each Manager has references to a collection of graphs that it owns. I want to be able to do two things 1) Copy a graph, which performs a deepcopy except that the new graph references the same manager as the old one. 2) Copy a manager, which creates a new manager and also copies all the graphs it owns. What is the best way to do this? I don't want to have to roll my own deepcopy implementation, but the standard copy.deepcopy doesn't appear to provide this level of flexibility.
Python muliple deepcopy behaviors
1.2
0
0
1,009
12,014,050
2012-08-17T22:35:00.000
2
0
1
0
javascript,python,hash,corruption,validation
12,014,082
1
true
0
0
TCP has built-in error checking, and so do most link layer network protocols. So there's both per-link and end-to-end checks taking place. The only thing this doesn't protect against is intentional modification of the data, e.g. by a firewall, proxy, or network hacker. However, they can modify the hash as well as the JSON, so adding a hash doesn't protect against them. If you need real, secure protection you need to use cryptography, e.g. SSL.
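For the accidental-corruption case the asker describes (not security), the Python side of the hash-alongside-payload idea is a few lines. Note json.dumps must be made canonical (sorted keys, fixed separators) so both ends serialise identically; the field names here are made up:

```python
import hashlib
import json

def digest(obj):
    """SHA-256 of a canonical JSON serialisation of obj."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

payload = {"user": "alice", "score": 42}
sent = {"data": payload, "sha256": digest(payload)}  # what goes on the wire

# Receiver side: recompute and compare.
ok = digest(sent["data"]) == sent["sha256"]
print(ok)  # -> True
```

The JavaScript client would need a matching canonical serialiser and SHA-256 implementation; and as the answer says, this detects corruption but not deliberate tampering, since an attacker can rewrite the hash too.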
1
1
0
Is there a library available that can help make sure that JSON objects being sent back and forth between a server running Python and a Javascript client have not been corrupted? I'm thinking it would probably work by creating a hash of the object to be sent a long with the object. Then the receiver of the object could re-hash the object ans make sure that it matches the hash that it received. Is this something that I should even be concerned about, or is this something that browsers and clients normally have taken care of behind the scenes anyway? Thanks!
Python-Javascript Hash library to make sure that a JSON object did not get corrupted in transit
1.2
0
1
115
12,014,203
2012-08-17T23:03:00.000
0
0
1
0
python,sockets,nat
12,020,661
3
false
0
0
Redis could work, but it does not offer exactly the same functionality.
1
6
0
I want to send and receive messages between two Python programs using sockets. I can do this using the private IPs when the computers are connected to the same router, but how do I do it when there are 2 NATs separating them? Thanks (my first SO question)
How do I communicate between 2 Python programs using sockets that are on separate NATs?
0
0
1
668
12,014,210
2012-08-17T23:04:00.000
0
0
0
0
python,user-interface,tkinter,contextmenu
69,902,726
4
false
0
1
Important Caveat: (Assuming the event argument that contains the coordinates is called "event"): Nothing will happen or be visible when you call tk_popup(...) unless you use "event.x_root" and "event.y_root" as arguments. If you do the obvious of using "event.x" and "event.y", it won't work, even though the names of the coordinates are "x" and "y" and there is no mention of "x_root" and "y_root" anywhere within it. As for the grab_release(..), it's not necessary, anywhere. "tearoff=0" also isn't necessary, setting it to 1 (which is default), simply adds a dotted line entry to the context menu. If you click on it, it detaches the context menu and makes it its own top-level window with window decorators. tearoff=0 will hide this entry. Moreover, it doesn't matter if you set the menu's master to any specific widget or root, or anything at all.
1
31
0
I have a python-tkinter gui app that I've been trying to find some way to add in some functionality. I was hoping there would be a way to right-click on an item in the app's listbox area and bring up a context menu. Is tkinter able to accomplish this? Would I be better off looking into gtk or some other gui-toolkit?
tkinter app adding a right click context menu?
0
0
0
45,627
12,014,573
2012-08-17T23:56:00.000
1
1
0
0
python,pdf,reportlab
12,021,221
2
false
0
0
Yes. Take a look at the ReportLab manual. Based on your (short) description of what you want to do it sounds like you need to look at using Frames within your page layout (assuming you use Platypus, which I would highly recommend).
1
10
0
I need to generate a PDF with dynamic text and I'm using ReportLab. Since the text is dynamic, is there anyway to have it resized to fit within a specific area of the PDF?
ReportLab: How to auto resize text to fit block
0.099668
0
0
6,481
12,016,443
2012-08-18T06:32:00.000
1
0
1
0
python,pygame,game-engine,pyglet
12,016,497
3
false
0
1
Pygame should suffice for what you want to do. Pygame is stable and if you look around the websites you will find games which have been coded in pygame. What type of game are you looking to implement?
1
1
0
I'm looking to make a 2d side scrolling game in Python, however I'm not sure what library to use. I know of PyGame (Hasn't been updated in 3 years), Pyglet, and PyOpenGL. My problem is I can't find any actually shipped games that were made with Python, let alone these libraries - so I don't know how well they perform in real world situations, or even if any of them are suitable for use in an actual game and not a competition. Can anyone please shed some light on these libraries? Is PyGame still used effectively? Is Pyglet worth while? Have either of them been used to make a game? Are there any other libraries I'm forgetting? Honestly I'm not even sure I want to use Python, it seems too slow, unproven (For games written solely in Python), etc.... I have not found any game that was made primarily in Python, for sure, that has been sold. If I don't end up going with Python, what would a second/better choice be? Java? LUA?
Making a 2d game in Python - which library should be used?
0.066568
0
0
6,320
12,022,443
2012-08-18T21:37:00.000
4
0
1
0
python,regex,pcre,icu
12,084,570
3
false
0
0
Unfortunately I cannot reply directly to the comment, but atomic groups are an important feature (although few people understand their power), since you can match multibyte character sequences with them. E.g. on Windows a newline is \r\n. Example: /(?>\r\n|\n|\r)\p{Any}/ matches \n\r or \r. because each is a combination of a newline and any character literal, but it does not match \r\n alone, since nothing follows the newline.
1
7
0
I want to know which RegEx-flavour is used for Python? Is it PCRE, Perl compatible or is it ICU or something else?
Which Regular Expression flavour is used in Python?
0.26052
0
0
1,899
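Python's re module is its own flavour (Perl-inspired, but neither full PCRE nor ICU), and atomic groups like (?>...) were only added to it in Python 3.11. The ordered-alternation part of the newline example above can be sketched with plain re, which tries alternatives left to right:

```python
import re

# Python's re tries alternatives left to right, so putting \r\n first
# ensures a Windows newline is consumed as a single unit.
newline = re.compile(r"\r\n|\n|\r")

assert newline.match("\r\nx").group() == "\r\n"  # both chars consumed
assert newline.match("\nx").group() == "\n"
assert newline.match("\rx").group() == "\r"

# With the order reversed, \r wins first and \r\n gets split in two.
bad = re.compile(r"\r|\n|\r\n")
assert bad.match("\r\nx").group() == "\r"
```

This is only the alternation-ordering half of the original PCRE example; the atomic (non-backtracking) behaviour needs Python 3.11+ or the third-party regex package.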
12,022,570
2012-08-18T22:05:00.000
-2
0
0
0
python,facebook,text,login,fill
28,718,663
3
false
1
0
You can also take a look at IEC which uses windows API to run an instance of Internet explorer and give commands to it. Although it may not be good for large scale automation, but it is very easy to use.
1
1
0
I'd like to write a script, preferably a Python code, to fill text areas in web pages and then click certain buttons. I've come across some solutions for this but none worked, mainly because cookies were not stored properly, for example, there was a Python script to login to Facebook, which did seem to get it right in the shell screen, but when I opened Facebook in the browser it was logged out like nothing happened. Also, the code was hard coded for Facebook and I'm asking for something more general. So, please, if anyone had been successful with this kind of thing, your advice is much needed. Open a web page, fill text in specified text elements, click a specified button, save cookies, that's all. Many thanks.
Script to open web pages, fill texts and click buttons
-0.132549
0
1
4,599
12,023,402
2012-08-19T00:55:00.000
0
1
0
0
python,selenium,hyperlink
12,023,574
2
false
0
0
In the future you need to pastebin a representative snippet of your code, and certainly a traceback. I'm going to assume that when you say "the code does not compile" that you mean that you get an exception telling you you haven't declared an encoding. You need a line at the top of your file that looks like # -*- coding: utf-8 -*- or whatever encoding the literals you've put in your file are in.
1
1
0
I want to find a link by its text but it's written in non-English characters (Hebrew to be precise, if that matters). The "find_element_by_link_text('link_text')" method would have otherwise suited my needs, but here it fails. Any idea how I can do that? Thanks.
Selenium in Python: how to click non-English link?
0
0
1
258
12,023,683
2012-08-19T02:07:00.000
1
0
1
0
python
12,023,696
2
false
0
0
{} makes an empty dict. You cannot have keyless items inside of dicts. You cannot create sets inside of sets because they are unhashable.
1
1
0
To get an empty set in python I use {} and it works. I need to use the empty set as an element in a set. But {{}} yields an error and {set()} too. Is there a way?
set consisting of an empty set
0.099668
0
0
171
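The point about hashability in the answer above can be shown in a few lines: a set cannot contain another (mutable) set, but it can contain a frozenset, which is the usual way to nest an "empty set" inside a set.

```python
# {} is an empty dict, not an empty set; set() is the empty set.
assert type({}) is dict
assert type(set()) is set

# Sets are mutable and therefore unhashable, so they can't be elements
# of another set -- but frozenset (immutable, hashable) can.
nested = {frozenset()}
assert frozenset() in nested
assert len(nested) == 1
```

So `{frozenset()}` is the closest working spelling of "a set containing the empty set".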
12,023,773
2012-08-19T02:34:00.000
2
0
1
0
python,python-3.3
12,245,510
6
false
0
0
The lack of finding lzma and sqlite3 may be because your paths (LD_LIBRARY_PATH in particular) were incorrect. How did you install those two packages; did you use the package manager? If you installed manually, where did you install them? Also, did you install the development versions, if you used the package manager to install lzma and sqlite3? When installing from source, you'll need the development versions, so Python's source can find the necessary include files. Further, you may have to edit setup.py to indicate where these packages can be found. As for tkinter: this relies on tcl/tk, so check that you have the development versions of these packages installed if you're installing python/tkinter from source.
1
14
0
I am trying to set up the compiled version of CPython, on Ubuntu 12.04, by following the python developer guide. Even after installing the dependent packages lzma and sqlite3, build fails indicating that the dependent modules were not found. Exact Error: *Python build finished, but the necessary bits to build these modules were not found: _lzma _sqlite3 _tkinter To find the necessary bits, look in setup.py in detect_modules() for the module's name.* I could not locate the package tkinter. Appreciate any help.
Python 3.3 source code setup: modules were not found: _lzma _sqlite3 _tkinter
0.066568
0
0
18,150
12,027,033
2012-08-19T13:56:00.000
1
0
0
0
python,django
12,027,094
2
true
1
0
I think the only way to do it is in the backend, because in the frontend you will only have to select which photos you want to download and send the ids or some identifiers to the server side, then retrieve those selected photos from the filesystem (based on the identifiers), compress them in a single file and return that compressed file in a response as attached content. If you do it in the front end how would you get each file and compress them all? Doing it in server side is the best solution in my opinion :)
1
0
0
I developed a photo gallery in python, now I want to insert a new feature, "Download Multiple Photos": a user can select some photos to download and system creates a compressed file with the photos. In your opinion: in the frontend what is the best way to send the ids? Json? input hidden? and in the backend there is a django library that compress the selected photos and return the compressed file? Thanks, Marco
Django multiple photo download
1.2
0
0
282
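The server-side compression step suggested above can be sketched framework-agnostically with the standard-library zipfile module; in Django you would then wrap the returned bytes in an HttpResponse with content type "application/zip" and a Content-Disposition attachment header. The function name and the in-memory photo bytes are illustrative:

```python
import io
import zipfile

def bundle_photos(photos):
    """Compress a mapping of {filename: bytes} into a single zip archive.

    Returns the archive as bytes, ready to be attached to an HTTP
    response (e.g. Django's HttpResponse with content_type
    "application/zip" and a Content-Disposition header).
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in photos.items():
            zf.writestr(name, data)
    return buf.getvalue()

# In the real view the bytes would come from the selected photos' files,
# looked up by the ids the frontend sent.
archive = bundle_photos({"a.jpg": b"\xff\xd8fake", "b.jpg": b"\xff\xd8more"})
```

Building the archive in a BytesIO buffer avoids writing a temporary file on the server.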
12,027,081
2012-08-19T14:03:00.000
4
0
0
0
php,python,wkhtmltopdf
12,590,770
1
false
0
1
Try with the --disable-smart-shrinking option
1
1
0
When I convert a file that is 158 pixels wide to a PDF file, I get a PDF that is only 121 pixels wide at 100% zoom. wkhtmltopdf --dpi 100 c:\1.svg c:\2.pdf When I change the DPI, I find it doesn't work: --dpi 100 gives the same result as --dpi 1000. The wkhtmltopdf version is 0.11, tested under Windows and Linux.
using wkhtmltopdf convert a file to pdf is smaller in size than the real file
0.664037
0
0
1,365
12,027,815
2012-08-19T15:49:00.000
1
0
0
1
python,webserver,hosting
12,027,902
2
true
0
0
Go for an Amazon EC2 instance with an Ubuntu server. If your process does not consume much memory, you can go with a Micro instance (617 MB RAM, 8 GB disk), which is free for the first year. Or you could go with a Small instance (1.7 GB RAM and 8 GB disk), which might cost you a little more. For setting up the Python code to run 24/7, you can create a daemon process on the instance. You can also put the Twisted library or any other library on it. It should not take much time if you have worked with Amazon AWS before.
1
0
0
I'm sorry if my question is too elementary. I have some python code, which makes the machine act as a transparent proxy server using "twisted" library. Basically I want my own transparent proxy OUTSIDE my internal network and since I want to be able to monitor traffic, I need to have my own server. So I need a machine that runs my script 24/7 and listens for http connections. What kind of server/host do I need? Any host provider suggestions?
Web server to run python code
1.2
0
1
223
12,028,908
2012-08-19T18:27:00.000
0
1
0
0
java,php,c++,python,c
12,033,703
4
false
1
0
For Java, you can search JNI (Java Native Interface), there're a lot of guides telling how to use it.
1
2
0
Occasionally, I have come across programming techniques that involve creating application frameworks or websites in Java, PHP or Python, but when complex algorithms are needed, writing those out in C or C++ and running them as API-like function calls within your Java/PHP/Python code. I have been googling and searching around the net for this, and unless I don't know the name of the practice, I can't seem to find anything on it. To put it simply, how can I: Create functions or classes in C or C++ Compile them into a DLL/binary/some form Run the functions from - Java PHP Python I suspect JSON/XML like output and input must be created between the Java/PHP/Python and the C/C++ function so the data can be easily bridged, but that is okay. I'm just not sure how to approach this technique, but it seems like a very smart way to take advantage of the great features of Java, PHP, and Python while at the same time utilizing the very fast programming languages for large, complex tasks. The other thought going through my head is if I am creating functions using only literals in Java/PHP/Python, will it go nearly as fast as C anyway? The specific tasks I'm looking to work with C/C++ on are massive loops, pinging a database, and analyzing maps. No work has started yet, it's all theory now.
Running algorithms in compiled C/C++ code within a Java/PHP/Python framework?
0
0
0
419
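The "command line Java program plus os.system" approach in the answer above is usually done with subprocess, capturing the child's stdout and parsing it. To keep this sketch runnable without a JVM, the Python interpreter itself stands in for the Java CLI wrapper; in practice the command would be something like ["java", "-jar", "extractor.jar", "input.xlsx"] (file and jar names are hypothetical):

```python
import json
import subprocess
import sys

# In practice this would be something like:
#   ["java", "-jar", "extractor.jar", "input.xlsx"]
# Here the Python interpreter stands in for the Java CLI, printing JSON
# to stdout the same way the wrapper program would.
command = [
    sys.executable, "-c",
    "import json; print(json.dumps({'rows': 3, 'source': 'input.xlsx'}))",
]

out = subprocess.check_output(command)  # raises if the child exits non-zero
result = json.loads(out)
assert result["rows"] == 3
```

Using a structured format such as JSON on stdout keeps the Python side free of ad-hoc parsing.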
12,034,013
2012-08-20T07:53:00.000
3
0
0
0
python,selenium,webdriver
12,036,058
2
false
0
0
No, WebDriver doesn't have any methods to examine or modify the HTTP traffic occurring between the browser and the website. The information you've already gotten from the Selenium IRC channel (likely even from a Selenium committer) is correct. A proxy is the correct approach here.
1
3
0
Is there any way to log http requests/responses using Selenium Webdriver (firefox)? I guess it's possible to drive web traffic through proxy and log it, but maybe there is more simple "internal" selenium solution? Asked this question on #selenium channel: you will need to proxy it to capture the requests so, looks like only way to setup proxy for it.
Is there any way to log http requests/responses using Selenium Webdriver (firefox)?
0.291313
0
1
1,705
12,034,390
2012-08-20T08:25:00.000
1
0
0
0
python,django,mongodb,database-migration,django-postgresql
15,858,338
2
false
1
0
Whether the migration is easy or hard depends on a very large number of things including how many different versions of data structures you have to accommodate. In general you will find it a lot easier if you approach this in stages: Ensure that all the Mongo data is consistent in structure with your RDBMS model and that the data structure versions are all the same. Move your data. Expect that problems will be found and you will have to go back to step 1. The primary problems you can expect are data validation problems because you are moving from a less structured data platform to a more structured one. Depending on what you are doing regarding MapReduce you may have some work there as well.
1
2
0
Could any one shed some light on how to migrate my MongoDB to PostgreSQL? What tools do I need, what about handling primary keys and foreign key relationships, etc? I had MongoDB set up with Django, but would like to convert it back to PostgreSQL.
From MongoDB to PostgreSQL - Django
0.099668
1
0
1,475
12,034,755
2012-08-20T08:55:00.000
4
1
1
0
python,unit-testing,time
12,034,788
5
false
0
0
You could record start time in the setup function and then print elapsed time in cleanup.
1
11
0
Is there any way to get the total amount of time that "unittest.TextTestRunner().run()" has taken to run a specific unit test. I'm using a for loop to test modules against certain scenarios (some having to be used and some not, so they run a few times), and I would like to print the total time it has taken to run all the tests. Any help would be greatly appreciated.
Get python unit test duration in seconds
0.158649
0
0
8,385
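One simple way to time the whole run, as suggested above, is to bracket the TextTestRunner().run() call itself; the example test case here is illustrative:

```python
import io
import time
import unittest

class ExampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTest)
runner = unittest.TextTestRunner(stream=io.StringIO())  # silence output

start = time.perf_counter()
result = runner.run(suite)
elapsed = time.perf_counter() - start

print(f"ran {result.testsRun} test(s) in {elapsed:.3f}s")
```

In a for loop over several suites you would accumulate the elapsed values to get the total; alternatively, per-test timing can go in setUp/tearDown as the answer describes.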
12,036,620
2012-08-20T11:11:00.000
0
1
0
1
python,jenkins
70,767,921
4
false
0
0
I came across this as a noob and found the accepted answer is missing something if you're running python scripts through a Windows batch shell in Jenkins. In this case, Jenkins will only fail if the very last command in the shell fails. So your python command may fail but if there is another line after it which changes directory or something then Jenkins will believe the shell was successful. The solution is to check the error level after the python line: if %ERRORLEVEL% NEQ 0 (exit) This will cause the shell to exit immediately if the python line fails, causing Jenkins to be marked as a fail because the last line on the shell failed.
1
8
0
This question might sound weird, but how do I make a job fail? I have a Python script that compiles a few files using scons and runs as a Jenkins job. The script tests whether the compiler can build x64 or x86 binaries; I want the job to fail if it fails to do one of these. For instance: if I'm running my script on a 64-bit system and it fails to compile a 64-bit binary. Is there something I can do in the script that might cause it to fail?
Making a job fail in jenkins
0
0
0
15,709
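Independent of the batch-shell detail above, the underlying mechanism is the process exit code: Jenkins marks a build step failed when the last command exits non-zero, and a Python script signals that with sys.exit. The sketch below demonstrates the behaviour via child processes so it can run anywhere:

```python
import subprocess
import sys

# A build script signals failure to Jenkins by exiting non-zero, e.g.
#   sys.exit(1)  or  sys.exit("64-bit build failed")
# (passing a string prints it to stderr and exits with status 1).
failing = subprocess.run([sys.executable, "-c", "import sys; sys.exit(1)"])
assert failing.returncode == 1  # Jenkins would mark this step as failed

passing = subprocess.run([sys.executable, "-c", "import sys; sys.exit(0)"])
assert passing.returncode == 0  # and this one as successful
```

So in the scons wrapper script, detecting a failed 64-bit compile and calling sys.exit(1) is enough for Jenkins to fail the job, provided nothing after it swallows the exit status.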
12,039,689
2012-08-20T14:36:00.000
1
0
0
0
python,django,json
12,039,958
1
true
1
0
It depends entirely on what you're trying to do. render_to_response passes some data to a template to render an HTML document. simply responding with a JSON object will return a JSON object. If you want to present a usable page to a human, then use render_to_response. If you're simply passing some data to a jQuery element, then simply returning a simplejson.dumps() is perfectly valid. There are other ways to return JSON, but that's by far the easiest and most robust. In order to explain more, it would help if you elaborated on exactly what the infinite scroll view is.
1
1
0
New to Django and Python. I am using MySQL as a backend. I have two views: an infinite scroll call that calls all the records in tableA and an autocomplete field that queries tableB and returns matching records from a column. My infinite scroll and autocomplete were created using help from various separate tutorials around the web. In my infinite scroll, I am currently returning a render_to_response object (I based it off the Django beginner's tutorial). My autocomplete returns simplejson (I based it off some articles I googled). They both are returning records from a DB, so shouldn't the responses be similar? When should I use json (or simplejson, in my case) and when shouldn't I? Thx!
To json or not to json
1.2
0
0
151
12,043,333
2012-08-20T18:49:00.000
1
0
0
0
html,ajax,django,python-3.x,cherrypy
12,043,550
1
false
1
0
No, you don't need a web framework, but in general it's a good idea. Django seems like brutal overkill for this. CherryPy or Pyramid or some micro framework seems better. You can have an HTML page that calls the CherryPy server, but since this page obviously is a part of the system/service you are building, serving it from the server makes more sense. Sure, why not.
1
0
0
I have to make an HTML page with CSS and JavaScript where I enter a URL in a form. With this URL, I have to get some information from the HTML of the page with a Python 3.2 script. I started learning Python some days ago and I have some questions: Do I need CherryPy/Django to do that? (I'm asking because I executed a script to get the entire HTML without using CherryPy/Django and it works - no interaction with the browser.) CherryPy examples have the HTML built into the Python code. Must I write the HTML in the Python script, or can I have an HTML page that calls the script with Ajax (or anything else)? If I can use Ajax, is XmlHttpRequest a good choice? Thank you! :D
Call python script from html page
0.197375
0
1
1,126
12,043,607
2012-08-20T19:09:00.000
0
0
1
0
python,eclipse,pydev,pylons,reddit
12,043,855
2
false
0
0
The proper way to include another project's source in your PYTHONPATH is to make a reference from your project to the other project. To do this, take the following steps: Choose your project in the PyDev Package Explorer (usually the tree-like panel on the left). Press Alt + Enter. Click the Project References tab. Check the Pylons project in the tab's content frame. Note that the source paths of both projects must be added to the Python path for Eclipse to build references between them. Note #2: also, when you install new Python packages you need to reindex them at Window > Preferences > PyDev > Interpreter - Python.
2
0
0
I'm new to PyDev and fairly rusty in Python. Trying to get back into it with a simple reddit app first. So here's my setup: I have 2 PyDev projects: reddit and pylons (reddit api.py imports from pylons). When I go into any file in the reddit project I get "unresolved import" for anything that tries to import from pylons. In reddit's PYTHONPATH, I've tried adding /pylons and /pylons/pylons, but whenever I refresh the project, PyDev seems to rename my references to /reddit and /reddit/pylons. How do I fix this? How do I properly add the pylons project into the PYTHONPATH of reddit?
PyDev project's PYTHONPATH automatically renamed. How do I properly configure this?
0
0
0
527
12,043,607
2012-08-20T19:09:00.000
0
0
1
0
python,eclipse,pydev,pylons,reddit
12,046,067
2
false
0
0
Since Rostyslav's solution is not working for you, perhaps you should try to add pylons as an external library. In reddit's Properties window click the PyDev-PYTHONPATH tab and then the External Libraries tab. Click Add source folder and find pylons's source folder. Changes to external libraries are not monitored, so you have to use Force restore internal info when PyDev can't find new references (typically when you make changes to pylons's structure).
2
0
0
I'm new to PyDev and fairly rusty in Python. Trying to get back into it with a simple reddit app first. So here's my setup: I have 2 PyDev projects: reddit and pylons (reddit api.py imports from pylons). When I go into any file in the reddit project I get "unresolved import" for anything that tries to import from pylons. In reddit's PYTHONPATH, I've tried adding /pylons and /pylons/pylons, but whenever I refresh the project, PyDev seems to rename my references to /reddit and /reddit/pylons. How do I fix this? How do I properly add the pylons project into the PYTHONPATH of reddit?
PyDev project's PYTHONPATH automatically renamed. How do I properly configure this?
0
0
0
527
12,044,262
2012-08-20T19:57:00.000
0
1
0
0
python,ssh,paramiko
12,044,350
2
true
0
0
Well, that is what SSH was created for: to be a secure shell, with the commands executed on the remote machine (you can think of it as sitting at the remote computer itself, and that still doesn't mean you can execute Python commands directly in a shell, even when you physically interact with the machine). You can't send Python commands simply because Python does not have commands: it executes Python scripts. So everything you can do is a "thing" that takes the following steps: Wrap a piece of Python code into a file. scp it to the remote machine. Execute it there. Remove the script (or cache it for further execution). Basically shell commands are the remote machine's own programs, so you can think of those scripts as shell extensions (Python programs with command-line parameters, e.g.).
1
0
0
I've been using Paramiko today to work with a Python SSH connection, and it is useful. However one thing I'd really like to be able to do over the SSH is to utilise some Pythonic sugar. As far as I can tell I can only use the inbuilt Paramiko functions, and if I want to anything using Python on the remote side I would need to use a script which I have placed on there, and call it. Is there a way I can send Python commands over the SSH connection rather than having to make do only with the limitations of the Paramiko SSH connection? Since I am running the SSH connection through Paramiko within a Python script, it would only seem right that I could, but I can't see a way to do so.
using python commands within paramiko
1.2
0
1
705
12,045,278
2012-08-20T21:13:00.000
0
0
0
1
python,mongodb,mapreduce,celery,distributed-computing
12,079,563
1
false
0
0
It's impossible to say without benchmarking for certain, but my intuition leans toward doing more calculations in Python rather than mapreduce. My main concern is that mapreduce is single-threaded: One MongoDB process can only run one Javascript function at a time. It can, however, serve thousands of queries simultaneously, so you can take advantage of that concurrency by querying MongoDB from multiple Python processes.
1
1
1
I am currently working on a project which involves performing a lot of statistical calculations on many relatively small datasets. Some of these calculations are as simple as computing a moving average, while others involve slightly more work, like Spearman's Rho or Kendell's Tau calculations. The datasets are essentially a series of arrays packed into a dictionary, whose keys relate to a document id in MongoDb that provides further information about the subset. Each array in the dictionary has no more than 100 values. The dictionaries, however, may be infinitely large. In all reality however, around 150 values are added each year to the dictionary. I can use mapreduce to perform all of the necessary calculations. Alternately, I can use Celery and RabbitMQ on a distributed system, and perform the same calculations in python. My question is this: which avenue is most recommended or best-practice? Here is some additional information: I have not benchmarked anything yet, as I am just starting the process of building the scripts to compute the metrics for each dataset. Using a celery/rabbitmq distributed queue will likely increase the number of queries made against the Mongo database. I do not envision the memory usage of either method being a concern, unless the number of simultaneous tasks is very large. The majority of the tasks themselves are merely taking an item within a dataset, loading it, doing a calculation, and then releasing it. So even if the amount of data in a dataset is very large, not all of it will be loaded into memory at one time. Thus, the limiting factor, in my mind, comes down to the speed at which mapreduce or a queued system can perform the calculations. Additionally, it is dependent upon the number of concurrent tasks. Thanks for your help!
Data analysis using MapReduce in MongoDb vs a Distributed Queue using Celery & RabbitMq
0
0
0
944
12,046,683
2012-08-20T23:41:00.000
1
0
0
0
python,lxml,web2py,activepython
12,047,285
1
true
1
0
If you are using the Windows binary version of web2py, it comes with its own Python 2.5 interpreter and is self-contained, so it won't use your system's Python 2.7 nor see any of its modules. Instead, you should switch to running web2py from source. It's just as easy as the binary version -- just download the zip file and unzip it. You can then import lxml without moving anything to the application's /modules folder.
1
0
0
Have been using ActivePython on windows7 and lxml seems working without an issue.. There were a lot of other third party packages I had & they were working too.. Until I wanted to use it inside Web2Py. All the others seem to be working if I copy them directly inside c:/web2py/applications/myApp/modules With lxml, seems I need to copy something else.. I have a third party module, which imports lxml like this : from lxml.etree import tostring It ends up throwing - No module named lxml.etree My test program outside web2py runs without an issue with both these modules. When I do a pypm files lxml I see this : %APPDATA%\Python\Python27\site-packages\lxml-2.3-py2.7.egg-info What else should I copy along with the lxml directory into the modules directory ? Pretty sure it's me doing something wrong instead of Web2py, but can't put a finger on.. web2py version = Version 1.99.7 (2012-03-04 22:12:08) stable
python - web2py - can't seem to find lxml - ActivePython - windows7
1.2
0
1
277
12,046,760
2012-08-20T23:51:00.000
1
0
0
0
python,concurrency,sqlite
12,047,988
2
false
0
0
Generally, it is safe if there is only one program writing to the sqlite db at one time. (If not, it will raise an exception like "database is locked." when two write operations want to write at the same time.) By the way, there is no way to guarantee the program will never have errors. Using try ... except to handle the exception will make the program much safer.
1
3
0
I have two programs: the first only writes to a sqlite db, and the second only reads. Can I be sure that there will never be any errors? Or how do I avoid them (in Python)?
sqlite3: safe multitask read & write - how to?
0.099668
1
0
383
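The "database is locked" situation and the try/except handling from the answer above can be sketched with the standard sqlite3 module. The second connection uses timeout=0 so the failure is immediate instead of being retried for the default five seconds; file names are illustrative:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN IMMEDIATE")           # take the write lock now
writer.execute("INSERT INTO t VALUES (1)")  # not yet committed

# A second writer with no retry window fails immediately with
# sqlite3.OperationalError ("database is locked").
other = sqlite3.connect(path, timeout=0)
locked = False
try:
    other.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError:
    locked = True

writer.commit()  # release the lock; now the second writer can proceed
other.execute("INSERT INTO t VALUES (2)")
other.commit()
```

Readers generally do not hit this with a single committing writer; a non-zero timeout (the default) makes sqlite3 retry automatically before raising.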
12,048,181
2012-08-21T03:46:00.000
0
1
1
0
python,optimization
12,048,523
1
false
0
0
scipy, pyANN, and pyevolve are some packages that come to mind that may have some tools to help with this... I'm not entirely sure what multistart optimization is, but I have a rough idea...
1
0
0
folks, I am wondering if such a package exists? Or is there a good reference for implementing it?
is there a package for multi-start optimization written in python?
0
0
0
461
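For readers unsure what multistart optimization means: run a local optimizer from several random starting points and keep the best result, which helps escape poor local minima. A minimal pure-Python sketch (function and helper names are illustrative; real work would use scipy.optimize for the local step):

```python
import random

def local_search(f, x, step=1.0, tol=1e-9):
    """Greedy 1-D descent: move while a neighbour improves, else halve the step."""
    fx = f(x)
    while step > tol:
        moved = False
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc < fx:
                x, fx = cand, fc
                moved = True
        if not moved:
            step /= 2.0
    return x, fx

def multistart(f, n_starts, lo, hi, seed=0):
    """Run the local optimizer from several random starts, keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_starts):
        x, fx = local_search(f, rng.uniform(lo, hi))
        if best is None or fx < best[1]:
            best = (x, fx)
    return best

# (x^2 - 4)^2 has two global minima, at x = -2 and x = +2, both with value 0.
x_best, f_best = multistart(lambda x: (x * x - 4.0) ** 2, 20, -5.0, 5.0)
```

The restart loop is embarrassingly parallel, which is why libraries often expose it separately from the local solver.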
12,049,067
2012-08-21T05:48:00.000
1
0
0
0
python,excel
12,049,844
1
false
0
0
Since all that xlrd can do is read a file, I'm assuming that the excel file is saved after each update. If so, use os.stat() on the file before reading it with xlrd and save the results (or at least those of os.stat().st_mtime). Then periodically use os.stat() again, and check if the file modification time (os.stat().st_mtime) has changed, indicating that the file has been changed. If so, re-read the file with xlrd.
1
0
0
So I'm using xlrd to pull data from an Excel sheet. I get it open and it pulls the data perfectly fine. My problem is the sheet updates automatically with data from another program. It is updating stock information using an rtd pull. Has anyone ever figured out any way to pull data from a sheet like this that is up-to-date?
Pulling from an auto-updating Excel sheet
0.197375
1
0
364
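The os.stat polling idea from the answer above can be sketched like this. A plain text file stands in for the workbook, and the timestamp bump is forced with os.utime to keep the demo deterministic; in real use the other program's save updates the mtime on its own:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as fh:
    fh.write("snapshot of the sheet")

last_mtime = os.stat(path).st_mtime

# In real use the RTD-fed program's save bumps the timestamp; here we
# force it with os.utime so the example is deterministic.
os.utime(path, (last_mtime + 1, last_mtime + 1))

changed = os.stat(path).st_mtime != last_mtime
if changed:
    last_mtime = os.stat(path).st_mtime
    # ...re-read the workbook here, e.g. xlrd.open_workbook(path)

unchanged_again = os.stat(path).st_mtime == last_mtime
```

In practice this check would sit in a periodic loop or timer, re-reading with xlrd only when the timestamp actually moves.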
12,049,356
2012-08-21T06:22:00.000
2
0
1
0
python,python-2.7
12,049,575
3
false
0
0
You don't do it. It's a good idiom in JavaScript, but in Python you have neither the lightweight syntax for it nor a need for it. If you need a function scope, define a function and call it. But very often you don't need one. You may need to pull code apart into multiple functions to make it more understandable, but then a name for it helps anyway, and it may be useful in more than one place. Also, don't worry about adding some more names to a namespace. Python, unlike JavaScript, has proper namespaces, so a helper you define at module scope is not visible in other files by default (i.e. unless imported).
1
12
0
I have occasionally used (lambda x:<code>)(<some input>) in Python to preserve the cleanliness of my namespace (within the global namespace or elsewhere). One issue with the lambda solution is that it is a very limiting construct in terms of what it may contain. Note: this is a habit from JavaScript programming. Is this a recommended way of preserving a namespace? If so, is there a better way to implement a self-executing function?
Self executing functions in python
0.132549
0
0
9,710
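The contrast between the JavaScript-style immediately-invoked lambda and the idiomatic named-helper approach from the answer can be shown side by side (helper name is illustrative):

```python
# JavaScript-style IIFE: works, but limited to a single expression.
squares = (lambda n: [i * i for i in range(n)])(5)

# Idiomatic Python: a named helper. At module scope it doesn't leak into
# other modules, and `del` can drop the name once it has been used.
def _build_squares(n):
    result = []
    for i in range(n):
        result.append(i * i)  # statements are allowed, unlike in a lambda
    return result

squares2 = _build_squares(5)
del _build_squares  # optional: remove the helper from the namespace

assert squares == squares2 == [0, 1, 4, 9, 16]
```

The named version can contain statements, try/except, loops with breaks, and docstrings, none of which fit in a lambda.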
12,050,072
2012-08-21T07:24:00.000
7
0
1
0
python,sockets
17,333,359
3
false
0
0
The way to handle this is to have an extra file descriptor included in the list of descriptors passed to poll(). For that descriptor wait for a read to be ready. Have any other thread which wants to awaken the thread waiting on poll() write to that extra descriptor. At that point, the thread which called poll() resumes execution, sees that the extra descriptor is that which awakened it, and does whatever. The normal way to get this extra file descriptor initially is to open an unnamed pipe with pipe(). That way you have two descriptors: the one you hand the read wait in poll() on and the other which you write to to awaken the thread waiting on poll().
2
5
0
I have no way to wake up a thread being blocked by the poll.poll() function. Could someone help me?
How to wake up a thread being blocked by select.poll.poll() function from another thread in socket programming in python?
1
0
1
4,068
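The pipe-based wake-up described in the answer can be sketched as follows (POSIX-only, since select.poll is unavailable on Windows; the timing values are illustrative):

```python
import os
import select
import threading
import time

# An unnamed pipe: the read end is registered with the poller alongside
# any sockets; any thread can wake the poller by writing to the write end.
r, w = os.pipe()
poller = select.poll()
poller.register(r, select.POLLIN)

def waker():
    time.sleep(0.1)
    os.write(w, b"x")  # a single byte is enough to wake the poller

threading.Thread(target=waker).start()

events = poller.poll(5000)  # would block up to 5 s, but wakes early
woke = [(fd, ev) for fd, ev in events if fd == r and ev & select.POLLIN]
os.read(r, 1)  # drain the wake-up byte so the next poll blocks again
```

On seeing the pipe descriptor among the returned events, the polling thread knows it was deliberately awakened rather than receiving socket data.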
12,050,072
2012-08-21T07:24:00.000
-1
0
1
0
python,sockets
12,050,224
3
false
0
0
Use a timeout in your poll call, so it doesn't block indefinitely. N.B.: the timeout value is in milliseconds.
2
5
0
I have no way to wake up a thread being blocked by the poll.poll() function. Could someone help me?
How to wake up a thread being blocked by select.poll.poll() function from another thread in socket programming in python?
-0.066568
0
1
4,068
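The millisecond timeout mentioned in the answer behaves as below (POSIX-only; the 50 ms value is arbitrary): with nothing pending, poll() returns an empty list when the timeout elapses instead of blocking forever.

```python
import os
import select

r, w = os.pipe()
poller = select.poll()
poller.register(r, select.POLLIN)

# Timeout is in milliseconds; with no data pending, poll() returns an
# empty list after ~50 ms instead of blocking indefinitely.
events = poller.poll(50)
assert events == []

os.write(w, b"x")           # now there is data to read
assert poller.poll(50) != []  # returns immediately with the event
```

The caveat is that a pure timeout turns the blocking call into a busy-ish loop with up-to-one-timeout latency, which is why the pipe trick in the other answer is usually preferred.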
12,051,381
2012-08-21T08:56:00.000
0
0
1
0
python,image,file
12,051,619
3
false
0
0
I don't think so. The JPEG standard is more of a container format than a standard about the implementation. The word "corrupted" usually means that the file no longer represents the original data but most of the time can still be decoded: it will produce undefined output, not the output it is supposed to produce, but fed to a JPEG decoder it is most likely going to output something. Also, since there is no way to associate a unique bit arrangement with the JPEG file format, you can't do this programmatically: you don't have a specific pattern, and even if you had one you couldn't say that a bit is in the wrong place or is missing, without knowing the original content, by only parsing the actual file. The header of the file can also be corrupted, but in that case your file is probably flagged as corrupted without any regard to what it is; it is corrupted in the way any generic file can be.
1
6
0
I was wondering if there is a way in Python (or another language) to open a JPEG file and determine whether or not it is corrupt (for instance, if I terminate a download of a JPG file before it completes, I am unable to open the file and view it). Are there libraries that allow this to be done easily?
Is there a way to determine in Python (or other language) to see if a JPG image is corrupt?
0
0
0
3,680
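For the specific truncated-download case in the question, a cheap structural check is possible without any library: a JPEG stream starts with the SOI marker (FF D8) and ends with the EOI marker (FF D9), and an interrupted download is usually missing the trailing EOI. A fuller check would be an actual decode (e.g. Pillow's Image.verify()/Image.load()); the sketch below (function name illustrative) only covers the marker test:

```python
def looks_like_complete_jpeg(data: bytes) -> bool:
    """Cheap sanity check: a JPEG starts with the SOI marker (FF D8) and
    ends with the EOI marker (FF D9). A truncated download is usually
    missing the trailing EOI. This does NOT prove the stream decodes
    correctly -- only a full decode can do that."""
    return len(data) > 4 and data[:2] == b"\xff\xd8" and data[-2:] == b"\xff\xd9"

intact = b"\xff\xd8" + b"\x00" * 16 + b"\xff\xd9"
truncated = intact[:-5]  # simulate an interrupted download

assert looks_like_complete_jpeg(intact)
assert not looks_like_complete_jpeg(truncated)
```

Bit flips in the middle of the entropy-coded data will pass this check, which is exactly the limitation the answer above describes.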
12,052,094
2012-08-21T09:39:00.000
2
0
0
1
python,redis,celery
12,089,960
2
true
1
0
I've used a redis backend for celery while also using the same redis db with prefixed cache data. I was doing this during development, I only used redis for the result backend not to queue tasks, and the production deployment ended up being all AMQP (redis only for caching). I didn't have any problems and don't see why one would (other than performance issues). For running multiple celery projects with different task definitions, I think the issue would be if you have two different types of workers that each can only handle a subset of job types. Without separate databases, I'm not sure how the workers would be able to tell which jobs they could process. I'd probably either want to make sure all workers had all task types defined and could process anything, or would want to keep the separate projects in separate databases. This wouldn't require installing anything extra, you'd just specify a REDIS_DB=1 in one of your celery projects. There might be another way to do this. I don't know for sure that multiple DBs are required, but it kinda makes sense. If you're only using redis for a result backend, maybe that would work for having multiple celery projects on one redis db... I'm not really sure.
1
13
0
Is it possible to use the same Redis database for multiple projects using Celery, much as you can share one database between projects as a cache by using a key prefix? Or do I have to use a separate database for every installation?
Using Multiple Installations of Celery with a Redis Backend
1.2
0
0
4,720
12,052,241
2012-08-21T09:47:00.000
2
0
0
0
java,python
12,052,404
3
false
1
0
You can write a simple command line Java program which calls the library and saves the results in a format you can read in Python, then you can call the program from Python using os.system. Another option is to find Python libraries with equivalent functionality to the Java library: you can read excel, xml and other files in Python, that's not a problem.
1
4
0
I have a Java library in jar form which can be used to extract data from files (Excel, XML etc). As it's in Java, it can be used only in Java applications, but I need the same library to be usable from Python projects as well. I have tried py4j etc., which takes the objects from the JVM, but the library is not an executable and won't be 'run'. I have checked Jython, but I need the library to be accessible from Python projects. I have thought about using automated Java-to-Python translators, but I would take that as a last resort. Please suggest some way I can accomplish this.
Use java library from python (Python wrapper for java library)
0.132549
0
0
3,847
12,053,633
2012-08-21T11:14:00.000
1
0
1
0
python,date,weekday
12,053,730
4
false
0
0
In the datetime module you can do something like this: a = date.today() - timedelta(days=1) and then a.weekday(), where Monday is 0 and Sunday is 6.
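A sketch extending this to skip weekend days (`previous_weekday` is a hypothetical helper name):

```python
from datetime import date, timedelta

def previous_weekday(d):
    """Return the weekday (Mon-Fri) immediately before date `d`.

    Step back one day, then keep stepping back while we land on
    Saturday (weekday 5) or Sunday (weekday 6).
    """
    d -= timedelta(days=1)
    while d.weekday() > 4:
        d -= timedelta(days=1)
    return d
```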
1
21
0
In Python, given a date, how do I find the preceding weekday? (Weekdays are Mon to Fri. I don't care about holidays)
Previous weekday in Python
0.049958
0
0
20,747
12,054,994
2012-08-21T12:37:00.000
1
0
0
0
python,wxpython,wxwidgets
12,056,334
1
false
0
1
Not that I'm aware of. The events are added to the event queue in the order they are received. You might be able to do this with a worker thread that runs the process, which you then kill. You might also try asking on the wxPython mailing list for other ideas; a couple of the core developers are there and they might have some insights for you.
1
2
0
Is there any way to assign priorities to specific wxPython events? I periodically call a method using a timer after starting my GUI, but I want to be able to get out of this method as soon as the user presses a button. Is this possible without having a worker thread?
Assigning priorities to wxPython events
0.197375
0
0
264
12,056,702
2012-08-21T14:08:00.000
1
1
1
0
python,exe,py2exe
12,056,785
2
false
0
0
Partly: it bundles the Python environment with the 'precompiled' .pyc files. These are already parsed into Python byte code, but they aren't native-speed executables.
2
2
0
I really like the PY2EXE module; it really helps me share scripts with other co-workers in a way that is super easy for them to use. My question is: when the PY2EXE module compiles the code into an executable, does the resulting executable run faster? Thanks for any replies!
Does PY2EXE Compile a Python Code to run Faster?
0.099668
0
0
2,575
12,056,702
2012-08-21T14:08:00.000
6
1
1
0
python,exe,py2exe
12,056,778
2
true
0
0
py2exe just bundles the Python interpreter and all the needed libraries into the executable and a few library files. When you run the executable, it uses the bundled interpreter to run your script. Since it doesn't actually generate native code, the speed of execution should be about the same, possibly slower because of the overhead of everything being packaged up.
2
2
0
I really like the PY2EXE module; it really helps me share scripts with other co-workers in a way that is super easy for them to use. My question is: when the PY2EXE module compiles the code into an executable, does the resulting executable run faster? Thanks for any replies!
Does PY2EXE Compile a Python Code to run Faster?
1.2
0
0
2,575
12,057,379
2012-08-21T14:42:00.000
0
0
1
1
python,python-2.7,filepath
12,057,654
3
false
0
0
If I understand you, your first example IS an absolute path. All absolute paths start with a "/", since they must start at the root directory, and relative paths do not — so just check whether your string starts with "/" using str.startswith('/'). If you then want to check that the path is valid, use os.path.exists(). Your second example is not a *nix path.
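A minimal sketch of such a check — the set of "special" characters rejected here is an assumption, so adjust it to your own definition:

```python
import re

def is_valid_unix_abs_path(path):
    """Return True if `path` looks like an absolute *nix path without
    special characters.  The forbidden set below is an assumption --
    extend it to match whatever you consider "special"."""
    if not path.startswith("/"):
        return False
    # Reject backslashes (Windows separators) and common glob characters.
    return re.search(r'[\\*?<>|"]', path) is None
```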
2
0
0
I need a routine in Python to test whether a string contains an absolute path in Unix-style format, so that /home/eduard/tmp/chrome-data-dir/file.ext would be a valid path but C:\Users\user\AppData\Local\Google\Chrome\Application\chrome.exe would not. I also need the tested path to not contain characters that might be considered special, like * or ?.
Routine in python to test for a string if has a *nix valid absolute path?
0
0
0
334
12,057,379
2012-08-21T14:42:00.000
0
0
1
1
python,python-2.7,filepath
12,057,424
3
false
0
0
Your first example is not a relative path, it's absolute because it begins with /. The second is also absolute, since the first character after the drive name is a \. A relative path in Unix would be something like chrome-data-dir/file.ext or ../../include/. Your question is kind of unclear. Perhaps you should look for a colon?
2
0
0
I need a routine in Python to test whether a string contains an absolute path in Unix-style format, so that /home/eduard/tmp/chrome-data-dir/file.ext would be a valid path but C:\Users\user\AppData\Local\Google\Chrome\Application\chrome.exe would not. I also need the tested path to not contain characters that might be considered special, like * or ?.
Routine in python to test for a string if has a *nix valid absolute path?
0
0
0
334
12,060,358
2012-08-21T17:53:00.000
2
0
0
1
python,copy,directory,file-copying
12,060,469
2
false
0
0
You need to use os.makedirs alongside shutil.copytree.
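A sketch of combining the two for a single file — `copy_with_dirs` is a hypothetical helper name; it creates any missing parent directories with os.makedirs before copying with shutil.copy2:

```python
import os
import shutil

def copy_with_dirs(src, dst):
    """Copy file `src` to `dst`, creating any missing parent directories.

    Unlike shutil.copy2 on its own, this does not fail when the
    destination directory tree does not exist yet.
    """
    parent = os.path.dirname(dst)
    if parent and not os.path.isdir(parent):
        os.makedirs(parent)          # create X:\home\test\myfiles etc.
    shutil.copy2(src, dst)           # copy contents and metadata
```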
1
0
0
I have a list of directories which have many sub-directories, e.g. C:\home\test\myfiles\myfile.txt. I want to copy this to my X: drive. How do I copy myfile.txt if the X: drive only contains X:\\home? I thought shutil would create the necessary directories when copying files, but I was wrong and I am not sure what to use. Worded another way: I want to copy C:\\home\\test\\myfiles\\myfile.txt to X:\\home\\test\\myfiles\\myfile.txt, but X:\\home\\test\\myfiles does not exist. Thanks!
How can I copy files in Python while keeping their directory structure?
0.197375
0
0
3,371
12,060,442
2012-08-21T17:59:00.000
0
0
0
0
python,wxpython
12,060,487
4
false
0
1
Have you tried using subprocess? You can bind a button event to a handler that launches the other .py script via subprocess.
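A minimal sketch of the subprocess approach — `launch_script` is a hypothetical helper; in wxPython you would call it from the button's event handler:

```python
import subprocess
import sys

def launch_script(script_path, *args):
    """Launch another Python script in a separate process.

    Uses the same interpreter that is running the caller, and returns
    the Popen handle so the caller can poll or wait on it.
    """
    return subprocess.Popen([sys.executable, script_path] + list(args))
```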
2
1
0
I'm coding a game using wxPython with multiple *.py scripts, and I would like that, when I press a button in the main script, another script is launched. Example: I launch a *.py script for the game menu window that has multiple options. When I press the options button, it will launch another script that was assigned to that button and open the options window. How can I do that? Thx
Open a py file from pressing a button on another py file
0
0
0
3,498
12,060,442
2012-08-21T17:59:00.000
0
0
0
0
python,wxpython
12,060,881
4
false
0
1
Well, I'd recommend using the os module only if you want the main script to block until the launched script is closed — in other words, leaving the main script in the background and focusing on the options window. Use os.system('script_name.py') on Windows, or os.system('python script.py') on Linux (after giving the script execute permission).
2
1
0
I'm coding a game using wxPython with multiple *.py scripts, and I would like that, when I press a button in the main script, another script is launched. Example: I launch a *.py script for the game menu window that has multiple options. When I press the options button, it will launch another script that was assigned to that button and open the options window. How can I do that? Thx
Open a py file from pressing a button on another py file
0
0
0
3,498
12,061,262
2012-08-21T18:55:00.000
3
0
1
1
python,linux
12,061,400
2
false
0
0
I only have experience with py2exe and pyqt4, but py2exe needs several dlls which can only exist inside a Windows environment (like Visual C runtime libs or the dlls for Qt). It might be hackable with Wine, but having a Windows environment for packaging everything is the "supported" way.
1
2
0
I wrote a program using Python with PyQt4 and other modules like numpy, scipy etc. under Linux (Ubuntu 9.10). Now I want an executable of this program under Windows 7, and I don't want to install Python on the Windows 7 OS. I tried pyinstaller, cx_freeze and py2exe under Linux, but I could only generate a Linux executable, which works fine under Linux but not under Windows. Now my questions are: is my task possible, or do I need to install Python and the needed packages on Windows 7 to generate the executable with, for example, pyinstaller? If it is possible, how do I do it? regards lars
porting python code(linux) to windows
0.291313
0
0
1,172
12,063,463
2012-08-21T21:39:00.000
0
1
0
0
python,flask,gunicorn
54,821,323
5
false
1
0
I did this after reading the docs: when deploying my app through gunicorn, there is usually a file called Procfile. Open this file and add --timeout 600, so that my Procfile ended up like: web: gunicorn app:app --timeout 600
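As an alternative to Procfile flags, gunicorn can also load a standalone Python config file passed with -c. A minimal sketch — the values shown are assumptions, not defaults:

```python
# gunicorn.conf.py -- a minimal sketch; tune the values to your setup.
# Start gunicorn with:  gunicorn -c gunicorn.conf.py app:app
bind = "0.0.0.0:8000"  # address and port to listen on
workers = 2            # number of worker processes
timeout = 600          # seconds before an unresponsive worker is restarted
```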
1
53
0
The gunicorn documentation talks about editing the config file, but I have no idea where it is. Probably a simple answer :) I'm on Amazon Linux AMI.
Where is the Gunicorn config file?
0
0
0
65,764
12,066,923
2012-08-22T05:42:00.000
2
1
0
0
python,plone
12,079,431
1
true
1
1
As Maulwurfn says, there is no such add-on, but this would be fairly straightforward for an experienced developer to implement using a custom content type. You will want to be pretty sure that the specific file types you're hoping to store will actually benefit from compression (many modern file formats already include some compression, so simply zipping them won't shrink them much). Also, unless you implement something complex like a client-side Flash uploader with built-in compression, Plone can only compress files after they've been uploaded, not before; so if you're hoping to make uploads quicker for users, rather than to minimize storage space, you're facing a somewhat more difficult challenge.
1
0
0
Is there an add-on that automatically compresses files as they are uploaded into a Plone site? It should compress the files and then store them. These can be image files like CAD drawings or any other types; irrespective of the file type, beyond a specific size they should get compressed and stored, rather than me manually compressing the files before storing them. I am using Plone 4.1. I am aware of the CSS and JavaScript files which get compressed, but not of uploaded files. I am also aware of the 'image handling' in 'Site Setup'.
Is there an add-on to auto compress files while uploading into Plone?
1.2
0
0
258
12,068,723
2012-08-22T08:01:00.000
0
0
1
0
python,windows-7
12,069,028
3
false
0
0
You can use regedit to export the path key from HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Environment. This way you create two reg files: one with Python 2 in the path and one with Python 3. Double-clicking the respective file will change the path accordingly.
2
1
0
I need to use both Python 2 and Python 3. The only way to change the default Python used upon opening a .py file is to change the PATH environment variable, but the steps are troublesome. Can I have some Windows batch script which modifies the PATH variable for me? Thanks.
How do you switch python 2 and 3 quickly?
0
0
0
182