Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | DISCREPANCY | Tags | ERRORS | A_Id | API_CHANGE | AnswerCount | REVIEW | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | DOCUMENTATION | Question | Title | CONCEPTUAL | Score | API_USAGE | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,328,943 | 2012-04-26T07:24:00.000 | 0 | 0 | 0 | 0 | 0 | python-3.x,pixels | 0 | 10,331,335 | 0 | 1 | 0 | false | 0 | 1 | You need to use some sort of cross-platform GUI toolkit, such as GTK or Qt (the toolkit KDE is built on); maybe Tk or wx will work as well, I don't know.
How you then do it depends on what toolkit you choose. | 1 | 1 | 0 | 0 | I am using Python 3 on Windows 7. I want to grab all the attributes, like color intensity, color, etc., of all the pixels of the screen area that I select with the mouse. The selection can be of any shape, but right now rectangular and square will do.
I want to do it in any area of the screen.
Can you guys please guide me on how to do that in Python?
PS: If the method can work across all platforms, that would be much appreciated.
Thanks,
Aashiq | Grabbing pixel attributes in Python | 0 | 0 | 1 | 0 | 0 | 185 |
10,332,337 | 2012-04-26T11:18:00.000 | 4 | 0 | 0 | 0 | 0 | python,eclipse,pydev,webfaction | 0 | 10,332,409 | 0 | 1 | 0 | true | 1 | 0 | Don't do that. Your host is for hosting. Your personal machine is for developing.
Edit and run your code locally. When it's ready, upload it to Webfaction. Don't edit code on your server. | 1 | 2 | 0 | 0 | This is my first time purchasing hosting, and I opted for Webfaction.com to host my Django application. So far, I've been using Eclipse to write all my code and manage my Django application, and I'm not ready to use Vim as a text editor yet. Now my question is, how can I use Eclipse to write my code and manage all my files while being connected to my Webfaction account?
10,336,582 | 2012-04-26T15:28:00.000 | 0 | 0 | 0 | 0 | 0 | python,django | 0 | 10,336,728 | 0 | 1 | 0 | true | 1 | 0 | If I'm reading your question correctly, the first part asks how to make a stylesheet dynamic:
I am unable to figure out how to make my stylesheet dynamic for the front end
For that you could use a template block. The Django admin follows the convention of adding {% block extra_head %} (or something similar; sorry, I don't remember the specifics), which is exactly what it sounds like: a block inside the <head> tag. This will let you load a stylesheet from any template. Just define that block in your base_site.html and implement it when you extend base_site.html.
But then at the end of your question it seems you want to define the stylesheet in one place and include it for every request:
My only aim is to define my app's stylesheet in one place and have it apply throughout my application.
Perhaps you could set up a directive in your settings.py, 'DEFAULT_STYLESHEET', and include that in your base_site.html template. Put the CSS in the extra_head block. If you need to override it, just implement that block and voilà! | 1 | 0 | 0 | 0 | I am new to the Django framework, so kindly bear with me if my question is novice.
I have created a polls application using the Django framework. I am unable to figure out how to make my stylesheet dynamic for the front end. I don't want to call it in my base_site.html or index.html files, as multiple views render different template files. My only aim is to define my app's stylesheet in one place and have it apply throughout my application. | Django Application Assign Stylesheet -- don't want to add it to app's index file? Can it be dynamic? | 0 | 1.2 | 1 | 0 | 0 | 76
10,341,707 | 2012-04-26T21:30:00.000 | 13 | 0 | 0 | 0 | 0 | python,django,virtualenv | 0 | 10,341,733 | 0 | 1 | 0 | true | 1 | 0 | In case you are using pip for package management, you can easily recreate the virtualenv on another system:
On system1, run pip freeze --local > requirements.txt and copy that file to system2. Over there, create and activate the virtualenv and use pip install -r requirements.txt to install all packages that were installed in the previous virtualenv.
Your Python code can simply be copied to the new system; I'd run find . -name '*.pyc' -delete first, though, since you usually do not want to move compiled code (even if it's just Python bytecode) between machines. | 1 | 5 | 0 | 0 | I would like to know how to set up a complex Python website, currently running in a production environment, on a local machine for development?
Currently the site uses python combined with Django apps (registration + cms modules) in a virtual environment. | How to migrate a python site to another machine? | 0 | 1.2 | 1 | 0 | 0 | 2,657 |
10,349,093 | 2012-04-27T10:35:00.000 | 1 | 0 | 0 | 0 | 0 | authentication,ldap,splunk,python-ldap | 0 | 10,397,477 | 0 | 3 | 0 | false | 0 | 0 | Typically you would search using the username value provided on uid or cn values within the LDAP Tree.
-jim | 2 | 1 | 0 | 0 | I have set up a Ldap Server somewhere. I can bind to it, can add, modify, delete entry in the database. Now when it come to authentication isnt it as simple as giving the username and password to the server, asking it to search for an entry matching the two? And furthermore, isnt it the 'userPassword' field that contains the password for a user in there?
Now,
I tried to set up splunk to authenticate from my Ldap server, i provided the username and password, but it failed authentication. Isnt it that 'userPassword' field that splunk checks? What should be the possible reason? | how to do Ldap Server Authentication? | 0 | 0.066568 | 1 | 0 | 0 | 1,711 |
10,349,093 | 2012-04-27T10:35:00.000 | 2 | 0 | 0 | 0 | 0 | authentication,ldap,splunk,python-ldap | 0 | 10,349,171 | 0 | 3 | 0 | true | 0 | 0 | LDAP servers are generally not going to allow you to search on the userPassword attribute, for obvious security reasons. (and the password attribute is likely stored in hashed form anyway, so a straight search would not work.)
Instead, the usual way to do LDAP authentication is:
prompt for username & password
Bind to LDAP with your application's account, search for username to get the full distinguished name (dn) of the user's LDAP entry
Make a new LDAP connection, and attempt to bind using the user's dn & password
(If you know how to construct the dn from the username, you can skip step 2, but it's generally a good idea to search first - that way you're less sensitive to things like changes in the OU structure of the LDAP directory.) | 2 | 1 | 0 | 0 | I have set up an LDAP server somewhere. I can bind to it, and can add, modify, and delete entries in the database. Now when it comes to authentication, isn't it as simple as giving the username and password to the server and asking it to search for an entry matching the two? And furthermore, isn't it the 'userPassword' field that contains the password for a user in there?
Now,
I tried to set up Splunk to authenticate from my LDAP server; I provided the username and password, but it failed authentication. Isn't it the 'userPassword' field that Splunk checks? What could be the possible reason? | how to do Ldap Server Authentication? | 0 | 1.2 | 1 | 0 | 0 | 1,711
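The search-then-bind flow described in the accepted answer above can be sketched in Python. This is a hedged sketch, not Splunk's actual mechanism: the connection objects are assumed to follow the python-ldap style (`simple_bind_s`, `search_s`, with `SCOPE_SUBTREE == 2`), and every DN and name below is illustrative. The connection is injected as a factory so the flow can be exercised without a live directory server.

```python
# Sketch of the usual search-then-bind LDAP authentication flow.
# `connect` stands in for a function that opens an LDAP connection
# (e.g. ldap.initialize from python-ldap); injecting it keeps the
# flow itself library-agnostic and testable.

def ldap_authenticate(connect, base_dn, app_dn, app_password,
                      username, password):
    """Return the user's DN if the credentials are valid, else None."""
    # Step 2: bind with the application's own account and search for the user.
    conn = connect()
    conn.simple_bind_s(app_dn, app_password)
    results = conn.search_s(base_dn, 2,  # 2 == SCOPE_SUBTREE in python-ldap
                            "(uid=%s)" % username)
    if not results:
        return None
    user_dn = results[0][0]

    # Step 3: open a new connection and try to bind as the user.
    # The server, not the client, compares the (hashed) userPassword
    # attribute, which is why we never need to read it ourselves.
    user_conn = connect()
    try:
        user_conn.simple_bind_s(user_dn, password)
    except Exception:  # python-ldap raises ldap.INVALID_CREDENTIALS here
        return None
    return user_dn
```

This also illustrates why Splunk never reads `userPassword` directly: the bind in step 3 delegates the password comparison to the LDAP server.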
10,363,438 | 2012-04-28T12:20:00.000 | 0 | 0 | 1 | 0 | 0 | python,cherrypy,kill-process | 0 | 10,370,708 | 0 | 2 | 0 | false | 0 | 0 | If your process is using CherryPy to block (via quickstart or engine.block), then you could simply call: cherrypy.engine.exit() from your page handler. That would be the cleanest option since it would properly terminate CherryPy and plugins you may have subscribed to. | 1 | 0 | 0 | 0 | I am using cherrypy in python script.I think I have to register a callback method from the main application so that i can stop cherrypy main process from a worker thread,but how do i kill the main process within the that process.
So i want to know how to stop cherrypy from within the main process. | How to kill the cherrypy process? | 0 | 0 | 1 | 0 | 0 | 1,787 |
10,376,129 | 2012-04-29T21:27:00.000 | 1 | 0 | 1 | 0 | 0 | python,sage | 0 | 10,376,372 | 0 | 3 | 0 | false | 0 | 0 | In the case of Sage, it's easy. Sage has complete control of its own REPL (read-evaluate-print loop), so it can parse the commands you give it and turn the parts of your expression into whatever classes it wants. It is not so easy to have standard Python automatically use your integer type for integer literals, however. Simply reassigning the built-in int() to some other type won't do it. You could probably do it with an import filter that scans each imported file for (say) integer literals and replaces them with MyInt(42) or whatever. | 1 | 1 | 0 | 0 | I have started playing with Sage recently, and I've come to suspect that the standard Python int is wrapped in a customized class called Integer in Sage. If I type type(1) in Python, I get <type 'int'>; however, if I type the same thing at the Sage prompt I get <type 'sage.rings.integer.Integer'>.
If I wanted to replace Python int (or list or dict) with my own custom class, how might it be done? How difficult would it be (e.g. could I do it entirely in Python)? | Creating a customized language using Python | 0 | 0.066568 | 1 | 0 | 0 | 123 |
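For illustration, here is a hedged sketch of both halves of the idea: a wrapper class in the spirit of Sage's Integer, plus a crude stand-in for the kind of source rewriting Sage's preparser (and the import-filter idea from the answer) performs. Plain Python will keep producing ordinary ints for literals; the regex rewrite below is deliberately naive and would mangle real code (strings, floats, attribute names, etc.).

```python
import re

class MyInt(int):
    """An int subclass that keeps its type under arithmetic."""

    def __add__(self, other):
        return MyInt(int(self) + int(other))

    __radd__ = __add__

    def is_even(self):            # an extra method a plain int lacks
        return self % 2 == 0


def preparse(source):
    """Crude sketch of literal rewriting: wrap integer literals in MyInt().

    Sage does something similar (but far more carefully) to the input it
    reads at its own prompt before handing it to Python.
    """
    return re.sub(r"\b(\d+)\b", r"MyInt(\1)", source)
```

With this, `eval(preparse("1 + 2"))` yields a `MyInt` rather than a plain `int`, which is essentially what happens at the Sage prompt.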
10,377,131 | 2012-04-30T00:07:00.000 | 1 | 0 | 0 | 0 | 0 | python,openerp | 0 | 10,379,449 | 0 | 4 | 0 | false | 1 | 0 | Depending on the logged-in user:
You can use the variable 'uid', but I don't think you can do 'uid.name' or 'uid.groups_id', so the easier method will be the second.
Depending on the groups:
Example: say some users are managers and others are not. Create a group 'Manager' (in an XML file!) and add that group to the managers. Now change the field in the XML like this:
<field name="name" string="this is the string" groups="my_module.my_reference_to_the_group"/>
The field will then only be visible to managers. | 1 | 1 | 0 | 0 | I am using OpenERP 5.16 web.
Is there any way we can hide a button depending upon the logged-in user,
or how can I control the group visibility depending upon the user's group? | button visibility in openerp | 0 | 0.049958 | 1 | 0 | 0 | 2,413
10,397,695 | 2012-05-01T12:34:00.000 | 6 | 0 | 1 | 0 | 0 | python,multithreading | 0 | 10,397,794 | 0 | 2 | 0 | false | 0 | 0 | a few ideas:
web crawler - have a pool of threads getting work from a dispatcher via a queue, download web pages and return the results somewhere.
chat server - accepting permanent connections from users and dispatching messages from one to another.
mp3 file organizer - rebuild a music library's structure from mp3 tag data, and reorganize them in folders. you can have multiple threads working at once.
I'll edit with some more ideas if I think of any.
EDIT: Since Python (CPython, because of the Global Interpreter Lock) is limited to one CPU per process no matter how many threads you use, threading will get you nowhere if you want to parallelize CPU-consuming work. Use the multiprocessing interface instead; it's almost identical to the threading API, but it dispatches work to subprocesses that can use more CPU cores. | 2 | 2 | 0 | 0 | I want to learn threading and multiprocessing in Python. I don't know what kind of project to take up for this.
I want to be able to deal with all the related objects like Locks, Mutexes, Conditions, Semaphores, etc.
Please suggest a project type that's best for me.
P.S. Along with the project, please suggest any tools to debug / profile / load-test my app so that I can gauge how good my threaded implementations are. | What type of project will help me learn thread programming | 0 | 1 | 1 | 0 | 0 | 1,969 |
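The first suggestion above (a pool of threads pulling work from a dispatcher through a queue) can be sketched with the standard library alone; the squaring worker below is a stand-in for real work such as downloading a page.

```python
import threading
import queue

def run_pool(items, worker_fn, num_threads=4):
    """Dispatch items to a pool of worker threads via a queue."""
    tasks = queue.Queue()
    results = []
    results_lock = threading.Lock()   # protects the shared results list

    def worker():
        while True:
            item = tasks.get()
            if item is None:          # sentinel: no more work
                tasks.task_done()
                return
            out = worker_fn(item)
            with results_lock:
                results.append(out)
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for item in items:
        tasks.put(item)
    for _ in threads:
        tasks.put(None)               # one sentinel per worker
    for t in threads:
        t.join()
    return results
```

The same structure carries over to multiprocessing almost unchanged, which makes it a good first project for both APIs.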
10,397,695 | 2012-05-01T12:34:00.000 | 0 | 0 | 1 | 0 | 0 | python,multithreading | 0 | 10,397,753 | 0 | 2 | 0 | false | 0 | 0 | I propose you attempt to program a very simple database server. Each client can connect to the server and do create, read, update, delete on a set of entities. Implementation-wise, the server should have one thread for each client all operating a global set of entities, which are protected using locks.
For learning how to use conditional variables, the server should also implement a notify method, which allows a client to be notified when an entity changed.
Good luck!
NOTE: Using threads is not the most efficient way to program a simple database server, but I think it is a good project for self-improvement. | 2 | 2 | 0 | 0 | I want to learn threading and multiprocessing in Python. I don't know what kind of project to take up for this.
I want to be able to deal with all the related objects like Locks, Mutexes, Conditions, Semaphores, etc.
Please suggest a project type that's best for me.
P.S. Along with the project, please suggest any tools to debug / profile / load-test my app so that I can gauge how good my threaded implementations are. | What type of project will help me learn thread programming | 0 | 0 | 1 | 0 | 0 | 1,969 |
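The notify method suggested for the toy database server above maps naturally onto threading.Condition. A hedged sketch of just that mechanism (the entity-store name and API are made up for illustration):

```python
import threading

class EntityStore:
    """Toy shared entity store with change notification."""

    def __init__(self):
        self._entities = {}
        self._cond = threading.Condition()   # owns its internal lock

    def update(self, key, value):
        with self._cond:
            self._entities[key] = value
            self._cond.notify_all()          # wake any waiting clients

    def wait_for(self, key, timeout=5.0):
        """Block until `key` exists, then return its value."""
        with self._cond:
            while key not in self._entities:
                # Condition.wait returns False if the timeout expired
                if not self._cond.wait(timeout):
                    raise TimeoutError("no update for %r" % key)
            return self._entities[key]
```

Each client-handling thread would hold such a Condition while reading or writing entities; the while-loop around wait guards against spurious wakeups.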
10,412,063 | 2012-05-02T10:39:00.000 | 0 | 1 | 0 | 0 | 0 | python,nginx,web,fastcgi | 0 | 10,412,251 | 0 | 4 | 0 | false | 1 | 0 | All the same, you must use a WSGI server, as nginx does not fully support this protocol itself. | 2 | 7 | 0 | 0 | I want to have a simple program in Python that can process different requests (POST, GET, MULTIPART-FORMDATA). I don't want to use a complete framework.
I basically need to be able to get GET and POST params - probably (but not necessarily) in a way similar to PHP - and to get some other SERVER variables like REQUEST_URI, QUERY, etc.
I have installed nginx successfully, but I've failed to find a good example of how to do the rest. So a simple tutorial, or any directions and ideas on how to set up nginx to run a certain Python process for a certain virtual host, would be most welcome! | How to run nginx + python (without django) | 0 | 0 | 1 | 0 | 0 | 7,440
10,412,063 | 2012-05-02T10:39:00.000 | 4 | 1 | 0 | 0 | 0 | python,nginx,web,fastcgi | 0 | 10,417,619 | 0 | 4 | 0 | true | 1 | 0 | You should look into using Flask -- it's an extremely lightweight framework built on the Werkzeug WSGI toolkit, and it also includes a templating library, should you ever want to use one. But you can totally ignore that if you'd like. | 2 | 7 | 0 | 0 | I want to have a simple program in Python that can process different requests (POST, GET, MULTIPART-FORMDATA). I don't want to use a complete framework.
I basically need to be able to get GET and POST params - probably (but not necessarily) in a way similar to PHP - and to get some other SERVER variables like REQUEST_URI, QUERY, etc.
I have installed nginx successfully, but I've failed to find a good example of how to do the rest. So a simple tutorial, or any directions and ideas on how to set up nginx to run a certain Python process for a certain virtual host, would be most welcome! | How to run nginx + python (without django) | 0 | 1.2 | 1 | 0 | 0 | 7,440
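Since both answers point at WSGI, here is a minimal stdlib-only WSGI app in the spirit of the question: it reads the request method, GET params, POST body, and SERVER-style variables by hand, with no framework. In practice nginx would sit in front of it through a WSGI/FastCGI bridge (uWSGI, gunicorn, or flup); that wiring is not shown here.

```python
from urllib.parse import parse_qs

def app(environ, start_response):
    """Tiny WSGI application: greet by ?name= or a POSTed name field."""
    method = environ["REQUEST_METHOD"]
    get_params = parse_qs(environ.get("QUERY_STRING", ""))
    post_params = {}
    if method == "POST":
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(length).decode("utf-8")
        post_params = parse_qs(body)
    name = (get_params.get("name") or post_params.get("name") or ["world"])[0]
    path = environ.get("REQUEST_URI", environ.get("PATH_INFO", "/"))
    body_bytes = ("Hello, %s! (%s %s)" % (name, method, path)).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body_bytes)))])
    return [body_bytes]

# To serve it standalone for testing:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

The `environ` dict is where all the PHP-style server variables live (REQUEST_METHOD, QUERY_STRING, PATH_INFO, and so on), which is exactly what the question asks for.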
10,421,194 | 2012-05-02T20:29:00.000 | 1 | 0 | 0 | 0 | 0 | python,layout,python-2.7,boxlayout,kivy | 0 | 22,755,884 | 0 | 2 | 0 | false | 0 | 1 | There is a tricky way to do that.
Use a GridLayout and set cols to 1. | 1 | 3 | 0 | 0 | I am testing Kivy and I want to create a BoxLayout to stack some buttons. My problem is that the children added to the layout follow a bottom-to-top order, while I want the opposite. Do you know how I can reverse the order? Thanks! | How can I change the order of the BoxLayout in kivy? | 0 | 0.099668 | 1 | 0 | 0 | 2,021
10,424,456 | 2012-05-03T02:48:00.000 | 4 | 0 | 0 | 1 | 1 | python,django,amazon-s3,celery,sorl-thumbnail | 0 | 11,048,085 | 0 | 3 | 0 | false | 1 | 0 | As I understand it, Sorl works correctly with S3 storage, but it's very slow.
I believe you know which image sizes you need.
You should launch the Celery task after the image is uploaded. In the task you call
sorl.thumbnail.default.backend.get_thumbnail(file, geometry_string, **options)
Sorl will generate a thumbnail and upload it to S3. The next time you request the image from a template it's already cached and served directly from Amazon's servers.
a clean way to handle a placeholder thumbnail image while the image is being processed.
For this you will need to override the Sorl backend. Add a new argument to the get_thumbnail function, e.g. generate=False. When you call this function from Celery, pass generate=True.
And in the function, change its logic: if the thumbnail is not present and generate is True, work just like the standard backend; but if generate is False, return your placeholder image with text like "We are processing your image now, come back later" and do not call backend._create_thumbnail. You can also launch a task in this case, if you think a thumbnail could have been accidentally deleted.
I hope this helps | 1 | 11 | 0 | 0 | I'm surprised I don't see anything but "use celery" when searching for how to use celery tasks with sorl-thumbnails and S3.
The problem: using remote storages causes massive delays when generating thumbnails (think 100s+ for a page with many thumbnails) while the thumbnail engine downloads originals from remote storage, crunches them, then uploads back to s3.
Where is a good place to set up the celery task within sorl, and what should I call?
Any of your experiences / ideas would be greatly appreciated.
I will start digging around Sorl internals to find a more useful place to delay this task, but there are a few more things I'm curious about if this has been solved before.
What image is returned immediately? Sorl must be told somehow that the image returned is not the real thumbnail. The cache must be invalidated when celery finishes the task.
Handle multiple thumbnail generation requests cleanly (only need the first one for a given cache key)
For now, I've temporarily solved this by using an nginx reverse proxy cache that can serve hits while the backend spends time generating expensive pages (resizing huge PNGs on a huge product grid) but it's a very manual process. | Pointers on using celery with sorl-thumbnails with remote storages? | 0 | 0.26052 | 1 | 0 | 0 | 1,459 |
10,426,506 | 2012-05-03T06:52:00.000 | 1 | 0 | 1 | 0 | 0 | python,regex,solr | 0 | 10,426,794 | 0 | 2 | 1 | false | 0 | 0 | Your use case is very basic and doesn't require regex at all with Solr. It looks like you just may have a syntax issue. q=text:day OR text:run should do exactly what you're looking for. | 2 | 0 | 0 | 0 | I have got indexes created on tables having data of the form:
indexname='text'---->Today is a great day for running in the park.
Now I want to perform a search on the indexes where 'day' or 'run' appears in the text.
I have implemented a query like:
q = 'text:(day or run*)'
But this query is not returning any results from the indexes. Is this the correct way, or how can I improve my query by applying a regex? | Apply regex on Solr query? | 0 | 0.099668 | 1 | 0 | 0 | 624
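As the answer notes, this is a syntax issue: the standard Lucene/Solr query parser treats boolean operators case-sensitively, so a lowercase "or" is parsed as an ordinary term rather than as an operator. A small sketch of building the corrected query string with the standard library (the host and core names are placeholders, not taken from the question):

```python
from urllib.parse import urlencode

def solr_select_url(base_url, field, terms):
    """Build a Solr /select URL for field:(t1 OR t2 OR ...)."""
    # text:(day OR run) -- uppercase OR, grouped on one field
    q = "%s:(%s)" % (field, " OR ".join(terms))
    return base_url + "/select?" + urlencode({"q": q, "wt": "json"})

url = solr_select_url("http://localhost:8983/solr/mycore", "text", ["day", "run"])
```

Matching "run" to "running", as the second answer says, is better done with a stemming analyzer on the field than with wildcards or regexes.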
10,426,506 | 2012-05-03T06:52:00.000 | 0 | 0 | 1 | 0 | 0 | python,regex,solr | 0 | 10,434,663 | 0 | 2 | 1 | false | 0 | 0 | Regex and wildcards are slow in search engines. You'll get better performance by pre-processing the terms in a language-sensitive way.
You can match "run" to "running" with a stemmer, an analysis step that reduces different forms of a word to a common stem. When the query and the index term are both stemmed, then they will match.
You should also look into the Extended Dismax (edismax) search handler. That will do some of the work of turning "day run" into a search for the individual words and the phrase, something like 'day OR run OR "day run"'. Then it can further expand that against multiple fields with different weights, all automatically. | 2 | 0 | 0 | 0 | I have got indexes created on tables having data of the form:
indexname='text'---->Today is a great day for running in the park.
Now I want to perform a search on the indexes where 'day' or 'run' appears in the text.
I have implemented a query like:
q = 'text:(day or run*)'
But this query is not returning any results from the indexes. Is this the correct way, or how can I improve my query by applying a regex? | Apply regex on Solr query? | 0 | 0 | 1 | 0 | 0 | 624
10,435,715 | 2012-05-03T16:40:00.000 | 1 | 0 | 0 | 1 | 1 | python,macos,bash,shell,installation | 0 | 10,435,770 | 0 | 3 | 0 | false | 0 | 0 | Something got messed up in your $PATH. Have a look in ~/.profile, ~/.bashrc, ~/.bash_profile, etc., and look for a line starting with export that doesn't end cleanly. | 1 | 4 | 0 | 0 | I stupidly downloaded python 3.2.2 and since then writing 'python' in the terminal yields 'command not found'. Also, when starting the terminal I get this:
Last login: Wed May 2 23:17:28 on ttys001
-bash: export: `folder]:/Library/Frameworks/Python.framework/Versions/2.7/bin:/opt/local/bin:/opt/local/sbin:/usr/local/git/bin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/Applications/android-sdk-mac_86/tools:/Applications/android-sdk-mac_86/platform-tools:/usr/local/git/bin:/usr/X11/bin:/usr/local/ant/bin': not a valid identifier
Why the Android SDK folder is there is beyond me. It's all jazzed up. Any ideas how I can remove the offending file or folder, or fix this problem? I've checked the System Profiler, and python 2.6.1 and 2.7.2.5 show up. | Python installation mess on Mac OS X, cannot run python | 0 | 0.066568 | 1 | 0 | 0 | 1,985
10,439,654 | 2012-05-03T21:36:00.000 | 1 | 0 | 0 | 0 | 0 | python,ajax,django,sudo,fabric | 0 | 10,439,756 | 0 | 2 | 0 | false | 1 | 0 | I can't think of a way to do a password prompt only if required... you could prompt before and cache it as required, though, and the backend would have access.
To pass the sudo password to the fabric command, you can use sudo -S... i.e.
echo password | sudo -S command | 2 | 1 | 0 | 0 | I'm working on a deployment tool in Django and fabric. The idea is to put some parameters (like hostname and username) in an initial form, then let the Django app call fabric methods to do the rest and collect the output in the web browser.
IF there is a password prompt from the OS to fabric (i.e. when running sudo commands etc.), I would like to pop up a one-field form for the password to be entered (for example using jQuery UI elements). The person will fill in the password field for the prompted user, and fabric will continue doing its thing. Can this be implemented? I was thinking about some async calls to the browser, but I have no idea how it can be done from the other side. Probably there is another way.
Please let me know if you have any suggestions. Thanks! | Fabric + django asynchronous prompt for sudo password | 1 | 0.099668 | 1 | 0 | 0 | 626 |
10,439,654 | 2012-05-03T21:36:00.000 | 2 | 0 | 0 | 0 | 0 | python,ajax,django,sudo,fabric | 0 | 10,439,758 | 0 | 2 | 0 | true | 1 | 0 | Yes, capture the password exception, then pop up the form, and run the fabric script again with env.password = userpassword.
If you want to continue where you caught the exception, keep a variable that records what has been done so far (e.g. nlinesexecuted) and save it when you catch the exception. Use that when you rerun the script to continue where you left off. | 2 | 1 | 0 | 0 | I'm working on a deployment tool in Django and fabric. The idea is to put some parameters (like hostname and username) in an initial form, then let the Django app call fabric methods to do the rest and collect the output in the web browser.
IF there is a password prompt from the OS to fabric (i.e. when running sudo commands etc.), I would like to pop up a one-field form for the password to be entered (for example using jQuery UI elements). The person will fill in the password field for the prompted user, and fabric will continue doing its thing. Can this be implemented? I was thinking about some async calls to the browser, but I have no idea how it can be done from the other side. Probably there is another way.
Please let me know if you have any suggestions. Thanks! | Fabric + django asynchronous prompt for sudo password | 1 | 1.2 | 1 | 0 | 0 | 626 |
10,447,858 | 2012-05-04T11:23:00.000 | 4 | 0 | 0 | 0 | 0 | python,plone | 0 | 10,448,068 | 1 | 1 | 0 | true | 1 | 0 | Not sure you can do this with a content rule; there is no code running at that exact time. You'd need to run an external cron job to trigger a scan for expired events.
Why not just use a collection to list expired events in the other location? | 1 | 2 | 0 | 0 | I wish to create a content rule for an event such that after the expiry date of the event, i.e. its end date, it should be moved to another folder. How do I specify the content rule? Please guide me. Using Plone 4.1 | plone how to add content rule for event which after end date should be moved to another folder | 0 | 1.2 | 1 | 0 | 0 | 213
10,451,323 | 2012-05-04T14:59:00.000 | 1 | 0 | 0 | 0 | 0 | python,django | 0 | 10,451,563 | 0 | 2 | 0 | false | 1 | 0 | In well-designed Django, you should only have to edit the template. Good design provides clean separation. It's possible the developer may have been forced to do something unusual... but you could try to edit the template and see what happens (make a backup first). | 2 | 0 | 0 | 0 | Say I find some bug in a web interface. I open Firebug and discover the element and its class and id. From those I can then identify a template which contains variables, tags, and so on.
How can I move forward and reveal in which .py files these variables are filled in?
I know how it works in Lift framework: when you've found a template there are elements with attributes bound to snippets. So you can easily proceed to specific snippets and edit code.
How does it work in Django? Maybe I'm assuming the wrong process... then please point me to the right approach. | Django: How can I find methods/functions filling in the specific template | 0 | 0.099668 | 1 | 0 | 0 | 66
10,451,323 | 2012-05-04T14:59:00.000 | 1 | 0 | 0 | 0 | 0 | python,django | 0 | 10,451,982 | 0 | 2 | 0 | true | 1 | 0 | Determining template variable resolution is all about Context.
Use the URL to identify the view being invoked.
Look at the view's return and note a) the template being used, and b) any values being passed in the Context used when the template is being rendered.
Look at settings.py for the list of TEMPLATE_CONTEXT_PROCESSORS. These are routines that are called automatically and invisibly to add values to the Context being passed to the template. This is sort of a Man Behind the Curtain™ process that can really trip you up if you don't know about it.
Check to see if there are any magic template tags being called (either in the template in question, in a template it extends, or in a template that includes the template) that might be modifying the Context. Sometimes I need use an old-school django snippet called {%expr%} that can do evaluation in the template, but I always use it as close to the point of need as possible to highlight the fact it is being used.
Note that because of the way Django template variables are resolved, {{foo.something}} could be either a value or a callable method. I have serious issues with this syntax, but that's the way they wrote it. | 2 | 0 | 0 | 0 | Say, I find some bug in a web interface. I open firebug and discover element and class, id of this element. By them I can then identify a template which contains variables, tags and so on.
How can I move forward and reveal in which .py files these variables are filled in?
I know how it works in Lift framework: when you've found a template there are elements with attributes bound to snippets. So you can easily proceed to specific snippets and edit code.
How does it work in Django? Maybe I'm assuming the wrong process... then please point me to the right approach. | Django: How can I find methods/functions filling in the specific template | 0 | 1.2 | 1 | 0 | 0 | 66
10,463,702 | 2012-05-05T16:19:00.000 | 1 | 0 | 0 | 0 | 0 | python,tree,wxpython | 0 | 10,464,339 | 0 | 2 | 0 | false | 0 | 1 | I haven't used wxPython, so I don't have much idea about it. But in general, what you can do is call a callback function whenever a key is pressed and record the time when the key was pressed. Save it somewhere. When the next key is pressed, get the time again and compare both times: if there's no significant delay (you can decide the threshold), treat the two keys as pressed simultaneously (although they were not). | 1 | 0 | 0 | 0 | I am creating a Project Manager using wxPython; it has a splitter window. On one side is a tree that shows the names of, and opens, the files, and on the other side is a textctrl that is used to edit the file.
One problem I am having is that I would like it to go back 4 spaces when SHIFT and TAB are pressed; I have code working that adds 4 spaces when TAB is pressed.
I also have a problem that when I add a file that is in a different folder from my program's cwd, the tree adds a new node and the file appears under this node, and I am struggling to get the tree to save to a file.
Also I would like to know how to add an icon to an item in the tree from an external png file.
I would appreciate any help that could be given with either of these problems. | Multiple key press detection wxPython | 0 | 0.099668 | 1 | 0 | 0 | 1,518 |
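The Shift+Tab behaviour splits into wx wiring and plain string logic. Here is a hedged sketch of the latter only: in wxPython you would call something like this from an EVT_KEY_DOWN handler after checking event.GetKeyCode() == wx.WXK_TAB and event.ShiftDown() (names from wx's API); the handler wiring itself is omitted.

```python
def dedent_line(line, width=4):
    """Strip at most `width` leading spaces from one line of text."""
    strip = 0
    while strip < width and strip < len(line) and line[strip] == " ":
        strip += 1
    return line[strip:]
```

The handler would replace the current line in the TextCtrl with `dedent_line(current_line)` and move the caret left by the number of spaces removed.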
10,468,669 | 2012-05-06T06:31:00.000 | 1 | 0 | 1 | 1 | 0 | python,linux,ubuntu,tkinter,pyinstaller | 0 | 10,468,962 | 0 | 1 | 0 | true | 0 | 0 | The following is reposted from my comment on the question, so that this question may be marked as answered (assuming OP is satisfied with this answer). It was originally posted as a comment because it does not answer the question directly.
The reason there aren't many tutorials on how to do this on Linux is because there is not much point to doing this on Linux, as the actual Python files can be turned into a package with a set of dependencies and everything. Perhaps you should try that instead; the PyInstaller approach is only worth it if you have a valid reason not to use packages (and such reasons do exist). | 1 | 2 | 0 | 1 | I have been searching for tutorials on how to use PyInstaller and can't find one that I can follow. I have been researching this for hours on end and can't find anything that helps me. I am using Linux and was wondering if anyone can help me out from the very beginning, because there is not one part I understand about this. I also have three files that make up one program, and I am also using Tkinter, so I don't know if that makes it more difficult. | Python PyInstaller Ubuntu Troubles | 0 | 1.2 | 1 | 0 | 0 | 731
10,479,040 | 2012-05-07T08:37:00.000 | 5 | 0 | 0 | 0 | 0 | python,hard-drive | 0 | 10,479,073 | 0 | 1 | 0 | true | 0 | 0 | On Linux, you can open('/dev/sdX', 'rb') and read from it like any other file.
However, the easier way is using the dd commandline utility (but it will only work properly if both disks are exactly the same). | 1 | 4 | 0 | 0 | I want to read bytes directly off a hard drive, preferably using python. How can I do this, provided it is even possible. Also, can I write directly to a hard drive, and how?
I want to do this to make a complete clone of a hard drive, and then restore from that backup. I'm quite certain there are easier ways to get what I want done, and this is partly simply curiosity ;) | Python - Reading directly from hard drive | 1 | 1.2 | 1 | 0 | 0 | 1,762
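The chunked copy loop behind both suggestions can be sketched generically. On Linux you would pass open('/dev/sdX', 'rb') and open('/dev/sdY', 'wb') (root privileges required, and the destination must be at least as large as the source); written against plain file objects, the same loop is testable in memory.

```python
def clone(src, dst, chunk_size=1024 * 1024):
    """Copy src to dst in fixed-size chunks; return total bytes copied.

    Equivalent in spirit to `dd if=/dev/sdX of=/dev/sdY bs=1M`.
    """
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:                 # b"" signals end of the device/file
            break
        dst.write(chunk)
        total += len(chunk)
    dst.flush()
    return total
```

Reading in large fixed-size chunks matters here: reading a whole disk into memory at once is not an option, and tiny reads make the copy crawl.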
10,510,450 | 2012-05-09T05:43:00.000 | 0 | 1 | 0 | 0 | 1 | python,mercurial,path,pythonpath | 1 | 10,510,460 | 0 | 1 | 0 | false | 0 | 0 | "site-package"? Did you mean "site-packages"? | 1 | 2 | 0 | 0 | I am having weird behaviors in my Python environment on Mac OS X Lion.
Apps like Sublime Text (based on Python) don't work (I initially thought it was an app bug),
and now, after I installed hg-git, I get the following error every time I launch HG in the terminal:
*** failed to import extension hggit from /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-package/hggit/: [Errno 2] No such file or directory
So it probably is a Python environment set up error. Libraries and packages are there in place.
Any idea how to fix it?
Notes:
I installed hg-git following hg-git web site directions.
I also added the exact path to the extension in my .hgrc file as: hggit = /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-package/hggit/
Python was installed using official package on Python web site.
Echoing $PYTHONPATH in the terminal doesn't print anything | Python: Failed to import extension - Errno 2 | 0 | 0 | 1 | 0 | 1 | 2,128 |
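Since the traceback points at site-package (singular) while the real directory is site-packages, one quick sanity check is to ask the interpreter itself where third-party packages live before wiring a path into .hgrc. A small sketch; whether an hggit directory exists there depends on your install.

```python
import os
import sysconfig

# Where the running interpreter actually installs third-party packages.
site_packages = sysconfig.get_paths()["purelib"]
print(site_packages)

# If this prints False, the path in .hgrc points at the wrong place.
print(os.path.isdir(os.path.join(site_packages, "hggit")))
```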
10,513,759 | 2012-05-09T09:54:00.000 | 2 | 0 | 0 | 0 | 0 | python,django,twitter | 0 | 10,516,808 | 0 | 3 | 0 | false | 1 | 0 | You could just extract the code into your own project and that will work. But the benefit of using an open source library is that there's a good chance that when Twitter or Social Network X changes its API, the library, if popular, would get updated, as opposed to you needing to make the change. | 1 | 1 | 0 | 0 | I use Django-social-auth to authenticate the users of a Django project. So I guess I have all the necessary information about a Twitter user to make a post on their behalf on their Twitter account. So do I really need to install a new app? Or with the info at hand how would I do that? Isn't it just a matter of posting to a Twitter API with the relevant info? | Django users post to twitter | 0 | 0.132549 | 1 | 0 | 0 | 783 |
10,519,454 | 2012-05-09T15:42:00.000 | 0 | 1 | 0 | 1 | 0 | java,python,perl,client | 0 | 10,519,519 | 0 | 1 | 0 | false | 1 | 0 | What you are talking about is Web Services. A corollary to this is XML and SOAP. In Java, Python, C#, C++... any language, you can create a Web Service that conforms to a standard pattern. Using NetBeans (Oracle's Java IDE) it is easy to create Java web services. Otherwise, use Google to search for "web services tutorial [your programming language]". | 1 | 3 | 0 | 1 | I have a java application as server (installed on Tomcat/Apache) and another java application as client. The client's task is to get some arguments, pass them to the server, and call an adequate method on the server to execute.
I want to have the client in other languages like Perl, Python or Tcl. So, I need to know how to establish the communication and what the communication structure is. I'm not looking for code samples but rather to know more about how to execute Java code via other languages. I have tried to google it, but I mostly found specific questions/answers and not a tutorial or something like that. I wonder if I should search for a specific expression? Do you know any tutorial or site that explains such structures, covering all aspects?
Many thanks
Bye. | Execute java methods via a Python or Perl client | 1 | 0 | 1 | 0 | 0 | 114 |
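To make the "client builds an XML request and POSTs it to the Java service" idea concrete, here is a hedged Python sketch that only constructs a SOAP 1.1 envelope. The service namespace, method name, and parameters are made up for illustration; actually sending the string (e.g. with urllib to the Tomcat endpoint, Content-Type: text/xml) is left out.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(method, params, service_ns="urn:example-service"):
    """Build a SOAP 1.1 envelope calling `method` with `params` (a dict)."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
    call = ET.SubElement(body, "{%s}%s" % (service_ns, method))
    for name, value in params.items():
        arg = ET.SubElement(call, name)
        arg.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical remote method "addNumbers" with two arguments.
request = build_soap_request("addNumbers", {"a": 2, "b": 3})
```

The same envelope can be produced by any language with an XML library, which is exactly why web services suit a mixed Java/Perl/Python/Tcl setup.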
10,522,290 | 2012-05-09T18:49:00.000 | 1 | 0 | 0 | 0 | 0 | python,sql,excel,ms-access | 0 | 10,522,435 | 0 | 4 | 0 | false | 0 | 0 | While data concerning the frequency of individual tags should be very simple to construct, data concerning the relationships between tags is very difficult and falls under the realm of data mining. Here is what I would do, at a very high level, assuming you have a response table, a tag table, and a response_tag table.
Create a summary table that lists each unique combination of response tags, along with a column that will indicate how many times this combination occurs. The table structure should be something like combination (id, count), combination_tags(combination_id, tag_id). Use a procedural statement (ORM or SQL Cursors) to populate the table, and then use ad-hoc queries to sample the data.
This is not a simple operation, but it will get you results using a simple RDBMS, without having to use enterprise level data mining solutions. | 3 | 2 | 0 | 0 | I have thousands of survey responses that have been tagged according to the content of the response. Each response can have one tag or many (up to 20), and the tags are independent of one another rather than being structured into category-subcategory or something.
I want to be able to do analysis like the following:
How many instances of a given tag are there?
Which tags occur most frequently overall?
Where tag X is present, which other tags appear along with it most frequently?
List of all tags with the count of each next to it
Select subsets of the data to do similar analysis on (by country, for example)
The people I'm working with have traditionally tackled everything in Excel (general business strategy consulting work), and that won't work in this case. Their response is to change the project framework to something that Excel can handle in a pivot table, but it would be so much better if we could use more robust tools that allow for more sophisticated relationships.
I've been learning SQLite but am starting to fear that the kinds of things I want to do will be pretty complicated.
I've also been learning Python (for unrelated reasons) and am kind of wondering if an ORM tool and some Python code might be the better way to go.
And then there's something like Access (which I don't have but would possibly be willing to get if it's a sweet spot for this kind of thing).
In summary, I'd love to know how hard these kinds of analysis would be to do overall and which tools would best be suited for the job. I'm completely open to the idea that I'm thinking about some of or all of the problem in a way that's backwards and would welcome any advice on any aspect of what I've written here. | Best approach to doing analysis of sets of tags? | 0 | 0.049958 | 1 | 0 | 0 | 144 |
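To make the summary-table idea from the answer above concrete, here is a small sketch using Python's built-in sqlite3 module with the response/tag/response_tag schema the answer assumes (table names, column names, and the sample data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE response (id INTEGER PRIMARY KEY, country TEXT);
    CREATE TABLE tag (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE response_tag (response_id INTEGER, tag_id INTEGER);
""")
conn.executemany("INSERT INTO response VALUES (?, ?)",
                 [(1, "US"), (2, "US"), (3, "FR")])
conn.executemany("INSERT INTO tag VALUES (?, ?)",
                 [(1, "price"), (2, "quality"), (3, "support")])
conn.executemany("INSERT INTO response_tag VALUES (?, ?)",
                 [(1, 1), (1, 2), (2, 1), (3, 1), (3, 3)])

# How many instances of each tag (most frequent first).
freq = conn.execute("""
    SELECT t.name, COUNT(*) AS n
    FROM response_tag rt JOIN tag t ON t.id = rt.tag_id
    GROUP BY t.name ORDER BY n DESC
""").fetchall()

# Which tags co-occur with tag X (here tag_id 1, 'price'):
# self-join response_tag on the shared response.
cooc = conn.execute("""
    SELECT t.name, COUNT(*) AS n
    FROM response_tag a
    JOIN response_tag b ON a.response_id = b.response_id
                       AND b.tag_id != a.tag_id
    JOIN tag t ON t.id = b.tag_id
    WHERE a.tag_id = 1
    GROUP BY t.name ORDER BY n DESC
""").fetchall()
```

Restricting either query to a subset (e.g. by country) is just an extra JOIN on response plus a WHERE clause.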
10,522,290 | 2012-05-09T18:49:00.000 | 1 | 0 | 0 | 0 | 0 | python,sql,excel,ms-access | 0 | 10,522,611 | 0 | 4 | 0 | false | 0 | 0 | You have quite a small dataset, so you do not really need any kind of ORM; just load all the data in Python and build your report from it.
SQL as a language is horrible for more complex data analysis (e.g. where you really want to cross-tabulate things, etc.).
I want to be able to do analysis like the following:
How many instances of a given tag are there?
Which tags occur most frequently overall?
Where tag X is present, which other tags appear along with it most frequently?
List of all tags with the count of each next to it
Select subsets of the data to do similar analysis on (by country, for example)
The people I'm working with have traditionally tackled everything in Excel (general business strategy consulting work), and that won't work in this case. Their response is to change the project framework to something that Excel can handle in a pivot table, but it would be so much better if we could use more robust tools that allow for more sophisticated relationships.
I've been learning SQLite but am starting to fear that the kinds of things I want to do will be pretty complicated.
I've also been learning Python (for unrelated reasons) and am kind of wondering if an ORM tool and some Python code might be the better way to go.
And then there's something like Access (which I don't have but would possibly be willing to get if it's a sweet spot for this kind of thing).
In summary, I'd love to know how hard these kinds of analysis would be to do overall and which tools would best be suited for the job. I'm completely open to the idea that I'm thinking about some of or all of the problem in a way that's backwards and would welcome any advice on any aspect of what I've written here. | Best approach to doing analysis of sets of tags? | 0 | 0.049958 | 1 | 0 | 0 | 144 |
10,522,290 | 2012-05-09T18:49:00.000 | 0 | 0 | 0 | 0 | 0 | python,sql,excel,ms-access | 0 | 10,523,469 | 0 | 4 | 0 | false | 0 | 0 | Go with SQL! It is very powerful for data analysis. It will allow you to ask questions in the future about the data. Questions that you have not yet thought of.
Although SQL as a language may seem a bit cumbersome, it is much easier to use than a "real" programming language. In your case, SQL interfaces to Excel, so users can get access to the data through a tool they are familiar with.
If you do go with SQL, a real database (SQLite) is a better solution than MS Access.
I feel strongly enough about SQL as an analysis tool that I wrote a book on the subject, "Data Analysis Using SQL and Excel". You might check out the Amazon comments (http://www.amazon.com/Data-Analysis-Using-SQL-Excel/dp/0470099518/ref=pd_sim_b_1) to understand how effective it can be. | 3 | 2 | 0 | 0 | I have thousands of survey responses that have been tagged according to the content of the response. Each response can have one tag or many (up to 20), and the tags are independent of one another rather than being structured into category-subcategory or something.
I want to be able to do analysis like the following:
How many instances of a given tag are there?
Which tags occur most frequently overall?
Where tag X is present, which other tags appear along with it most frequently?
List of all tags with the count of each next to it
Select subsets of the data to do similar analysis on (by country, for example)
The people I'm working with have traditionally tackled everything in Excel (general business strategy consulting work), and that won't work in this case. Their response is to change the project framework to something that Excel can handle in a pivot table, but it would be so much better if we could use more robust tools that allow for more sophisticated relationships.
I've been learning SQLite but am starting to fear that the kinds of things I want to do will be pretty complicated.
I've also been learning Python (for unrelated reasons) and am kind of wondering if an ORM tool and some Python code might be the better way to go.
And then there's something like Access (which I don't have but would possibly be willing to get if it's a sweet spot for this kind of thing).
In summary, I'd love to know how hard these kinds of analysis would be to do overall and which tools would best be suited for the job. I'm completely open to the idea that I'm thinking about some of or all of the problem in a way that's backwards and would welcome any advice on any aspect of what I've written here. | Best approach to doing analysis of sets of tags? | 0 | 0 | 1 | 0 | 0 | 144 |
10,532,642 | 2012-05-10T11:02:00.000 | 2 | 0 | 0 | 0 | 0 | c++,python | 0 | 10,532,867 | 0 | 2 | 0 | false | 0 | 0 | There are many, many factors that will influence the available bandwidth: your hardware (network card, router, WiFi stability, cabling), what you are doing (other downloads, machine load) and what is happening elsewhere (bandwidth to target server, ISP issues, etc.). And all of those can change at any moment in time to make things more interesting. The end result is that there is no useful way to calculate the available bandwidth. The best you can do is to try downloading (or uploading, depending on what direction you are interested in) some test data to the target server and see what bandwidth you can use. Keep in mind that TCP speeds up over time, so you need to run your test for a while to get the real available bandwidth. | 1 | 0 | 0 | 0 | I am trying to get the available bandwidth from my PC.
Suppose I'm streaming a video at 2 Mbps and my network card is of 100 Mbps, my program should tell me that 98 Mbps is available.
Is it easy to do in C++ or Python? And how can I find the available bandwidth using either of the suggested programming languages? Any help will be appreciated. | Find available bandwidth from Python or C++ | 1 | 0.197375 | 1 | 0 | 0 | 832 |
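The "download some test data and time it" advice can be sketched as a small Python helper. The URL in the commented example is a placeholder; real numbers depend entirely on your network, and because TCP ramps up over time, max_bytes should be large enough for the transfer to run for a while.

```python
import time

def measure_throughput(stream, chunk_size=64 * 1024, max_bytes=5 * 1024 * 1024):
    """Read up to max_bytes from a file-like object.

    Returns (bytes_read, seconds, megabits_per_second).
    """
    start = time.time()
    total = 0
    while total < max_bytes:
        chunk = stream.read(min(chunk_size, max_bytes - total))
        if not chunk:
            break
        total += len(chunk)
    elapsed = max(time.time() - start, 1e-9)  # avoid division by zero
    mbps = total * 8 / elapsed / 1_000_000
    return total, elapsed, mbps

# Real usage (placeholder URL, blocks on the network):
# import urllib.request
# with urllib.request.urlopen("http://example.com/testfile.bin") as resp:
#     print(measure_throughput(resp))
```

Note this measures the bandwidth actually achieved to one server at one moment, not a fixed "98 of 100 Mbps" figure, which, per the answer, does not really exist.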
10,560,041 | 2012-05-12T00:18:00.000 | 1 | 0 | 0 | 0 | 0 | python,listbox,tkinter | 0 | 10,560,466 | 0 | 1 | 0 | false | 0 | 1 | Bind to the event <<ListboxSelect>> instead of <1>; this event fires after the current selection has been updated.
If you genuinely need the binding to work literally on a press of the mouse button, you will have to rearrange the order of the bind tags for the widget. | 1 | 1 | 0 | 0 | After creating a simple window/widget layout with Page (page.sourceforge.net)
I found that the listbox curselection() call returns the proper index when releasing Button-1.
When pressed, it returns the previous index (the item we just left).
Because of some timer activities I'd like to get the clicked index at click time, instead of release time. Can somebody tell me how I could do that? Thank you | Getting Tkinter listbox item when hit Button-1 | 0 | 0.197375 | 1 | 0 | 0 | 875 |
10,561,426 | 2012-05-12T05:52:00.000 | 0 | 0 | 0 | 0 | 0 | python,plone | 0 | 10,561,638 | 0 | 1 | 0 | true | 0 | 0 | Products.PressRoom is the answer, yahoo! :) | 1 | 0 | 0 | 0 | Hi, I have tried UpfrontContacts and collective.contacts but couldn't get the zope.conf to build. Are there any for Plone 4.1? Please let me know the exact steps to implement them as well. I also tried Membrane, but I don't know how to use it. | Any addons for contacts for Plone 4.1 | 0 | 1.2 | 1 | 0 | 0 | 78 |
10,569,853 | 2012-05-13T06:47:00.000 | 0 | 0 | 1 | 0 | 0 | python,list,sorting,dictionary | 0 | 10,569,942 | 0 | 2 | 0 | false | 0 | 0 | have you already tried
sorted(list_for_sorting, key=dictionary_you_wrote.__getitem__)
? | 1 | 3 | 1 | 0 | I'm trying to make a sorting system with card ranks and their values are obtained from a separate dictionary. In a simple deck of 52 cards, we have 2 to Ace ranks, in this case I want a ranking system where 0 is 10, J is 11, Q is 12, K is 13, A is 14 and 2 is 15 where 2 is the largest valued rank. The thing is, if there is a list where I want to sort rank cards in ASCENDING order according to the numbering system, how do I do so?
For example, here is a list, [3,5,9,7,J,K,2,0], I want to sort the list into [3,5,7,9,0,J,K,2]. I also made a dictionary for the numbering system as {'A': 14, 'K': 13, 'J': 11, 'Q': 12, '0': 10, '2': 15}.
THANKS | Sorting a list with elements containing dictionary values | 0 | 0 | 1 | 0 | 0 | 198 |
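Applying the suggested sorted(..., key=...) call to the exact data in the question: the posted dictionary only covers the special cards, so a small key function that falls back to int() for the plain number cards is one way to complete it.

```python
rank = {'0': 10, 'J': 11, 'Q': 12, 'K': 13, 'A': 14, '2': 15}

def card_value(card):
    # Special cards come from the dictionary; 3-9 sort by face value.
    return rank[card] if card in rank else int(card)

hand = ['3', '5', '9', '7', 'J', 'K', '2', '0']
print(sorted(hand, key=card_value))
# → ['3', '5', '7', '9', '0', 'J', 'K', '2']
```

Note the dictionary's .\_\_getitem\_\_ alone (as suggested) would raise KeyError on '3'-'9' unless those ranks are added to the dictionary too.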
10,573,217 | 2012-05-13T16:13:00.000 | 3 | 0 | 0 | 1 | 0 | python,google-app-engine,backend,background-thread | 0 | 10,573,307 | 0 | 1 | 0 | false | 1 | 0 | There is a combobox in the top-left corner of the versions/backends page of your application; switch to the backend there and you will see the backend logs. | 1 | 2 | 0 | 0 | I'm writing an app that writes log entries from a BackgroundThread object on a backend instance. My problem is that I don't know how to access the logs.
The docs say, "A background thread's os.environ and logging entries are independent of those of the spawning thread," and indeed, the log entries don't show up with the backend instance's entries on the admin console. But the admin console doesn't offer an option for showing the background threads.
appcfg request_logs doesn't seem to be the answer either.
Does anybody know? | Where are the logs from BackgroundThreads on App Engine? | 0 | 0.53705 | 1 | 0 | 0 | 354 |
10,578,763 | 2012-05-14T07:01:00.000 | 0 | 0 | 0 | 0 | 0 | python,webcam | 0 | 10,578,877 | 0 | 1 | 0 | true | 0 | 0 | I'm not really sure what you want to happen, but if you're going to implement this kind of feature on a website I think you should use a Flash application (or, if possible, HTML5) instead of Python. Although you're using Python to develop your web app, it only runs on the server side, while the feature you want is on the client side. So it's more feasible to capture the video with Flash on the client, upload it to your server, and then let your Python code do the rest of the processing on the server side. | 1 | 3 | 0 | 0 | I have written a program using Python and OpenCV where I perform operations on a video stream in run time. It works fine. Now if I want to publish it on a website where someone can see this using their browser and webcam, how do I proceed? | Access webcam over internet using Python | 0 | 1.2 | 1 | 0 | 1 | 878 |
10,583,195 | 2012-05-14T12:19:00.000 | 1 | 0 | 1 | 0 | 0 | python,oop | 0 | 10,583,784 | 0 | 3 | 1 | false | 0 | 0 | I have a .csv file
You're in luck; CSV support is built right in, via the csv module.
Do you suggest creating a class dictionary for accessing every instance?
I don't know what you think you mean by "class dictionary". There are classes, and there are dictionaries.
But I still need to provide a name to every single instance, right? How can I do that dynamically? Probably the best thing would be to use the unique ID, read from the file, as the instance name, but I think that numbers can't be used as instance names, can they?
Numbers can't be instance names, but they certainly can be dictionary keys.
You don't want to create "instance names" dynamically anyway (assuming you're thinking of having each in a separate variable or something gross like that). You want a dictionary. So just let the IDs be keys.
I miss pointers! :(
I really, honestly, can't imagine how you expect pointers to help here, and I have many years of experience with C++. | 1 | 0 | 1 | 0 | I apologise if this question has already been asked.
I'm really new to Python programming, and what I need to do is this:
I have a .csv file in which each line represents a person and each column represents a variable.
This .csv file comes from an agent-based C++ simulation I have done.
Now, I need to read each line of this file and for each line generate a new instance of the class Person(), passing as arguments every variable line by line.
My problem is this: what is the most pythonic way of generating these agents while keeping their unique ID (which is one of the attributes I want to read from the file)? Do you suggest creating a class dictionary for accessing every instance? But I still need to provide a name to every single instance, right? How can I do that dynamically? Probably the best thing would be to use the unique ID, read from the file, as the instance name, but I think that numbers can't be used as instance names, can they? I miss pointers! :(
I am sure there is a pythonic solution I cannot see, as I still have to rewire my mind a bit to think in pythonic ways...
Thank you very much, any help would be greatly appreciated!
And please remember that this is my first project in python, so go easy on me! ;)
EDIT:
Thank you very much for your answers, but I still haven't got an answer on the main point: how to create an instance of my class Person() for every line in my csv file. I would like to do that automatically! Is it possible?
Why do I need this? Because I need to create networks of these people with networkx and I would like to have "agents" linked in a network structure, not just dictionary items. | How can I dynamically generate class instances with single attributes read from flat file in Python? | 1 | 0.066568 | 1 | 0 | 0 | 725 |
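A minimal sketch of the dictionary-keyed-by-unique-ID approach recommended above, using the built-in csv module. The column names id/age/wealth are made up; csv.DictReader will pick up whatever header row the C++ simulation wrote.

```python
import csv
import io

class Person(object):
    def __init__(self, uid, **attrs):
        self.uid = uid
        # Attach every remaining CSV column as an attribute.
        for name, value in attrs.items():
            setattr(self, name, value)

def load_people(csv_file):
    """Return {unique_id: Person} for every line of the file."""
    people = {}
    for row in csv.DictReader(csv_file):
        uid = int(row.pop("id"))
        people[uid] = Person(uid, **row)
    return people

# Stand-in for open("simulation.csv"); the instances are "named" by ID.
data = io.StringIO("id,age,wealth\n1,30,100\n2,45,250\n")
people = load_people(data)
```

With networkx, the same integer IDs can then serve as graph nodes, and people[uid] resolves an ID back to its Person instance, so agents stay linked in a network structure rather than being bare dictionary items.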
10,595,058 | 2012-05-15T06:13:00.000 | 1 | 0 | 1 | 0 | 0 | python,django,dictionary,memcached,redis | 0 | 10,597,896 | 0 | 4 | 1 | false | 1 | 0 | 5 MB isn't that large. You could keep it in memory, in process, and I recommend that you do, until it becomes clear from profiling and testing that that approach isn't meeting your needs. Always do the simplest thing possible.
Socket communication doesn't of itself introduce much overhead. You could probably pare it back a little by using a Unix domain socket. In any case, if you're not keeping your data in process, you're going to have to talk over some kind of pipe. | 3 | 10 | 0 | 0 | I have a big key-value pair dump that I need to look up from my Django/Python webapp.
So, I have the following options:
Store it as json dump and load it as a python dict.
Store it in a dump.py and import the dict from it.
Use some targeted systems for this problem: [ Are these really meant for this usecase ? ]
Mem-cache
Redis
Any other option ?
Which from above is the right way to go ?
How will you compare memcache and redis ?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/memcache adds the overhead of hitting a socket every time, so dump.py would be better: it would take time to load into memory, but after that it would only do memory lookups.
My dictionary needs to be updated every day. Considering that, dump.py would be a problem, since we would have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in Redis and memcache.
One uses a system like Redis only when there is a large amount of data that has to be looked up very frequently; in that case the socket adds overhead, so how do we achieve the advantage?
Please share your experiences on this ! | Maintain a large dictionary in memory for Django-Python? | 0 | 0.049958 | 1 | 0 | 0 | 2,666 |
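A sketch of the "dump.py / JSON dict in process" option with the daily-update concern addressed: keep the dict at module level and reload it only when the file's mtime changes, so no Django restart is needed. The file name is a placeholder.

```python
import json
import os

# Module-level cache: one load per process until the file changes.
_cache = {"data": None, "mtime": None}

def lookup(key, path="dump.json"):
    """Return dump[key], reloading the file only when it changed on disk."""
    mtime = os.path.getmtime(path)
    if _cache["data"] is None or mtime != _cache["mtime"]:
        with open(path) as f:
            _cache["data"] = json.load(f)
        _cache["mtime"] = mtime
    return _cache["data"].get(key)
```

After the first load, a lookup costs one os.path.getmtime stat call plus an in-memory dict access, i.e. no socket round-trip at all.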
10,595,058 | 2012-05-15T06:13:00.000 | 1 | 0 | 1 | 0 | 0 | python,django,dictionary,memcached,redis | 0 | 10,595,177 | 0 | 4 | 1 | false | 1 | 0 | In the past, for a similar problem, I have used the idea of a dump.py. I would think that all of the other data structures would require a layer to convert objects of one kind into Python objects. However, I would still think that this depends on the data size and the amount of data you are handling. Memcache and Redis should have better indexing and lookup when it comes to really large data sets and things like regex-based lookup. So my recommendation would be:
json -- if you are serving the data over HTTP to some other service
python file -- if the data structure is not too large and you don't need any special kind of lookups
memcache and redis -- if the data becomes really large | 3 | 10 | 0 | 0 | I have a big key-value pair dump that I need to look up from my Django/Python webapp.
So, I have the following options:
Store it as json dump and load it as a python dict.
Store it in a dump.py and import the dict from it.
Use some targeted systems for this problem: [ Are these really meant for this usecase ? ]
Mem-cache
Redis
Any other option ?
Which from above is the right way to go ?
How will you compare memcache and redis ?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/memcache adds the overhead of hitting a socket every time, so dump.py would be better: it would take time to load into memory, but after that it would only do memory lookups.
My dictionary needs to be updated every day. Considering that, dump.py would be a problem, since we would have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in Redis and memcache.
One uses a system like Redis only when there is a large amount of data that has to be looked up very frequently; in that case the socket adds overhead, so how do we achieve the advantage?
Please share your experiences on this ! | Maintain a large dictionary in memory for Django-Python? | 0 | 0.049958 | 1 | 0 | 0 | 2,666 |
10,595,058 | 2012-05-15T06:13:00.000 | 2 | 0 | 1 | 0 | 0 | python,django,dictionary,memcached,redis | 0 | 10,595,172 | 0 | 4 | 1 | false | 1 | 0 | Memcached, though a great product, is trumped by Redis in my book. It offers lots of things that memcached doesn't, like persistence.
It also offers more complex data structures like hashes. What is your particular data dump? How big is it, and how large / what type of values? | 3 | 10 | 0 | 0 | I have a big key-value pair dump that I need to look up from my Django/Python webapp.
So, I have the following options:
Store it as json dump and load it as a python dict.
Store it in a dump.py and import the dict from it.
Use some targeted systems for this problem: [ Are these really meant for this usecase ? ]
Mem-cache
Redis
Any other option ?
Which from above is the right way to go ?
How will you compare memcache and redis ?
Update:
My dictionary is about 5 MB in size and will grow over time.
Using Redis/memcache adds the overhead of hitting a socket every time, so dump.py would be better: it would take time to load into memory, but after that it would only do memory lookups.
My dictionary needs to be updated every day. Considering that, dump.py would be a problem, since we would have to restart the Django server to reload it, whereas I guess the change would be reflected on the fly in Redis and memcache.
One uses a system like Redis only when there is a large amount of data that has to be looked up very frequently; in that case the socket adds overhead, so how do we achieve the advantage?
Please share your experiences on this ! | Maintain a large dictionary in memory for Django-Python? | 0 | 0.099668 | 1 | 0 | 0 | 2,666 |
10,597,284 | 2012-05-15T08:55:00.000 | 6 | 0 | 1 | 1 | 0 | python,windows,makefile,cygwin,installation | 0 | 10,607,864 | 0 | 6 | 0 | false | 0 | 0 | @spacediver is right on. Run cygwin's setup.exe again and when you get to the packages screen make sure you select make and python (and any other libs/apps you may need - perhaps gcc or g++). | 5 | 20 | 0 | 0 | I have installed the Cygwin terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library.
Does Someone know how to install these components in cygwin? | install python and make in cygwin | 0 | 1 | 1 | 0 | 0 | 37,080 |
10,597,284 | 2012-05-15T08:55:00.000 | 12 | 0 | 1 | 1 | 0 | python,windows,makefile,cygwin,installation | 0 | 10,597,334 | 0 | 6 | 0 | false | 0 | 0 | Look into cygwin native package manager, devel category. You should find make and python there. | 5 | 20 | 0 | 0 | I have installed Cygwin Terminal in OS Windows. But I need to install also python and make in cygwin.All of these programs are needed to run petsc library.
Does Someone know how to install these components in cygwin? | install python and make in cygwin | 0 | 1 | 1 | 0 | 0 | 37,080 |
10,597,284 | 2012-05-15T08:55:00.000 | 7 | 0 | 1 | 1 | 0 | python,windows,makefile,cygwin,installation | 0 | 19,168,003 | 0 | 6 | 0 | false | 0 | 0 | After running into this problem myself, I was overlooking all of the relevant answers saying to check the setup.exe again. This was the solution for me; there are a few specific things to check.
Check /bin for "make.exe". If it's not there, you have not installed it correctly
Run the setup.exe. Don't be afraid, as new package installs append to your installation and do not overwrite it
In the setup.exe, make sure you run the install from the Internet and NOT your local folder. This was where I was running into problems. Search "make" and make sure you select to Install it, do not leave this as "Default". | 5 | 20 | 0 | 0 | I have installed Cygwin Terminal in OS Windows. But I need to install also python and make in cygwin.All of these programs are needed to run petsc library.
Does Someone know how to install these components in cygwin? | install python and make in cygwin | 0 | 1 | 1 | 0 | 0 | 37,080 |
10,597,284 | 2012-05-15T08:55:00.000 | 0 | 0 | 1 | 1 | 0 | python,windows,makefile,cygwin,installation | 0 | 58,692,435 | 0 | 6 | 0 | false | 0 | 0 | In my case, it happened because Python was not installed properly: python.exe was referenced in the shell, but the file couldn't be found because it belonged to a different system.
Please check that Cygwin's Python is properly installed. | 5 | 20 | 0 | 0 | I have installed the Cygwin terminal on Windows, but I also need to install python and make in Cygwin. All of these programs are needed to run the PETSc library.
Does Someone know how to install these components in cygwin? | install python and make in cygwin | 0 | 0 | 1 | 0 | 0 | 37,080 |
10,597,284 | 2012-05-15T08:55:00.000 | 5 | 0 | 1 | 1 | 0 | python,windows,makefile,cygwin,installation | 0 | 43,129,128 | 0 | 6 | 0 | false | 0 | 0 | Here is a command line version to install python in cygwin
wget rawgit.com/transcode-open/apt-cyg/master/apt-cyg
install apt-cyg /bin
apt-cyg install python | 5 | 20 | 0 | 0 | I have installed Cygwin Terminal in OS Windows. But I need to install also python and make in cygwin.All of these programs are needed to run petsc library.
Does Someone know how to install these components in cygwin? | install python and make in cygwin | 0 | 0.16514 | 1 | 0 | 0 | 37,080 |
10,607,350 | 2012-05-15T19:16:00.000 | 0 | 0 | 0 | 0 | 0 | python,pickle | 0 | 10,608,972 | 0 | 2 | 0 | false | 0 | 0 | Metaprogramming is strong in Python; Python classes are extremely malleable. You can alter them after declaration all the way you want, though it's best done in a metaclass (decorator). More than that, instances are malleable, independently of their classes.
A 'reference to a place' is often simply a string. E.g. a reference to object's field is its name. Assume you have multiple node references inside your node object. You could have something like {persistent_id: (object, field_name),..} as your unresolved references table, easy to look up. Similarly, in lists of nodes 'references to places' are indices.
BTW, could you use a key-value database for graph storage? You'd be able to pull nodes by IDs without waiting. | 2 | 1 | 1 | 0 | I want to perform serialisation of some object graph in a modular way. That is, I don't want to serialize the whole graph. The reason is that this graph is big. I can keep timestamped versions of some parts of the graph, and I can do some lazy access to postpone loading of the parts I don't need right now.
I thought I could manage this with metaprogramming in Python. But it seems that metaprogramming is not strong enough in Python.
Here's what I do for now. My graph is composed of several different objects. Some of them are instances of a special class. This class describes the root object to be pickled. This is where the modularity comes in. Each time I pickle something, it starts from one of those instances, and I never pickle two of them at the same time. Whenever there is a reference to another instance accessible from the root object, I replace this reference with a persistent_id, thus ensuring that I won't have two of them in the same pickling stream. The problem comes when unpickling the stream. I can find a persistent_id of an instance which is not loaded yet. When this is the case, I have to wait for the target instance to be loaded before allowing access to it. And I don't see any way to do that:
1/ I tried to build an accessor whose get method returns the target of the reference. Unfortunately, accessors must be placed in the class declaration; I can't assign them to the unpickled object.
2/ I could store somewhere the places where references have to be resolved. I don't think this is possible in Python: one can't keep a reference to a place (a field or a variable); it is only possible to keep a reference to a value.
My problem may not be clear. I'm still looking for a clear formulation. I tried other things like using explicit references which would be instances of some "Reference" class. It isn't very convenient though.
Do you have any idea how to implement modular serialisation with pickle? Would I have to change the internal behaviour of Unpickler to be able to remember the places where I need to load the remainder of the object graph? Is there another library more suitable for achieving similar results? | Modular serialization with pickle (Python) | 1 | 0 | 1 | 0 | 0 | 289 |
10,607,350 | 2012-05-15T19:16:00.000 | 0 | 0 | 0 | 0 | 0 | python,pickle | 0 | 10,608,783 | 0 | 2 | 0 | false | 0 | 0 | Here's how I think I would go about this.
Have a module level dictionary mapping persistent_id to SpecialClass objects. Every time you initialise or unpickle a SpecialClass instance, make sure that it is added to the dictionary.
Override SpecialClass's __getattr__ and __setattr__ method, so that specialobj.foo = anotherspecialobj merely stores a persistent_id in a dictionary on specialobj (let's call it specialobj.specialrefs). When you retrieve specialobj.foo, it finds the name in specialrefs, then finds the reference in the module-level dictionary.
Have a module level check_graph function which would go through the known SpecialClass instances and check that all of their specialrefs were available. | 2 | 1 | 1 | 0 | I want to perform serialisation of some object graph in a modular way. That is I don't want to serialize the whole graph. The reason is that this graph is big. I can keep timestamped version of some part of the graph, and i can do some lazy access to postpone loading of the parts i don't need right now.
I thought I could manage this with metaprogramming in Python, but it seems that metaprogramming is not strong enough in Python.
Here's what I do for now. My graph is composed of several different objects. Some of them are instances of a special class. This class describes the root object to be pickled. This is where the modularity comes in. Each time I pickle something it starts from one of those instances, and I never pickle two of them at the same time. Whenever there is a reference to another instance accessible from the root object, I replace this reference by a persistent_id, thus ensuring that I won't have two of them in the same pickling stream. The problem comes when unpickling the stream. I can find a persistent_id of an instance which is not loaded yet. When this is the case, I have to wait for the target instance to be loaded before allowing access to it. And I don't see any way to do that:
1/ I tried to build an accessor whose get methods return the target of the reference. Unfortunately, accessors must be placed in the class declaration, so I can't assign them to the unpickled object.
2/ I could store somewhere the places where references have to be resolved. I don't think this is possible in Python: one can't keep a reference to a place (a field, or a variable); it is only possible to keep a reference to a value.
My problem may not be clear. I'm still looking for a clear formulation. I tried other things like using explicit references which would be instances of some "Reference" class. It isn't very convenient though.
Do you have any idea how to implement modular serialisation with pickle ? Would i have to change internal behaviour of Unpickler to be able to remember places where i need to load the remaining of the object graph ? Is there another library more suitable to achieve similar results ? | Modular serialization with pickle (Python) | 1 | 0 | 1 | 0 | 0 | 289 |
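The module-level registry plus persistent_id bookkeeping described in the second answer can be sketched with pickle's documented hooks. SpecialClass and registry are illustrative names of my own; each "special" root gets an id, references to other registered roots become persistent ids in the stream, and they are resolved from the registry on load:

```python
import io
import pickle

registry = {}                       # persistent id -> SpecialClass instance

class SpecialClass:
    def __init__(self, pid, payload):
        self.pid = pid
        self.payload = payload
        registry[pid] = self

class ModularPickler(pickle.Pickler):
    def __init__(self, f, root):
        super().__init__(f)
        self.root = root
    def persistent_id(self, obj):
        # Emit a persistent reference for every special object except
        # the root of the current stream.
        if isinstance(obj, SpecialClass) and obj is not self.root:
            return obj.pid
        return None

class ModularUnpickler(pickle.Unpickler):
    def persistent_load(self, pid):
        # Raises KeyError if the target instance has not been loaded yet;
        # a lazier variant could return a proxy here instead.
        return registry[pid]

a = SpecialClass("a", "payload-a")
b = SpecialClass("b", ["shared", a])    # b references a

buf = io.BytesIO()
ModularPickler(buf, b).dump(b)          # a is stored only as its id
buf.seek(0)
restored = ModularUnpickler(buf).load()
print(restored.payload[1] is a)  # → True: the reference was resolved
```

The persistent_load hook is the natural place to return a lazy proxy when the target is not loaded yet, which is one way to attack the "wait for the target instance" problem.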
10,615,196 | 2012-05-16T09:00:00.000 | 9 | 0 | 1 | 0 | 0 | python,list,range | 0 | 10,615,351 | 0 | 3 | 0 | false | 0 | 0 | len([x for x in l if x > 34 and x < 566]) | 1 | 7 | 0 | 0 | I have a list of elements (integers) and what I need to do is to quickly check how many elements from this list fall within a specified range. The example is below.
range is from 34 to 566
l = [9,20,413,425]
The result is 2.
I can of course use a simple for loop for the purpose and compare each element with the min and max value (34 < x < 566) and then use a counter if the statement is true, however I think there might be a much easier way to do this, possibly with a nice one-liner. | Check how many elements from a list fall within a specified range (Python) | 0 | 1 | 1 | 0 | 0 | 2,870 |
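The list-comprehension one-liner works; a slightly leaner variant (my suggestion, not from the answer) uses a generator with a chained comparison, so no intermediate list is built:

```python
l = [9, 20, 413, 425]

# Chained comparison reads like the mathematical condition 34 < x < 566,
# and sum over a generator counts matches without building a list.
count = sum(1 for x in l if 34 < x < 566)
print(count)  # → 2
```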
10,626,766 | 2012-05-16T21:10:00.000 | 3 | 0 | 0 | 0 | 0 | python,algorithm | 0 | 10,627,590 | 0 | 2 | 0 | false | 0 | 0 | Minimax is a way of exploring the space of potential moves in a two player game with alternating turns. You are trying to win, and your opponent is trying to prevent you from winning.
A key intuition is that if it's currently your turn, a two-move sequence that guarantees you a win isn't useful, because your opponent will not cooperate with you. You try to make moves that maximize your chances of winning and your opponent makes moves that minimize your chances of winning.
For that reason, it's not very useful to explore branches from moves that you make that are bad for you, or moves your opponent makes that are good for you. | 1 | 10 | 0 | 1 | I'm quite new to algorithms and i was trying to understand the minimax, i read a lot of articles,but i still can't get how to implement it into a tic-tac-toe game in python.
Can you try to explain it to me as simply as possible, maybe with some pseudo-code or some Python code?
I just need to understand how it works. I read a lot of stuff about it and understood the basics, but I still can't get how it can return a move.
If you can, please don't link me tutorials and samples like (http://en.literateprograms.org/Tic_Tac_Toe_(Python)); I know that they are good, but I simply need an idiot-proof explanation.
thank you for your time :) | Minimax explanation "for dummies" | 1 | 0.291313 | 1 | 0 | 0 | 5,575 |
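For a concrete picture of the max/min alternation the answer describes, here is a compact plain-Python sketch of minimax for tic-tac-toe (my own illustration, not an authoritative implementation) that returns both the score and the move:

```python
# Board: list of 9 cells holding "X", "O" or None.
# "X" is the maximizing player, "O" the minimizing one.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Returns (score, move): +1 if X can force a win, -1 if O can,
    # 0 for a draw. move is None at terminal positions.
    w = winner(board)
    if w == "X":
        return 1, None
    if w == "O":
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None
    results = []
    for m in moves:
        board[m] = player                     # try the move...
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None                       # ...then undo it
        results.append((score, m))
    # X picks the maximum score, O the minimum — hence "minimax":
    # each side assumes the opponent will not cooperate.
    return max(results) if player == "X" else min(results)

# X has two in a row on top; minimax finds the winning square, index 2.
board = ["X", "X", None, "O", "O", None, None, None, None]
print(minimax(board, "X"))  # → (1, 2)
```

The "return a move" part is simply that each recursion level keeps (score, move) pairs and picks the best pair for the player to move.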
10,627,055 | 2012-05-16T21:36:00.000 | 0 | 0 | 0 | 0 | 0 | java,python,nanotime | 0 | 10,627,094 | 0 | 3 | 0 | false | 1 | 0 | Divide the output of System.nanoTime() by 10^9. This is because it is in nanoseconds, while the output of time.time() is in seconds. | 1 | 5 | 0 | 0 | java's System.nanoTime() seems to give a long: 1337203874231141000L
while python time.time() will give something like 1337203880.462787
how can i convert time.time()'s value to something match up to System.nanoTime()? | convert python time.time() to java.nanoTime() | 0 | 0 | 1 | 0 | 0 | 3,163 |
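The conversion is just a factor of 1e9 (nanoseconds per second). One caveat worth hedging: System.nanoTime() itself has an arbitrary origin, so only differences between two such values are meaningful; the sketch below assumes an epoch-based nanosecond value, as the question's example suggests:

```python
import time

java_nanos = 1337203874231141000         # example value from the question
as_seconds = java_nanos / 1e9            # nanoseconds → seconds, comparable to time.time()
print(as_seconds)                        # → 1337203874.231141

# Going the other way: time.time() on the nanosecond scale.
py_nanos = int(time.time() * 1e9)
```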
10,628,262 | 2012-05-16T23:52:00.000 | -4 | 0 | 1 | 0 | 0 | python,jupyter-notebook,ipython,jupyter | 0 | 58,837,887 | 0 | 14 | 0 | false | 0 | 0 | You can find your current working directory by 'pwd' command in jupyter notebook without quotes. | 5 | 305 | 0 | 0 | I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble? | Inserting image into IPython notebook markdown | 1 | -1 | 1 | 0 | 0 | 455,440 |
10,628,262 | 2012-05-16T23:52:00.000 | 0 | 0 | 1 | 0 | 0 | python,jupyter-notebook,ipython,jupyter | 0 | 67,960,394 | 0 | 14 | 0 | false | 0 | 0 | I never could get "insert image" into a markdown cell to work. However, the drag and drop entered the png file saved in the same directory as my notebook. It brought this text into the cell
""
Then Shift-Enter executes the cell, and the image is displayed in the notebook.
FWIW | 5 | 305 | 0 | 0 | I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble? | Inserting image into IPython notebook markdown | 1 | 0 | 1 | 0 | 0 | 455,440 |
10,628,262 | 2012-05-16T23:52:00.000 | 3 | 0 | 1 | 0 | 0 | python,jupyter-notebook,ipython,jupyter | 0 | 19,664,281 | 0 | 14 | 0 | false | 0 | 0 | minrk's answer is right.
However, I found that the images appeared broken in Print View (on my Windows machine running the Anaconda distribution of IPython version 0.13.2 in a Chrome browser)
The workaround for this was to use <img src="../files/image.png"> instead.
This made the image appear correctly in both Print View and the normal iPython editing view.
UPDATE: as of my upgrade to iPython v1.1.0 there is no more need for this workaround since the print view no longer exists. In fact, you must avoid this workaround since it prevents the nbconvert tool from finding the files. | 5 | 305 | 0 | 0 | I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble? | Inserting image into IPython notebook markdown | 1 | 0.042831 | 1 | 0 | 0 | 455,440 |
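A stdlib-only alternative (my suggestion, not from the answers) that sidesteps path problems entirely is to inline the image as a base64 data URI, so the markdown cell has no external file dependency — which should also survive tools like nbconvert, though rendering ultimately depends on the frontend:

```python
import base64

def img_markdown(path):
    # Inline the file as a base64 data URI; the returned tag can be
    # pasted straight into a markdown cell.
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    return '<img src="data:image/png;base64,{}">'.format(payload)
```

Call img_markdown("image.png") in a code cell and paste the returned tag into the markdown cell.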
10,628,262 | 2012-05-16T23:52:00.000 | 10 | 0 | 1 | 0 | 0 | python,jupyter-notebook,ipython,jupyter | 0 | 48,560,308 | 0 | 14 | 0 | false | 0 | 0 | Last version of jupyter notebook accepts copy/paste of image natively | 5 | 305 | 0 | 0 | I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble? | Inserting image into IPython notebook markdown | 1 | 1 | 1 | 0 | 0 | 455,440 |
10,628,262 | 2012-05-16T23:52:00.000 | 62 | 0 | 1 | 0 | 0 | python,jupyter-notebook,ipython,jupyter | 0 | 55,623,116 | 0 | 14 | 0 | false | 0 | 0 | Getting an image into Jupyter NB is a much simpler operation than most people have alluded to here.
Simply create an empty Markdown cell.
Then drag-and-drop the image file into the empty Markdown cell.
The Markdown code that will insert the image then appears.
For example, a string shown highlighted in gray below will appear in the Jupyter cell:

Then execute the Markdown cell by hitting Shift-Enter. The Jupyter server will then insert the image, and the image will then appear.
I am running Jupyter notebook server is: 5.7.4 with Python 3.7.0 on Windows 7.
This is so simple !!
UPDATE AS OF March 18, 2021:
This simple "Drag-and-Drop-from-Windows-File-System" method still works fine in JupyterLab. JupyterLab inserts the proper HTML code to embed the image directly and permanently into the notebook, so the image is stored in the .ipynb file. I am running JupyterLab v2.2.7 with Python 3.7.9 on Windows 10.
This stopped working in Jupyter Classic Notebook v6.1.5 sometime last year. I reported a bug to the Jupyter Classic Notebook developers.
It works again in the latest version of Jupyter Classic Notebook. I just tried it in v6.4 on 7/15/2021. Thank you Jupyter NB Classic Developers !! | 5 | 305 | 0 | 0 | I am starting to depend heavily on the IPython notebook app to develop and document algorithms. It is awesome; but there is something that seems like it should be possible, but I can't figure out how to do it:
I would like to insert a local image into my (local) IPython notebook markdown to aid in documenting an algorithm. I know enough to add something like <img src="image.png"> to the markdown, but that is about as far as my knowledge goes. I assume I could put the image in the directory represented by 127.0.0.1:8888 (or some subdirectory) to be able to access it, but I can't figure out where that directory is. (I'm working on a mac.) So, is it possible to do what I'm trying to do without too much trouble? | Inserting image into IPython notebook markdown | 1 | 1 | 1 | 0 | 0 | 455,440 |
10,632,427 | 2012-05-17T08:47:00.000 | 1 | 0 | 1 | 0 | 0 | python,list,matrix | 0 | 10,632,557 | 0 | 4 | 0 | true | 0 | 0 | First and foremost, such matrix would have 10G elements. Considering that for any useful operation you would then need 30G elements, each taking 4-8 bytes, you cannot assume to do this at all on a 32-bit computer using any sort of in-memory technique. To solve this, I would use a) genuine 64-bit machine, b) memory-mapped binary files for storage, and c) ditch python.
Update
And as I calculated below, if you have 2 input matrices and 1 output matrix, 100000 x 100000 32 bit float/integer elements, that is 120 GB (not quite GiB, though) of data. Assume, on a home computer you could achieve constant 100 MB/s I/O bandwidth, every single element of a matrix needs to be accessed for any operation including addition and subtraction, the absolute lower limit for operations would be 120 GB / (100 MB/s) = 1200 seconds, or 20 minutes, for a single matrix operation. Written in C, using the operating system as efficiently as possible, memmapped IO and so forth. For million by million elements, each operation takes 100 times as many time, that is 1.5 days. And as the hard disk is saturated during that time, the computer might just be completely unusable. | 2 | 1 | 0 | 0 | Circumstances
I have a procedure which will construct a matrix using the given list of values!
and the list keeps growing, to 100 thousand or a million values, which in turn will result in a million-by-million matrix.
In the procedure, I am doing some add/sub/div/multiply operations on the matrix, based on a row, a column, or just a single element.
Issues
Since the matrix is so big, I don't think doing the whole manipulation in memory would work.
Questions
therefore, my question would be:
how should I manipulate this huge matrix and the huge value list?
like, where to store it, how to read it etc, so that i could carry out my operations on the matrix and the computer won`t stuck or anything. | How do I operate on a huge matrix (100000x100000) stored as nested list? | 1 | 1.2 | 1 | 0 | 0 | 1,461 |
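The memory-mapped-file idea from the accepted answer can be sketched with nothing but the standard library: the matrix lives in a binary file and only the pages you touch are paged into RAM. A tiny 100x100 float64 shape is used here for illustration; the access pattern is the same for out-of-core sizes (where, as the answer warns, disk bandwidth becomes the real limit):

```python
import mmap
import os
import struct
import tempfile

ROWS = COLS = 100
ITEM = struct.calcsize("d")                   # 8 bytes per float64

path = os.path.join(tempfile.mkdtemp(), "matrix.bin")
with open(path, "wb") as f:
    f.truncate(ROWS * COLS * ITEM)            # sparse file of zeros

def offset(r, c):
    # Row-major layout: byte offset of element (r, c).
    return (r * COLS + c) * ITEM

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    # Element-wise write/read without loading the whole matrix.
    mm[offset(3, 7):offset(3, 7) + ITEM] = struct.pack("d", 1.5)
    mm[offset(3, 8):offset(3, 8) + ITEM] = struct.pack("d", 2.25)
    a = struct.unpack("d", mm[offset(3, 7):offset(3, 7) + ITEM])[0]
    b = struct.unpack("d", mm[offset(3, 8):offset(3, 8) + ITEM])[0]
    print(a + b)  # → 3.75
    mm.close()
```

With numpy installed, numpy.memmap wraps the same mechanism behind normal array indexing, which is far more convenient for row/column operations.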
10,632,427 | 2012-05-17T08:47:00.000 | 0 | 0 | 1 | 0 | 0 | python,list,matrix | 0 | 10,643,647 | 0 | 4 | 0 | false | 0 | 0 | Your data structure is not possible with arrays, it is too large. If the matrix is for instance a binary matrix you could look at representations for its storage like hashing larger blocks of zeros together to the same bucket. | 2 | 1 | 0 | 0 | Circumstances
I have a procedure which will construct a matrix using the given list of values!
and the list keeps growing, to 100 thousand or a million values, which in turn will result in a million-by-million matrix.
In the procedure, I am doing some add/sub/div/multiply operations on the matrix, based on a row, a column, or just a single element.
Issues
Since the matrix is so big, I don't think doing the whole manipulation in memory would work.
Questions
therefore, my question would be:
how should I manipulate this huge matrix and the huge value list?
like, where to store it, how to read it etc, so that i could carry out my operations on the matrix and the computer won`t stuck or anything. | How do I operate on a huge matrix (100000x100000) stored as nested list? | 1 | 0 | 1 | 0 | 0 | 1,461 |
10,643,982 | 2012-05-17T21:46:00.000 | 2 | 0 | 1 | 0 | 0 | python | 0 | 10,644,350 | 0 | 3 | 0 | false | 0 | 0 | The "Result too large" doesn't refer to the number of characters in the decimal representation of the number, it means that the number that resulted from your exponential function is large enough to overflow whatever type python uses internally to store floating point values.
You need to either use a different type to handle your floating point calculations, or rework your code so that e**(-x) doesn't overflow or underflow. | 1 | 1 | 0 | 0 | Is there a way in Python to truncate the decimal part at 5 or 7 digits?
If not, how can I keep a float like e**(-x) from getting too large?
Thanks | python e**(-x) OverflowError: (34, 'Result too large') | 0 | 0.132549 | 1 | 0 | 0 | 6,061 |
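A short illustration of the answer's point: math.exp raises OverflowError once the result exceeds the largest double (~1.8e308, i.e. an exponent above ~709.78), while underflow just rounds quietly to 0.0. Catching the overflow is one way to keep going, and rounding to 5 or 7 digits is a separate step done with round():

```python
import math

def safe_exp(x):
    # Overflow raises; underflow returns 0.0 without an exception.
    try:
        return math.exp(x)
    except OverflowError:
        return float("inf")

print(safe_exp(-1000))          # → 0.0 (silent underflow)
print(safe_exp(1000))           # → inf (overflow handled explicitly)
print(round(safe_exp(1.0), 5))  # → 2.71828 (digit truncation is just round())
```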
10,652,097 | 2012-05-18T11:48:00.000 | 4 | 1 | 0 | 1 | 0 | python,testing,mocking,integration-testing,celery | 0 | 10,653,559 | 0 | 3 | 0 | false | 1 | 0 | Without the use of a special mock library, I propose to prepare the code for being in mock-up-mode (probably by a global variable). In mock-up-mode instead of calling the normal time-function (like time.time() or whatever) you could call a mock-up time-function which returns whatever you need in your special case.
I would vote down for changing the system time. That does not seem like a unit test but rather like a functional test as it cannot be done in parallel to anything else on that machine. | 1 | 20 | 0 | 0 | I've built a paywalled CMS + invoicing system for a client and I need to get more stringent with my testing.
I keep all my data in a Django ORM and have a bunch of Celery tasks that run at different intervals to make sure that new invoices and invoice reminders get sent, and that access is cut off when users don't pay their invoices.
For example I'd like to be a able to run a test that:
Creates a new user and generates an invoice for X days of access to the site
Simulates the passing of X + 1 days, and runs all the tasks I've got set up in Celery.
Checks that a new invoice for an other X days has been issued to the user.
The KISS approach I've come up with so far is to do all the testing on a separate machine and actually manipulate the date/time at the OS-level. So the testing script would:
Set the system date to day 1
Create a new user and generate the first invoice for X days of access
Advance the system date 1 day. Run all my Celery tasks. Repeat until X + 1 days have "passed"
Check that a new invoice has been issued
It's a bit clunky but I think it might work. Any other ideas on how to get it done? | Simulating the passing of time in unittesting | 0 | 0.26052 | 1 | 0 | 0 | 3,901 |
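The "mock-up mode" indirection from the first answer can be sketched like this (Clock and advance are illustrative names of mine): application code asks a clock object for the time instead of calling time.time() directly, so a test can fast-forward X + 1 days without touching the OS clock or needing a separate machine:

```python
import time

class Clock:
    def __init__(self):
        self._offset = 0.0
    def now(self):
        # Production code calls clock.now() wherever it would call time.time().
        return time.time() + self._offset
    def advance(self, seconds):
        # Test-only helper: simulate the passing of time.
        self._offset += seconds

clock = Clock()
start = clock.now()
clock.advance(86400 * 31)          # "wait" 31 days
elapsed = clock.now() - start
print(elapsed >= 86400 * 31)       # → True
```

A test would advance the clock one "day" at a time, run the Celery task bodies directly between advances, then assert that a new invoice exists.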
10,660,246 | 2012-05-18T21:59:00.000 | 0 | 0 | 1 | 0 | 0 | python,restructuredtext,doctest | 0 | 32,209,186 | 0 | 2 | 0 | false | 0 | 0 | Adding doctests to your documentation makes sense to ensure that code in your documentation is actually working as expected. So, you're testing your documentation. For general code-testing, using doctests can't be recommended at all. | 1 | 0 | 0 | 0 | Another way to ask this:
If I wrote doctests in reST, can I use them for Sphinx or other automatic documentation efforts?
Background: I don't know how to use Sphinx and have not much experience with reST either, so I am wondering if I can use reST-written doctests somewhere else useful than with Sphinx? | Why would I write doctests in restructured text? | 0 | 0 | 1 | 0 | 0 | 880 |
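One concrete reason: reST-embedded doctests are reusable outside Sphinx, because the standard library can execute the >>> examples in a .rst file directly — so the same snippets serve as documentation and as tests:

```python
import doctest
import os
import tempfile

# A minimal reST file containing one doctest example.
rst = """
Example
-------

>>> 2 + 2
4
"""
fd, path = tempfile.mkstemp(suffix=".rst")
with os.fdopen(fd, "w") as f:
    f.write(rst)

# doctest.testfile runs every >>> example found in the file.
result = doctest.testfile(path, module_relative=False)
os.remove(path)
print(result)  # → TestResults(failed=0, attempted=1)
```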
10,665,768 | 2012-05-19T13:55:00.000 | 1 | 1 | 0 | 1 | 0 | python,eclipse-plugin,eclipse-pde | 0 | 10,856,306 | 0 | 1 | 0 | true | 1 | 0 | You can already create an External Launch config from Run>External Tools>External Tools Configurations. You are basically calling the program from eclipse. Any output should then show up in the eclipse Console view. External launch configs can also be turned into External Builders and attached to projects.
If you are looking to run your python script within your JVM then you need an implementation of Python in Java ... is that what you are looking for? | 1 | 2 | 0 | 0 | I want to generate an Eclipse plugin that just runs an existing Python script with parameters.
While this sounds very simple, I don't think it's easy to implement. I can generate an Eclipse plugin. My issue is not how to use PDE. But:
can I call the existing Python script from Java, from an Eclipse plugin?
it needs to run from the embedded console with some parameters
Is this reasonably easy to do? And I don't plan to reimplement it in any way. Calling it from command-line works very well. My question is: can Eclipse perform this, too?
Best,
Marius | Eclipse plugin that just runs a python script | 0 | 1.2 | 1 | 0 | 0 | 1,884 |
10,681,740 | 2012-05-21T08:23:00.000 | 2 | 0 | 0 | 0 | 0 | python,c,gtk,pygobject,gtktreeview | 0 | 10,690,046 | 0 | 2 | 0 | true | 0 | 1 | At the risk of being too basic (perhaps I misunderstand the problem), to manipulate treeview selections, you use the GtkTreeSelection object returned from GtkTreeView.get_selection. You can attach to signals on this object, change the current selection,etc. | 1 | 3 | 0 | 0 | I'm using PyGObject but I think this is a question that could be adapted to all GTK, so if someone know how to do it using C or anything should work in python also.
I have two treeview, Active and Inactive, I load data from a Sqlite database and I can swap and drag & drop items from one to other.
This is just an aesthetic thing: if I click on an item in one treeview, I want any previously selected item in the other to be deselected.
It appears that nobody had to do something similar because I didn't found anything about it on the net. | Gtk.Treeview deselect row via signals and code | 0 | 1.2 | 1 | 0 | 0 | 1,990 |
10,689,273 | 2012-05-21T16:49:00.000 | 3 | 1 | 1 | 0 | 0 | python,cryptography,rsa,pycrypto | 0 | 10,689,441 | 0 | 3 | 0 | false | 0 | 0 | No, you can't compute e from d.
RSA is symmetric in d and e: you can equally-well interchange the roles of the public and the private keys. Of course, we choose one specially to be private and reveal the other -- but theoretically they do the same thing. Naturally, since you can't deduce the private key from the public, you can't deduce the public key from the private either.
Of course, if you have the private key that means that you generated the keypair, which means that you have the public key somewhere. | 2 | 8 | 0 | 0 | I am newbie in cryptography and pycrypto.
I have modulus n and private exponent d. From what I understand after reading some docs private key consists of n and d.
I need to sign a message and I can't figure out how to do that using pycrypto. RSA.construct() method accepts a tuple. But I have to additionally provide public exponent e to this method (which I don't have).
So here is my question. Do I have to compute e somehow in order to sign a message?
It seems I should be able to sign a message just by using n and d (that constitute private key). Am I correct? Can I do this with pycrypto?
Thanks in advance. | I have modulus and private exponent. How to construct RSA private key and sign a message? | 0 | 0.197375 | 1 | 0 | 0 | 10,098 |
10,689,273 | 2012-05-21T16:49:00.000 | 2 | 1 | 1 | 0 | 0 | python,cryptography,rsa,pycrypto | 0 | 10,690,482 | 0 | 3 | 0 | false | 0 | 0 | If you don't have the public exponent you may be able to guess it. Most of the time it's not a random prime but a static value. Try the values 65537 (hex 0x010001, the fourth Fermat number), 3, 5, 7, 13 and 17 (in that order).
[EDIT] Simply sign with the private key and verify with the public key to see if the public key is correct.
Note: if it is the random prime it is as hard to find as the private exponent; which means you would be trying to break RSA - not likely for any key sizes > 512 bits. | 2 | 8 | 0 | 0 | I am newbie in cryptography and pycrypto.
I have modulus n and private exponent d. From what I understand after reading some docs private key consists of n and d.
I need to sign a message and I can't figure out how to do that using pycrypto. RSA.construct() method accepts a tuple. But I have to additionally provide public exponent e to this method (which I don't have).
So here is my question. Do I have to compute e somehow in order to sign a message?
It seems I should be able to sign a message just by using n and d (that constitute private key). Am I correct? Can I do this with pycrypto?
Thanks in advance. | I have modulus and private exponent. How to construct RSA private key and sign a message? | 0 | 0.132549 | 1 | 0 | 0 | 10,098 |
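The guessing strategy from the second answer, in toy form with textbook RSA (tiny hard-coded key, no padding — purely illustrative, not secure): sign with the private exponent d, then check which common public-exponent candidate verifies the signature:

```python
# Tiny textbook-RSA key: p = 101, q = 113, phi(n) = 11200.
n = 101 * 113                  # 11413
d = 7467                       # inverse of e = 3 modulo phi(n)
candidates = [65537, 3, 5, 7, 13, 17]

message = 42                   # stands in for a hashed message
signature = pow(message, d, n) # textbook-RSA signing: m^d mod n

# The first candidate that verifies the signature is (an) e for this key.
e = next(c for c in candidates if pow(signature, c, n) == message)
print(e)  # → 3
```

With a real key you would do the same check via your crypto library: construct the key with a candidate e, sign, verify, and keep the candidate that round-trips.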
10,697,651 | 2012-05-22T07:32:00.000 | 1 | 0 | 0 | 1 | 0 | python,google-app-engine | 0 | 10,698,246 | 0 | 1 | 0 | false | 1 | 0 | If your are running a unitest and using init_taskqueue_stub() you need to pass the path of the queue.yaml when calling it using the root_path parameter. | 1 | 1 | 0 | 0 | I've added a new queue to a python GAE app, and would like to add tasks to it, but always get an UnknownQueueError when I run my tests. On the other hand, I see the queue present in the GAE admin console (both local and remote). So the question is (1) do I miss something when I add a task to my queue? (2) if not, then how can I run custom queues in a test?
Here is my queue.yaml
queue:
- name: requests
  rate: 20/s
  bucket_size: 100
  retry_parameters:
    task_age_limit: 60s
and my python call is the following:
taskqueue.add(queue_name="requests", url=reverse('queue_request', kwargs={"ckey":ckey}))
any ideas? | queues remain unknown or just don't know how to call them | 0 | 0.197375 | 1 | 0 | 0 | 178 |
10,703,616 | 2012-05-22T14:00:00.000 | 3 | 0 | 0 | 0 | 0 | python | 0 | 10,703,762 | 0 | 1 | 0 | true | 1 | 0 | Pystache is a template library not http server! If you want make webapp try to use ready-made webframeworks like Django or Pyramid. | 1 | 1 | 0 | 0 | This is really a newbie question, but I don't know how to search answers for this. I want to use pystache, and I am able to execute the .py file to print out some rendered output from .mustache file. but how exactly do I convert this into .html file? Specifically, how to put it on the server so that the browser would direct to the .html file like index.html? | Get started with pystache | 0 | 1.2 | 1 | 0 | 0 | 617 |
10,735,998 | 2012-05-24T10:50:00.000 | 7 | 0 | 0 | 0 | 0 | python,ruby-on-rails,ruby,django,interop | 0 | 10,736,225 | 0 | 2 | 0 | true | 1 | 0 | I suggest you either:
Expose a ruby service using REST or XML-RPC.
or
Shell out to a ruby script from Django.
To transfer data between Python and Ruby I suggest you use JSON, XML or plain text (depending on what kind of data you need to transfer).
I would recommend to use option 2 (start a ruby script from the Python process), as this introduces fewer moving parts to the solution. | 2 | 2 | 0 | 0 | Lets say I have a few Ruby gems that I'd like to use from my Python (Django) application. I know this isn't the most straightforward question but let's assume that rewriting the Ruby gem in Python is a lot of work, how can I use it?
Should I create an XML-RPC wrapper around it using Rails and call it? Is there something like a ruby implementation in Python within which I could run my gem code?
Are there other methods that I may have missed? I've never tackled anything like this before and I'm a bit lost in this area.
Thanks | Using a Ruby gem from a Django application | 0 | 1.2 | 1 | 0 | 0 | 333 |
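Option 2 from the answer (shelling out) in miniature, exchanging JSON over stdout. A real setup would invoke something like ["ruby", "gem_wrapper.rb"] (a made-up name), but python -c stands in here so the sketch runs without Ruby installed:

```python
import json
import subprocess
import sys

# The child writes one JSON document to stdout; the parent parses it.
child = [sys.executable, "-c",
         "import json; print(json.dumps({'sum': 5 + 12}))"]
completed = subprocess.run(child, capture_output=True, text=True, check=True)
result = json.loads(completed.stdout)
print(result["sum"])  # → 17
```

From Django, such a call would live in a view or a background task; check=True turns a non-zero exit status into an exception so failures aren't silent.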
10,735,998 | 2012-05-24T10:50:00.000 | 3 | 0 | 0 | 0 | 0 | python,ruby-on-rails,ruby,django,interop | 0 | 10,737,263 | 0 | 2 | 0 | false | 1 | 0 | It depends a little on what you need to do. The XML-RPC suggestion has already been made.
You might actually be able to use them together in a JVM, assuming you can accept running Django with jython and use jruby. But that is a bit of work, which may or may not be worth the effort.
It would perhaps be easier if you described exactly what the Ruby gem is and what problem it is supposed to solve. You might get suggestions that could help you avoid the problem altogether. | 2 | 2 | 0 | 0 | Lets say I have a few Ruby gems that I'd like to use from my Python (Django) application. I know this isn't the most straightforward question but let's assume that rewriting the Ruby gem in Python is a lot of work, how can I use it?
Should I create an XML-RPC wrapper around it using Rails and call it? Is there something like a ruby implementation in Python within which I could run my gem code?
Are there other methods that I may have missed? I've never tackled anything like this before and I'm a bit lost in this area.
Thanks | Using a Ruby gem from a Django application | 0 | 0.291313 | 1 | 0 | 0 | 333 |
10,745,363 | 2012-05-24T21:13:00.000 | 2 | 1 | 0 | 1 | 0 | python,linux,ubuntu,console,terminal | 0 | 10,745,449 | 0 | 3 | 0 | false | 0 | 0 | I'd also avoid doing this with a terminal, but to answer the question directly:
right click on the terminal window
profiles
profile preferences
scrolling
scrollback: unlimited
It's better though to redirect to a file, then access that file. "tail -f" is very helpful. | 1 | 1 | 0 | 0 | I have this python script that outputs the Twitter Stream to my terminal console. Now here is the interesting thing:
* On snowleopard I get all the data I want.
* On Ubuntu (my pc) this data is limited and older data is deleted.
Both terminal consoles operate in Bash, so it has to be an OS thing presumably.
My question is: how do I turn this off? I want to leave my computer on for a week to capture around 1 or 2 gigabytes of data, for my bachelor thesis! | Ubuntu Linux: terminal limits the output when I get the full Twitter Streaming API | 0 | 0.132549 | 1 | 0 | 0 | 957 |
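The redirect-to-a-file suggestion from the answer, sketched in shell; here `yes` stands in for the long-running Twitter script so the commands are runnable as-is:

```shell
# Capture the stream to a file instead of relying on terminal scrollback.
yes "sample tweet" | head -n 1000 > capture.txt

# Follow the file live from another terminal, or inspect it afterwards:
tail -n 2 capture.txt
wc -l < capture.txt
```

For a week-long capture you would also run the script under nohup (or in tmux/screen) so it survives the terminal closing.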
10,754,496 | 2012-05-25T12:32:00.000 | 0 | 0 | 0 | 1 | 0 | python,eclipse,pydev | 1 | 10,803,047 | 0 | 2 | 0 | false | 1 | 0 | The only one so far I found available is PyFlakes, it does some level of dependency check and import validations. | 1 | 0 | 0 | 0 | Is there any eclipse plugin for python dependency management? just like what M2Eclipse does for maven project? so I can resolve all the dependencies and get ride off all the errors when I develop python using pydev.
If there is no such plugin, how do I resolve the dependencies, do I have to install the dependency modules locally? | python eclipse dependency plugin - m2eclipse like | 0 | 0 | 1 | 0 | 0 | 195 |
10,760,968 | 2012-05-25T20:27:00.000 | 0 | 1 | 1 | 0 | 0 | python,python-2.7,omniorb | 0 | 10,761,033 | 0 | 1 | 0 | false | 0 | 0 | you can't and shouldn't. it is compiled specifically for 2.7. that's why "2.7" appears in the download file name.
if you want to use a different python, download the source package and build it yourself. | 1 | 0 | 0 | 0 | I have downloaded the omniORB4.1.6 pre-compiled with msvc10. I have python 2.7 and everything seems to work fine. I want to know how i can tell my omniidl to use my python 2.6 installation instead of 2.7. Can anyone help me? Thanks. | Changing python from 2.7 to 2.6 for omniidl | 0 | 0 | 1 | 0 | 0 | 123 |
10,762,199 | 2012-05-25T22:53:00.000 | 0 | 0 | 1 | 0 | 0 | python | 0 | 10,762,273 | 0 | 9 | 0 | false | 0 | 0 | the STRING.count method should work just fine for the first problem. If you look carefully, there actually aren't two non-overlapping 'sses' strings in assesses.
You either have a- sses -ses, or asse- sses. Do you see the issue? Calling "trans-Panamanian banana".count("an") produces the correct number.
I think using eval() is probably OK. Your other option is to split on the + and then iterate over the resulting list, doing type conversion and accumulation as you go. It sounds like you're doing a string module, so that might be the better solution for your GPA ;).
EDIT: F.G. beat me to posting essentially the same answer by mere seconds. Gah! | 1 | 3 | 0 | 0 | I'm following a python website for my schoolwork. It's really neat, it gives you tasks to complete and compiles the code in the browser. Anyway, I came to a challenge that I'm not really sure how to go about.
One of the questions was:
The same substring may occur several times inside the same string: for example "assesses" has the substring "sses" 2 times, and
"trans-Panamanian banana" has the substring "an" 6 times. Write a
program that takes two lines of input, we call the first needle and
the second haystack. Print the number of times that needle occurs as a
substring of haystack.
I'm not too sure how I should start this, I know I have to compare the two strings but how? I used the count method, but it didn't recognize the second occurrence of sses in assesses.
My second question is one I solved but I cheated a little.
The question was:
Write a program that takes a single input line of the form «number1»+«number2», where both of these represent positive integers,
and outputs the sum of the two numbers. For example on input 5+12 the
output should be 17.
I used the eval() method and it worked, I just think that this wasn't what the grader had in mind for this.
Any insight would be greatly appreciated.
EDIT: Second question was solved. | Substring Counting in Python and Adding 2 Numbers From One Line of Input | 0 | 0 | 1 | 0 | 0 | 4,889 |
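The overlap issue above can be handled with str.find and a moving start index, and the sum can be parsed without eval(). A minimal sketch of both tasks using only string methods (count_overlapping and add_expression are hypothetical helper names, not part of the course material):

```python
def count_overlapping(needle, haystack):
    """Count occurrences of needle in haystack, counting overlaps too."""
    count = 0
    pos = haystack.find(needle)
    while pos != -1:
        count += 1
        pos = haystack.find(needle, pos + 1)  # advance one char, not len(needle)
    return count

def add_expression(line):
    """Parse 'number1+number2' and return the sum, without eval()."""
    left, right = line.split("+")
    return int(left) + int(right)

print(count_overlapping("sses", "assesses"))               # 2
print(count_overlapping("an", "trans-Panamanian banana"))  # 6
print(add_expression("5+12"))                              # 17
```

Advancing by one character instead of len(needle) is what makes the overlapping match in "assesses" get counted.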
10,764,025 | 2012-05-26T06:03:00.000 | 2 | 0 | 1 | 0 | 1 | python,python-idle | 1 | 10,764,052 | 0 | 1 | 1 | false | 0 | 0 | According to the doc,
On Windows, HOME and USERPROFILE will be used if set, otherwise a
combination of HOMEPATH and HOMEDRIVE will be used. An initial ~user
is handled by stripping the last directory component from the created
user path derived above.
You can try running 'set' in a command prompt to see if these two environment variables are set or not. If yes, remove the setting. | 1 | 1 | 0 | 0 | hi, I am having trouble running python IDLE.
Once I installed EMACS and then uninstalled it, whenever I try to run python IDLE it gives me:
Warning: os.path.expanduser("~") points to
C:\Program Files\Emacs\,
but the path does not exist
The IDLE does work, but I can't launch IDLE by simply clicking on "open with IDLE".
I guess I need to change the path of os.path.expanduser to fix this error?
But I can't find it.
Where should I look, and which path does it originally point to?
thank you. | how do i change the value of os.path.expanduser("~") in python? | 1 | 0.379949 | 1 | 0 | 0 | 858 |
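The lookup order described in the answer can be checked directly: on a POSIX system os.path.expanduser("~") reads the HOME environment variable, so clearing or correcting the stale Emacs-era value fixes the warning. A minimal sketch (the /tmp/demo-home path is purely illustrative; on Windows the order is USERPROFILE, then HOMEDRIVE + HOMEPATH):

```python
import os

def home_dir():
    """Where expanduser('~') currently points."""
    return os.path.expanduser("~")

# Overriding HOME redirects the expansion on POSIX systems.
os.environ["HOME"] = "/tmp/demo-home"  # illustrative value only
print(home_dir())  # /tmp/demo-home
```

In the asker's situation the fix is the reverse: delete or correct the leftover variable pointing at C:\Program Files\Emacs\ rather than setting a new one.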
10,775,351 | 2012-05-27T16:03:00.000 | 7 | 0 | 1 | 0 | 0 | python,node.js,ipc | 0 | 10,775,437 | 0 | 7 | 0 | false | 0 | 0 | If you arrange to have your Python worker in a separate process (either long-running server-type process or a spawned child on demand), your communication with it will be asynchronous on the node.js side. UNIX/TCP sockets and stdin/out/err communication are inherently async in node. | 1 | 134 | 0 | 0 | Node.js is a perfect match for our web project, but there are few computational tasks for which we would prefer Python. We also already have a Python code for them.
We are highly concerned about speed, so what is the most elegant way to call a Python "worker" from node.js in an asynchronous, non-blocking way? | Combining node.js and Python | 0 | 1 | 1 | 0 | 1 | 116,462
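A common shape for the stdin/stdout approach mentioned in the answer is a line-delimited JSON worker: node spawns the script with child_process.spawn() and writes one request per line, which stays non-blocking on the node side. A hedged sketch (handle() is a hypothetical stand-in for the real computation):

```python
import json
import sys

def handle(request):
    """Hypothetical stand-in for the CPU-heavy Python computation."""
    return {"result": request["x"] * request["x"]}

def serve(inp, out):
    """Read one JSON request per line, write one JSON response per line."""
    for line in inp:
        line = line.strip()
        if not line:
            continue
        out.write(json.dumps(handle(json.loads(line))) + "\n")
        out.flush()  # flush so the node side sees each response immediately

# In the real worker process: serve(sys.stdin, sys.stdout)
```

Because each request/response is a single line, the node side can buffer on newline and correlate responses without any extra framing protocol.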
10,780,165 | 2012-05-28T06:22:00.000 | 1 | 0 | 1 | 0 | 0 | python | 0 | 10,780,293 | 0 | 1 | 0 | true | 0 | 0 | If you explicitly call your Python 2.6 binary when installing the package, it will install to that instance instead. So instead of python setup.py install you would do /path/to/python26 setup.py install. | 1 | 1 | 0 | 0 | I have python 2.7 by default and 2.6, but I need some modules installed on python 2.6. By default they install to 2.7. Any idea how to do it? | how to install library modules on python version which is not default | 0 | 1.2 | 1 | 0 | 0 | 100
10,780,523 | 2012-05-28T06:53:00.000 | 2 | 0 | 0 | 0 | 0 | python,wxpython | 1 | 10,780,744 | 0 | 1 | 0 | false | 0 | 1 | wx.lib.agw.persist is new in 2.8.12.1. | 1 | 1 | 0 | 0 | I am using Fedora and wxPython version 2.8.12. While trying to import wx.lib.agw.persist
I am getting an error saying
Import Error: No module persist.
Will the module not be there by default with wxPython? If not, how do I get this module installed? Please help me. | Error importing persist module | 0 | 0.379949 | 1 | 0 | 0 | 58
10,805,356 | 2012-05-29T19:22:00.000 | 3 | 0 | 0 | 0 | 1 | java,python,image-processing,rgb | 0 | 10,807,433 | 0 | 4 | 0 | true | 0 | 0 | Typical white-balance issues are caused by differing proportions of red, green, and blue in the makeup of the light illuminating a scene, or differences in the sensitivities of the sensors to those colors. These errors are generally linear, so you correct for them by multiplying by the inverse of the error.
Suppose you measure a point you expect to be perfectly white, and its RGB values are (248,237,236) i.e. pink. If you multiply each pixel in the image by (248/248,248/237,248/236) you will end up with the correct balance.
You should definitely ensure that your Bayer filter is producing the proper results first, or the base assumption of linear errors will be incorrect. | 1 | 1 | 1 | 0 | I am working on a telescope project and we are testing the CCD. Whenever we take pictures things are slightly pink-tinted and we need true color to correctly image galactic objects. I am planning on writing a small program in python or java to change the color weights but how can I access the weight of the color in a raw data file (it is rgb.bin)?
We are using a bayer matrix algorithm to convert monochromatic files to color files and I would imagine the problem is coming from there but I would like to fix it with a small color correcting program.
Thanks! | Change color weight of raw image file | 0 | 1.2 | 1 | 0 | 0 | 1,116 |
10,816,962 | 2012-05-30T13:09:00.000 | 1 | 0 | 1 | 0 | 0 | python,ruby-on-rails,ruby,deployment,scrapy | 0 | 10,817,852 | 0 | 1 | 0 | true | 1 | 0 | Are all your (DTAP) environments using the same operating system and processor architecture?
If not, I wouldn't recommend shipping the Python interpreter with your project. Why don't you compile a more recent version of Python on your environments and install it in some non-standard path, like /opt/python27/ (or similiar).
Then, just create a virtualenv on all environments using that interpreter.
Next, you deploy your project from your virtualenv (without the bin, include, etc.) to the virtualenv of the target environment.
I've never used Capistrano (Python dev myself), but I'm assuming it can just copy over directories from one environment (or VCS) to the other. | 1 | 0 | 0 | 0 | I have a Ruby on Rails project, using Python + Scrapy to scrape the web, and I would like to distribute and deploy the Rails project with all Python executables and libraries installed automatically.
The deployment environment ships by default a Python version lower than 2.6, and I would like users not to depend on OS and installed Python executable.
So, basically I want to achieve a Python virtualenv inside my Rails project.
Any ideas on how do that?
I use Capistrano for deploying my Rails project. | Setting up a Python environment in a Rails project | 0 | 1.2 | 1 | 0 | 0 | 303 |
10,821,741 | 2012-05-30T18:00:00.000 | 0 | 0 | 1 | 0 | 0 | python,gnupg | 0 | 10,824,962 | 0 | 2 | 0 | false | 0 | 0 | Having re-read your question again after reading the python-gnupg documentation, I think you're asking about signing a document with several private keys at the same time you are encrypting it.
Unfortunately, that process is not supported by python-gnupg, because GnuPG does not support it either. You'll have to decide how exactly you want your signatures to be applied, then do them one at a time.
You can, for instance layer the signatures, by encrypting and signing with one key, then signing the results with another private key (and repeating for any additional keys you have beyond the second).
Alternatively, you could create several "detached" signatures, each of just the base document (so no signatures would be applied to other signatures). This is a bit more complicated, as I'm not sure that there's any file format that will automatically be recognized by GnuPG to verify several detached signatures at once. | 1 | 0 | 0 | 0 | I am encrypting a file using python-gnupg and it looks like encrypt_file onlys accepts a single key for the sign parameter. If I have a key file with multiple keys that I want to encrypt the document with, how can I do this? If I understand correctly I should be able to encrypt a file using multiple keys. | Encrypt a file in python-gnupg using multiple keys | 1 | 0 | 1 | 0 | 0 | 1,486 |
10,823,285 | 2012-05-30T19:52:00.000 | 3 | 0 | 0 | 0 | 1 | python,pygame | 0 | 10,823,436 | 0 | 2 | 0 | false | 0 | 1 | Colorkey lets you pick one color in a sprite (surface); any pixel of that color will be completely transparent. (If you remember .gif transparency, it's the same idea.)
'alpha' is a measure of opacity - 0 for completely transparent, 255 for completely opaque - and can be applied to an entire sprite (as an alpha plane) or per pixel (slower, but gives much more control).
To make the sprites disappear, I would just set them to non-visible, rather than playing around with alpha values. | 2 | 1 | 0 | 0 | I am new to using pygame and I was wondering if someone could explain the use of alpha values? I don't quite understand the difference between that and colorkey.
For my current situation I think I want to use alpha values but am not quite clear how.
In my game I have two sprites with .png files loaded to each surface. Upon collision I would like both images to disappear (go completely transparent).
I would really appreciate it if someone could explain the basics of alpha value and how to specifically use them in pygame and if it is possible to use these alpha values to solve my problem.
Thanks! | Pygame alpha values | 1 | 0.291313 | 1 | 0 | 0 | 399 |
10,823,285 | 2012-05-30T19:52:00.000 | 0 | 0 | 0 | 0 | 1 | python,pygame | 0 | 51,940,966 | 0 | 2 | 0 | false | 0 | 1 | If you want the sprites to fade until they've vanished, gradually reduce the alpha value after they've collided. When alpha reaches 0, use del sprite if you don't need the sprites anymore.
For my current situation I think I want to use alpha values but am not quite clear how.
In my game I have two sprites with .png files loaded to each surface. Upon collision I would like both images to disappear (go completely transparent).
I would really appreciate it if someone could explain the basics of alpha value and how to specifically use them in pygame and if it is possible to use these alpha values to solve my problem.
Thanks! | Pygame alpha values | 1 | 0 | 1 | 0 | 0 | 399 |
10,824,951 | 2012-05-30T22:08:00.000 | 1 | 0 | 1 | 0 | 0 | python,image,image-processing,python-imaging-library | 0 | 10,825,016 | 0 | 1 | 0 | true | 0 | 0 | You should pick a location to save the image when setting the filename variable.
filename = "/Users/clifgray/Desktop/filename.jpeg"
imgObj.save(filename) | 1 | 0 | 0 | 0 | I am using the Image.save method from PIL and I cannot find where the file is being placed. I have done a system search yet still no luck.
My code looks like this:
print imageObj.save(fileName, "JPEG")
and gives the proper None response to say that it is working. Any idea where they go and how I can find them?
Thanks! | Where does Python Imaging LIbrary Save Objects | 0 | 1.2 | 1 | 0 | 0 | 1,490 |
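As the answer notes, a bare filename like "filename.jpeg" is resolved against the current working directory, which is why the saved file seems to vanish. A quick stdlib check of where a relative name will land (no PIL required; the filename is just an example):

```python
import os

def resolved_save_path(filename):
    """Absolute path a relative filename passed to Image.save() resolves to."""
    return os.path.abspath(filename)

print(resolved_save_path("filename.jpeg"))
# e.g. /current/working/dir/filename.jpeg -- pass an absolute path
# to Image.save() to control where the file ends up.
```

Since Image.save() returns None on success, checking this resolved path is the quickest way to locate the output.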
10,826,266 | 2012-05-31T01:12:00.000 | 1 | 0 | 0 | 0 | 0 | python,mysql,django,migration,django-south | 0 | 10,872,504 | 0 | 6 | 0 | false | 1 | 0 | South isn't used everywhere. Like in my organization we have 3 levels of code testing. One is the local dev environment, one is the staging dev environment, and the third is production.
Local dev is in the developer's hands, where he can play according to his needs. Then comes staging dev, which is kept identical to production, of course, until a db change has to be done on the live site; we do the db changes on staging first, check that everything is working fine, and then manually change the production db, making it identical to staging again. | 2 | 22 | 0 | 0 | From someone who has a django application in a non-trivial production environment, how do you handle database migrations? I know there is south, but it seems like that would miss quite a lot if anything substantial is involved.
The other two options (that I can think of or have used) are doing the changes on a test database and then (taking the app offline) importing that SQL export; or, perhaps a riskier option, doing the necessary changes on the production database in real time, and if anything goes wrong reverting to the back-up.
How do you usually handle your database migrations and schema changes? | Database migrations on django production | 0 | 0.033321 | 1 | 1 | 0 | 11,397 |
10,826,266 | 2012-05-31T01:12:00.000 | 0 | 0 | 0 | 0 | 0 | python,mysql,django,migration,django-south | 0 | 70,559,647 | 0 | 6 | 0 | false | 1 | 0 | If it's not trivial, you should have a pre-prod database/app that mimics the production one, to avoid downtime on production. | 2 | 22 | 0 | 0 | From someone who has a django application in a non-trivial production environment, how do you handle database migrations? I know there is south, but it seems like that would miss quite a lot if anything substantial is involved.
The other two options (that I can think of or have used) are doing the changes on a test database and then (taking the app offline) importing that SQL export; or, perhaps a riskier option, doing the necessary changes on the production database in real time, and if anything goes wrong reverting to the back-up.
How do you usually handle your database migrations and schema changes? | Database migrations on django production | 0 | 0 | 1 | 1 | 0 | 11,397 |
10,830,829 | 2012-05-31T09:25:00.000 | 0 | 0 | 0 | 0 | 0 | python,tomcat,jetty,data-migration,web-inf | 0 | 10,833,110 | 0 | 1 | 0 | false | 1 | 0 | The ".dbx" suffix has been used by various pieces of software over the years, so it could be almost anything. The only way to know what you really have here is to browse the source code of the legacy Java app (or the relevant docs, or ask the author, etc.).
As for scraping, it's probably going to be a lot of pain for not much result, depending on the app.
Problem is that the database I was given comes in the form of a bunch of WEB-INF/data/*.dbx and I didn't find any way to read them. So, I have a few questions.
Which format do the WEB-INF/data/*.dbx use?
Is there a python module for directly reading from the WEB-INF/data/*.dbx files?
Is there some external tool for dumping the WEB-INF/data/*.dbx files to an ASCII format that will be parsable by Python?
If someone has attempted a similar data migration, how does it compare against scraping the data from the old website? (assuming that all important data can be scraped)
Thanks! | migrating data from tomcat .dbx files | 0 | 0 | 1 | 0 | 0 | 126 |
10,836,062 | 2012-05-31T14:56:00.000 | 3 | 0 | 0 | 0 | 0 | python,user-interface,wxpython | 0 | 10,837,325 | 0 | 3 | 1 | false | 0 | 1 | I'm not aware of a way to dynamically change the style flags on the text control widget after creation. Some widgets allow this sort of thing on some OSes and some do not. You could just create two text controls with the second one in normal mode and hide it. Then when you want to toggle, you grab the password-protected version's value and hide it, give the value to the normal one and show it. You'll probably need to call Layout() at the end as well. | 2 | 4 | 0 | 0 | With wxPython a password field could be created as:
wx.TextCtrl(frm, -1, '', style=wx.TE_PASSWORD )
I'm wondering if there is a way to dynamically change this password field into a normal textctrl, such that user could see what the password is. | how to make wxpython password textctrl show chars? | 1 | 0.197375 | 1 | 0 | 0 | 5,172 |
10,836,062 | 2012-05-31T14:56:00.000 | -1 | 0 | 0 | 0 | 0 | python,user-interface,wxpython | 0 | 10,836,512 | 0 | 3 | 1 | false | 0 | 1 | Then it would not be a password entry, but you can use style=wx.TE_MULTILINE or TE_RICH, if that is what you are asking.
Hope this helps | 2 | 4 | 0 | 0 | With wxPython a password field could be created as:
wx.TextCtrl(frm, -1, '', style=wx.TE_PASSWORD )
I'm wondering if there is a way to dynamically change this password field into a normal textctrl, such that user could see what the password is. | how to make wxpython password textctrl show chars? | 1 | -0.066568 | 1 | 0 | 0 | 5,172 |
10,838,959 | 2012-05-31T18:12:00.000 | 0 | 0 | 0 | 0 | 0 | python,pygame,mouseevent,mouse | 0 | 10,839,038 | 0 | 3 | 0 | false | 0 | 1 | You will need to poll for events in your main loop, and when you detect a MOUSEBUTTONDOWN event you will need to check if it's on the sprite you want, and if it is then start the music. | 1 | 2 | 0 | 0 | My simple question is how can I use pygame.MOUSEBUTTONDOWN on a sprite or item to trigger an event?
e.g. I have item_A and want music to start when I press the object with my mouse. | How to use pygame.MOUSEBUTTONDOWN? | 0 | 0 | 1 | 0 | 0 | 18,074 |
10,859,714 | 2012-06-02T04:10:00.000 | 0 | 0 | 0 | 0 | 0 | python,django,blob,sorl-thumbnail | 0 | 10,971,247 | 0 | 1 | 0 | true | 1 | 0 | It seems like the issue was the jQuery plugin that I was using to upload multiple files. The plugin was the one that split the file into chunks, which were then sent individually as POST requests, and Django didn't know that blob1, blob2, blob3, blob4 were the same file in chunks. | 1 | 0 | 0 | 0 | I am writing a small gallery app, and after extensive testing I submitted a 3MB image.
Basically the gallery app relies on another app that creates an UploadedFile instance for every image; however, I see that for this specific image it has created 4 instances (rows in the db) that belong to the same 3MB image, each with "blob" at the end of its name.
My question is, how can I handle an image as big as this and be able to refer to the whole image in an html tag or a Django templatetag like sorl-thumbnail's?
I'm using Python 2.7.2, Django 1.3.1 and MySQL 5.1 | Django Handle big files ( imageblob ) | 0 | 1.2 | 1 | 0 | 0 | 140
10,871,752 | 2012-06-03T15:53:00.000 | 0 | 0 | 0 | 0 | 0 | python,sockets,network-traffic,traffic-measurement | 0 | 10,872,601 | 0 | 2 | 0 | false | 0 | 0 | IPTraf is an ncurses-based IP LAN monitoring tool. It can generate network statistics including TCP, UDP, ICMP and more.
Since you're thinking of executing it from Python, you may consider using screen (a screen manager with VT100/ANSI terminal emulation) to overcome the ncurses issues, and you may want to pass logging and interval parameters to IPTraf, which force it to log to a file at a given interval. It's a little bit tricky, but eventually you can have what you are looking for by basically parsing the log file. | 1 | 1 | 0 | 0 | I guess it's socket programming. But I have never done socket programming except for running the tutorial examples while learning Python. I need some more ideas to implement this.
What I specifically need is to run a monitoring program on a server which will poll or listen to traffic being exchanged with different IPs across different popular ports. For example, how do I get the data received and sent through port 80 of 192.168.1.10 and 192.168.1.1 (which is the gateway)?
I checked out a number of ready made tools like MRTG, Bwmon, Ntop etc but since we are looking at doing some specific pattern studies, we need to do data capturing within the program.
Idea is to monitor some popular ports and do a study of network traffic across some periods and compare them with some other data.
We would like to figure a way to do all this with Python.... | Python: how to calculate data received and send between two ipaddresses and ports | 0 | 0 | 1 | 0 | 1 | 2,642 |
10,874,949 | 2012-06-03T23:59:00.000 | 0 | 0 | 0 | 1 | 0 | python,eclipse,import,root | 1 | 10,995,927 | 0 | 1 | 0 | false | 0 | 0 | It seems like your PYTHONPATH is different outside/inside Eclipse. Try just removing the Python interpreter and adding it again to gather new paths -- if that's not enough, do: import sys;print('\n'.join(sorted(sys.path))) outside/inside Eclipse to know what's different and fix your paths inside Eclipse. | 1 | 1 | 0 | 0 | I'm developing an installer for a GNU/Linux distribution in Python using Eclipse+PyDev. For some of its tasks the program needs to run with root privileges, but I run Eclipse as a common user.
I have searched a lot on the Internet about how to run an app as root without having to run Eclipse with privileges, but found not a single clue about how to accomplish this in a "nice way". So I tried the "gksu2" python module, which has a gksu2.sudo() function that works in the same way as gksu in bash.
I created a new module, imported gksu2 and executed the main.py module of the app, but I got an "ImportError: No module named ui.regular_ui.wizard". It runs ok without gksu2 in Eclipse, but it doesn't if I use it. I thought it was an environment variables problem, but the sys.path is ok.
The same error happens if I run the app from a terminal, outside of Eclipse. What do you think? | import error in eclipse, running an app as root | 1 | 0 | 1 | 0 | 0 | 98 |
10,877,048 | 2012-06-04T06:16:00.000 | 2 | 1 | 0 | 0 | 0 | python,django,performance,profiling,stress-testing | 0 | 10,906,462 | 0 | 2 | 0 | true | 1 | 0 | You could try configuring your test to ramp up slowly, slow enough so that you can see the CPU gradually increase and then run the profiler before you hit high CPU. There's no point trying to profile code when the CPU is maxed out because at this point everything will be slow. In fact, you really only need a relatively light load to get useful data from a profiler.
Also, by gradually increasing the load you will be better able to see if there is a gradual increase in CPU (suggesting a CPU bottleneck) or if there is a sudden jump in CPU (suggesting perhaps another type of problem, one that would not necessarily be addressed by more CPU).
Try using something like a Constant Throughput Timer to pace the requests; this will prevent JMeter from getting carried away and overloading the system. | 1 | 6 | 0 | 0 | Background
I have a Django application, it works and responds pretty well on low load, but on high load like 100 users/sec, it consumes 100% CPU and then due to lack of CPU slows down.
Problem:
Profiling the application gives me time taken by functions.
This time increases on high load.
Time consumed may be due to complex calculation or for waiting for CPU.
So, how to find the CPU cycles consumed by a piece of code ?
Since reducing the CPU consumption will increase the response time.
I might have written extremely efficient code and need to add more CPU power
OR
I might have some stupid code taking the CPU and causing the slow down ?
Update
I am using JMeter to profile my web app; it gives me a throughput of 2 requests/sec [100 users].
I get an average time of 36 seconds over 100 requests vs 1.25 sec for 1 request.
More Info
Configuration: Nginx + uWSGI with 4 workers
No database used; using responses from a REST API
On 1st hit the response of the REST API gets cached, so it doesn't make a difference.
Using ujson for json parsing.
Curious to know:
Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools.
All those I found were casual snippets of code that perform profiling. | How do you find the CPU consumption for a piece of Python? | 0 | 1.2 | 1 | 0 | 0 | 1,360 |
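For the "computation vs. waiting" question above, a deterministic profiler is the usual first step: the stdlib cProfile module reports per-function time, so a handful of hot entries points at heavy computation rather than CPU starvation. A minimal sketch (busy() is a hypothetical stand-in for a Django view; in practice you would wrap a request handler or use a profiling middleware):

```python
import cProfile
import io
import pstats

def busy(n):
    """Hypothetical stand-in for a CPU-heavy view function."""
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
busy(200000)
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)  # top 5 entries by cumulative time; look for your own functions
```

If your code dominates the report, optimize it; if the time sits in I/O or framework calls, more CPU alone won't fix the slowdown.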
10,897,239 | 2012-06-05T12:23:00.000 | 1 | 0 | 1 | 0 | 0 | python,linux | 0 | 10,897,423 | 0 | 1 | 0 | false | 0 | 0 | I used an editor that does code rollups and understood Python syntax, then I looked for rollups that are in unexpected locations. I don't remember if Kate does that. It's not obvious that there is an issue, but it makes it easier when you are looking for an issue. | 1 | 6 | 0 | 0 | (Warning: Potential flame-war starter. This is however not my goal, the point here is not to discuss the design choices of Python, but to know how to make the best out of it).
Is there a program, script, method (Unix-based, ideally), to display "virtual" brackets around blocs of code in Python, and to keep them where they are so that the code can still be executed even if indenting is broken ?
I realize that Python only uses indentation to define blocks of code, and that the final program may not contain brackets.
However, I find it very annoying that your program can stop functioning just because of an unfortunate and undetected carriage-return.
So, ideally I would be looking for a plugin in a text editor (kate, gedit...) that would:
Display virtual brackets around blocks of code in my Python program
Keep them in place
Generate dynamically the "correct" Python code with the indentation corresponding to where the brackets belong.
(no flame-war, please !) | Virtual brackets in Python | 0 | 0.197375 | 1 | 0 | 0 | 262 |
10,899,192 | 2012-06-05T14:26:00.000 | 0 | 0 | 0 | 0 | 0 | javascript,python,html,automation | 0 | 10,899,256 | 0 | 3 | 0 | false | 1 | 0 | I think it will be easier for you to get a program like AutoIt. | 1 | 2 | 0 | 0 | Every Monday at work, I have the task of printing out Account analysis (portfolio analysis) and Account Positions for over 50 accounts. So I go to the page, click "account analysis", enter the account name, click "format this page for printing", print the output (excluding company disclosures), then I go back to the account analysis page and click "positions" this time; the positions for that account come up. Then I click "format this page for printing", print the output (excluding company disclosures). Then I repeat the process for the other 50 accounts.
I haven't taken any programming classes in the past but I heard using python to automate a html response might help me do this faster. I was wondering if that's true, and if so, how does it work? Also, are there any other programs that could enable me automate this process and save time?
Thank you so much | Automating HTTP navigation and HTML printing using Python | 1 | 0 | 1 | 0 | 1 | 773 |
10,906,198 | 2012-06-05T23:01:00.000 | 0 | 1 | 0 | 1 | 0 | python,exe,dmg | 0 | 10,906,453 | 0 | 1 | 0 | false | 1 | 0 | If you mean specifically with Python, as I gather from tagging that in your question, it won't simply run the same way as Java will, because there's no equivalent Virtual Machine.
If the user has a Python interpreter on their system, they they can simply run the .py file.
If they do not, you can bundle the interpreter and needed libraries into an executable using Py2Exe, cxFreeze, or bbFreeze. For replacing a dmg, App2Exe does something similar.
However, the three commands you listed are not Python-related, and rely on functionality that is not necessarily available on Windows or Mac, so it might not be possible. | 1 | 0 | 0 | 0 | Newbie question I am finding it hard to get my head around.
If I wanted to use one of the many tools out there like rsync, lsync or s3cmd, how can you build these into a program for non-computer-savvy people to use?
I.e. I am comfortable opening a terminal and running s3cmd, which is developed in Python; how would I go about shipping this as a dmg file for Mac or an exe file for Windows,
so that a user could just install the dmg or exe and then have s3cmd, lsync or rsync on their computer?
I can open up Eclipse, code a simple app in Java and then export it as a dmg or exe, but I cannot figure out how you do this for other languages. Say, how do I write a simple piece of code that I can save as a dmg or exe and that, once installed, will add a folder to my desktop, or something simple like that to get me started? | Compiling and running code as dmg or exe | 0 | 0 | 1 | 0 | 0 | 1,925
10,920,199 | 2012-06-06T18:43:00.000 | 0 | 0 | 0 | 0 | 0 | python,metrics,recommendation-engine,personalization,cosine-similarity | 0 | 10,955,138 | 0 | 2 | 0 | false | 0 | 0 | Recommender systems in the land of research generally work on a scale of 1 - 5. It's quite nice to get such an explicit signal from a user. However I'd imagine the reality is that most users of your system would never actually give a rating, in which case you have nothing to work with.
Therefore I'd track page views but also try and incorporate some explicit feedback mechanism (1-5, thumbs up or down etc.)
Your algorithm will have to take this into consideration. | 2 | 0 | 1 | 0 | I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item.
My question: what are some good methods to use these different metrics for the recommendation system? Maybe merge and normalize them in some way? | Recommendation system - using different metrics | 0 | 0 | 1 | 0 | 0 | 614 |
10,920,199 | 2012-06-06T18:43:00.000 | 0 | 0 | 0 | 0 | 0 | python,metrics,recommendation-engine,personalization,cosine-similarity | 0 | 10,956,591 | 0 | 2 | 0 | false | 0 | 0 | For a recommendation system, there are two problems:
how to quantify the user's interest in a certain item based on the numbers you collected
how to use the quantified interest data to recommend new items to the user
I guess you are more interested in the first problem.
To solve the first problem, you need either a linear combination or some other fancier function to combine all the numbers. There is really no single universal function for all systems; it heavily depends on the type of your users and your items. If you want a high-quality recommendation system, you need to have some data to do machine learning to train your functions.
For the second problem, it's somewhat the same thing, plus you need to analyze all the items to abstract some relationships among them. You can google "Netflix prize" for some interesting info. | 2 | 0 | 1 | 0 | I'm looking to implement an item-based news recommendation system. There are several ways I want to track a user's interest in a news item; they include: rating (1-5), favorite, click-through, and time spent on news item.
My question: what are some good methods to use these different metrics for the recommendation system? Maybe merge and normalize them in some way? | Recommendation system - using different metrics | 0 | 0 | 1 | 0 | 0 | 614 |
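One concrete way to "merge and normalize" the four signals, in the spirit of the linear-combination suggestion in the second answer: scale each metric into [0, 1] and take a weighted sum. The weights, caps, and the 1-5 / 300-second ranges below are illustrative assumptions only; a real system would fit them from data, as that answer notes.

```python
def minmax(value, lo, hi):
    """Scale value into [0, 1] given the metric's expected bounds."""
    if hi == lo:
        return 0.0
    return (value - lo) / float(hi - lo)

def interest_score(rating, favorited, clicks, seconds_on_page,
                   weights=(0.5, 0.2, 0.1, 0.2)):
    """Weighted sum of normalized signals; weights sum to 1.0."""
    signals = (
        minmax(rating, 1, 5),                       # rating on a 1-5 scale
        1.0 if favorited else 0.0,                  # favorite is binary
        minmax(min(clicks, 10), 0, 10),             # cap runaway click counts
        minmax(min(seconds_on_page, 300), 0, 300),  # cap dwell time at 5 min
    )
    return sum(w * s for w, s in zip(weights, signals))

print(interest_score(rating=5, favorited=True, clicks=3, seconds_on_page=120))
# about 0.81
```

Capping clicks and dwell time keeps a single pathological session (e.g. a tab left open) from dominating the score.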
10,929,285 | 2012-06-07T09:39:00.000 | 0 | 0 | 0 | 0 | 0 | python,qt,checkbox,combobox,pyqt | 0 | 10,930,835 | 0 | 1 | 0 | true | 0 | 1 | You can do this using the model->view framework, but it means implementing a custom model to support checkable data.
You create a custom model by subclassing QAbstractItemModel. This presents an API to the QComboBox for accessing the underlying data. Off the top of my head I think you'll need to implement the flags method to indicate that you support ItemIsUserCheckable for the indexes you want to be able to check. You'll also need to implement data() which reports back the data state from your underlying data, and setData() which accept input from the QComboBox and changes the underlying data.
You then set this as the model for the QComboBox using setModel().
This isn't really beginner stuff, but the model->view framework in Qt is one of it's most important and valuable features and well worth getting to grips with. | 1 | 0 | 0 | 0 | I'm new to PyQt and I have to work on an application which use it. For the moment, I don't have any problem, but I'm stuck on something. I have to create a "ComboBox with its items checkable, like a CheckBox". This ComboBox should contain many image format, like "jpg", "exr" or "tga", and I will have to pick up the text of the checked option and put it in a variable. The problem is that I can't find a thing about making items checkable using a ComboBox (if you know how to, It would gladly help me !)
Since I can't do it with a ComboBox, maybe I can do it with a QList I thought, but I can't find anything either which is understandable for a beginner like me. I have read stuff about flags and "Qt.ItemIsUserCheckable" but I don't know how to use it in a easy way :(
Can you help me ? Thanks !
PyQt version : 4.4.3
Python version : 2.6 | How to simply create a list of CheckBox which has a dropdown list like a ComboBox with PyQt? | 0 | 1.2 | 1 | 0 | 0 | 1,527 |