Q_Id (int64) | CreationDate (string) | Users Score (int64) | Other (int64) | Python Basics and Environment (int64) | System Administration and DevOps (int64) | Tags (string) | A_Id (int64) | AnswerCount (int64) | is_accepted (bool) | Web Development (int64) | GUI and Desktop Applications (int64) | Answer (string) | Available Count (int64) | Q_Score (int64) | Data Science and Machine Learning (int64) | Question (string) | Title (string) | Score (float64) | Database and SQL (int64) | Networking and APIs (int64) | ViewCount (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
14,071,294 |
2012-12-28T14:50:00.000
| 12 | 0 | 0 | 0 |
python,django,python-2.7
| 14,071,410 | 2 | true | 1 | 0 |
Check settings.py for django.contrib.staticfiles in INSTALLED_APPS and django.core.context_processors.static in TEMPLATE_CONTEXT_PROCESSORS
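For reference, a minimal sketch of what those two entries look like in a Django 1.4-era settings.py (only the relevant lines are shown):
    # settings.py -- trimmed to the entries relevant to collectstatic
    INSTALLED_APPS = (
        'django.contrib.contenttypes',
        'django.contrib.staticfiles',   # without this, "collectstatic" is an unknown command
        # ... your other apps ...
    )

    TEMPLATE_CONTEXT_PROCESSORS = (
        'django.core.context_processors.static',   # exposes {{ STATIC_URL }} in templates
        # ... the other default processors ...
    )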
| 2 | 4 | 0 |
So I'm trying to run collectstatic to push some files to AWS, but I keep getting an "unknown command" error. When running manage.py help I get a list of subcommands, and sure enough collectstatic is not there. I have also looked in the INSTALLED_APPS part of settings.py, and the staticfiles app is installed. Python is version 2.7 and Django is 1.4.
This project was not built by me. I'm just a frontend developer who has been dragged into the world of programming D: Maybe the solution is really easy.
Thanks!
|
Collectstatic command is not available in Django 1.4
| 1.2 | 0 | 0 | 3,442 |
14,071,294 |
2012-12-28T14:50:00.000
| 4 | 0 | 0 | 0 |
python,django,python-2.7
| 18,474,545 | 2 | false | 1 | 0 |
For those for whom the accepted answer did not work, doing
python manage.py collectstatic
worked while
python django-admin.py collectstatic did not.
Maybe that can help some.
| 2 | 4 | 0 |
So I'm trying to run collectstatic to push some files to AWS, but I keep getting an "unknown command" error. When running manage.py help I get a list of subcommands, and sure enough collectstatic is not there. I have also looked in the INSTALLED_APPS part of settings.py, and the staticfiles app is installed. Python is version 2.7 and Django is 1.4.
This project was not built by me. I'm just a frontend developer who has been dragged into the world of programming D: Maybe the solution is really easy.
Thanks!
|
Collectstatic command is not available in Django 1.4
| 0.379949 | 0 | 0 | 3,442 |
14,072,428 |
2012-12-28T16:17:00.000
| 0 | 0 | 1 | 0 |
python,c,concurrency,pbkdf2,commoncrypto
| 14,350,237 | 2 | false | 0 | 0 |
Lacking documentation or source code, one option is to build a test app with say 10 threads looping on calls to CCKeyDerivationPBKDF with a random selection from say 10 different sets of arguments with 10 known results.
Each thread checks the result of a call to make sure it is what is expected. Each thread should also have a usleep() call for some random amount of time (bell curve sitting on say 10% of the time each call to CCKeyDerivationPBKDF takes) in this loop in order to attempt to interleave operations as much as possible.
You'll probably want to instrument it with debugging that keeps track of how much concurrency you are able to generate. With a 10% sleep time and 10 threads, you should be able to keep 9 threads concurrent.
If it makes it through an aggregate of say 100,000,000 calls without an error, I'd assume it was thread safe. Of course you could run it for much longer than that to get greater assurances.
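A rough Python sketch of the same stress-test idea, using hashlib.pbkdf2_hmac as a stand-in for CCKeyDerivationPBKDF (the real test would call the CommonCrypto function from C or via ctypes; thread counts, iteration counts and sleep times here are illustrative):
    import hashlib, random, threading, time

    # known-good (password, salt) pairs with expected outputs computed once up front
    CASES = [(b"pw%d" % i, b"salt%d" % i) for i in range(10)]
    EXPECTED = {pw: hashlib.pbkdf2_hmac("sha1", pw, salt, 1000) for pw, salt in CASES}

    def worker(iterations=10000):
        for _ in range(iterations):
            pw, salt = random.choice(CASES)
            result = hashlib.pbkdf2_hmac("sha1", pw, salt, 1000)
            assert result == EXPECTED[pw], "mismatch -- possible thread-safety bug"
            time.sleep(random.uniform(0, 0.001))   # jitter to interleave the threads

    threads = [threading.Thread(target=worker) for _ in range(10)]
    for t in threads: t.start()
    for t in threads: t.join()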
| 2 | 0 | 0 |
I'm using CCKeyDerivationPBKDF to generate and verify password hashes in a concurrent environment and I'd like to know whether it is thread safe. The documentation of the function doesn't mention thread safety at all, so I'm currently using a lock to be on the safe side, but I'd prefer not to use a lock if I don't have to.
|
Is CCKeyDerivationPBKDF thread safe?
| 0 | 0 | 0 | 223 |
14,072,428 |
2012-12-28T16:17:00.000
| 1 | 0 | 1 | 0 |
python,c,concurrency,pbkdf2,commoncrypto
| 14,394,303 | 2 | true | 0 | 0 |
After going through the source code of CCKeyDerivationPBKDF(), I find it to be thread-unsafe. While the code for CCKeyDerivationPBKDF() uses many library functions that are thread-safe (e.g., bzero), most of the user-defined functions (e.g., PRF), and the underlying functions they call, are potentially thread-unsafe (for example, due to the use of several pointers and unsafe casting of memory, e.g., in CCHMac). Unless they make all the underlying functions thread-safe, or at least provide some mechanism to make it conditionally thread-safe, I would suggest sticking with your approach, or modifying the CommonCrypto code to make it thread-safe and using that code.
Hope it helps.
| 2 | 0 | 0 |
I'm using CCKeyDerivationPBKDF to generate and verify password hashes in a concurrent environment and I'd like to know whether it is thread safe. The documentation of the function doesn't mention thread safety at all, so I'm currently using a lock to be on the safe side, but I'd prefer not to use a lock if I don't have to.
|
Is CCKeyDerivationPBKDF thread safe?
| 1.2 | 0 | 0 | 223 |
14,073,030 |
2012-12-28T17:04:00.000
| 1 | 0 | 0 | 0 |
python,sql,django,redis,feed
| 14,074,169 | 3 | false | 1 | 0 |
You said Redis? Everything is better with Redis.
Caching is one of the best ideas in software development; no matter what, if you use materialized views you should also consider trying to cache them. Believe me, your users will notice the difference.
| 2 | 4 | 0 |
Need a way to improve performance on my website's SQL based Activity Feed. We are using Django on Heroku.
Right now we are using actstream, which is a Django App that implements an activity feed using Generic Foreign Keys in the Django ORM. Basically, every action has generic foreign keys to its actor and to any objects that it might be acting on, like this:
Action:
(Clay - actor) wrote a (comment - action object) on (Andrew's review of Starbucks - target)
As we've scaled, it's become way too slow, which is understandable because it relies on big, expensive SQL joins.
I see at least two options:
Put a Redis layer on top of the SQL database and get activity feeds from there.
Try to circumvent the Django ORM and do all the queries in raw SQL, which I understand can improve performance.
If anyone has thoughts on either of these two, or other ideas, I'd love to hear them.
|
Good way to make a SQL based activity feed faster
| 0.066568 | 1 | 0 | 499 |
14,073,030 |
2012-12-28T17:04:00.000
| 1 | 0 | 0 | 0 |
python,sql,django,redis,feed
| 14,201,647 | 3 | true | 1 | 0 |
Went with an approach that sort of combined the two suggestions.
We created a master list of every action in the database, which included all the information we needed about the actions, and stuck it in Redis. Given an action ID, we can now do a Redis look up on it and get a dictionary object that is ready to be returned to the front end.
We also created action id lists that correspond to all the different types of activity streams that are available to a user. So given a user id, we have his friends' activity, his own activity, favorite places activity, etc, available for look up. (These I guess correspond somewhat to materialized views, although they are in Redis, not in PSQL.)
So we get a user's feed as a list of action ids. Then we get the details of those actions by look ups on the ids in the master action list. Then we return the feed to the front end.
Thanks for the suggestions, guys.
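A minimal redis-py sketch of that layout; it assumes a Redis server on localhost, and the key names and fields are illustrative rather than the poster's actual schema:
    import json
    import redis

    r = redis.Redis()

    def record_action(action_id, action):
        # master list: one JSON blob per action, keyed by id
        r.set("action:%s" % action_id, json.dumps(action))
        # per-user streams: lists of action ids, newest first
        r.lpush("feed:user:%s" % action["actor_id"], action_id)

    def get_feed(user_id, limit=20):
        ids = r.lrange("feed:user:%s" % user_id, 0, limit - 1)
        return [json.loads(r.get("action:%s" % i.decode())) for i in ids]

    record_action(1, {"actor_id": 42, "verb": "wrote", "target": "review 7"})
    print(get_feed(42))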
| 2 | 4 | 0 |
Need a way to improve performance on my website's SQL based Activity Feed. We are using Django on Heroku.
Right now we are using actstream, which is a Django App that implements an activity feed using Generic Foreign Keys in the Django ORM. Basically, every action has generic foreign keys to its actor and to any objects that it might be acting on, like this:
Action:
(Clay - actor) wrote a (comment - action object) on (Andrew's review of Starbucks - target)
As we've scaled, it's become way too slow, which is understandable because it relies on big, expensive SQL joins.
I see at least two options:
Put a Redis layer on top of the SQL database and get activity feeds from there.
Try to circumvent the Django ORM and do all the queries in raw SQL, which I understand can improve performance.
If anyone has thoughts on either of these two, or other ideas, I'd love to hear them.
|
Good way to make a SQL based activity feed faster
| 1.2 | 1 | 0 | 499 |
14,074,805 |
2012-12-28T19:40:00.000
| 1 | 0 | 1 | 0 |
python,configobj
| 14,075,363 | 6 | false | 0 | 0 |
Configobj is for reading and writing ini-style config files. You are apparently trying to use it to write bash scripts. That's not something that is likely to work.
Just write the bash-script like you want it to be, perhaps using a template or something instead.
To make ConfigParser not write the spaces around the '=' probably requires that you subclass it. I would guess that you have to modify the write method, but only reading the code can help there. :-)
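A minimal sketch of the "just write the file yourself" route mentioned above, which sidesteps the delimiter issue entirely (function and file names are made up):
    def write_bash_vars(path, values):
        # emit bash-sourceable lines with no spaces around '='
        with open(path, "w") as fh:
            for key, val in values.items():
                fh.write('%s="%s"\n' % (key, val))

    write_bash_vars("settings.sh", {"VARIABLE": "value"})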
| 1 | 2 | 0 |
Simple question: is it possible to make configobj not put a space before and after the '=' in a configuration entry?
I'm using configobj to read and write a file that is later processed by a bash script, so putting an entry like:
VARIABLE = "value"
breaks the bash script, it needs to always be:
VARIABLE="value"
Or, if someone has another suggestion about how to read and write a file with this kind of entries (and restrictions), that is fine too.
Thanks
|
Make python configobj to not put a space before and after the '='
| 0.033321 | 0 | 0 | 1,931 |
14,075,972 |
2012-12-28T21:22:00.000
| 1 | 0 | 0 | 1 |
python,networking,twisted
| 14,077,687 | 1 | true | 0 | 0 |
If the datagram socket is connected, it can receive ICMP Port Unreachable messages via the Sockets API, which presumably maps into calling this method. Note that I am not speaking of the TCP connect operation here, but the Sockets connect() method, which can be called on a UDP socket, and which presumably maps into some method in the API you are using.
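As a standalone illustration of that point (plain sockets here, not Twisted's API): a connected UDP socket surfaces the ICMP Port Unreachable as a connection-refused error on a later send/recv:
    import errno
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("127.0.0.1", 50000))   # assuming nothing is listening on this port
    s.send(b"ping")
    try:
        s.recv(1024)                  # the ICMP error is typically reported on a follow-up call
    except socket.error as exc:
        if exc.errno == errno.ECONNREFUSED:
            print("nothing is listening on that port")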
| 1 | 5 | 0 |
This is a method of the DatagramProtocol class in Twisted. As I understand it, the UDP protocol doesn't guarantee that someone is listening on the given port, even when using ConnectedDatagramProtocol.
Can someone explain to me when this method is called, and how I am supposed to check whether someone is listening to my transmission over UDP?
|
What does ConnectionRefused do?
| 1.2 | 0 | 0 | 102 |
14,076,674 |
2012-12-28T22:38:00.000
| 2 | 0 | 0 | 0 |
python,tkinter
| 14,077,404 | 4 | false | 0 | 1 |
Tkinter does not have a widget that can render a web page.
| 1 | 4 | 0 |
So my app needs to be able to open a single webpage (and it must be from the internet, not saved) in it, and specifically I'd like to use the Tkinter GUI toolkit since it's the one I'm most comfortable with. On top of that, though, I'd like to be able to generate events in the window (say a mouse click) but without actually using the mouse. What's a good method to go about this?
EDIT: To clarify this a bit, I need a way to load a webpage, or maybe even a specific Java applet, into a Tkinter widget or window. Or, if not that, perhaps another method where I can generate mouse and keyboard events without using either the mouse or the keyboard.
|
Using Tkinter to open a webpage
| 0.099668 | 0 | 0 | 8,422 |
14,076,819 |
2012-12-28T22:56:00.000
| 2 | 0 | 1 | 0 |
python,xml-parsing,document-imaging
| 14,077,019 | 1 | true | 0 | 0 |
Parsing refers to reading a series of tokens and matching rules in a grammar. If you can specify your problem in this way you can write the grammar using pyparsing.
If what you are interested in doing is extracting the structure of an XML document, then you can use the standard python module xml.etree.ElementTree. Also look at beautifulsoup.
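A minimal ElementTree sketch of the extraction step; the file, element and attribute names here are placeholders for whatever your document classes actually use:
    import xml.etree.ElementTree as ET

    tree = ET.parse("page.xml")                 # hypothetical input file
    root = tree.getroot()
    doc_class = root.get("class")               # assumed attribute on the root element
    for field in root.iter("field"):            # assumed element name
        print(field.get("name"), field.text)
    # dispatch to the per-class rules from here, e.g. RULES[doc_class](root)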
| 1 | 0 | 0 |
Given XML objects of many classes (say, types of document images), I need to generate some outputs depending on the class of the object, and a complex set of mathematical rules relating the contents of the XML file.
What is the generic name of this task (parsing?) and what is the easiest way to encode separate rules for each class, bearing in mind that the rules may involve mathematical relationships. I think I should create a file for each class to keep it manageable using a DSL but I am not sure. Someone suggested incorporating a full-blown Lua or Javascript interpreter. Is this a good idea? I want to keep it lean, and simple.
|
How to encode parsing rules in python?
| 1.2 | 0 | 0 | 176 |
14,077,092 |
2012-12-28T23:29:00.000
| 0 | 0 | 1 | 0 |
python,opencv,dependencies,webcam
| 40,101,267 | 1 | false | 0 | 0 |
Camera 0 points to your default camera driver, camera 1 to your secondary driver, camera 2 to your tertiary and so on.
Which means that even with a single camera hardware, you can have multiple drivers that can access it.
Let's assume that your primary cam driver (probably provided by HP) was corrupted during the uninstallation. This would mean that when you call camera 0, you're instantiating the HP driver (now corrupted), which is giving you a black screen.
However, since your camera hardware is unaffected and so are your secondary and tertiary camera drivers, when you access camera 1, your secondary camera driver streams the live feed correctly.
In case you do not have a tertiary camera driver, camera 2 will point to the secondary driver. Hence camera 2 will call the driver corresponding to camera 1 if no driver is associated with camera 2
| 1 | 0 | 0 |
I am running windows 7 64 bit (32 bit python) on an hp touchsmart 600. A while ago I uninstalled then reinstalled opencv 2.4.3. In between the uninstalling and reinstalling I uninstalled some programs I thought weren't being used. Now opencv only displays a black screen when before opencv was able to access my webcam correctly.
However, if I use camera 2 (i.e. cam = create_capture(2,...)) opencv is able to use my webcam correctly. Why did camera 0 suddenly stop working? Did it somehow become camera 2 or could I have uninstalled a dependency that opencv needed to access my webcam? Also, using camera 1 and 3 works as well, even though I only have one webcam.
|
opencv - camera 0 not working
| 0 | 0 | 0 | 1,556 |
14,078,640 |
2012-12-29T04:15:00.000
| 1 | 0 | 0 | 1 |
python,django,google-app-engine
| 14,079,702 | 1 | true | 1 | 0 |
You can use Django 1.4 with CloudSQL.
If you're using the HRD, you'd want to use django-nonrel (the successor to App Engine Helper).
While django-nonrel works, the documentation is a bit lacking at the moment.
| 1 | 0 | 0 |
I would like to remove dependencies on the old-style App Engine Helper for Django in my Python-based App-Engine application . At the same time, I would like to upgrade to Python2.7 and Django1.4. I have a few questions about the upgrade process:
1) The new App Engine SDK (Version 1.7.4) states that Django is fully supported. Does this mean that neither the App Engine Helper nor the Django-norel will be required in order for Django to function on the App Engine?
2) Assuming that the answer to my previous question is that no external patches/helpers are required, I am having trouble finding an example App Engine/Django application based on the new SDK. Do you know where I could find a Django/AppEngine example that does not rely on external patches/helpers? (this will give me a known good starting point, which I can then port my existing code into).
3) Currently my database models inherit from BaseModel which was provided in the App Engine Helper. In order to not break my website, what should these models inherit from given the BaseModel will no longer exist?
|
Google App Engine - Upgrading from App Engine Helper
| 1.2 | 0 | 0 | 91 |
14,078,762 |
2012-12-29T04:41:00.000
| 1 | 0 | 0 | 0 |
python,django,import
| 14,078,822 | 2 | false | 1 | 0 |
from catalog.models import Product is fine, and I think it's better than the example, because if you want to reuse the app you would have to change every single import statement that uses your project name. Don't do from myproject.myapp.models import MyModel; it's bad practice. If you're sure that the models you are importing are in the same directory, I think from models import Product is best; otherwise, use from myapp.models import MyModel.
| 1 | 0 | 0 |
I am going through another django tutorial and I noticed for the imports the author uses
from project.app.file import item, but this doesn't work unless I do
from app.file import item
Example:
Authors version:
from ecomstore.catalog.models import Product
which doesn't work
Error: ImportError: No module named catalog.models
but this does
from catalog.models import Product
I am just wondering why the author's version doesn't work for me, and whether there is a setting I can change to fix this so it's easier to follow along.
I am seeing this as an issue in following along and trying to understand how Django handles this, and where the settings or configuration for this reside so I can change them within my project.
Thanks
|
django import project.app vs import app
| 0.099668 | 0 | 0 | 345 |
14,079,327 |
2012-12-29T06:33:00.000
| 3 | 0 | 0 | 0 |
python,django
| 14,079,421 | 2 | false | 1 | 0 |
Django 1.5 is going to be released very soon. You can start using it now.
| 1 | 4 | 0 |
I'm new to Django (coming from PHP Yii), and I want to learn it by developing some web site. Should I write it on Django 1.4 or 1.5?
If you had to develop a new (production) web site now, would you use Django 1.5?
|
Is it time to use Django 1.5?
| 0.291313 | 0 | 0 | 1,665 |
14,079,587 |
2012-12-29T07:13:00.000
| 7 | 0 | 1 | 0 |
python,integer
| 14,079,631 | 4 | false | 0 | 0 |
There's no need. The interpreter handles allocation behind the scenes, effectively promoting from one type to another as needed without you doing anything explicit.
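A small illustration of that: Python simply widens integers as needed, and you can inspect the size of a particular value with bit_length() if you care:
    n = 2 ** 31 - 1
    print(n.bit_length())          # 31 -- would fit in a 32-bit signed int
    print((2 ** 63).bit_length())  # 64 -- no overflow, the value just grows
    fits_in_32 = -2 ** 31 <= n < 2 ** 31
    print(fits_in_32)              # True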
| 1 | 3 | 0 |
I want to differentiate between 32-bit and 64-bit integers in Python. In C it's very easy as we can declare variables using int_64 and int_32. But in Python how do we differentiate between 32-bit integers and 64-bit integers?
|
How to determine whether the number is 32-bit or 64-bit integer in Python?
| 1 | 0 | 0 | 5,901 |
14,079,650 |
2012-12-29T07:23:00.000
| 2 | 0 | 1 | 0 |
c#,java,c++,python
| 14,079,956 | 1 | false | 0 | 0 |
It is theoretically possible. But in practice, it would be a lot of work, and the result would not be a small program. In fact, it would be roughly equivalent in size and functionality / complexity to a standard JVM. Which leads to the obvious point that it is unlikely to be worth the effort.
I suggest that you just use a standard JVM, and leverage the (probably) hundreds of man-years of effort that the implementors have put into building high quality JIT compilers ...
| 1 | 0 | 0 |
Instead of needing something like Java, is there a way I could make a program that has a small piece of machine code to compile itself?
|
Is There Any Way To Make A Self JIT Compiled Program?
| 0.379949 | 0 | 0 | 121 |
14,093,870 |
2012-12-30T19:26:00.000
| 4 | 0 | 0 | 0 |
javascript,python,webview,webkit,web-crawler
| 14,094,212 | 1 | true | 1 | 1 |
There is no real way to determine if that page is fully loaded.
One method is to determine the amount of time since the last request. However, some pages will make repeated requests continually. This is common with tracking scripts and some ad scripts.
What I would do is wait a set amount of time after the web view has said it finished loading... 5 seconds or so. It isn't perfect, but it's the best you've got, as there is no way to determine what "fully loaded" means for an arbitrary page.
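A rough sketch of that timer idea with the pygtk/pywebkitgtk stack the question uses; the 5-second figure is arbitrary, and in practice the WebView usually sits inside a (possibly offscreen) gtk.Window:
    import gobject
    import gtk
    import webkit

    def on_load_finished(view, frame):
        # "load-finished" fired; give JavaScript-driven requests a few more seconds
        gobject.timeout_add(5000, handle_page, view)

    def handle_page(view):
        # scrape whatever you need here, e.g. via view.execute_script(...)
        gtk.main_quit()
        return False            # returning False stops the timeout from repeating

    view = webkit.WebView()
    view.connect("load-finished", on_load_finished)
    view.open("http://example.com/")
    gtk.main()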
| 1 | 6 | 0 |
I am using python webkit.WebView and gtk to crawl a web page. However, the web page is kind of dynamically loaded by javascript.
The WebView "load-finished" event is not sufficient to handle this. Is there any indicator/event to let me know that the page is really fully loaded even the content produced by javascript?
Thanks,
|
How do I know a page is really fully loaded?
| 1.2 | 0 | 0 | 218 |
14,094,224 |
2012-12-30T20:10:00.000
| 2 | 0 | 1 | 1 |
python,python-3.x,exec
| 14,094,933 | 3 | false | 0 | 0 |
Import sys, change sys.path by appending the path at run time, and then import the module; that will help.
| 1 | 7 | 0 |
Let's say I have a file foo.py, and within the file I want to execute a file bar.py. But, bar.py isn't in the same directory as foo.py, it's in a subdirectory call baz. Will execfile work? What about os.system?
|
How to run a Python file not in directory from another Python file?
| 0.132549 | 0 | 0 | 6,750 |
14,096,057 |
2012-12-31T00:10:00.000
| 0 | 0 | 0 | 0 |
python,sqlite
| 14,096,117 | 4 | false | 0 | 0 |
Verify that sqlite exists in your PATH, and check the privileges on the file test.db.
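For what it's worth, two likely fixes sketched here: the command-line binary is normally called sqlite3 and is run from the operating-system shell, while inside the Python shell you would use the sqlite3 module instead of typing a shell command:
    # at the OS shell (not the Python prompt):
    #   sqlite3 test.db

    # from the Python shell:
    import sqlite3

    conn = sqlite3.connect("test.db")    # creates the file if it does not exist
    conn.execute("CREATE TABLE IF NOT EXISTS demo (x INTEGER)")
    conn.commit()
    conn.close()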
| 1 | 1 | 0 |
I just got python and typing:
sqlite test.db
..into the shell, but I get a syntax error....what have I missed?
|
Trying to use sqlite with python shell
| 0 | 0 | 0 | 4,471 |
14,096,829 |
2012-12-31T02:34:00.000
| 3 | 1 | 1 | 0 |
python,python-zipfile
| 14,096,860 | 1 | true | 0 | 0 |
No. Read out the rest of the archive and write it to a new zip file.
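A minimal sketch of that copy-everything-else approach; archive and folder names are placeholders:
    import zipfile

    def drop_folder(src, dst, folder):
        # copy every entry except those under the unwanted folder
        with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
            for info in zin.infolist():
                if not info.filename.startswith(folder.rstrip("/") + "/"):
                    zout.writestr(info, zin.read(info.filename))

    drop_folder("archive.zip", "archive_trimmed.zip", "unwanted_dir")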
| 1 | 0 | 0 |
Is it possible to delete a folder from inside a file using the ZipFile module in Python?
|
Deleting a folder in a zip file using the Python zipfile module
| 1.2 | 0 | 0 | 422 |
14,100,093 |
2012-12-31T09:54:00.000
| 3 | 0 | 1 | 0 |
python,node.js,amqp
| 14,100,129 | 3 | true | 0 | 0 |
As far as I am aware, there isn't an equivalent to pickle in JavaScript (or in the standard node libraries).
| 1 | 15 | 0 |
One of Python's features is the pickle function, that allows you to store any arbitrary anything, and restore it exactly to its original form. One common usage is to take a fully instantiated object and pickle it for later use. In my case I have an AMQP Message object that is not serializable and I want to be able to store it in a session store and retrieve it which I can do with pickle. The primary difference is that I need to call a method on the object, I am not just looking for the data.
But this project is in nodejs and it seems like with all of node's low-level libraries there must be some way to save this object, so that it could persist between web calls.
The use case is that a web page picks up a RabbitMQ message and displays the info derived from it. I don't want to acknowledge the message until the message has been acted on. I would just normally just save the data in session state, but that's not an option unless I can somehow save it in its original form.
|
What would be the equivalent of Pythons "pickle" in nodejs
| 1.2 | 0 | 1 | 6,428 |
14,100,792 |
2012-12-31T11:09:00.000
| 0 | 0 | 0 | 0 |
python,openerp
| 14,102,667 | 2 | false | 1 | 0 |
Why can't you use _constraint? You will only get a warning when you are saving the record.
| 1 | 2 | 0 |
How can we compare two rows in a oneTomany table in OpenERP6.1?
I have a main table, say 'XX' and i have a oneTomany table, say 'YY' corresponding to
that table.
Now, i have three columns in the 'YY' table.Every time i create records into
this table, i want to check if the values in the three columns are identical.
i.e if i click the create button and entered the first row with values
'happy','new','year',
the next time when you enter the same values, you should be prompted with
a message that these values should not be repeated.
|
Compare two rows in a one2many table in OpenERP6.1
| 0 | 0 | 0 | 467 |
14,101,063 |
2012-12-31T11:36:00.000
| 3 | 0 | 0 | 0 |
c#,python,ui-automation,pywinauto
| 14,104,632 | 2 | false | 0 | 1 |
The short answer is that there's no good way to automate sub-controls of a DataGridView using PyWinAuto.
If you want to read data out of a DataGridView (e.g. read the text contents of a cell, or determine whether a checkbox is checked), you are completely out of luck. If you want to control a DataGridView, there are two approaches that you can try:
clicking at various coordinate offsets.
sending keypresses to it to mimic keyboard navigation.
These may work if your DataGridView has a small amount of data in it, but once the DataGridView starts needing scrollbars you're out of luck. Furthermore, clicking at offsets is sensitive to the sizes of the rows and columns, and if the columns can be resized then this approach will never be reliable.
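A rough pywinauto sketch of both workarounds; the window title and control name are invented, the coordinates are relative to the grid control, and method names can differ between pywinauto versions:
    from pywinauto.application import Application

    app = Application().connect(title="My C# App")   # hypothetical window title
    dlg = app.window(title="My C# App")
    grid = dlg["DataGridView"]                       # best-match lookup, name assumed

    grid.click_input(coords=(40, 25))      # click roughly where the first cell sits
    grid.type_keys("{DOWN 2}{SPACE}")      # arrow down two rows, toggle a checkbox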
| 1 | 0 | 0 |
I was working with GUI automation for visual studio C# desktop application.
There I have a DataGridView, and inside the grid I have combo boxes and check boxes.
I tried to automate these using pywinauto, but I can only get the grid layout control;
I am not able to get the controls for the internal things
(I tried print _control_identifiers(), Swapy, AutoIT Window Info and winspy as well).
Can anyone please tell me how to automate a Visual Studio C# DataGridView and its sub-controls using pywinauto for a desktop application?
|
C# GUI automation using PyWinAuto
| 0.291313 | 0 | 0 | 1,878 |
14,106,229 |
2012-12-31T21:35:00.000
| 4 | 0 | 0 | 0 |
python,python-2.7,user-interface
| 14,106,272 | 1 | true | 0 | 1 |
There is no "best" nor "easiest". All of the toolkits you mention have strengths and weaknesses. I've had significant experience with wxPython and Tkinter, and both are nice. I would say Tkinter is a little easier, wxPython is a little more full-featured.
When someone asks me this question I tell them just to pick one. Once you learn one -- any one -- you'll be in a better position to decide for yourself which one is better than the others.
I recommend Tkinter just because you probably already have it, but if you're not afraid of installing other GUI toolkits, pick any of them and start coding.
| 1 | 6 | 0 |
I found some many Python GUI toolkits.
The ones I like are:
wxPython
pyGTK
Tkinter
pyQt
pySide
Which one is the best and easiest to use?
And by the way what is the difference in pyQt and pySide? They both seem to be alike :/.
|
Which is the best and easiest Python GUI toolkit?
| 1.2 | 0 | 0 | 8,808 |
14,111,460 |
2013-01-01T14:58:00.000
| 4 | 0 | 0 | 1 |
python,performance,tcpserver,httpserver
| 14,111,484 | 2 | false | 0 | 0 |
Neither of those built-in libraries was meant for serious production use. Get real implementations, for example, from Twisted, or Tornado, or gunicorn, etc, etc, there are lots of them. There's no need to stick with the standard library modules.
The performance, and probably the robustness of the built-in libraries is poor.
| 1 | 4 | 0 |
I've spent a few days on and off trying to get some hard statistics on what kind of performance you can expect from using the HTTPServer and/or TCPServer built-in libraries in Python.
I was wondering if anyone can give me any idea's as to how either/or would handle serving HTTP requests and if they would be able to hold up in production environments or in situations with high traffic and if anyone had any tips or clues that would improve performance in these situations. (Assuming that there is no access to external libraries like Twisted etc)
Thanks.
|
Performance of Python HTTPServer and TCPServer
| 0.379949 | 0 | 1 | 2,663 |
14,113,906 |
2013-01-01T20:23:00.000
| 0 | 0 | 0 | 1 |
python,macos,command-line
| 14,113,933 | 3 | false | 0 | 0 |
Add the directory it is stored in to your PATH variable? From your prompt, I'm guessing you're using an sh-like shell and from your tags, I'm further assuming OS X. Go into your .bashrc and make the necessary changes.
| 2 | 0 | 0 |
I have just written a python script to do some batch file operations. I was wondering how i could keep it in some common path like rest of the command line utilities like cd, ls, grep etc.
What i expect is something like this to be done from any directory -
$ script.py arg1 arg2
|
How to execute a python command line utility from the terminal in any directory
| 0 | 0 | 0 | 170 |
14,113,906 |
2013-01-01T20:23:00.000
| 1 | 0 | 0 | 1 |
python,macos,command-line
| 14,113,928 | 3 | true | 0 | 0 |
Just put the script directory into the PATH environment variable, or alternatively put the script in a location that is already in the PATH. On Unix systems, you usually use /home/<nick>/bin for your own scripts and add that to the PATH.
| 2 | 0 | 0 |
I have just written a python script to do some batch file operations. I was wondering how i could keep it in some common path like rest of the command line utilities like cd, ls, grep etc.
What i expect is something like this to be done from any directory -
$ script.py arg1 arg2
|
How to execute a python command line utility from the terminal in any directory
| 1.2 | 0 | 0 | 170 |
14,115,173 |
2013-01-01T23:20:00.000
| 0 | 0 | 1 | 0 |
python,tidesdk
| 14,669,489 | 1 | false | 0 | 0 |
Yes, I've done this on multiple occasions. What has worked for me is to place the folder with the module into the "Resources" directory of your TideSDK project and use your usual "import" to load the module.
| 1 | 0 | 0 |
Folks... New to TideSDK. Want to use the tool to develop cross platform scientific applications. I need to import external modules and package them into the app. Is this possible?
|
TIdeSDK Python module import
| 0 | 0 | 0 | 432 |
14,116,907 |
2013-01-02T04:40:00.000
| 1 | 0 | 0 | 0 |
python,mysql,database,django,django-south
| 14,116,934 | 1 | true | 1 | 0 |
It doesn't matter if the database is running on the same machine as django/South. Wherever your django installation lives has the database configured in its settings. South migrations should work exactly the same.
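A minimal sketch of what that looks like in practice; host, credentials and app name below are placeholders:
    # settings.py on the web/app server -- Django and South connect over the network
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": "mydb",
            "USER": "myuser",
            "PASSWORD": "secret",
            "HOST": "db.internal.example.com",   # the dedicated MySQL server
            "PORT": "3306",
        }
    }
    # then run migrations from the web/app server as usual:
    #   python manage.py migrate myapp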
| 1 | 0 | 0 |
Originally, I've just had servers that have both Django, South and my database installed (web + database server combined). But now, we're moving to a dedicate database server, and a dedicate web app server.
Since the database server doesn't have Django or South installed, how can I run migrations? Or at least, what's the best way to update the database with new schema changes? It's MySQL if that helps.
|
How do I run South migrations (Django) on a database server?
| 1.2 | 0 | 0 | 101 |
14,119,208 |
2013-01-02T08:57:00.000
| 2 | 0 | 0 | 0 |
python,openerp
| 14,119,351 | 3 | false | 1 | 0 |
When saving a new record in OpenERP, a dictionary is generated with the fields that have data as keys and their data as values. If the field is a one2many and has many lines, then a list of dictionaries will be the value for the one2many field. You can modify it by overriding the create and write functions in OpenERP.
| 2 | 2 | 0 |
I would like to know as to where is the value stored for a one2many table initially in OpenERP6.1?
i.e if we create a record for a one2many table,this record will be actually
saved to the database table only after saving the record of the main table
associated with this, even though we can create many records(rows) for one2many
table.
Where are these rows stored?
Are they stored in any OpenERP memory variable? if so which is that variable
or function with which we can access those..
Please help me out on this.
Thanks in Advance!!!
|
Where is the value stored for a one2many table initially in OpenERP6.1
| 0.132549 | 1 | 0 | 1,493 |
14,119,208 |
2013-01-02T08:57:00.000
| 0 | 0 | 0 | 0 |
python,openerp
| 14,120,545 | 3 | false | 1 | 0 |
A One2Many field is a child-parent relation in OpenERP. One2Many is just a logical field; there is no effect in the database for it.
For example, if you are creating a Sale Order, then the Sale Order Line is a One2Many in the Sale Order model. But if you do not put a Many2One in the Sale Order Line, then the One2Many in the Sale Order will not work.
A Many2One field puts a foreign key to the related model in the current table.
| 2 | 2 | 0 |
I would like to know as to where is the value stored for a one2many table initially in OpenERP6.1?
i.e if we create a record for a one2many table,this record will be actually
saved to the database table only after saving the record of the main table
associated with this, even though we can create many records(rows) for one2many
table.
Where are these rows stored?
Are they stored in any OpenERP memory variable? if so which is that variable
or function with which we can access those..
Please help me out on this.
Thanks in Advance!!!
|
Where is the value stored for a one2many table initially in OpenERP6.1
| 0 | 1 | 0 | 1,493 |
14,120,045 |
2013-01-02T10:02:00.000
| 3 | 0 | 0 | 1 |
python,linux,audio
| 14,120,125 | 2 | true | 0 | 0 |
mpd should be perfect for you. It is a daemon and can be controlled by various clients, ranging from GUI-less command-line clients like mpc to GUI command-line clients like ncmpc and ncmpcpp up to several full-featured desktop clients.
mpd + mpc should do the job for you as mpc can be easily controlled via the command line and is also able to provide various status information about the currently played song and other things.
It seems like there is already a python client library available for mpd - python-mpd.
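A minimal python-mpd sketch of that control flow, assuming mpd runs on localhost:6600 and the track is already inside its music directory:
    from mpd import MPDClient

    client = MPDClient()
    client.connect("localhost", 6600)

    client.clear()
    client.add("albums/track01.mp3")   # path relative to mpd's music_directory (assumed)
    client.play()
    client.setvol(70)                  # volume up/down as asked for in the question
    print(client.currentsong())        # status your network server can report back
    client.pause()

    client.close()
    client.disconnect()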
| 1 | 3 | 0 |
I want to build use my Raspberry Pi as a media station. It should be able to play songs via commands over the network. These commands should be handled by a server written in Python. Therefor, I need a way to control audio playback via Python.
I decided to use a command line music player for linux since those should offer the most flexibility for audio file formats. Also, Python libraries like PyAudio and PyMedia don't seem to work for me.
I don't really have great expectations about the music player. It must be possible to play and pause sound files in as much codecs as possible and turn the volume up and down. Also it has to be a headless player since I am not running any desktop environment. There are a lot of players like that out there, it seems. mpg123 for example, works well for all I need.
The problem I have now is that all of these players seem to have a user interface written in ncurses and I have no idea how to access this with the Python subprocess module. So, I either need a music player which comes with Python bindings or one which can be controlled with the command line via the subprocess module. At least these are the solutions I thought about by now.
Does anyone know about a command line audio player for linux that would solve my problem? Or is there any other way?
Thanks in advance
|
Python-controllable command line audio player for Linux
| 1.2 | 0 | 0 | 2,048 |
14,120,217 |
2013-01-02T10:14:00.000
| 0 | 0 | 1 | 0 |
python,types
| 14,122,364 | 3 | false | 0 | 0 |
There is an important distinction of data types for variables (which only statically typed languages have, the respective statements are called declarations and help to determine and allocate the correct amount of memory at fixed addresses) and data types of values, which are much more common. Data types for variables help the compiler to detect incompatible assignments. As a side benefit of this effort for the developer (unnecessary for dynamically typed languages) the compiler may issue warnings for variables never used, detect attempts to use uninitialized variables etc.
| 2 | 3 | 0 |
I have started programming recently and I got confused when I learnt about data types. Why do we have/need data types?
Also, languages like Python don't have data types, making them much simpler to learn. Why do languages like C or C++ have data types then?
|
Regarding data types
| 0 | 0 | 0 | 133 |
14,120,217 |
2013-01-02T10:14:00.000
| 0 | 0 | 1 | 0 |
python,types
| 14,127,206 | 3 | false | 0 | 0 |
Data types impose abstract structure on data. This abstraction allows us to work with data through simpler interfaces, or to use more efficient algorithms for manipulating data.
Data types (structures) are the bread and butter of computer science.
Languages generally have built-in support for a few common data types (such as arrays, lists, associative arrays), and then vary in how well they support user defined data types.
| 2 | 3 | 0 |
I have started programming recently and I got confused when I learnt about data types. Why do we have/need data types?
Also, languages like Python don't have data types, making them much simpler to learn. Why do languages like C or C++ have data types then?
|
Regarding data types
| 0 | 0 | 0 | 133 |
14,120,640 |
2013-01-02T10:43:00.000
| 0 | 0 | 0 | 0 |
django,python-2.7,django-templates
| 18,424,335 | 5 | false | 1 | 0 |
I think you are writing the code from the "Beginning Django E-Commerce" book.
The error is in your code: you wrote {{ cart_sutotal|currency }} in place of {{ cart_subtotal }}.
| 3 | 1 | 0 |
I am going through the beginning django ecommerce application for shopping cart app. I am getting the error as mentioned above while clicking the add to cart button.
I am getting the error at the line {{ cart_sutotal|currency}}
|
bad operand type for abs(): 'str' in django
| 0 | 0 | 0 | 15,930 |
14,120,640 |
2013-01-02T10:43:00.000
| 1 | 0 | 0 | 0 |
django,python-2.7,django-templates
| 14,121,826 | 5 | false | 1 | 0 |
The currency filter expects its argument to be a numeric value; you're passing a string to your template as cart_sutotal.
Before passing it to the template, convert it to a decimal.Decimal, or, better, figure out why you're adding up price values and coming up with a string for the subtotal.
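A small sketch of that conversion, assuming the subtotal currently ends up as a string somewhere in the cart code:
    from decimal import Decimal

    price_strings = ["9.99", "4.50"]                     # stand-in for values read as text
    cart_subtotal = sum(Decimal(p) for p in price_strings)
    # now cart_subtotal is numeric, so template filters that call abs() work on it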
| 3 | 1 | 0 |
I am going through the beginning django ecommerce application for shopping cart app. I am getting the error as mentioned above while clicking the add to cart button.
I am getting the error at the line {{ cart_sutotal|currency}}
|
bad operand type for abs(): 'str' in django
| 0.039979 | 0 | 0 | 15,930 |
14,120,640 |
2013-01-02T10:43:00.000
| 0 | 0 | 0 | 0 |
django,python-2.7,django-templates
| 14,120,681 | 5 | false | 1 | 0 |
You are missing a cast somewhere in your code.
Wherever in the code you're doing abs(somevar), you need to cast the string to an integer by doing abs(int(somevar)). We can give more info if you post a stack trace or pieces of code.
| 3 | 1 | 0 |
I am going through the beginning django ecommerce application for shopping cart app. I am getting the error as mentioned above while clicking the add to cart button.
I am getting the error at the line {{ cart_sutotal|currency}}
|
bad operand type for abs(): 'str' in django
| 0 | 0 | 0 | 15,930 |
14,124,281 |
2013-01-02T15:07:00.000
| 2 | 0 | 1 | 0 |
python,windows-7,python-2.7,python-idle
| 14,126,996 | 2 | false | 0 | 0 |
Uninstall the 64-bit Python 2.7 and install the 32-bit Python 2.7. Then your IDLE will work fine.
| 1 | 2 | 0 |
I'm using windows 7 64-bit and python 2.7.3 32-bit and the IDLE wont open.
I had python 2.7.3 64-bit (and the IDLE was fine) - but i needed the 32 to run some code.
any ideas?
|
Python 2.7.3 IDLE wont open
| 0.197375 | 0 | 0 | 4,663 |
14,126,201 |
2013-01-02T17:14:00.000
| 1 | 0 | 0 | 0 |
python,matlab,numpy,linear-algebra
| 14,126,790 | 2 | false | 0 | 0 |
In MATLAB (for historical reasons, I would argue) the basic type is an M-by-N array (matrix), so that scalars are 1-by-1 arrays and vectors are either N-by-1 or 1-by-N arrays. (Memory layout is always Fortran style.)
This "limitation" is not present in NumPy: you have true scalars, and ndarrays can have as many dimensions as you like. (Memory layout can be C- or Fortran-contiguous.) For this reason there is no preferred (standard) practice; it is up to you, according to your application, to choose the one which better suits your needs.
| 1 | 5 | 1 |
Is there a standard practice for representing vectors as 1d or 2d ndarrays in NumPy? I'm moving from MATLAB which represents vectors as 2d arrays.
|
Performance/standard using 1d vs 2d vectors in numpy
| 0.099668 | 0 | 0 | 1,655 |
14,127,875 |
2013-01-02T19:23:00.000
| 4 | 0 | 0 | 0 |
python,boto,amazon-sqs
| 14,129,645 | 2 | false | 0 | 0 |
Concurrency is important, either through threads or multiprocessing, or gevent. Take your pick. Also, are you using send_message_batch? That allows you to send 10 messages at a time and also helps a lot.
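A rough sketch of batching with classic boto; the region, queue name and payloads are placeholders, and each batch may hold at most 10 messages:
    import boto.sqs

    conn = boto.sqs.connect_to_region("us-east-1")
    queue = conn.get_queue("my-queue")

    bodies = ["message %d" % i for i in range(100000)]
    for start in range(0, len(bodies), 10):
        chunk = bodies[start:start + 10]
        # each entry is an (id, body, delay_seconds) tuple
        entries = [(str(i), body, 0) for i, body in enumerate(chunk)]
        conn.send_message_batch(queue, entries)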
| 1 | 3 | 0 |
Right now I have a Python script that uses Boto to insert a number of messages into SQS -- around 100,000 to 200,000. Simply iterating through the loop without creating SQS messages takes about 3 minutes. With SQS messages, it's dreadfully slow.
What's the best way to speed this up? Should I create a pool of SQS connections and thread the insertion of messages? Should I shard the list of messages to insert and spawn multiple processes each with its own share of the list?
What do experienced Boto users recommend?
|
What's the best way to insert a huge number of SQS messages in Python quickly?
| 0.379949 | 0 | 0 | 2,551 |
14,130,010 |
2013-01-02T22:09:00.000
| 0 | 0 | 1 | 0 |
java,python,image,image-recognition
| 14,130,289 | 3 | false | 0 | 0 |
If I were going to do this, I would use normalized floats.
The letter A would be:
[(0.0,0.0),(0.5,1.0),(1.0,0.0),(0.1,0.5)(0.9,0.5)]
Update (further explanation)
So my thought is that you should be able to uniquely identify a letter with an array of normalized points. Points would be at important features of the letter such as start, end, and midpoints of a line. Curves would be sliced up into multiple smaller line segments, which would also be represented by points.
To use this model, the source image would be analyzed. You would then analyze the image for text. You could use edge detection and other methods to find text. You'd also have to analyze for any transforms on the text. After you figure out the transform of the text, you would split the text into characters, then analyze the character for the points of important features. Then you would write an algorithm to normalize the points, and determine which model represents the found points most accurately.
| 3 | 2 | 0 |
I am developing Image recognition software that will detect letters. I was wondering how I could create a model of a letter ("J" for example) so that when I take a picture with the letter "J" on it, the software will compare the image to the model and detect the letter "J".
How would I create the model?
|
Creating an Image model
| 0 | 0 | 0 | 203 |
14,130,010 |
2013-01-02T22:09:00.000
| 0 | 0 | 1 | 0 |
java,python,image,image-recognition
| 14,130,097 | 3 | false | 0 | 0 |
You can use the OpenCV library for Java; it already contains a template-matching implementation.
However, the better approach for image recognition is to use machine learning or a neural network.
| 3 | 2 | 0 |
I am developing Image recognition software that will detect letters. I was wondering how I could create a model of a letter ("J" for example) so that when I take a picture with the letter "J" on it, the software will compare the image to the model and detect the letter "J".
How would I create the model?
|
Creating an Image model
| 0 | 0 | 0 | 203 |
14,130,010 |
2013-01-02T22:09:00.000
| 2 | 0 | 1 | 0 |
java,python,image,image-recognition
| 14,130,216 | 3 | false | 0 | 0 |
It seems you want to do OCR (Optical Character Recognition).
If this is only part of a larger project, try OpenCV. Even if you are making a commercial product, it has a permissive BSD license.
If you have set out to make a library of your own, read some of the papers available through any good search engine. There are many tutorials for machine learning and neural nets, which could produce the image models you want.
| 3 | 2 | 0 |
I am developing Image recognition software that will detect letters. I was wondering how I could create a model of a letter ("J" for example) so that when I take a picture with the letter "J" on it, the software will compare the image to the model and detect the letter "J".
How would I create the model?
|
Creating an Image model
| 0.132549 | 0 | 0 | 203 |
14,137,675 |
2013-01-03T11:07:00.000
| 1 | 1 | 0 | 0 |
python,execution-time,pydub
| 14,137,744 | 2 | false | 0 | 0 |
It does not: once your script starts dubbing the wav files, that is a separate task.
See it as a 3-step process (I'm guessing, since very little information is provided):
step 1: you send the request --> time determined by "internet speed"
step 2: files get dubbed --> server side work, internet speed doesn't count anymore
step 3: you get the result back --> again internet speed related
you have to time them separately: run a benchmark only on the mixing part and see it for yourself
Funny practical way to see this:
Consider the dinner process: the time you spend eating your dinner doesn't depend on the time it takes for you to order or for the waiter to deliver the meal to you.
Quick edit: I just realized it may depend on internet speed if the dubbing/mixing part is streamed in real time while being processed, but this doesn't seem to be your case.
| 2 | 0 | 0 |
I am using pydub to mix two wav files in one file. Each wav file has about 25Mb and for me page is loaded in about 4 seconds ( so execution time would be 4 seconds )
Does this execution time depend on user's internet connection speed?
If it has any sense : The test.py file is on GoDaddy Deluxe Linux Hosting)
|
Does python script execution time depend on internet speed?
| 0.099668 | 0 | 0 | 544 |
14,137,675 |
2013-01-03T11:07:00.000
| 0 | 1 | 0 | 0 |
python,execution-time,pydub
| 14,137,714 | 2 | false | 0 | 0 |
No. The execution happens on the server and the execution time depends on the server specs and your script optimizations. The internet speed just affects when the client will receive the response after it is ready from the server and sent!
So in few words:
Server gets request from browser (time for request to reach server depends on internet speed of the client and the host)
Server processes the request according to your code (Execution time depends on your code)
Server responds to client and client receives response (time for request to reach client depends on internet speed of the client and the host)
| 2 | 0 | 0 |
I am using pydub to mix two wav files in one file. Each wav file has about 25Mb and for me page is loaded in about 4 seconds ( so execution time would be 4 seconds )
Does this execution time depend on user's internet connection speed?
If it has any sense : The test.py file is on GoDaddy Deluxe Linux Hosting)
|
Does python script execution time depend on internet speed?
| 0 | 0 | 0 | 544 |
14,137,778 |
2013-01-03T11:12:00.000
| 0 | 0 | 0 | 0 |
python,html,django,pdf,phantomjs
| 14,161,687 | 4 | true | 1 | 0 |
Okay, after a whole lot of googling, I couldn't find anything, so I came up with two hackish solutions.
On the page the user is viewing, create a form with a hidden text area and a submit button named 'Generate PDF'. After rendering the page, I use JavaScript to grab all the HTML within the divs I want and put it into the text area. When the button is clicked, the HTML is passed to the server side, where I use Python to create an HTML file locally and use PhantomJS to create a PDF from that file.
Create a URL that renders the exact same page the user is viewing, but does not require logging in. One then has to configure Apache or Nginx so that the URL can only be accessed from localhost; that way PhantomJS can access the URL without any problem and generate the PDF.
| 1 | 1 | 0 |
I am using Django to create a report site. The reports are generated dynamically, and they also include some SVG charts. I want to create a PDF file based on the current report the user is viewing, but with an extra header and footer. I came across PhantomJS, but there are two problems: first, the page requires the user to log on, so if I send the URL to the server, PhantomJS creates the PDF of the login page; second, the reports are generated using Ajax, so even the same URL will show different reports. Is there any better way to do this?
|
How to convert a html page to pdf in Django
| 1.2 | 0 | 0 | 2,954 |
14,138,946 |
2013-01-03T12:27:00.000
| 0 | 0 | 0 | 0 |
python
| 14,139,047 | 3 | false | 0 | 1 |
You can use matplotlib or pygal. Something similar to the technique you describe is used in PyQtGraph.
| 1 | 0 | 0 |
I have a question and want to see if it can be realized in wxPython.
I would like to plot data in wxPython, and then use the mouse to select some of the plotted points.
At the moment I am using wx.lib.plot with the PlotMarker.
Is there a way of doing this with wx.lib.plot, or do I have to use another graph library?
Regards!
|
interactive graph python wx.lib.plot. selecting some points
| 0 | 0 | 0 | 944 |
14,139,190 |
2013-01-03T12:43:00.000
| 1 | 0 | 0 | 0 |
python,qt,python-2.7,pyside
| 14,139,249 | 1 | true | 0 | 1 |
Run the app using pythonw; it will hide the console window.
| 1 | 1 | 0 |
I made an app with PySide. When ever I run the app the GUI shows up but the CMD window is still there too. How do I hide it?
|
How to hide the CMD window in PySide app
| 1.2 | 0 | 0 | 228 |
14,139,377 |
2013-01-03T12:54:00.000
| 0 | 0 | 0 | 1 |
python,batch-file,cross-platform,py2exe,scientific-computing
| 14,139,446 | 3 | false | 0 | 0 |
I would recommend using py2exe for the windows side, and then BuildApplet for the mac side. This will allow you to make a simple app you double click for your less savvy users.
| 2 | 5 | 0 |
EDIT
One option I contemplated but don't know enough about is to e.g. for windows write a batch script to:
Search for a Python installation, download one and install if not present
Then install the bundled package using distutils to also handle dependencies.
It seems like this could be a relatively elegant and simple solution, but I'm not sure how to proceed - any ideas?
Original Question
In brief
What approach would you recommend for the following scenario?
Linux development environment for creation of technical applications
Deployment now also to be on Windows and Mac
Existing code-base in Python
wine won't install windows version of Python
No windows install CDs available to create virtual windows/mac machines
Porting to java incurs large overhead because of existing code-base
Clients are not technical users, i.e. providing standard Python packages not sufficient - really requires installable self-contained products
Background
I am writing technical and scientific apps under Linux but will need some of them to be deployable on Windows/MacOs machines too.
In the past I have used Python a lot, but I am finding that for non-technical users who aren't happy installing python packages, creating a simple executable (by using e.g. py2exe) is difficult as I can't get the windows version of Python to install using wine.
While java would seem a good choice, if possible I wanted to avoid having to port my existing code from Python, especially as Python also allows writing portable code.
I realize I'm trying to cover a lot of bases here, so any suggestions regarding the most appropriate solutions (even if not perfect) will be appreciated.
|
Cross-platform deployment and easy installation
| 0 | 0 | 0 | 1,886 |
14,139,377 |
2013-01-03T12:54:00.000
| 2 | 0 | 0 | 1 |
python,batch-file,cross-platform,py2exe,scientific-computing
| 14,139,409 | 3 | false | 0 | 0 |
py2exe works pretty well, I guess you just have to setup a Windows box (or VM) to be able to build packages with it.
| 2 | 5 | 0 |
EDIT
One option I contemplated but don't know enough about is to e.g. for windows write a batch script to:
Search for a Python installation, download one and install if not present
Then install the bundled package using distutils to also handle dependencies.
It seems like this could be a relatively elegant and simple solution, but I'm not sure how to proceed - any ideas?
Original Question
In brief
What approach would you recommend for the following scenario?
Linux development environment for creation of technical applications
Deployment now also to be on Windows and Mac
Existing code-base in Python
wine won't install windows version of Python
No windows install CDs available to create virtual windows/mac machines
Porting to java incurs large overhead because of existing code-base
Clients are not technical users, i.e. providing standard Python packages not sufficient - really requires installable self-contained products
Background
I am writing technical and scientific apps under Linux but will need some of them to be deployable on Windows/MacOs machines too.
In the past I have used Python a lot, but I am finding that for non-technical users who aren't happy installing python packages, creating a simple executable (by using e.g. py2exe) is difficult as I can't get the windows version of Python to install using wine.
While java would seem a good choice, if possible I wanted to avoid having to port my existing code from Python, especially as Python also allows writing portable code.
I realize I'm trying to cover a lot of bases here, so any suggestions regarding the most appropriate solutions (even if not perfect) will be appreciated.
|
Cross-platform deployment and easy installation
| 0.132549 | 0 | 0 | 1,886 |
14,139,855 |
2013-01-03T13:27:00.000
| 2 | 0 | 1 | 0 |
python,macos,virtualenv
| 14,139,914 | 1 | true | 0 | 0 |
If you're using virtualenv, there's an argument -p to specify the PYTHON_EXE to use for this environment.
| 1 | 1 | 0 |
I have a python virtual environment for the sole purpose of using wxPython. wxPython on Mac uses the Carbon framework, which hasn't been built in 64-bit. Therefore, I can't run wxPython on a mac with Python running 64-bit. Is there a way to tell my installation to always run 32-bit? It's getting annoying having to use arch -i386 every time.
|
Setting Python 32-bit and 64-bit mode for virtualenv on OSX
| 1.2 | 0 | 0 | 1,427 |
14,140,495 |
2013-01-03T14:08:00.000
| 0 | 0 | 0 | 0 |
python,video-capture
| 68,495,610 | 7 | false | 0 | 1 |
You can write an offline HTML/JS page that does video-with-audio recording, then open that page from Python with the pywebview library. It should work fine.
| 1 | 29 | 0 |
i'm looking for a solution, either in linux or in windows, that allows me to
record video (+audio) from my webcam & microphone, simultaneously.
save it as a file.AVI (or mpg or whatever)
display the video on the screen while recording it
Compression is NOT an issue in my case, and i actually prefer to capture RAW and compress it later.
So far i've done it with an ActiveX component in VB which took care of everything, and i'd like to progress with python (the VB solution is unstable, unreliable).
so far i've seen code that captures VIDEO only, or individual frames...
I've looked so far at
OpenCV - couldn't find audio capture there
PyGame - no simultaneous audio capture (AFAIK)
VideoCapture - provide only single frames.
SimpleCV - no audio
VLC - binding to VideoLAN program into wxPthon - hopefully it will do (still investigating this option)
kivy - just heard about it, didn't manage to get it working under windows SO FAR.
The question - is there a video & audio capture library for python?
or - what are the other options if any?
|
How to capture a video (AND audio) in python, from a camera (or webcam)
| 0 | 0 | 0 | 75,201 |
14,142,194 |
2013-01-03T15:44:00.000
| 30 | 0 | 0 | 0 |
python,grid,tkinter
| 14,144,121 | 3 | false | 0 | 1 |
The best tool for doing layouts using grid, IMHO, is graph paper and a pencil. I know you're asking for some type of program, but it really does work. I've been doing Tk programming for a couple of decades so layout comes quite easily for me, yet I still break out graph paper when I have a complex GUI.
Another thing to think about is this: The real power of Tkinter geometry managers comes from using them together*. If you set out to use only grid, or only pack, you're doing it wrong. Instead, design your GUI on paper first, then look for patterns that are best solved by one or the other. Pack is the right choice for certain types of layouts, and grid is the right choice for others. For a very small set of problems, place is the right choice. Don't limit your thinking to using only one of the geometry managers.
* The only caveat to using both geometry managers is that you should only use one per container (a container can be any widget, but typically it will be a frame).
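A tiny illustration of that mixing rule, with pack laying out the top-level frames and grid arranging the widgets inside one of them:
    try:
        import tkinter as tk      # Python 3
    except ImportError:
        import Tkinter as tk      # Python 2

    root = tk.Tk()
    toolbar = tk.Frame(root, relief="raised", bd=1)
    form = tk.Frame(root)
    toolbar.pack(side="top", fill="x")                 # pack: stack the frames
    form.pack(side="top", fill="both", expand=True)

    tk.Button(toolbar, text="Save").pack(side="left")
    tk.Label(form, text="Name").grid(row=0, column=0, sticky="e")   # grid: inside form only
    tk.Entry(form).grid(row=0, column=1, sticky="ew")
    form.grid_columnconfigure(1, weight=1)
    root.mainloop()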
| 1 | 59 | 0 |
Does anyone know of a GUI design app that lets you choose/drag/drop the widgets, and then turn that layout into Python code with the appropriate Tkinter calls & arrangement using the grid geometry manager? So far I've turned up a couple of pretty nice options that I may end up using, but they generate code using either pack or place instead.
=========
EDIT: Please note this is not to seek a "recommendation" per se, but rather it is a factual inquiry. I was asking whether or not such an app exists.
=====
Before you say it: Yes, I know Tkinter is easy to learn, and Yes, I've found multiple online sources to help me do so, and I'm already on my way with that. This isn't about avoiding the effort of learning, it's about using the right tool for the job. I found out a long time ago that those drag-and-drop widget environments for coding program logic, are just too klunky and unmanageable when the project gets beyond a certain size -- for bigger stuff, it's just easier to build and maintain the logic when it's in plain text. More recently I've found out that the reverse is true for designing a GUI. Writing text works up to a point, but when you have a main window with 30 or 40 widgets, plus a couple of side windows each with similar complexity, it goes much faster and easier if you can design it graphically rather than typing it all out.
|
Is there a GUI design app for the Tkinter / grid geometry?
| 1 | 0 | 0 | 150,823 |
14,144,460 |
2013-01-03T17:56:00.000
| -1 | 1 | 0 | 0 |
python,pdf
| 14,144,813 | 2 | false | 0 | 0 |
Use the Action Wizard in Acrobat X Pro:
Create a New Action.
Set up the Start With and Save To steps.
Check the Overwrite existing files checkbox.
Select Content, then select Add Document Description.
Left-click the option, uncheck the Leave As Is checkbox for Author, and enter the new author name.
Press the Save button and give the action a name, e.g. Your Action Name.
Run the action:
File - Action Wizard - Your Action Name.
I tested it - it works.
| 1 | 0 | 0 |
In my office we have about 1000 PDFs that have arbitrary title and author information. My bosses had a spreadsheet created with the PDFs filename and an appropriate title and appropriate author information.
I would like to find a programmatic way to move the data from the Excel sheet to the PDF attributes?
My preferred language is Python so I looked for a Python library to do this, each library I looked at had the author and title fields as read-only.
If Python doesn't have a library that works I am okay using VBA, VB.NET, JavaScript... I will take this as an opportunity to learn a new language.
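For what it's worth, a hedged sketch of the Python route using xlrd for the spreadsheet and PyPDF2 for the metadata (file names and column layout are assumptions; PyPDF2's PdfFileWriter can write /Title and /Author even though the reader-side DocumentInformation is read-only):
import xlrd
from PyPDF2 import PdfFileReader, PdfFileWriter

book = xlrd.open_workbook("metadata.xls")
sheet = book.sheet_by_index(0)

for row in range(1, sheet.nrows):               # assume row 0 is a header
    filename, title, author = (sheet.cell_value(row, col) for col in range(3))

    reader = PdfFileReader(open(filename, "rb"))
    writer = PdfFileWriter()
    writer.appendPagesFromReader(reader)        # copy all pages
    writer.addMetadata({"/Title": title, "/Author": author})

    with open("updated_" + filename, "wb") as out:
        writer.write(out)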
|
Changing a PDF author programmatically
| -0.099668 | 0 | 0 | 834 |
14,145,677 |
2013-01-03T19:21:00.000
| 0 | 0 | 0 | 0 |
django,model-view-controller,python-3.x,django-views,three-tier
| 14,147,669 | 1 | false | 1 | 0 |
If Class1 or parser import from views, then you have a circular dependency. You'll need to move any shared code out into a third file.
You might consider though whether you need all those separate files under logic. In Python there's no requirement to have a class in its own file, so maybe you could have a single logic.py instead of a logic directory containing several files.
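A tiny sketch of that layout (the file and function names are hypothetical):
# app1/logic/common.py  -- new home for anything both sides need
def shared_helper():
    pass

# app1/logic/Class1.py  -- must not import from app1.views
from app1.logic.common import shared_helper

# app1/views.py
from app1.logic.Class1 import Class1
from app1.logic import parser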
| 1 | 1 | 0 |
I am new to Django and have a little problem with making all the project structure clear and organized and so on.. The MVC stuff in Django is quite easy to use when making a small project. However, when I am trying to get a new layer (application logic, like three-tier architecture) between views.py and models.py I have problem to do so.
I have the following structure:
mysite/
manage.py
app1/
models.py
views.py
logic/
__init__.py
Class1.py
parser.py
...
and I want to load into views.py stuff from Class1.py and parser.py.
How do I do it?
Neither of the following works:
from app1.logic import *
from app1.logic.Class1 import Class1
Also, it would help me if somebody could point me to an example of a really huge Django project. It looks like either lots of classes and .py files end up in every app folder or everything goes into models.py. Both look a little disorganised, and I'm sure there is a way to make it a little bit clearer (like in Zend or Symfony).
And, I'm working with Python3 and Django 1.5b2.
Thanks.
|
How to load custom packages from inside Django apps into views.py?
| 0 | 0 | 0 | 1,150 |
14,146,814 |
2013-01-03T20:34:00.000
| 0 | 0 | 0 | 1 |
python,nginx,wsgi,haproxy
| 14,148,902 | 2 | false | 1 | 0 |
Of the three options, only option number 1 has any chance of working with websockets. Nginx, and most standard webservers will not play nicely with them.
| 1 | 2 | 0 |
I am considering these scenarios for deploying highly available Python web apps:
load balancer -* wsgi servers
load balancer -* production HTTP server - wsgi server
production HTTP server (with load balancing features, like Nginx) -* wsgi servers
For the load balancer I am considering HAProxy
For the production HTTP server I am considering Nginx
By wsgi servers I mean servers which directly handle the wsgi app (gevent, waitress, uwsgi...)
-* means a one-to-many connection
- means a one-to-one connection
There is no static content to serve, so I wonder if a production HTTP server is needed.
What are the pros and cons of each solution?
For each scenario (1-3), in place of a wsgi server, is there any advantage to using a wsgi container server (uWSGI, gunicorn) rather than a raw wsgi server (gevent, tornado...)?
I'm also wondering which solution is best for websockets or long-polling requests.
|
HA deploy for Python wsgi application
| 0 | 0 | 0 | 2,227 |
14,147,287 |
2013-01-03T21:09:00.000
| 0 | 0 | 1 | 0 |
python,object,import,blender
| 48,065,106 | 2 | false | 0 | 0 |
From what I saw after importing things, the default tag is True for all objects (including those that already existed in the scene). So it seems that in order to mark objects, you have to assign them a False value first, then import, and then treat the ones whose tag is True as the imported objects - not the other way around. So I'm not sure that answer is accurate.
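An alternative that avoids relying on the tag at all (a sketch; the file path is hypothetical): remember which objects exist before the import and take the difference afterwards.
import bpy

before = set(bpy.data.objects.keys())
bpy.ops.import_scene.obj(filepath="/path/to/model.obj")
imported = set(bpy.data.objects.keys()) - before

for name in imported:
    obj = bpy.data.objects[name]
    obj.name = "imported_" + name      # "mark" it so later code can find it
    # ... apply further transformations to obj here ...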
| 1 | 3 | 0 |
The (probably not so well written) question is: is there any way to get object data right after it is loaded through the bpy.ops.import_scene.obj function?
I mean, when I import an obj file with this function I need to apply some more transformations to it. When I select an object by the name 'Mesh' (the default name of an object after import), all those functions also affect the other objects named 'Mesh' in my scene. I tried to get the last object from the scene's object list, but the objects are arranged alphabetically, so that didn't work well. When I tried to change object.name and apply the next functions to it, it worked only for one; all earlier instances of the imported object fell back to the default.
How can I solve that problem? Is there an option to get the last added object from the scene? Or maybe some way to 'mark' the *.obj object right after it is imported, before the next functions are applied? Or maybe there is a way to import the *.obj data straight into a blank object created earlier.
cheers,
regg
PS: Working on Blender 2.63
|
How to mark last imported *obj in blender
| 0 | 0 | 0 | 1,433 |
14,147,434 |
2013-01-03T21:20:00.000
| 0 | 0 | 0 | 0 |
python,gtk
| 14,160,390 | 2 | true | 0 | 1 |
Apparently it was being caused by my attempt to change the background color of the tables--I was setting the background color of every HBox (and Label), which was responsible for nearly all of the excessive teardown time. All I had to do was set the background color of the Viewports the Tables are contained in.
| 2 | 0 | 0 |
My PyGTK application creates a secondary popup window for displaying a preview of results. This window is fairly elaborate, with Table widgets nested three deep and populated by HBoxes containing one Label each at the lowest level. The number of Labels total can be in the thousands. I am noticing that when I close this window, GTK becomes extremely busy processing something (functions added with gobject.idle_add don't resolve for >10 seconds) and the main window of my application becomes unresponsive in this time. Even with this many widgets, it seems strange to me that the window should take so long to close, longer even than it takes to set up and display. Is there any way to mitigate this? (I tried creating and showing the window in another thread, but apparently with GTK this is a no-no)
|
Excessive hang time when destroying a PyGTK window
| 1.2 | 0 | 0 | 88 |
14,147,434 |
2013-01-03T21:20:00.000
| 1 | 0 | 0 | 0 |
python,gtk
| 14,156,870 | 2 | false | 0 | 1 |
How long does that window take to show up? Are all the widgets created at once when it is displayed?
Your problem might be caused by the destruction of your thousands of widgets, all at the same time, or by a lengthy action performed on the destruction of one of those widgets. But without some code to look at, there could be thousands of reasons, so, as ptomato says, use a profiler...
| 2 | 0 | 0 |
My PyGTK application creates a secondary popup window for displaying a preview of results. This window is fairly elaborate, with Table widgets nested three deep and populated by HBoxes containing one Label each at the lowest level. The number of Labels total can be in the thousands. I am noticing that when I close this window, GTK becomes extremely busy processing something (functions added with gobject.idle_add don't resolve for >10 seconds) and the main window of my application becomes unresponsive in this time. Even with this many widgets, it seems strange to me that the window should take so long to close, longer even than it takes to set up and display. Is there any way to mitigate this? (I tried creating and showing the window in another thread, but apparently with GTK this is a no-no)
|
Excessive hang time when destroying a PyGTK window
| 0.099668 | 0 | 0 | 88 |
14,152,187 |
2013-01-04T06:25:00.000
| 4 | 1 | 1 | 0 |
python,ide,livecoding
| 14,152,212 | 4 | false | 0 | 0 |
eclipse+pydev
pycharm
many others ....
| 1 | 0 | 0 |
Are there any IDEs for Python that support automatic error highlighting (like the Eclipse IDE for Java?) I think it would be a useful feature for a Python IDE, since it would make it easier to find syntax errors. Even if such an editor did not exist, it still might be possible to implement this by automatically running the Python script every few seconds, and then parsing the console output for error messages.
|
Is it possible to implement automatic error highlighting for Python?
| 0.197375 | 0 | 0 | 203 |
14,152,389 |
2013-01-04T06:41:00.000
| 0 | 0 | 0 | 0 |
python,django,django-templates
| 14,152,733 | 5 | false | 1 | 0 |
I would suggest using JavaScript or jQuery on the client side to populate the dropdown according to the current year.
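If you would rather keep it on the server side without a form, one option (a sketch; the module and key names are arbitrary, and the processor must be registered in TEMPLATE_CONTEXT_PROCESSORS) is a context processor that hands every template the range, which you can then loop over with {% for y in year_choices %}...{% endfor %}:
# myapp/context_processors.py
import datetime

def year_choices(request):
    current = datetime.date.today().year
    return {"year_choices": range(2011, current + 6)}   # 2011 .. current+5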
| 1 | 2 | 0 |
I want to create a year dropdown in a Django template. The dropdown's start year should be 2011, and the end year should be 5 years from the current year.
For example: if I look at the dropdown today, it will show me years ranging from 2011, 2012, 2013, ... 2017.
It could be done by sending a list from views.py and looping over it in the template, or by defining it inside forms.py.
But I don't want to use any form here; I just have to show it in a template.
How can I do it in a Django template?
|
Creating year dropdown in Django template
| 0 | 0 | 0 | 10,836 |
14,152,548 |
2013-01-04T06:55:00.000
| 0 | 1 | 0 | 1 |
python,python-3.x
| 64,151,445 | 3 | false | 0 | 0 |
In the general case, no; many Python 2 scripts will not run on Python 3, and vice versa. They are two different languages.
Having said that, if you are careful, you can write a script which will run correctly under both. Some authors take extra care to make sure their scripts will be compatible across both versions, commonly using additional tools like the six library (the name is a pun; you can get to "six" by multiplying "two by three" or "three by two").
However, it is now 2020, and Python 2 is officially dead. Many maintainers who previously strove to maintain Python 2 compatibility while it was still supported will now be relieved and often outright happy to pull the plug on it going forward.
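A minimal sketch of such a dual-compatible script (assuming a python executable of some version is on the PATH of every server; on 3.x-only boxes where only python3 exists, the shebang would need adjusting or a symlink):
#!/usr/bin/env python
# check.py -- runs unchanged under both 2.x and 3.x interpreters
from __future__ import print_function
import sys

def main():
    print("Running under Python %d.%d" % sys.version_info[:2])
    # ... the actual checks go here ...

if __name__ == "__main__":
    main()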
| 2 | 7 | 0 |
I have thousands of (Linux) servers; some only have Python 2.x and some only have Python 3.x. I want to write one script, check.py, that can run on all servers simply as $ ./check.py, without using $ python check.py or $ python3 check.py. Is there any way to do this?
My question is how the script check.py can find the interpreter, no matter whether the interpreter is Python 2.x or Python 3.x.
|
can one python script run both with python 2.x and python 3.x
| 0 | 0 | 0 | 5,143 |
14,152,548 |
2013-01-04T06:55:00.000
| 0 | 1 | 0 | 1 |
python,python-3.x
| 14,152,613 | 3 | false | 0 | 0 |
Considering that Python 3.x is not entirely backwards compatible with Python 2.x, you would have to ensure that the script was compatible with both versions. This can be done with some help from the 2to3 tool, but may ultimately mean running two distinct Python scripts.
| 2 | 7 | 0 |
I have thousands of (Linux) servers; some only have Python 2.x and some only have Python 3.x. I want to write one script, check.py, that can run on all servers simply as $ ./check.py, without using $ python check.py or $ python3 check.py. Is there any way to do this?
My question is how the script check.py can find the interpreter, no matter whether the interpreter is Python 2.x or Python 3.x.
|
can one python script run both with python 2.x and python 3.x
| 0 | 0 | 0 | 5,143 |
14,152,651 |
2013-01-04T07:02:00.000
| 4 | 1 | 0 | 0 |
python,django,apache,mod-wsgi,wsgi
| 14,152,716 | 1 | false | 1 | 0 |
Since I answered your other question regarding Flask, I assume you are referring to using Flask's development server. The problem with that is it is single threaded.
Using mod_wsgi, you would be running behind apache, which will do process forking and allow for multiple simultaneous requests to be handled.
There are other options as well. Depending on your particular use case I would consider using eventlet's wsgi server.
| 1 | 1 | 0 |
I was wondering what is the advantages of mod_wsgi. For most python web framework, I can launch (daemon) the application by python directly and serve it in a port. Then when shall I use mod_wsgi?
|
Why should I use `mod_wsgi` instead of launching by python?
| 0.664037 | 0 | 0 | 82 |
14,153,954 |
2013-01-04T08:57:00.000
| 2 | 1 | 0 | 0 |
python,outlook,smtplib
| 14,154,176 | 1 | false | 0 | 0 |
You can send a copy of that email to yourself, with some header that tags the email as having been sent by you, then have another script (using an IMAP library, maybe) move the email into the Outlook Sent Items folder.
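A rough sketch of the IMAP side (the host, credentials and folder name are assumptions; raw_message is the same RFC822 string that was handed to smtplib's sendmail):
import imaplib
import time

imap = imaplib.IMAP4_SSL("outlook.office365.com")
imap.login("user@example.com", "password")

# Append the already-sent message into the Sent Items folder
imap.append('"Sent Items"', "\\Seen",
            imaplib.Time2Internaldate(time.time()), raw_message)
imap.logout()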
| 1 | 4 | 0 |
I am using hosted exchange Microsoft Office 365 email and I have a Python script that sends email with smtplib. It is working very well. But there is one issue, how can I get the emails to show up in my Outlook Sent Items?
|
How can I see emails sent with Python's smtplib in my Outlook Sent Items folder?
| 0.379949 | 0 | 1 | 1,550 |
14,158,584 |
2013-01-04T14:00:00.000
| 0 | 0 | 1 | 0 |
python,private-members
| 14,158,732 | 2 | false | 0 | 0 |
Let's say you have an operation operationA that calls subOperationA and subOperationB on the same class, and those methods have no meaning as individual operations: they manipulate data, and you have to prevent unexpected executions of them (that is, calls from methods other than operationA). So making them private allows you to protect and encapsulate your methods, limiting their visibility to only the desired scope.
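In Python this is a convention rather than an enforced rule; a minimal sketch with a leading underscore marking the sub-operations as internal:
class Account(object):
    def __init__(self, balance=0):
        self.balance = balance

    def transfer(self, other, amount):   # the public operation
        self._withdraw(amount)
        other._deposit(amount)

    # Leading underscore: "internal detail, don't call from outside".
    # (A double underscore would additionally trigger name mangling.)
    def _withdraw(self, amount):
        self.balance -= amount

    def _deposit(self, amount):
        self.balance += amount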
| 1 | 1 | 0 |
I understand how it works and I understand meaning of syntax. But I do not understand why would I want to use it?
|
Why do I need the private methods in real life?
| 0 | 0 | 0 | 123 |
14,158,724 |
2013-01-04T14:09:00.000
| 0 | 0 | 1 | 0 |
python,multithreading
| 14,158,784 | 1 | false | 0 | 0 |
Python (generally) doesn't return the memory it used to the operating system. Once the queue is empty, your process will hold on to the memory that was allocated to the queue. If you were to add another 3GB worth of data to the queue, Python should reuse the memory it already holds, rather than requesting more from the OS (i.e., you shouldn't see its memory allocation grow further).
| 1 | 4 | 0 |
I have an application where I stuff lots of data into a queue, and then workers empty that data.
I am using Queue because it does the blocking and locking for me, so I don't need to apply locks all the time.
So there are times when that queue is stuffed with 1 million items of string data, each item taking about 80 bytes of memory.
At some point, at peak usage, Python starts taking around 3 GB of memory. Then, when the consumer empties the entire queue, Python is still taking 3 GB of memory. I am checking memory using the ps command. What is this phenomenon? Is this high memory usage because of the swollen queue in the first place? If so, what can be done to keep it in check? Are there any better collections in Python as a replacement for Queue?
|
Python's Queue collection doesn't release the memory
| 0 | 0 | 0 | 287 |
14,158,844 |
2013-01-04T14:19:00.000
| 5 | 0 | 0 | 1 |
python,django,webserver,tornado,blocking
| 14,158,864 | 2 | true | 1 | 0 |
I believe it is single-threaded and blocking so that it is easy to debug: if you put a debugger in, it will completely halt the server.
| 2 | 3 | 0 |
Why is the Django webserver blocking, not non-blocking like Tornado? Was there a reason to design the webserver in this way?
|
What is a main reason that Django webserver is blocking?
| 1.2 | 0 | 0 | 176 |
14,158,844 |
2013-01-04T14:19:00.000
| 5 | 0 | 0 | 1 |
python,django,webserver,tornado,blocking
| 14,159,011 | 2 | false | 1 | 0 |
If you really need another reason on top of that suggested by dm03514, it is probably simply because it was easier to write. Since the dev server is for development only, little effort was spent on making it more complex or able to serve multiple requests. In fact, this is an explicit goal: making it any better would encourage people to use it in a production setting, for which it is not tested.
| 2 | 3 | 0 |
Why is the Django webserver blocking, not non-blocking like Tornado? Was there a reason to design the webserver in this way?
|
What is a main reason that Django webserver is blocking?
| 0.462117 | 0 | 0 | 176 |
14,160,178 |
2013-01-04T15:40:00.000
| 1 | 0 | 0 | 1 |
python,cross-platform
| 14,160,208 | 4 | false | 0 | 0 |
you could create your initialization functions to take those variables as parameters so it is easy to spoof them in testing
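A sketch of that idea (the function and return values are hypothetical); in tests you simply pass fake values instead of patching the platform module:
import platform

def config_dir(system=None, release=None):
    system = system if system is not None else platform.system()
    release = release if release is not None else platform.release()
    if system == "Darwin":
        return "~/Library/Application Support/MyApp"
    elif system == "Windows":
        return windows_config_dir(release)   # hypothetical helper
    else:
        return "~/.myapp"

# In a test: config_dir("Windows", "XP"), config_dir("SunOS", "5.10"), ...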
| 1 | 3 | 0 |
At the beginning of the script, I use platform.system and platform.release to determine which OS and version the script is running on (so it knows its data is in Application Support on Mac, home on non-Mac unix-likes, appdata on Windows <= XP, and appdata/roaming on Windows >= Vista). I'd like to test my series of ifs, elifs, and elses that determine the OS and release, but I only have access to Mac 10.6.7, some unknown release of Linux, and Windows 7. Is there a way to feed platform fake system and release information so I can be sure XP, Solaris, etc. would handle the script properly, without having an installation of each?
|
Give python "platform" library fake platform information?
| 0.049958 | 0 | 0 | 823 |
14,160,679 |
2013-01-04T16:11:00.000
| 1 | 0 | 0 | 0 |
python,selenium,webdriver
| 14,161,598 | 1 | true | 0 | 0 |
The server is started independently; creating an instance of webdriver.Remote does not start the server.
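A sketch of the client side, assuming a standalone Selenium server is already listening on the default port 4444 (started e.g. with java -jar selenium-server-standalone.jar):
from selenium import webdriver

driver = webdriver.Remote(
    command_executor="http://127.0.0.1:4444/wd/hub",
    desired_capabilities=webdriver.DesiredCapabilities.FIREFOX)

driver.get("http://example.com")
# ... test steps ...
driver.quit()    # ends this session; the server itself keeps running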
| 1 | 0 | 0 |
Using the Selenium Python bindings, is it possible to start the RemoteWebDriver server separately from creating a webdriver.Remote instance? The point of doing this would be to save time spent repeatedly starting and stopping the server when all I really need is a new instance of the client. (This is possible with ChromeDriver.)
|
Start `RemoteWebDriver` server separately from creating a `webdriver.Remote` instance?
| 1.2 | 0 | 1 | 205 |
14,161,479 |
2013-01-04T17:03:00.000
| 1 | 1 | 0 | 0 |
python,selenium,testcase
| 14,161,887 | 1 | true | 0 | 0 |
Sharing a single RemoteWebDriver can be dangerous, since your tests are no longer independently self-contained. You have to be careful about cleaning up browser state and the like, and recovering from browser crashes in the event a previous test has crashed the browser. You'll also probably have more problems if you ever try to do anything distributed across multiple threads, processes, or machines. That said, the options you have for controlling this are not dependent on Selenium itself, but whatever code or framework you are using to drive it. At least with Nose, and I think basic pyunit, you can have setup routines at the class, module, or package level, and they can be configured to run for each test, each class, each module, or each package, if memory serves.
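For the class-level option, a minimal sketch with unittest (Python 2.7+): one browser session per TestCase class, with per-test cleanup of shared state.
import unittest
from selenium import webdriver

class BaseSeleniumTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Remote(
            command_executor="http://127.0.0.1:4444/wd/hub",
            desired_capabilities=webdriver.DesiredCapabilities.FIREFOX)

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

    def setUp(self):
        # keep tests independent despite the shared browser
        self.driver.delete_all_cookies()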
| 1 | 0 | 0 |
Are there problems with sharing a single instance of RemoteWebDriver between multiple test cases? If not, what's the best practice place to create the instance? I'm working with Python, so I think my options are module level setup, test case class setup, test case instance setup (any others?)
|
Share single instance of selenium RemoteWebDriver between multiple test cases
| 1.2 | 0 | 1 | 249 |
14,162,140 |
2013-01-04T17:45:00.000
| 0 | 0 | 1 | 0 |
python
| 14,162,418 | 1 | false | 0 | 0 |
If what you want is a C API version of exec, maybe try PyRun_File and its ilk? Not sure exactly what you're trying to accomplish though.
| 1 | 1 | 0 |
I am trying to set local/global variables in a PyRun_InteractiveLoop call.
I can't figure out how to do it since, unlike its exec counterparts, the loop doesn't accept globals/locals arguments.
What am I missing?
|
PyRun_InteractiveLoop globals/locals
| 0 | 0 | 0 | 199 |
14,163,114 |
2013-01-04T18:53:00.000
| 7 | 0 | 0 | 0 |
python,amazon-web-services,boto,amazon-cloudformation
| 14,165,269 | 1 | true | 1 | 0 |
If you do a describe_stacks call, it will return a list of Stack objects and each of those will have an outputs attribute which is a list of Output objects.
Is that what you are looking for?
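A short sketch with boto 2.x (the region and stack name are assumptions):
import boto.cloudformation

conn = boto.cloudformation.connect_to_region("us-east-1")
stack = conn.describe_stacks("my-stack-name")[0]

for output in stack.outputs:
    print("%s = %s" % (output.key, output.value))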
| 1 | 6 | 0 |
I'm trying to retrieve the list of outputs from a CloudFormation template using Boto. I see in the docs there's an object named boto.cloudformation.stack.Output. But I think this is unimplemented functionality. Is this currently possible?
|
Returning the outputs from a CloudFormation template with Boto?
| 1.2 | 0 | 0 | 3,837 |
14,163,949 |
2013-01-04T19:52:00.000
| 1 | 0 | 0 | 0 |
http,rest,python-2.7,keep-alive,python-requests
| 14,171,479 | 1 | true | 0 | 0 |
You could try setting up your application or operating system to use a known good DNS server like 8.8.8.8
EDIT: You can also bypass DNS by adding the host name and IP address of the REST service to your hosts file.
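On the Requests side, keep-alive comes for free when you reuse a single Session, so each connection only needs one successful DNS lookup; a sketch with a hypothetical URL:
import requests

session = requests.Session()          # persistent connection pool / keep-alive

for i in range(100):
    response = session.get("http://api.example.com/endpoint")
    # ... use response.json() / response.text ...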
| 1 | 0 | 0 |
My internet connection has an issue: around 50% of the time, web pages don't load because the DNS lookup fails. Just reloading the page works, and I am able to browse like that.
However, I am also using a REST API service for my project. When I run the program, it keeps calling this web service repeatedly, hundreds of times. Because of my issue, I can connect successfully at most 3-4 times (when I am lucky), and then I ultimately get a connection error - "Max number of retries exceeded".
I was exploring my options when I came across the Keep-Alive feature of the Requests module. It's automatic, and I can't forcefully make it work.
How do I get this working?
P.S. - I know fixing my internet connection issue will solve it, but I am moving in a week, so I don't want to waste time here. I also need to complete my project, so please help!
|
Using Keep Alive in Requests module - Python 2.7
| 1.2 | 0 | 1 | 524 |
14,164,753 |
2013-01-04T20:49:00.000
| 2 | 1 | 0 | 0 |
python,hardware
| 14,164,831 | 2 | false | 0 | 0 |
Have you looked into getting a basic arduino? It sounds like you should pick up a cheap one at Radio Shack and get the RF devices to trigger your garage door opener remotely. With the hardware, you can easily talk to it via python (though it'd be easy enough to just do with the Arduino language).
| 2 | 0 | 0 |
I would like to build a RF transmitter/controller for my garage. When my vehicle gets within 25', I'd like for a computer to trigger a physical relay to open my garage door. you know, like Batman.
I like Python, so I'm hoping I can use it here.
|
Can I use Python to interact with an RF transmitter/controller?
| 0.197375 | 0 | 0 | 465 |
14,164,753 |
2013-01-04T20:49:00.000
| 4 | 1 | 0 | 0 |
python,hardware
| 14,165,338 | 2 | true | 0 | 0 |
Look into the Raspberry Pi. It's a $25 embeddable computer that supports Python. You will need to spec an RF transceiver that can interface with the on-board hardware and use the documentation to determine a control method.
| 2 | 0 | 0 |
I would like to build a RF transmitter/controller for my garage. When my vehicle gets within 25', I'd like for a computer to trigger a physical relay to open my garage door. you know, like Batman.
I like Python, so I'm hoping I can use it here.
|
Can I use Python to interact with an RF transmitter/controller?
| 1.2 | 0 | 0 | 465 |
14,165,573 |
2013-01-04T21:51:00.000
| 1 | 0 | 0 | 0 |
python,django,jenkins,django-testing,django-jenkins
| 14,205,396 | 2 | false | 1 | 0 |
This is a really tough question to answer.
It is possible that there are some common pitfalls Django developers fall into, but I do not know of any.
Outside of that, this is just normal debugging:
Find a way to reproduce the failure. If you can make the test fail on your own laptop, great. If you cannot, you have to debug it on the machine where it fails.
Get more information. Asserts can be made to print a custom message when they fail. Print values of relevant variables. Add debug printouts into your code and tests. See where things are not the way they are supposed to be. Google how to use the Python debugger.
Keep an open mind. The bug can be anywhere: in the hardware, the software environment, your code or in the test code. But unless you are god, Linus Torvalds or Brian Kernighan it is a safe first hypothesis the bug originates somewhere between your keyboard and back of your seat. (And all the three hackers above have made bad bugs too.)
| 1 | 2 | 0 |
I have asked this one earlier too but am not satisfied with the answer.
What I use:
Working on Django/python website.
Development done on python virtual envs locally.
using GIT as my SCM
have separate virtual servers deployed for Developer and Production branches for GIT
Using Jenkins CI for continuous Integration. Separate Virtual server deployed for Jenkins
Working:
I have Unit tests, smoke tests and Integration tests for the website. Jenkins has been setup so that whenever code is pushed from my local git branch to Developer and Production branch on git repo, a build is triggered in Jenkins.
Issue:
My tests are passing locally when I do a 'python manage.py test'
Random tests (mostly unit tests) FAIL in Jenkins when code is pushed to other branches (Developer and Production).
After a test failure, if I do do a build manually by pressing the 'Build Now' button in Jenkins, the tests usually pass and the build is successful.
Sometimes, when no changes are made to the code and code is still pushed to these branches, the tests are randomly failing in Jenkins.
Some common Errors:
AssertionError: 302 != 200
TypeError: 'NoneType' object is not subscriptable
IndexError: list index out of range
AssertionError: datetime.datetime(2012, 12, 5, 0, 0, 27, 218397) != datetime.datetime(2012, 12, 5, 0, 0, 27, 239884)
AssertionError: Response redirected to 'x' expected 'y'
Troubleshooting till date:
Ran all the tests locally on my machine and also on the virtual server. They are running fine.
Ran the individual failing tests locally and also on the virtual server. They are running fine.
Tried to recreate the failing conditions but as of now, the tests are passing.
The only problem I see is that whenever the code is pushed to the developer and production brnaches, the random test failure kicks in. Some tests fail repeatedly.
Can anyone tell me what more I can do to troubleshoot this problem. I tried googling the issue but in vain. I know xunitpatterns website has some good insights on the erratic tests behaviour but it is not helping since I tried most of the stuff there.
|
Random test failures when code pushed to Jenkins
| 0.099668 | 0 | 0 | 1,467 |
14,166,161 |
2013-01-04T22:39:00.000
| 1 | 0 | 0 | 0 |
python,django,search,nosql,full-text-search
| 14,166,303 | 1 | true | 1 | 0 |
Firstly, Haystack isn't a search engine, it's a library that provides a Django API to existing search engines like Solr and Whoosh.
That said, your example isn't really a very good one. You wouldn't use a separate search engine to search by ISBN, because your database would already have an index on the Book table which would efficiently do that search. Where a search engine would come in could be in two places. Firstly, you could index some or all of the book's contents to search on: databases are not very good at full-text search, but this is an area where search engines shine. Secondly, you could provide a search against multiple fields - say, author, title, publisher and description - in one go.
Also, search engines provide useful functionality like suggestions, faceting and so on that you won't get from a database.
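For a concrete feel of the second case, a sketch of a Haystack 2.x-style index for the Book example (the field names are assumptions; the text field would normally be built from a template listing author, title, publisher and description):
from haystack import indexes
from myapp.models import Book    # hypothetical app/model

class BookIndex(indexes.SearchIndex, indexes.Indexable):
    text = indexes.CharField(document=True, use_template=True)
    author = indexes.CharField(model_attr="author")
    isbn = indexes.CharField(model_attr="isbn")

    def get_model(self):
        return Book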
| 1 | 0 | 0 |
Could you explain how search engines like Sphinx, Haystack, etc fit in to a web framework. If you could explain in a way that someone new to web development could understand that would help.
One example use case I made up for this question is a book search feature. Lets say I have a noSQL database that contains book objects, each containing author, title, ISBN, etc.; how does something like Sphinx/Haystack/other search engine fit in with my database to search for a books with a given ISBN?
|
Explain search (Sphinx/Haystack) in simple context?
| 1.2 | 0 | 0 | 135 |
14,168,522 |
2013-01-05T04:16:00.000
| 2 | 0 | 1 | 1 |
python,windows
| 14,168,530 | 1 | false | 0 | 0 |
Download (or open if you already have) dependency walker, then open python.exe in it. See if you are missing a DLL or got a DLL corrupted. You may require to re-install python, some reference DLL or exe files could be corrupted/overwritten/modified/deleted.
| 1 | 1 | 0 |
My Python environment was working fine before. After running a C++ program multiple times from Python subprocesses, I restarted the computer and saw that it says python.exe is not a valid Win32 application, and I can no longer access Python. What changed, I am not sure. Will I need to reinstall Python? When I open the IDE, this message comes up: Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32
|
python.exe is not a valid win32 application error coming suddenly
| 0.379949 | 0 | 0 | 5,945 |
14,172,470 |
2013-01-05T14:00:00.000
| 1 | 0 | 0 | 1 |
python,flask,gunicorn
| 14,305,123 | 2 | false | 1 | 0 |
We ended up changing our application to send logs to stdout and now rely on supervisord to aggregate the logs and write them to a file. We also considered sending logs directly to rsyslog but for now this is working well for us.
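A sketch of the Flask side of that setup: log to stdout and let supervisord (or gunicorn's log capture) do the aggregation, so the sync workers never contend for one file.
import logging
import sys

from flask import Flask

app = Flask(__name__)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter(
    "%(asctime)s pid=%(process)d %(levelname)s %(message)s"))
app.logger.addHandler(handler)
app.logger.setLevel(logging.INFO)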
| 1 | 8 | 0 |
I have a flask app that runs in multiple gunicorn sync processes on a server and uses TimedRotatingFileHandler to log to a file from within the flask application in each worker. In retrospect this seems unsafe. Is there a standard way to accomplish this in python (at high volume) without writing my own socket based logging server or similar? How do other people accomplish this? We do use syslog to aggregate across servers to a logging server already but I'd ideally like to persist the log on the app node first.
Thanks for your insights
|
Gunicorn logging from multiple workers
| 0.099668 | 0 | 0 | 3,257 |
14,176,280 |
2013-01-05T20:49:00.000
| -2 | 0 | 0 | 0 |
python,arrays,numpy,statistics,scipy
| 26,791,595 | 4 | false | 0 | 0 |
Go to MS Excel. If you don't have it, your work does; there are also alternatives.
Enter the arrays of numbers in an Excel worksheet and run the formula in the entry field, =TTEST(array1, array2, tails, type): one-tailed is 1, two-tailed is 2, and type 1 is a paired test... easy peasy. It's a simple Student's t-test, and I believe you may still need a t-table to interpret the statistic (internet). Yet it's quick for on-the-fly comparison of samples.
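If you would rather stay in Python, scipy has the same test built in. Note that a genuinely per-cell test needs the underlying replicate samples, not just the two arrays of means; with only A and B you can test the paired difference over all cells at once, e.g.:
from scipy import stats

# A and B are the two (105, 234) arrays; ravel() pairs them cell by cell
t_stat, p_value = stats.ttest_rel(A.ravel(), B.ravel())
print("t = %.3f, p = %.4g" % (t_stat, p_value))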
| 1 | 0 | 1 |
I have two 2-D arrays with the same shape (105,234) named A & B essentially comprised of mean values from other arrays. I am familiar with Python's scipy package, but I can't seem to find a way to test whether or not the two arrays are statistically significantly different at each individual array index. I'm thinking this is just a large 2D paired T-test, but am having difficulty. Any ideas or other packages to use?
|
Test for statistically significant difference between two arrays
| -0.099668 | 0 | 0 | 7,302 |
14,177,297 |
2013-01-05T22:53:00.000
| 0 | 0 | 0 | 0 |
menu,python-2.7,python-idle,turtle-graphics,cheeseshop
| 26,772,106 | 1 | false | 0 | 1 |
In 3.x, look at Lib/turtledemo/main.py for how to put a turtle canvas into a tkinter window with other stuff. In 2.x, the turtledemo main code is in Demo/turtle/turtleDemo.py in the source repository, but the Demo directory is not installed on Windows.
| 1 | 1 | 0 |
So I have an assignment in Computer Science that requires that I put an interactive menu into my python turtle graphics projects. I run on python 2.7.3 and would like some help on what modules to import and a basic outline of how to put an interactive menu into python turtle graphics. Thank you for your answers!
|
How to Insert an Interactive Menu into Python Turtle Graphics
| 0 | 0 | 0 | 1,224 |
14,177,436 |
2013-01-05T23:11:00.000
| 0 | 0 | 0 | 0 |
python-3.x,amazon-web-services,amazon-s3
| 70,791,596 | 2 | false | 1 | 0 |
Using AWS CLI,
aws s3 ls s3://*bucketname* --region *bucket-region* --no-sign-request
| 2 | 3 | 0 |
Given a bucket with publicly accessible contents, how can I get a listing of all those publicly accessible contents? I know boto can do this, but boto requires AWS credentials. Also, boto doesn't work in Python3, which is what I'm working with.
|
Get publicly accessible contents of S3 bucket without AWS credentials
| 0 | 0 | 1 | 1,214 |
14,177,436 |
2013-01-05T23:11:00.000
| 4 | 0 | 0 | 0 |
python-3.x,amazon-web-services,amazon-s3
| 14,199,730 | 2 | false | 1 | 0 |
If the bucket's permissions allow Everyone to list it, you can just do a simple HTTP GET request to http://s3.amazonaws.com/bucketname with no credentials. The response will be XML with everything in it, whether those objects are accessible by Everyone or not. I don't know if boto has an option to make this request without credentials. If not, you'll have to use lower-level HTTP and XML libraries.
If the bucket itself does not allow Everyone to list it, there is no way to get a list of its contents, even if some of the objects in it are publicly accessible.
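A sketch of that request using only the Python 3 standard library (the bucket name is a placeholder; the namespace string is the one S3 uses in its ListBucketResult XML):
import urllib.request
import xml.etree.ElementTree as ET

bucket = "bucketname"
data = urllib.request.urlopen("http://s3.amazonaws.com/" + bucket).read()

ns = "{http://s3.amazonaws.com/doc/2006-03-01/}"
for item in ET.fromstring(data).iter(ns + "Contents"):
    print(item.find(ns + "Key").text, item.find(ns + "Size").text)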
| 2 | 3 | 0 |
Given a bucket with publicly accessible contents, how can I get a listing of all those publicly accessible contents? I know boto can do this, but boto requires AWS credentials. Also, boto doesn't work in Python3, which is what I'm working with.
|
Get publicly accessible contents of S3 bucket without AWS credentials
| 0.379949 | 0 | 1 | 1,214 |
14,179,543 |
2013-01-06T05:16:00.000
| 3 | 0 | 1 | 0 |
python,nlp,nltk
| 14,179,593 | 3 | false | 0 | 0 |
How about you tokenize the string as a first step using str.split(). Then go through the resulting list with a for loop, doing the following: if the word is not among the dictionary's keys, add it to the dictionary with a count of 1. Otherwise, the word is already there: look up its count, increment it, and store it back. Finally, when you are done counting words, go through the dictionary and retain only the words with a count of five or more.
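The standard library already wraps that pattern up in collections.Counter; a minimal sketch:
from collections import Counter

def frequent_words(text, min_count=5):
    counts = Counter(text.lower().split())     # crude tokenization
    return [word for word, n in counts.items() if n >= min_count]

with open("document.txt") as f:
    print(frequent_words(f.read()))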
| 1 | 3 | 0 |
I just need to do what the title of this post says: write a python program that returns all words that occur at least 5 times in a text. I realize this is a pretty simple question. I am a novice programmer trying to pick up some NLP skills and for some reason I can't figure this out. Your help would be much appreciated!
Thank you!
|
How to write a python program that returns all words that occur at least 5 times in a text?
| 0.197375 | 0 | 0 | 750 |
14,179,941 |
2013-01-06T06:33:00.000
| 1 | 0 | 1 | 1 |
python,numpy,installation,scipy
| 21,084,055 | 4 | false | 0 | 0 |
You can import a module from an arbitrary path by calling:
sys.path.append()
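For example, if the packages have been unpacked (or installed with --user) into a directory you control (the path is hypothetical):
import sys
sys.path.append("/home/username/python-packages")

import numpy   # now resolved from the directory above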
| 1 | 32 | 0 |
I am using numpy / scipy / pynest to do some research computing on Mac OS X. For performance, we rent a 400-node cluster (with Linux) from our university so that the tasks could be done parallel. The problem is that we are NOT allowed to install any extra packages on the cluster (no sudo or any installation tool), they only provide the raw python itself.
How can I run my scripts on the cluster then? Is there any way to integrate the modules (numpy and scipy also have some compiled binaries I think) so that it could be interpreted and executed without installing packages?
|
How to install python packages without root privileges?
| 0.049958 | 0 | 0 | 40,097 |
14,180,944 |
2013-01-06T09:44:00.000
| 2 | 0 | 0 | 1 |
python,django,celery
| 14,181,856 | 1 | false | 1 | 0 |
The stable version of kombu is production ready, same for celery.
kombu takes care of the whole messaging between consumers, producers and the message broker which in order are the celery workers, webworkers (or more in general scripts that put tasks in the queue) and the message broker you are using.
You need kombu to run celery (it is actually in the requirements if you look at its setup)
With kombu you can use different message brokers (rabbitmq, redis ...) so the choice is not between using kombu or rabbitmq as they do different things, but between kombu and redis or kombu and rabbitmq etc etc..
If you are ok with redis as message broker, you just have to install:
celery-with-redis and django-celery packages
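A sketch of the relevant settings.py lines for the Redis-as-broker setup of that era (the URLs are placeholders):
BROKER_URL = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND = "redis://localhost:6379/1"

INSTALLED_APPS += ("djcelery",)

import djcelery
djcelery.setup_loader()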
| 1 | 3 | 0 |
I am using django-kombu with Celery but have read in quite a few places that it isn't production ready.
Basically, I want to create a multiple-master / multiple-slave architecture using Celery and pass messages between them and back to the main program that made the call.
I am not able to understand where Kombu fits in there. Why not RabbitMQ? The tutorials are all very messy, with one person suggesting one thing and another person something else.
Can someone give me a clearer picture of what a production stack looks like when dealing with Celery + Django?
Also, do I have to use Dj-Celery?
|
What are the other alternatives to using django-kombu?
| 0.379949 | 0 | 0 | 590 |
14,183,362 |
2013-01-06T15:02:00.000
| 0 | 0 | 1 | 1 |
python,linux
| 14,183,758 | 2 | false | 0 | 0 |
Create a symlink in /usr/bin/ called python2.7, point it to where you have installed the new Python, and use that.
Do not attempt to upgrade or force the default Python on a Red Hat box, because a lot of other tools will stop working.
| 2 | 0 | 0 |
I have manually installed Python (2.7.3). How do I update the rpm version?
usr/bin/python -V:
Python 2.7.3
rpm -qf /usr/bin/python:
python-2.6.5-3.el6.x86_64
any suggestions?
linux version: RH6.3
|
python binary version doesnt match rpm version
| 0 | 0 | 0 | 64 |
14,183,362 |
2013-01-06T15:02:00.000
| 1 | 0 | 1 | 1 |
python,linux
| 14,185,931 | 2 | false | 0 | 0 |
You installed it incorrectly. Instead of make install you should run make altinstall. This will install the new version of Python parallel to existing versions, and create a new executable in $PREFIX/bin with the name of python followed by the minor version of Python installed, e.g. python2.7.
| 2 | 0 | 0 |
I have manually installed Python (2.7.3). How do I update the rpm version?
usr/bin/python -V:
Python 2.7.3
rpm -qf /usr/bin/python:
python-2.6.5-3.el6.x86_64
any suggestions?
linux version: RH6.3
|
python binary version doesnt match rpm version
| 0.099668 | 0 | 0 | 64 |
14,184,589 |
2013-01-06T17:11:00.000
| 0 | 0 | 0 | 1 |
eclipse,macos,python-2.7,wxpython
| 14,198,642 | 1 | true | 0 | 0 |
I always go to Preferences / PyDev / Interpreter - Python. Then add a new interpreter, and just click Add and Apply. Wait until everything is parsed, this takes a while. Then click OK.
Change the interpreter from "Default" to your newly set-up interpreter.
Check if correct interpreter is set for your project. Right-click the project / Properties / PyDev - Interpreter/Grammar. New projects should get this by default.
| 1 | 0 | 0 |
I have Eclipse, Python 2.7, wxPython 2.8, and OS X 10.5.8.
I would like wxPython to be picked up correctly in my Eclipse environment, so that the wxPython commands are not all underlined as errors.
I've added the correct path of the wx library to the PYTHONPATH via Preferences. Once I add it manually in Eclipse and save the settings, it works.
But if I close Eclipse and open it again, even though the interpreter has the wxPython path configured, it seems it's not recognized, and I have no autocomplete and no documentation. I need to remove and re-add the same path to make everything work. It still happens after months. I guess it may be a problem with Eclipse on Mac OS X.
Do you know why?
Do you agree?
Thank you in advance.
|
how to set the interpreter of wxpython for eclipse once for all
| 1.2 | 0 | 0 | 245 |
14,185,288 |
2013-01-06T18:24:00.000
| 0 | 0 | 1 | 0 |
python,virtualenv,virtualenvwrapper
| 50,938,501 | 3 | false | 0 | 0 |
Virtualenv does not work here because it uses the local Python interpreter.
My solution is to use conda (Anaconda or Miniconda) to build the environment, so if you need some packages you can just conda install them. Then copy the environment to the remote machine and run it there.
| 1 | 9 | 0 |
Suppose I have a Python interpreter with many modules installed on my local system, and it has been tuned to just work.
Now I want to create a virtualenv to freeze these, so that they won't be broken by upgrading in the future.
How can I do that? Thanks.
I can't use pip freeze, because the target is a cluster on which there's no pip and I don't have the privileges to install it. And I don't want to reinstall the modules either; I'm asking whether there's a way to clone the environment.
|
How to create a virtualenv by cloning the current local environment?
| 0 | 0 | 0 | 7,361 |
14,185,831 |
2013-01-06T19:22:00.000
| 1 | 1 | 1 | 0 |
python,unit-testing
| 14,185,895 | 1 | false | 0 | 0 |
I usually make one class that handles the setup and tearing down for a particular test topic and subclass it for every single test. That is, one class for every test, with a name that conveys what is being tested. Nothing fancy.
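A small sketch of that shape (the names are illustrative): the class name says what is under test, setUp holds the shared fixture, and each method tests one behaviour.
import unittest

class DivisionTest(unittest.TestCase):
    def setUp(self):
        self.numerator = 10

    def test_divides_evenly(self):
        self.assertEqual(self.numerator / 5, 2)

    def test_division_by_zero_raises(self):
        with self.assertRaises(ZeroDivisionError):
            self.numerator / 0

if __name__ == "__main__":
    unittest.main()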
| 1 | 0 | 0 |
I'm trying to learn TDD in Python. Unfortunately, I have not found any PEPs about unittest.
Should one subclass of unittest.TestCase contain all the tests for one tested function?
What are the recommendations for naming classes, methods, or test files?
|
Python unittest - division tests into classes/functions
| 0.197375 | 0 | 0 | 352 |
14,188,923 |
2013-01-07T02:04:00.000
| 0 | 0 | 0 | 0 |
python,excel,xlrd,xlwt,openpyxl
| 30,048,138 | 2 | false | 0 | 0 |
For "xls" files it's possible to use the xlutils package. It's currently not possible to copy objects between workbooks in openpyxl due to the structure of the Excel format: there are lots of dependencies all over the place that need to be managed. It is, therefore, the responsibility of client code to copy everything required manually. If time permits we might try and port some of the xlutils functionality to openpyxl.
| 1 | 3 | 0 |
I want to compare the value of a given column at each row against another value, and if the values are equal, I want to copy the whole row to another spreadsheet.
How can I do this using Python?
THANKS!
|
How to copy a row of Excel sheet to another sheet using Python
| 0 | 1 | 0 | 14,135 |
14,191,034 |
2013-01-07T06:34:00.000
| 1 | 0 | 0 | 0 |
python,django
| 14,191,085 | 1 | true | 1 | 0 |
What we usually do in our Django projects is create versions of all configuration files for each platform (dev, prod, etc...) and use symlinks to select the correct one. Now that Windows supports links properly, this solution fits everybody.
If you insist on another configuration file, try making it a Python file that just imports the proper configuration file, so instead of name="development" you'll have something like execfile('development_settings.py')
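A sketch of the same idea driven by an environment variable instead of symlinks (the file names are hypothetical); the ConfigParser file can be selected the same way:
# settings.py
import os

if os.environ.get("DJANGO_ENV") == "production":
    from production_settings import *
    CONFIG_FILE = "production.cfg"
else:
    from development_settings import *
    CONFIG_FILE = "development.cfg"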
| 1 | 0 | 0 |
I'm building a project with Django. I can make two settings files for Django, production_setting.py and development_setting.py; however, I also need a config file for my project, and I'm using ConfigParser to parse those files, e.g.
[Section]
name = "development"
version = "1.0"
How do I split this config file into production and development versions?
|
python project setting for production and developement
| 1.2 | 0 | 0 | 150 |
14,191,410 |
2013-01-07T07:08:00.000
| 1 | 0 | 0 | 0 |
python,ruby-on-rails,ruby,node.js,webserver
| 14,192,830 | 2 | true | 1 | 0 |
It all depends on what you are actually trying to do and what your requirements are.
There is no real "right" language for things like these, it's mostly determined by the Frameworks you'll be using on those language (since all are general-purpose programming languages) and your personal preference/experience.
I can't comment too much on Python as I've never really tried it, but from what I've heard/seen it can be used for all the things Ruby is used for, although the community around Python is a bit smaller, with Python being used a lot more in the scientific community (that may be good if your app will be doing any crazy calculations).
That leads us to Ruby. Ruby and the Ruby on Rails framework is mostly used to write Web-Applications and Services.
Ruby is a very elegant language to program in and the tools are very mature and easy to work with.
Rails is a framework on Ruby that makes Web-Development very simple in providing you with a very good set of tools especially suited to write data-driven web-apps.
Very flexible and a joy to work with.
There are however some drawbacks to Ruby at the moment, mostly related to poor threading.
Node.js is a newer platform that is focused on parallelism and supports all the things Ruby and Python can do, although its documentation is lacking compared to what Ruby will give you. It's also not the most beginner-friendly choice, as JavaScript with all its quirks and the callback-oriented async model is not the simplest thing around.
That said, Node is very bare metal and makes it very very easy to write arbitrary TCP/UDP Servers that don't necessary work over HTTP. Custom streaming protocols or any custom protocol in fact are almost trivially done in Node.. (I don't advise you do that, but maybe that's important to your task).
To be fair there are frameworks that facilitate writing of Web-Apps for node, but the coices are a) not as mature as Rails or Django, and b) you have to pick your framework choices.
This means: Where Rails does come with a lot of defaults that guide you, (Rails for example has a default Database stack it's optimized around), Node with Frameworks like Express only provide you with a bare-bones HTTP server where you have to bring in the Database of your choice etc...
In closing: All languages and frameworks you asked about are mostly used for writing Web-Applications. They all can however be used to write a client that consumes the service too - it mostly comes down to general preference.
| 1 | 2 | 0 |
please excuse my ignorance as I'm an Aerospace Engineer going headfirst into the software world.
I'm building a web solution that allows small computers (think beagleboard) to connect to a server that sends and receives data to these clients. The connection will be over many types including GPRS/3G/4G.
The user will interact with clients in real time through webpages served by this central server. The solution must scale well.
I've been using Python for the client side and some simple Ruby code for the servers with Heroku. I have also tried a bit of Node.js and Ruby on Rails. With so many options, I'm struggling to see the forest for the trees and am wondering where these languages will fit into my stack.
Your help is appreciated; I'm happy to give more details.
|
Outlining a Solution Stack
| 1.2 | 0 | 0 | 98 |
14,191,462 |
2013-01-07T07:12:00.000
| 0 | 0 | 0 | 0 |
python,python-2.7
| 14,193,534 | 1 | false | 0 | 1 |
The question is vague, but I guess you should use c_char_p instead of POINTER(c_char).
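A small sketch of both directions (the byte strings are arbitrary):
from ctypes import POINTER, c_char, c_char_p, cast, create_string_buffer

# Give a C function writable char* memory:
buf = create_string_buffer(b"hello world")     # mutable char array + NUL
ptr = cast(buf, POINTER(c_char))               # usable where POINTER(c_char) is expected

# Read a POINTER(c_char) back into a Python string:
text = cast(ptr, c_char_p).value               # b"hello world"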
| 1 | 0 | 0 |
I call a foreign C library in my program using ctypes, and I don't know how to assign a variable of POINTER(c_char) type to a string.
|
How can I assign a variant of POINTER(c_char) type to a string?
| 0 | 0 | 0 | 149 |
14,191,487 |
2013-01-07T07:14:00.000
| 1 | 0 | 0 | 0 |
python,bioinformatics,pca,biopython,hierarchical-clustering
| 28,408,552 | 4 | false | 0 | 0 |
I recommend using R/Bioconductor and free software like Expander and MeV. A good, flexible choice is the Cluster software with TreeView. You can also run R, Stata, or JMP from your Python code and completely automate your data management.
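If you want to stay in Python, a rough sketch using numpy plus scipy (an extra install on top of the packages listed in the question); the data shape is hypothetical: rows = genes, columns = arrays.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

data = np.random.rand(500, 12)           # stand-in for the expression matrix

# hierarchical clustering of the columns (the replicates)
Z = linkage(data.T, method="average", metric="correlation")
dendrogram(Z)
plt.show()

# PCA of the columns via SVD on the mean-centred data
centred = data.T - data.T.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
scores = U * s                            # projections onto the components
plt.scatter(scores[:, 0], scores[:, 1])
plt.show()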
| 1 | 3 | 1 |
I'm trying to analyze microarray data using hierarchical clustering of the microarray columns (results from the individual microarray replicates) and PCA.
I'm new to Python. I have Python 2.7.3, Biopython, NumPy, matplotlib, and NetworkX.
Are there functions in python or biopython (similar to MATLAB's clustergram and mapcaplot) that I can use to do this?
|
Microarray hierarchical clustering and PCA with python
| 0.049958 | 0 | 0 | 1,029 |
14,194,997 |
2013-01-07T11:29:00.000
| 7 | 0 | 1 | 0 |
python,types,dictionary
| 14,195,088 | 5 | false | 0 | 0 |
The Pythonic way here is to just use a normal dictionary and only add objects of a particular type to it - don't try to enforce the restriction, it shouldn't be necessary.
Edit: To expand my argument, let me explain - you seem to be under the impression that writing good code requires type safety. The first question is why? Sure, type safety catches some errors at compile time, but in my experience, those errors are rare, easy to catch with even the most trivial testing, and generally easy to fix.
By contrast, the most annoying, hard to fix, and hard to test for bugs are logical ones, that the computer can't spot at all. These are best prevented by making readable code that is easy to understand, so errors stand out more. Dynamic typing massively helps with that by reducing the verbosity of code. You can argue typing makes it easier to read the code (as one can see the types of variables as you use them), but in dynamic languages, this kind of thing is given by naming carefully - if I name a variable seq, people will presume it's a sequence and can be used as such. A mixture of descriptive naming and good documentation makes dynamic code far better, in my experience.
When it comes down to it, type safety in a language is a matter of preference, however, Python is a dynamic language designed around the idea of duck typing. Everything in the language is designed around that and trying to use it in another way would be incredibly counter-productive. If you want to write Java, write Java.
| 1 | 1 | 0 |
What's a good way to implement a type safe dictionary in Python (3.2) - a dictionary that will only allow adding objects of a particular type to itself?
I myself have a simple solution: build a wrapper class around the dictionary with an 'addItem' method that does a type check assertion before adding the object. Looking to see if someone has something better.
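For reference, the wrapper described above is only a few lines if you subclass dict directly (a sketch; note that update() and setdefault() bypass __setitem__ unless you override them too):
class TypedDict(dict):
    def __init__(self, value_type, *args, **kwargs):
        self.value_type = value_type
        super().__init__(*args, **kwargs)

    def __setitem__(self, key, value):
        if not isinstance(value, self.value_type):
            raise TypeError("expected %s, got %s" % (
                self.value_type.__name__, type(value).__name__))
        super().__setitem__(key, value)

scores = TypedDict(int)
scores["alice"] = 10        # fine
scores["bob"] = "ten"       # raises TypeError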
|
Type safe Python 3.2 dictionary
| 1 | 0 | 0 | 998 |
14,201,284 |
2013-01-07T17:50:00.000
| 1 | 0 | 0 | 0 |
python,logging,numpy,matplotlib
| 14,223,556 | 1 | true | 0 | 0 |
Another option for storage could be using HDF5 or PyTables. Depending on how you structure the data, with PyTables you can query the data at key "points". As noted in the comments, I don't think an off-the-shelf solution exists.
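A sketch of the frame-number/key layout with h5py (PyTables would look similar and adds richer querying); the names are illustrative:
import h5py
import numpy as np

log = h5py.File("run_log.h5", "a")

def log_array(frame, key, array):
    log.create_dataset("frame_%06d/%s" % (frame, key), data=array)

log_array(42, "points", np.random.rand(100, 2))

# later: gather "points" across all frames for summary statistics
point_counts = [grp["points"].shape[0]
                for name, grp in log.items() if "points" in grp]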
| 1 | 1 | 1 |
I'm using python to prototype the algorithms of a computer vision system I'm creating. I would like to be able to easily log heterogeneous data, for example: images, numpy arrays, matplotlib plots, etc, from within the algorithms, and do that using two keys, one for the current frame number and another to describe the logged object. Then I would like to be able to browse all the data from a web browser. Finally, I would like to be able to easily process the logs to generate summaries, for example retrieve the key "points" for all the frame numbers and calculate some statistics on them. My intention is to use this logging subsystem to facilitate debugging the behaviour of the algorithms and produce summaries for benchmarking.
I'm set to create this subsystem myself but I thought to ask first if someone has already done something similar. Does anybody know of any python package that I can use to do what I ask?
otherwise, does anybody have any advice on which tools to use to create this myself?
|
heterogeneous data logging and analysis
| 1.2 | 0 | 0 | 281 |