Available Count (int64, 1 to 31) | AnswerCount (int64, 1 to 35) | GUI and Desktop Applications (int64, 0 to 1) | Users Score (int64, -17 to 588) | Q_Score (int64, 0 to 6.79k) | Python Basics and Environment (int64, 0 to 1) | Score (float64, -1 to 1.2) | Networking and APIs (int64, 0 to 1) | Question (string, lengths 15 to 7.24k) | Database and SQL (int64, 0 to 1) | Tags (string, lengths 6 to 76) | CreationDate (string, lengths 23 to 23) | System Administration and DevOps (int64, 0 to 1) | Q_Id (int64, 469 to 38.2M) | Answer (string, lengths 15 to 7k) | Data Science and Machine Learning (int64, 0 to 1) | ViewCount (int64, 13 to 1.88M) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | Other (int64, 1 to 1) | Title (string, lengths 15 to 142) | A_Id (int64, 518 to 72.2M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 6 | 0 | 4 | 46 | 1 | 0.132549 | 0 | I need to generate a sine wave sound in Python, and I need to be able to control frequency, duration, and relative volume. By 'generate' I mean that I want it to play through the speakers immediately, not save to a file.
What is the easiest way to do this? | 0 | python,audio | 2011-11-28T16:51:00.000 | 0 | 8,299,303 | One of the more consistent and easy-to-install ways to deal with sound in Python is the Pygame multimedia libraries.
I'd recommend using it - the pygame.sndarray submodule allows you to manipulate numbers in a data vector so that they become a high-level sound object that can be played in the pygame.mixer module.
The documentation on the pygame.org site should be enough for using the sndarray module. | 0 | 68,751 | false | 0 | 1 | Generating sine wave sound in Python | 8,300,219
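The sndarray approach described above can be sketched as follows. The tone parameters are illustrative, and the pygame playback calls are left as comments because they require an initialized mixer and an audio device:

```python
import numpy as np

# Illustrative parameters (not from the answer): 440 Hz, 1 second, 44.1 kHz, half volume
sample_rate = 44100
duration = 1.0
freq = 440.0
volume = 0.5

# Build the sample vector: 16-bit signed integers, a format pygame's mixer commonly uses
t = np.arange(int(sample_rate * duration))
wave = (volume * 32767 * np.sin(2 * np.pi * freq * t / sample_rate)).astype(np.int16)

# With pygame installed, this vector becomes a playable Sound object:
# pygame.mixer.init(frequency=sample_rate, size=-16, channels=1)
# pygame.sndarray.make_sound(wave).play()
```

Frequency, duration, and relative volume are all plain variables here, which is what the question asked for.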
2 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | Background: I am writing a matching script in python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction.
Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith" and then have all of those records loaded into memory as you go through each, looking for matches for a specific "John Smith" using various data points.
Would this be faster, is it feasible in Python, and if so, does anyone have any recommendations for how to do it? | 1 | python,mysql | 2011-11-28T17:14:00.000 | 0 | 8,299,614 | Your strategy is reasonable, though I would first look at doing as much of the work as possible in the database query using LIKE and other SQL functions. It should be possible to make a query that matches complex criteria. | 0 | 98 | false | 0 | 1 | Could someone give me their two cents on this optimization strategy | 8,299,759
2 | 3 | 0 | 2 | 0 | 0 | 0.132549 | 0 | Background: I am writing a matching script in python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction.
Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith" and then have all of those records loaded into memory as you go through each, looking for matches for a specific "John Smith" using various data points.
Would this be faster, is it feasible in Python, and if so, does anyone have any recommendations for how to do it? | 1 | python,mysql | 2011-11-28T17:14:00.000 | 0 | 8,299,614 | Regarding: "would this be faster:"
The behind-the-scenes logistics of the SQL engine are really optimized for this sort of thing. You might need to create an SQL PROCEDURE or a fairly complex query, however.
Caveat, if you're not particularly good at or fond of maintaining SQL, and this isn't a time-sensitive query, then you might be wasting programmer time over CPU/IO time in getting it right.
However, if this is something that runs often or is time-sensitive, you should almost certainly be building some kind of JOIN logic in SQL, passing in the appropriate values (possibly wildcards), and letting the database do the filtering in the relational data set, instead of collecting a larger number of "wrong" records and then filtering them out in procedural code.
You say the database is "pretty slow." Is this because it's on a distant host, or because the tables aren't indexed for the types of searches you're doing? … If you're doing a complex query against columns that aren't indexed for it, that can be a pain; you can use various SQL tools including ANALYZE to see what might be slowing down a query. Most SQL GUIs will have some shortcuts for such things, as well. | 0 | 98 | false | 0 | 1 | Could someone give me their two cents on this optimization strategy | 8,299,780
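The "load all the Smiths once, then match in memory" strategy discussed above can be sketched like this. An in-memory SQLite table stands in for the real MySQL database, and difflib is one (assumed) way to do the fuzzy matching against the in-memory candidates:

```python
import difflib
import sqlite3

# In-memory stand-in for the real customer database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "John Smith"), (2, "Jon Smith"), (3, "Jane Smith")])

# One query for the shared last name instead of one query per transaction
rows = conn.execute(
    "SELECT id, name FROM customers WHERE name LIKE ?", ("% Smith",)
).fetchall()

def best_match(target, candidates):
    # Fuzzy-match a transaction name against the already-loaded candidates
    names = [name for _, name in candidates]
    hits = difflib.get_close_matches(target, names, n=1, cutoff=0.6)
    return hits[0] if hits else None
```

Whether this beats doing the filtering in SQL depends on indexing and network latency, as the answers above point out.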
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I'm writing an IRC bot using twisted python, and some actions should only be available to channel operators. How do I determine the 'user level' of a user in a channel using twisteds IRCClient? | 0 | python,twisted,irc | 2011-11-28T18:32:00.000 | 0 | 8,300,545 | You can also use channel access lists if your bot has permissions to do so. | 0 | 1,572 | false | 0 | 1 | Check if user is 'voiced' or 'op' in IRC channel using twisted | 9,563,454 |
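A hedged sketch of the underlying idea: Twisted dispatches IRC numeric replies to methods such as irc_RPL_NAMREPLY on IRCClient, and the NAMES reply marks channel operators and voiced users with standard prefix characters. Only the prefix parsing is shown here; the Twisted wiring is assumed:

```python
def parse_names_entry(entry):
    """Map a NAMES-reply nick entry to (nick, level) using standard IRC prefixes."""
    if entry.startswith("@"):   # '@' marks a channel operator
        return entry[1:], "op"
    if entry.startswith("+"):   # '+' marks a voiced user
        return entry[1:], "voice"
    return entry, "normal"
```

Inside a Twisted bot you would call something like this from your NAMES-reply handler and cache the result per channel.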
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 1 | I am working with a Python function that sends mails which include an attachment and an HTML message... I want to add an image to the HTML message using
<img src="XXXX">
When I try it, the message respects the tag, but does not display the image I want (it displays the not-found image "X")...
Does anyone know if this is a problem with the MIME thing... because I am using the MIMEMultipart('Mixed')...
Or is it a problem with the path of the image (I'm using the same path for the attachment file and there is no problem with it)...
I don't know what else it could be!!
thanks a lot!! | 0 | python,html,mime-types,sendmail,mime-message | 2011-11-28T19:53:00.000 | 0 | 8,301,501 | You need to write src="cid:ContentId" to refer to an attached image, where ContentId is the ID of the MIME part. | 0 | 431 | false | 1 | 1 | Image in the HTML email message | 8,301,559 |
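A minimal sketch of the cid approach with the stdlib email package; the Content-ID value image1 and the placeholder image bytes are arbitrary:

```python
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Embed an image via a Content-ID, referenced as cid:image1 in the HTML part
msg = MIMEMultipart("related")
html = MIMEText('<html><body><img src="cid:image1"></body></html>', "html")
msg.attach(html)

img_data = b"\x89PNG\r\n\x1a\n"  # placeholder bytes; normally open("logo.png", "rb").read()
img = MIMEImage(img_data, _subtype="png")
img.add_header("Content-ID", "<image1>")
msg.attach(img)
```

Note the angle brackets in the Content-ID header but not in the cid: reference; that asymmetry is a common source of "broken image" icons.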
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 1 | I am working with a Python function that sends mails which include an attachment and an HTML message... I want to add an image to the HTML message using
<img src="XXXX">
When I try it, the message respects the tag, but does not display the image I want (it displays the not-found image "X")...
Does anyone know if this is a problem with the MIME thing... because I am using the MIMEMultipart('Mixed')...
Or is it a problem with the path of the image (I'm using the same path for the attachment file and there is no problem with it)...
I don't know what else it could be!!
thanks a lot!! | 0 | python,html,mime-types,sendmail,mime-message | 2011-11-28T19:53:00.000 | 0 | 8,301,501 | In your html you need the fully qualified path to the image: http://yourdomain.com/images/image.jpg
You should be able to take the URL in the image tag, paste it into the browser's address bar and view it there. If you can't see it, you've got the wrong path. | 0 | 431 | true | 1 | 1 | Image in the HTML email message | 8,301,539 |
1 | 4 | 0 | 1 | 8 | 0 | 0.049958 | 0 | I am trying to build a system that accepts text and outputs the phonetic spelling of the words of this text. Any ideas on what libraries can be used in Python and Java? | 0 | java,python,text-processing,text-mining,spelling | 2011-11-28T21:26:00.000 | 0 | 8,302,553 | Are you looking for something akin to the International Phonetic Alphabet (IPA) or some other phonetic output? If ARPAbet is OK, there is the CMU pronouncing dictionary (http://www.speech.cs.cmu.edu/cgi-bin/cmudict). That'll give the ARPAbet rendering of most words in English. I've written some code that converts the ARPAbet spelling to IPA and can post it to GitHub if you'd like. | 0 | 5,615 | false | 1 | 1 | phonetic spelling in Python and Java | 8,546,969
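A hedged sketch of such a conversion; the mapping below covers only a handful of the roughly 39 ARPAbet phonemes and is illustrative, not the poster's actual code:

```python
# Partial ARPAbet -> IPA table (illustrative subset, not the full phoneme set)
ARPABET_TO_IPA = {
    "AA": "ɑ", "AE": "æ", "AH": "ʌ", "IY": "i", "EH": "ɛ",
    "K": "k", "T": "t", "S": "s", "N": "n", "L": "l",
}

def arpabet_to_ipa(phones):
    """Convert a list of ARPAbet phones to an IPA string.

    CMUdict vowels carry stress digits (e.g. AE1), which are stripped first;
    unknown phones are passed through unchanged.
    """
    return "".join(ARPABET_TO_IPA.get(p.rstrip("012"), p) for p in phones)
```

For example, CMUdict renders "cat" as K AE1 T, which this sketch maps to "kæt".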
2 | 4 | 0 | 2 | 12 | 0 | 1.2 | 0 | I have been trying to get a project of mine to run but I have run into trouble. After much debugging I have narrowed down the problem but have no idea how to proceed.
Some background, I am using a python script inside C++ code. This is somewhat documented on Python, and I managed to get it running very well in my basic executable. #include and a -lpython2.6 and everything was grand.
However, difficulty has arisen when running this python script from a shared library(.so). This shared library is "loaded" as a "module" by a simulation system (OpenRAVE). The system interacts with this module using a virtual method for "modules" called SendCommand. The module then starts a boost::thread, giving python its own thread, and returns to the simulation system. However, when python begins importing its modules and thus loading its dynamic libraries it fails, I assume due to the following error:
ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so: undefined symbol: _Py_ZeroStruct
I have run ldd on my executable and the shared library; there doesn't seem to be a difference. I have also run nm -D on the file above; the _Py_ZeroStruct is indeed undefined. If you guys would like printouts of the commands I would be glad to supply them. Any advice would be greatly appreciated, thank you.
Here is the full python error:
Traceback (most recent call last):
File "/usr/lib/python2.6/dist-packages/numpy/__init__.py", line 130, in
import add_newdocs
File "/usr/lib/python2.6/dist-packages/numpy/add_newdocs.py", line 9, in
from lib import add_newdoc
File "/usr/lib/python2.6/dist-packages/numpy/lib/__init__.py", line 4, in
from type_check import *
File "/usr/lib/python2.6/dist-packages/numpy/lib/type_check.py", line 8, in
import numpy.core.numeric as _nx
File "/usr/lib/python2.6/dist-packages/numpy/core/__init__.py", line 5, in
import multiarray
ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so: undefined symbol: _Py_ZeroStruct
Traceback (most recent call last):
File "/home/constantin/workspace/OpenRAVE/src/grasp_behavior_2.py", line 3, in
from openravepy import *
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 35, in
openravepy_currentversion = loadlatest()
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 16, in loadlatest
return _loadversion('_openravepy_')
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 19, in _loadversion
mainpackage = __import__("openravepy", globals(), locals(), [targetname])
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/_openravepy_/__init__.py", line 29, in
from openravepy_int import *
ImportError: numpy.core.multiarray failed to import | 0 | python,shared-libraries,undefined-symbol,openrave | 2011-11-28T21:46:00.000 | 0 | 8,302,810 | The solution was linking the python2.6 library with my executable as well.
Even though the executable made no python calls, it needed to be linked with the python library. I assume it's because my shared library doesn't pass the symbols of the python library through to the executable. If anyone could explain why my executable (which loads my dynamic library at runtime, without linking) needs those symbols, it would be great.
For clarification, my program model is something like:
[My Executable] -(dynamically loads)-> [My Shared Library] -(calls and links with)-> [Python shared Library] | 0 | 9,662 | true | 0 | 1 | Undefined Symbol in C++ When Loading a Python Shared Library | 8,313,757 |
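The fix can be sketched as build commands (source file names are placeholders); the key point is that the executable links against libpython even though it makes no direct Python calls:

```shell
# Hypothetical build sketch; file names are placeholders.
# Build the shared library that embeds the interpreter:
g++ -shared -fPIC my_module.cpp -o libmy_module.so -lpython2.6

# Link the host executable against libpython2.6 as well; this is what makes
# interpreter symbols like _Py_ZeroStruct resolvable for extension modules
# (e.g. numpy's multiarray.so) that get loaded later:
g++ main.cpp -o my_executable -ldl -lpython2.6
```
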
2 | 4 | 0 | 0 | 12 | 0 | 0 | 0 | I have been trying to get a project of mine to run but I have run into trouble. After much debugging I have narrowed down the problem but have no idea how to proceed.
Some background, I am using a python script inside C++ code. This is somewhat documented on Python, and I managed to get it running very well in my basic executable. #include and a -lpython2.6 and everything was grand.
However, difficulty has arisen when running this python script from a shared library(.so). This shared library is "loaded" as a "module" by a simulation system (OpenRAVE). The system interacts with this module using a virtual method for "modules" called SendCommand. The module then starts a boost::thread, giving python its own thread, and returns to the simulation system. However, when python begins importing its modules and thus loading its dynamic libraries it fails, I assume due to the following error:
ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so: undefined symbol: _Py_ZeroStruct
I have run ldd on my executable and the shared library; there doesn't seem to be a difference. I have also run nm -D on the file above; the _Py_ZeroStruct is indeed undefined. If you guys would like printouts of the commands I would be glad to supply them. Any advice would be greatly appreciated, thank you.
Here is the full python error:
Traceback (most recent call last):
File "/usr/lib/python2.6/dist-packages/numpy/__init__.py", line 130, in
import add_newdocs
File "/usr/lib/python2.6/dist-packages/numpy/add_newdocs.py", line 9, in
from lib import add_newdoc
File "/usr/lib/python2.6/dist-packages/numpy/lib/__init__.py", line 4, in
from type_check import *
File "/usr/lib/python2.6/dist-packages/numpy/lib/type_check.py", line 8, in
import numpy.core.numeric as _nx
File "/usr/lib/python2.6/dist-packages/numpy/core/__init__.py", line 5, in
import multiarray
ImportError: /usr/lib/python2.6/dist-packages/numpy/core/multiarray.so: undefined symbol: _Py_ZeroStruct
Traceback (most recent call last):
File "/home/constantin/workspace/OpenRAVE/src/grasp_behavior_2.py", line 3, in
from openravepy import *
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 35, in
openravepy_currentversion = loadlatest()
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 16, in loadlatest
return _loadversion('_openravepy_')
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/__init__.py", line 19, in _loadversion
mainpackage = __import__("openravepy", globals(), locals(), [targetname])
File "/home/constantin/workspace/rospackages/openrave/lib/python2.6/site-packages/openravepy/_openravepy_/__init__.py", line 29, in
from openravepy_int import *
ImportError: numpy.core.multiarray failed to import | 0 | python,shared-libraries,undefined-symbol,openrave | 2011-11-28T21:46:00.000 | 0 | 8,302,810 | Check your python-headers and python's runtime. It looks like you have a mix of 2.5 and 2.6 versions. | 0 | 9,662 | false | 0 | 1 | Undefined Symbol in C++ When Loading a Python Shared Library | 8,303,699
1 | 2 | 0 | 6 | 6 | 1 | 1.2 | 0 | Many problems I've run into in Python have been related to not having something in Unicode. Is there any good reason to not use Unicode by default? I understand needing to translate something into ASCII, but it seems to be the exception and not the rule.
I know Python 3 uses Unicode for all strings. Should this encourage me as a developer to unicode() all my strings? | 0 | python,unicode | 2011-11-28T21:48:00.000 | 0 | 8,302,833 | Generally, I'm going to say "no", there's not a good reason to use string over unicode. Remember, as well, that you don't have to call unicode() to create a unicode string; you can do so by prefixing the string with a lowercase u, like u"this is a unicode string". | 0 | 131 | true | 0 | 1 | Is there any good reason not to use unicode as opposed to string? | 8,302,860
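A small illustration (the u prefix creates a Unicode literal in Python 2; the same syntax also parses on Python 3, where every str is already Unicode):

```python
# u"..." is a Unicode literal; \uXXXX escapes work inside it
s = u"caf\u00e9"
assert s == u"café"

# Mixing byte strings and Unicode is where most Python 2 headaches come from,
# so keep text as Unicode internally and encode explicitly at the boundaries
# (files, sockets, databases):
encoded = s.encode("utf-8")
assert encoded == b"caf\xc3\xa9"
```
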
1 | 10 | 0 | 0 | 183 | 1 | 0 | 0 | In RStudio, you can run parts of code in the code editing window, and the results appear in the console.
You can also do cool stuff like selecting whether you want everything up to the cursor to run, or everything after the cursor, or just the part that you selected, and so on. And there are hot keys for all that stuff.
It's like a step above the interactive shell in Python -- there you can use readline to go back to previous individual lines, but it doesn't have any "concept" of what a function is, a section of code, etc.
Is there a tool like that for Python? Or, do you have some sort of similar workaround that you use, say, in vim? | 0 | python,ide | 2011-11-29T04:21:00.000 | 0 | 8,305,809 | Wing IDE, and probably also other Python IDEs like PyCharm and PyDev have features like this. In Wing you can either select and execute code in the integrated Python Shell or if you're debugging something you can interact with the paused debug program in a shell (called the Debug Probe). There is also special support for matplotlib, in case you're using that, so that you can work with plots interactively. | 0 | 92,404 | false | 0 | 1 | Is there something like RStudio for Python? | 16,095,116 |
1 | 1 | 0 | 2 | 3 | 0 | 0.379949 | 0 | In Linux, I am trying to debug the C++ code of a shared library which is loaded from Python code. The loading is done using the ctypes package. In Eclipse, I set breakpoints both in the Python and in the C++ code, however Eclipse just skips the breakpoints in the C++ code (breakpoints in the Python code work OK).
I have tried using attach to application in Eclipse (under Debug Configurations) and choosing the Python process, but it didn't change anything. In the attach to application dialog box I choose the shared library as the Project, and I choose /usr/bin/python2.6 as the C/C++ application. Is that the correct way?
I've tried it both before running the python code, and after a breakpoint in the Python code was caught, just before the line calling a function of the shared library.
EDIT
Meanwhile I am using a workaround of calling the python code and debugging using a gdb command-line session by attaching to the python process. But I would like to hear a solution to doing this from within Eclipse. | 0 | python,debugging,gdb,shared-libraries,eclipse-cdt | 2011-11-29T07:51:00.000 | 1 | 8,307,425 | I have been able to debug the c++ shared library loaded by the python in Eclipse successfully.
The prerequisites:
Two eclipse projects in an eclipse workspace: one is the C++ project, from which the c++ shared library is generated, the other is the python project (PyDev), which loads the generated c++ shared library.
The steps are:
1. Create a "Python Run" debug configuration named PythonDebug with the corresponding python environment and parameters
2. Create a "C/C++ Attach to Application" debug configuration named CppDebug. The project field is the C++ project; leave the C/C++ Application field empty
3. Set a breakpoint in the python code at a point where the c++ shared library has already been loaded
4. Start the debug session PythonDebug; the program will break at the breakpoint created in step 3
5. Start the debug session CppDebug; a menu will pop up - select the python process with the correct pid (there will be 3 pids; the correct one can be found in the PythonDebug session)
6. Set a breakpoint in the c++ source code where you want the program to break
7. Continue the PythonDebug session
8. Continue the CppDebug session
9. The program will break at the c++ breakpoint
I tested the above procedure with Eclipse Mars version.
Hopefully it helps. | 0 | 3,167 | false | 0 | 1 | Eclipse: debug shared library loaded from python | 34,515,642 |
3 | 4 | 0 | 0 | 4 | 0 | 0 | 0 | I have two python scripts running as cronjobs.
ScriptA processes log files and insert records to a table, ScriptB uses the records to generate a report.
I have arranged ScriptA to run one hour before ScriptB, but sometimes ScriptB runs before ScriptA finishes inserting, thus generating an incorrect report.
How do I make sure ScriptB runs right after ScriptA finishes?
EDIT
ScriptA and ScriptB do very different things, say, one is for saving user data, the other is for internal use. And somewhere else there may be some ScriptC depending on ScriptA.
So I can't just merge these two jobs. | 0 | python,bash,cron,crontab | 2011-11-30T02:06:00.000 | 1 | 8,320,304 | Make it write a file and check if the file is there. | 0 | 3,971 | false | 0 | 1 | How to make sure a script only runs after another script | 8,320,318 |
3 | 4 | 0 | 0 | 4 | 0 | 0 | 0 | I have two python scripts running as cronjobs.
ScriptA processes log files and insert records to a table, ScriptB uses the records to generate a report.
I have arranged ScriptA to run one hour before ScriptB, but sometimes ScriptB run before ScriptA finish inserting, thus generating a incorrect report.
How do I make sure ScriptB runs right after ScriptA finishes?
EDIT
ScriptA and ScriptB do very different things, say, one is for saving user data, the other is for internal use. And somewhere else there may be some ScriptC depending on ScriptA.
So I can't just merge these two jobs. | 0 | python,bash,cron,crontab | 2011-11-30T02:06:00.000 | 1 | 8,320,304 | an approach that you could use it! is having some flag of control! somewhere, for example in the DB!
So ScriptB just runs after that flag is set! and right after it finish it it sets the flag back to default state!
Another way that you could implement that flag approach is using file system! Like @Benjamin suggested! | 0 | 3,971 | false | 0 | 1 | How to make sure a script only runs after another script | 8,320,328 |
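The flag-file variant of this idea might look like the following sketch; the flag path, timeout, and polling interval are arbitrary:

```python
import os
import tempfile
import time

# Hypothetical flag path shared by both scripts
FLAG = os.path.join(tempfile.gettempdir(), "scripta.done")

def mark_done():
    """Called at the very end of ScriptA."""
    with open(FLAG, "w") as f:
        f.write(str(time.time()))

def wait_for_scripta(timeout=3600, poll=30):
    """Called at the start of ScriptB: block until ScriptA's flag appears."""
    waited = 0
    while not os.path.exists(FLAG):
        if waited >= timeout:
            raise RuntimeError("ScriptA did not finish in time")
        time.sleep(poll)
        waited += poll
    os.remove(FLAG)  # reset so the next run starts clean
```

Removing the flag after reading it is what prevents ScriptB from reusing a stale flag on the next day's run.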
3 | 4 | 0 | 2 | 4 | 0 | 1.2 | 0 | I have two python scripts running as cronjobs.
ScriptA processes log files and insert records to a table, ScriptB uses the records to generate a report.
I have arranged ScriptA to run one hour before ScriptB, but sometimes ScriptB runs before ScriptA finishes inserting, thus generating an incorrect report.
How do I make sure ScriptB runs right after ScriptA finishes?
EDIT
ScriptA and ScriptB do very different things, say, one is for saving user data, the other is for internal use. And somewhere else there may be some ScriptC depending on ScriptA.
So I can't just merge these two jobs. | 0 | python,bash,cron,crontab | 2011-11-30T02:06:00.000 | 1 | 8,320,304 | One approach would be to make sure that if those two jobs are separate cron jobs - there is enough time inbetween to surely cover the run of job 1.
Another approach is locking, as others here suggested, but then note, that cron will not re-run your job just because it completed unsuccessfully because of a lock. So either job2 will have to run in sleep cycles until it doesn't see the lock anymore, or on the contrary sees a flag of job 1 completions, or you'll have to get creative.
Why not trigger the 2nd script from the 1st script after it's finished, and make it a single cron job? | 0 | 3,971 | true | 0 | 1 | How to make sure a script only runs after another script | 8,320,343
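The chaining suggestion can be written directly in the crontab (paths are placeholders); && makes ScriptB start only if ScriptA exits successfully:

```shell
# Hypothetical crontab entry: run both as one job at 02:00;
# scriptB.py starts only if scriptA.py exits with status 0.
0 2 * * * /usr/bin/python /path/to/scriptA.py && /usr/bin/python /path/to/scriptB.py
```

A dependent ScriptC could be appended to the same chain, or keep its own schedule guarded by one of the flag techniques above.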
1 | 2 | 0 | 1 | 4 | 0 | 0.099668 | 0 | Is there a way to password protect an application which is hosted in gunicorn,
I did this with .htaccess in apache, but can we do this in gunicorn? | 0 | python,django,gunicorn | 2011-12-01T12:59:00.000 | 0 | 8,341,797 | You can also use middleware and, for example, kill every session and show nothing if it does not pass the requirements. For example, you can define middleware which checks if the request comes from the IP you use; if yes - do nothing, if no - stop. Maybe not the best, but a solution :) | 0 | 3,648 | false | 1 | 1 | how to password protect a website hosted on gunicorn | 8,342,278
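Since gunicorn serves WSGI, one way to sketch such protection without touching Django at all is a small WSGI middleware doing HTTP Basic auth (all names here are hypothetical):

```python
import base64

def basic_auth_middleware(app, username, password):
    """Wrap a WSGI app (as served by gunicorn) with HTTP Basic authentication."""
    creds = (username + ":" + password).encode()
    expected = "Basic " + base64.b64encode(creds).decode()

    def wrapped(environ, start_response):
        if environ.get("HTTP_AUTHORIZATION") == expected:
            return app(environ, start_response)
        start_response("401 Unauthorized",
                       [("WWW-Authenticate", 'Basic realm="Protected"')])
        return [b"Authentication required"]

    return wrapped
```

You would point gunicorn at `basic_auth_middleware(django_wsgi_app, user, password)` instead of the bare application; note Basic auth sends credentials in cleartext unless served over HTTPS.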
So I'm creating unit tests for all of my scripts, many of which aren't OO and have main-loop code. I was wondering what the standard is for the location of unittest classes in relation to the affected script. Should the unit test be in a separate file which imports the script, with the main-loop code turned into functions? Or can it be shoved at the end of the related script? | 0 | python,unit-testing | 2011-12-01T18:44:00.000 | 0 | 8,346,580 | The best practice is to have some related functionality bundled in a module. For that module, you create a separate file with unittests. The convention is to name it test_foo.py if your module is foo.py.
Where it sits exactly is not well defined, although a separate directory named test or something similar is common.
And that has nothing to do with the nature of your code. It can be classes or functions, whatever, as long as it's testable. Of course, making it amenable to testing is not always trivial. | 0 | 105 | false | 0 | 1 | Putting unittest classes in the related script itself vs making a separate file | 8,346,679 |
2 | 3 | 0 | 0 | 2 | 1 | 0 | 0 | So I'm creating unit tests for all of my scripts, many of which arent oo, and have main-loop code. I was wondering what the standard is for the location of creating unittest classes in relation to the affected script. Should the unit test be in a separate file which imports the script, and then function-alize the mainloop code? Or can it shoved at the end of the the related script? | 0 | python,unit-testing | 2011-12-01T18:44:00.000 | 0 | 8,346,580 | One additional reason for not putting the unit tests in source file is that during distribution/packaging of your python application you can exclude all the directories that contains the tests. | 0 | 105 | false | 0 | 1 | Putting unittest classes in the related script itself vs making a separate file | 8,346,919 |
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I am currently switching from Eclipse Java development to more and more Python scripting using PyDev. Almost all the time there is an Eclipse background thread called "reindexing PythonHome..." which loads my CPU to almost 100%. It has become unusable for coding :/
Do you have any idea?
Thanks a lot for your help!
John | 0 | python,eclipse,cpu,pydev | 2011-12-02T16:17:00.000 | 1 | 8,359,291 | Disable 'Build Automatically' and 'Refresh Automatically' under
Preferences->General->Workspace
Disable 'Code Analysis' entirely, or configure it to only run on save under
Preferences->PyDev->Editor->Code Analysis | 0 | 706 | false | 1 | 1 | Eclipse & Python: 100% CPU load due to PythonHome reindexing | 10,102,742 |
2 | 2 | 0 | 0 | 3 | 0 | 1.2 | 0 | I would like to build an application in Google App Engine (Python) that would be fully connected to a single GMail account and then filter e-mails from this account (e.g. filter messages for a certain string and show it on the screen). In the future I am also going to implement the option to send messages.
What is the most efficient way to do this (solution provided by Google if possible)? | 0 | python,django,google-app-engine,gmail | 2011-12-03T11:39:00.000 | 1 | 8,367,381 | What do you mean by "fully connected"?
It's possible to set up a GMail filter to forward emails to a different address (say, the email address of your App Engine app). And an App Engine app an send emails (say, to a GMail address). The trick is to set up the GMail filter carefully to avoid loops. | 0 | 487 | true | 1 | 1 | Filtering GMail messages in Google App Engine application | 8,373,562 |
2 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I would like to build an application in Google App Engine (Python) that would be fully connected to a single GMail account and then filter e-mails from this account (e.g. filter messages for a certain string and show it on the screen). In the future I am also going to implement the option to send messages.
What is the most efficient way to do this (solution provided by Google if possible)? | 0 | python,django,google-app-engine,gmail | 2011-12-03T11:39:00.000 | 1 | 8,367,381 | There is no API for Gmail in App Engine. The only thing you can do is forward messages to App Engine.
I have used forwarding for building auto-responders.
But there is an excellent GMail API in Google Apps Script with lots of functions. Apps Script uses JavaScript. And of course your Apps Script can communicate with App Engine. | 0 | 487 | false | 1 | 1 | Filtering GMail messages in Google App Engine application | 8,379,161
I'm trying to figure out where the initial sys.path value comes from. One Ubuntu system suddenly (by which I mean probably manually, by someone doing something weird) lost entries at the end of the array.
All other hosts: ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7']
That host: ['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
The /usr/lib/pymodules/python2.7 path is the one I actually care about. But where does it come from on the healthy nodes? | 0 | python,import,path | 2011-12-06T19:48:00.000 | 1 | 8,405,855 | It comes from the python-support package, specifically from the /usr/lib/python2.7/dist-packages/python-support.pth file that is installed.
There shouldn't be any modules installed to that directory manually and any package installing modules to that directory should have a dependency on the python-support package, so you shouldn't have to worry about whether it is in sys.path or not. | 0 | 649 | true | 0 | 1 | Where does the initial sys.path come from | 8,405,986 |
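How .pth processing works can be sketched with site.addsitedir, which applies the same mechanism to a directory of your choosing (the paths below are temporary and illustrative, standing in for python-support.pth):

```python
import os
import site
import sys
import tempfile

# Sketch: the site module reads each line of a *.pth file and appends
# every existing directory it names to sys.path.
base = tempfile.mkdtemp()
extra = os.path.join(base, "extra_modules")
os.mkdir(extra)

with open(os.path.join(base, "demo.pth"), "w") as f:
    f.write(extra + "\n")

site.addsitedir(base)  # processes *.pth files found in `base`
```

At interpreter startup the same processing runs automatically for the site-packages / dist-packages directories, which is how python-support.pth injects /usr/lib/pymodules/python2.7.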
1 | 1 | 1 | 1 | 2 | 0 | 0.197375 | 0 | Background:
I am writing an ebook editing program in python. Currently it utilizes a source-code view for editing, and I would like to port it over to a wysiwyg view for editing. The best (only?) html renderer I could find for python was webkit (I am using the PyQt version).
Question:
How do I accomplish wysiwyg editing? The requirements/issues are as follows:
An ebook may be up to 10,000 paragraphs / 1,000,000 characters.
- PyQt Webkit (ContentEditable): No problem.
- PyQt Webkit (TinyMCE, etc.): Takes forever to open them!
The format is <body><p>...</p><p>...</p>...</body>. The body element contains only paragraphs, there are no divs, etc. (but in the paragraph there may be spans, links, etc.). Editing must take place with no significant delays as far as the user is concerned.
- PyQt Webkit (ContentEditable): If you try deleting text across multiple paragraphs, it takes forever!! My understanding is that this is because it resets the common parent of the elements being changed - i.e. the entire body element, since two different paragraphs are being deleted/merged. But there should be no need for this - it should only need to delete/merge/change those individual paragraphs!
I am open to implementing my own wysiwyg editing, but for the life of me I can't figure out how to delete/cut/paste/merge/change the html code correctly. I searched online for articles about html wysiwyg design theory, and came up dry.
Thanks! | 0 | python,html,wysiwyg | 2011-12-07T08:17:00.000 | 0 | 8,412,215 | Can i suggest a complete another approach ? Since your ebook is only <p></p>:
Split the text on <p></p> to get an indexed array of all your paragraphs
Make your own pagination system, and fill the screen with N paragraphs, that automatically get enough text to show from the indexed array
When you are doing selection, you can use [paragraph index + character index in the paragraph] for selection start / end
Then implement cut/copy/paste/delete/undo/redo based on thoses assumptions.
(Note: when you'll do a selection, since the start point is saved, you can safely change the text on the screen / pagination, until the selection end.) | 0 | 1,587 | false | 1 | 1 | Python/Javascript: WYSIWYG html editor - Handle large documents fast and/or design theory | 8,413,073 |
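The paragraph-indexed model suggested above might be sketched like this; the helper names are hypothetical, and real markup with attributes inside the <p> tags would need a proper parser rather than this regex:

```python
import re

def split_paragraphs(html):
    """Index the ebook body as a list of paragraph contents."""
    return re.findall(r"<p>(.*?)</p>", html, re.S)

def delete_selection(paras, start, end):
    """Delete between start and end, each a (paragraph_index, char_offset).

    Only the paragraphs actually touched by the selection change; untouched
    paragraphs are reused as-is, unlike re-rendering the whole <body>.
    """
    (sp, sc), (ep, ec) = start, end
    merged = paras[sp][:sc] + paras[ep][ec:]  # merge the two boundary paragraphs
    return paras[:sp] + [merged] + paras[ep + 1:]
```

This is why cross-paragraph deletes stay cheap in this model: the cost is proportional to the selection, not to the document.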
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | With Python, I need to read a file into a script similar to open(file,"rb"). However, the file is on a server that I can access through SSH. Any suggestions on how I can easily do this? I am trying to avoid paramiko and am using pexpect to log into the SSH server, so a method using pexpect would be ideal.
Thanks,
Eric | 0 | python,ssh,pexpect | 2011-12-08T01:35:00.000 | 1 | 8,425,089 | If it's a short file you can get the output of an ssh command using subprocess.Popen:
ssh root@ip_address_of_the_server 'cat /path/to/your/file'
Note: passwordless key-based authentication must be configured for this to work. | 0 | 1,242 | false | 0 | 1 | Python - Read in binary file over SSH | 8,425,184
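A sketch of that approach with subprocess (the helper names are mine, and the ssh invocation assumes key-based auth is already set up):

```python
import subprocess

def remote_cat_command(host, path):
    # Build the argv for: ssh host "cat <path>"
    return ["ssh", host, "cat " + path]

def read_remote_file(host, path):
    # Returns the file contents as bytes, like open(path, "rb").read()
    proc = subprocess.Popen(remote_cat_command(host, path),
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(err.decode())
    return out
```

Reading stdout as bytes preserves binary content exactly, which plain pexpect terminal scraping does not.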
1 | 1 | 0 | 0 | 5 | 0 | 0 | 0 | I mostly do work in Python, but I have been using some of the Ruby stuff for Server Configuration Management (ie Puppet and Chef). I also use Ubuntu/Debian as my primary Linux distro for servers.
Why is there a weird Debian/Ruby conflict over Gems, and not a similar showdown between Debian/Python over Pip?
Personally, I don't mind installing newer packages than the "system" approves of. I know Debian wants to make a stable system, but when I am running my own application code on the server, I can guarantee you it's not stable to begin with.
Anyway, I would be interested to know if Pip is doing something different, or if it's an ego thing or whatever? | 0 | python,ruby,rubygems,pip | 2011-12-08T16:04:00.000 | 1 | 8,433,881 | I think you should describe your specific problem with gem/Debian and what exactly you are trying to do.
I am using pip on Debian now and have had no problems so far. | 0 | 3,102 | false | 1 | 1 | Python Pip vs Ruby Gems | 8,507,138
1 | 3 | 0 | 3 | 1 | 0 | 1.2 | 0 | I have an interesting project going on at our workplace. The task that stands before us is this:
Build a custom server using Python
It has a web server part, serving REST
It has an FTP server part, serving files
It has an SMTP part, which receives mail only
and last but not least, it has a background worker that manages low-level file IO based on requests received from the above-mentioned services
Obviously the go-to place was the Twisted library/framework, which is an excellent networking tool. However, studying the docs further, a few things came up that I'm not sure about.
Having a Java background, I would solve the task (at least at the beginning) by spawning a separate thread for each service and going from there. Being in Python however, I cannot do that for any reasonable purpose as Python has the GIL. I'm not sure how Twisted handles this. I would expect that Twisted has large parts (if not the majority) of its code written in C, where the GIL is not an issue, but I couldn't find the docs explaining that to my satisfaction.
So the most outstanding question is: Given that Twisted uses the Reactor as its main design pattern, will it be able to:
Serve all those services needed
Do it in a non-blocking fashion (it should, according to docs, but if someone could elaborate, I'd be grateful)
Be able to serve a few hundred clients at once
Serve large file downloads in a reasonable way, meaning that it can serve multiple clients, using multiple services, downloading and uploading large files.
Large files being in the order of hundreds of MB, or a few GB. The size is not important, it's the time that the client has to stay connected to the server that matters.
Edit: I'm actually inclined to go the way of python multiprocessing, but not sure, whether that's a correct thing to do with Twisted etc. | 0 | python,twisted | 2011-12-09T10:14:00.000 | 1 | 8,443,994 | Serve all those services needed
Yes.
Do it in a non-blocking fashion (it should, according to docs, but if someone could elaborate, I'd be grateful)
Twisted uses the common reactor model. I/O goes through your choice of poll, select, whatever to determine if data is available. It handles only what is available, and passes the data along to other stages of your app. This is how it is non-blocking.
I don't think it provides non-blocking disk I/O, but I'm not sure. That feature is not what most people need when they say non-blocking.
Be able to serve a few hundred clients at once
Yes. No. Maybe. What are those clients doing? Is each hitting refresh every second on a browser making 100 requests? Is each one doing a numerical simulation of galaxy collisions? Is each sending the string "hi!" to the server, without expecting a response?
Twisted can easily handle 1000+ requests per second.
Serve large file downloads in a reasonable way, meaning that it can serve multiple clients, using multiple services, downloading and uploading large files.
Sure. For example, the original version of BitTorrent was written in Twisted. | 0 | 216 | true | 0 | 1 | Compound custom service server using Twisted | 8,445,796 |
1 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | I'm tracking a linux filesystem (that could be any type) with the pyinotify module for python (which is actually the linux kernel behind doing the job). Many directories/folders/files (as many as the user wants) are being tracked with my application, and now I would like to track the md5sum of each file and store them in a database (covering every move, rename, new file, etc).
I guess that a database should be the best option to store all the md5sums of each file... But what would be the best database for that? Certainly a very performant one. I'm looking for a free one, because the application is going to be GPL. | 0 | python,database,database-performance,pyinotify | 2011-12-11T01:36:00.000 | 1 | 8,461,306 | You could try Redis. It is most certainly fast.
But really, since you're tracking a filesystem, and disks are slow as snails in comparison to even a medium-fast database, performance shouldn't be your primary concern. | 0 | 1,214 | false | 0 | 1 | most performatic free database for file system tracking | 8,463,655 |
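Whatever store you pick, computing the md5sum itself is straightforward with hashlib (a sketch; the chunked reads keep memory flat even for very large files):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    # Hash the file in 1 MiB chunks so large files are never fully in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Since hashing re-reads the whole file, disk I/O will dominate the database cost here, which supports the point above.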
4 | 7 | 0 | 0 | 5 | 1 | 0 | 0 | I would like to compare different methods of finding roots of functions in python (like Newton's methods or other simple calc based methods). I don't think I will have too much trouble writing the algorithms
What would be a good way to make the actual comparison? I read up a little bit about Big-O. Would this be the way to go? | 0 | python,performance,algorithm,math,numerical-methods | 2011-12-13T02:11:00.000 | 0 | 8,483,522 | I just finished a project comparing the bisection, Newton, and secant root-finding methods. Since this is a practical case, I don't think you need to use Big-O notation. Big-O notation is more suitable for an asymptotic view. What you can do is compare them in terms of:
Speed - for example, here Newton is the fastest if good conditions are met
Number of iterations - for example, here bisection takes the most iterations
Accuracy - how often it converges to the right root if there is more than one root, or maybe it doesn't converge at all
Input - what information it needs to get started. For example, Newton needs an x0 near the root in order to converge; it also needs the first derivative, which is not always easy to find
Other - rounding errors
For the sake of visualization you can store the value of each iteration in arrays and plot them. Use a function whose roots you already know. | 0 | 2,621 | false | 0 | 1 | Comparing Root-finding (of a function) algorithms in Python | 26,903,947
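An iteration-count comparison of that kind can be set up like this (a sketch with my own function names; the exact counts vary with tolerance and starting points):

```python
def bisection(f, lo, hi, tol=1e-10):
    # Halve [lo, hi] until it is narrower than tol; count the steps.
    steps = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        steps += 1
    return (lo + hi) / 2.0, steps

def newton(f, df, x0, tol=1e-10, max_steps=100):
    # Iterate x -= f(x)/f'(x) until the step size drops below tol.
    x = x0
    for i in range(1, max_steps + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i
    return x, max_steps

f = lambda x: x * x - 2.0           # known root: sqrt(2)
root_b, nb = bisection(f, 0.0, 2.0)
root_n, nn = newton(f, lambda x: 2.0 * x, 1.0)
```

With a well-chosen x0 Newton converges in a handful of steps, while bisection needs roughly log2(interval/tol) steps, which illustrates the speed and iteration-count points above.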
4 | 7 | 0 | 1 | 5 | 1 | 0.028564 | 0 | I would like to compare different methods of finding roots of functions in python (like Newton's methods or other simple calc based methods). I don't think I will have too much trouble writing the algorithms
What would be a good way to make the actual comparison? I read up a little bit about Big-O. Would this be the way to go? | 0 | python,performance,algorithm,math,numerical-methods | 2011-12-13T02:11:00.000 | 0 | 8,483,522 | Big-O notation is designed to describe how an algorithm behaves in the limit, as n goes to infinity. This is a much easier thing to work with in a theoretical study than in a practical experiment. I would pick things to study that you can easily measure and that people care about, such as accuracy and computer resources (time/memory) consumed.
When you write and run a computer program to compare two algorithms, you are performing a scientific experiment, just like somebody who measures the speed of light, or somebody who compares the death rates of smokers and non-smokers, and many of the same factors apply.
Try and choose an example problem or problems to solve that are representative, or at least interesting to you, because your results may not generalise to situations you have not actually tested. You may be able to increase the range of situations to which your results apply if you sample at random from a large set of possible problems and find that all your random samples behave in much the same way, or at least follow much the same trend. You can have unexpected results even when the theoretical studies show that there should be a nice n log n trend, because theoretical studies rarely account for suddenly running out of cache, or out of memory, or usually even for things like integer overflow.
Be alert for sources of error, and try to minimise them, or have them apply to the same extent to all the things you are comparing. Of course you want to use exactly the same input data for all of the algorithms you are testing. Make multiple runs of each algorithm, and check to see how variable things are - perhaps a few runs are slower because the computer was doing something else at the time. Be aware that caching may make later runs of an algorithm faster, especially if you run them immediately after each other. Which time you want depends on what you decide you are measuring. If you have a lot of I/O to do, remember that modern operating systems and computers cache huge amounts of disk I/O in memory. I once ended up powering the computer off and on again after every run, as the only way I could find to be sure that the device I/O cache was flushed. | 0 | 2,621 | false | 0 | 1 | Comparing Root-finding (of a function) algorithms in Python | 8,484,871
4 | 7 | 0 | 1 | 5 | 1 | 0.028564 | 0 | I would like to compare different methods of finding roots of functions in python (like Newton's methods or other simple calc based methods). I don't think I will have too much trouble writing the algorithms
What would be a good way to make the actual comparison? I read up a little bit about Big-O. Would this be the way to go? | 0 | python,performance,algorithm,math,numerical-methods | 2011-12-13T02:11:00.000 | 0 | 8,483,522 | You can get wildly different answers for the same problem just by changing starting points. Pick an initial guess that's close to the root and Newton's method will give you a result that converges quadratically. Choose another in a different part of the problem space and the root finder will diverge wildly.
What does this say about the algorithm? Good or bad? | 0 | 2,621 | false | 0 | 1 | Comparing Root-finding (of a function) algorithms in Python | 8,487,731 |
4 | 7 | 0 | 5 | 5 | 1 | 0.141893 | 0 | I would like to compare different methods of finding roots of functions in python (like Newton's methods or other simple calc based methods). I don't think I will have too much trouble writing the algorithms
What would be a good way to make the actual comparison? I read up a little bit about Big-O. Would this be the way to go? | 0 | python,performance,algorithm,math,numerical-methods | 2011-12-13T02:11:00.000 | 0 | 8,483,522 | The answer from @sarnold is right -- it doesn't make sense to do a Big-Oh analysis.
The principal differences between root finding algorithms are:
rate of convergence (number of iterations)
computational effort per iteration
what is required as input (i.e. do you need to know the first derivative, do you need to set lo/hi limits for bisection, etc.)
what functions it works well on (i.e. works fine on polynomials but fails on functions with poles)
what assumptions does it make about the function (i.e. a continuous first derivative or being analytic, etc)
how simple the method is to implement
I think you will find that each of the methods has some good qualities, some bad qualities, and a set of situations where it is the most appropriate choice. | 0 | 2,621 | false | 0 | 1 | Comparing Root-finding (of a function) algorithms in Python | 8,483,756 |
1 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | How do you get the name to pass to win32com.client.gencache.EnsureDispatch?
For example, I have an application named "xxxxx", and the Python object browser detects it when I use makepy.py. How do I get a name like "Excel.Application"? | 0 | python,win32com | 2011-12-13T02:37:00.000 | 0 | 8,483,683 | You can use combrowser.py to find the running applications on your PC. Go to the win32com library folder and you will find the Python script.
You can get the list of all the running applications. Sometimes it can be tricky if you get the token or moniker of the application instead of the name. But you can use those monikers to reach the application instead. | 0 | 333 | false | 0 | 1 | How to Get gencache.EnsureDispatch Name | 71,846,830
1 | 2 | 0 | 1 | 2 | 0 | 0.197375 | 0 | This is part of some preliminary research, and I am having a difficult time figuring out what options might be available, or if this is even a situation where a solution exists.
Essentially we have an existing python based simulation that we would like to make available to people via the web. It can be pretty processor intensive, so while we could just run the sim server side and write a client that connects to it, this would not be ideal.
Writing a UI in Flash/Flex or HTML5, not a problem. However, is there any way to keep the core simulation logic in python without having it live server side? Is there any existing way to embed python modules in either of these technologies?
Thanks all. | 0 | python,apache-flex,html | 2011-12-13T04:16:00.000 | 0 | 8,484,253 | Pyjamas: Python->Javascript, set of widgets for use in a browser or a desktop
Skulpt: Python written in Javascript
Emscripten: C/C++ -> LLVM -> Javascript
Empythoned: Based on emscripten and cpython, working on a stdlib? There are bugs to file | 0 | 451 | false | 1 | 1 | Python in a webapp (client side) | 8,484,821 |
1 | 2 | 0 | 3 | 4 | 0 | 1.2 | 0 | We have a growing library of apps depending on a set of common util modules. We'd like to:
share the same utils codebase between all projects
allow utils to be extended (and fixed!) by developers working on any project
have this be reasonably simple to use for devs (i.e. not a big disruption to workflow)
cross-platform (no diffs for devs on Macs/Win/Linux)
We currently do this "manually", with the utils versioned as part of each app. This has its benefits, but is also quite painful to repeatedly fix bugs across a growing number of codebases.
On the plus side, it's very simple to deal with in terms of workflow - util module is part of each app, so on that side there is zero overhead.
We also considered (fleetingly) using filesystem links or some such (not portable between OS's)
I understand the implications about release testing and breakage, etc. These are less of a problem than the mismatched utils are at the moment. | 0 | python | 2011-12-13T09:34:00.000 | 1 | 8,486,942 | You can take advantage of Python paths (the paths searched when looking for a module to import).
Thus you can create a separate directory for the utils and keep it in a different repository than the projects that use them. Then include the path to this repository in PYTHONPATH.
This way if you write import mymodule, it will find mymodule in the directory containing the utils. So, basically, it works similarly to standard Python modules.
This way you will have one repository for utils (or separate for each util, if you wish), and separate repositories for other projects, regardless of the version control system you use. | 0 | 250 | true | 0 | 1 | Sharing util modules between actively developed apps | 8,487,730 |
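A quick end-to-end demonstration of the idea (the /tmp paths and module name are just for illustration — point PYTHONPATH at your real utils checkout):

```shell
# Stand-in for the shared utils repository
mkdir -p /tmp/shared-utils
echo 'def greet(): return "hello from shared utils"' > /tmp/shared-utils/myutils.py

# Add the utils checkout to Python's module search path
export PYTHONPATH=/tmp/shared-utils:$PYTHONPATH

# Any project can now import the shared module as if it were installed
python3 -c 'import myutils; print(myutils.greet())'
```

Each developer sets this once in their shell profile, so it works the same on Mac, Windows (with the platform's path separator), and Linux without filesystem links.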
1 | 3 | 0 | 2 | 1 | 1 | 1.2 | 0 | I have a genetic expression tree program to control a bot. I have a GP.py and a MyBot.py. I need to be able to have the MyBot.py access an object created in GP.py
GP.py starts MyBot.py via the os.system() command.
I have hundreds of tree objects in the GP.py file and the MyBot.py needs to evaluate them.
I can't combine the two into one .py file because a fresh instance of MyBot.py is executed thousands of times, but GP.py needs to evaluate the fitness of MyBot.py with each tree.
I know how to import the methods and class definitions using import GP, but I need the specific instance of the Tree class object.
Any ideas how to send this tree from the first instance to the second? | 0 | python | 2011-12-14T06:17:00.000 | 0 | 8,500,225 | You could serialize the object with the pickle module (or maybe json?)
If you really want to stick with os.system, then you could have MyBot.py write the pickled object to a file, which GP.py could read once MyBot.py returns.
If you use the subprocess module instead, then you could have MyBot.py write the pickled object to stdout, and GP.py could read it over the pipe.
If you use the multiprocessing module, then you could spawn the MyBot process and pass data back and forth over a Queue instance. | 0 | 771 | true | 0 | 1 | How to pass object between python instances | 8,500,285 |
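A sketch of the pickle-to-file route (the Tree class and the temp-file path here are placeholders for the real ones in GP.py; note that the loading process must be able to import the same class definition):

```python
import os
import pickle
import tempfile

class Tree(object):
    def __init__(self, nodes):
        self.nodes = nodes

# "GP.py" side: dump the specific instance to disk
tree = Tree([1, 2, 3])
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    path = f.name
    pickle.dump(tree, f)

# "MyBot.py" side: load an equivalent object back
with open(path, "rb") as f:
    restored = pickle.load(f)
os.remove(path)
```

The restored object carries the same state as the original instance, so each fresh MyBot.py run can evaluate the exact tree GP.py produced.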
1 | 2 | 0 | 2 | 1 | 0 | 1.2 | 0 | I'm making a scan server for my company, which will be used to launch scans from tools such as nessus, nmap, nikto etc. I've written the pages in php, so once the scan is launched it backgrounds the process and returns the PID. Part of the design spec is that once a scan has finished, the results are then emailed to the appropriate consultant. This is where I'm looking for some ideas, for I'm not sure how to go about doing this.
Would I be best making the php script feed the PID to instances of a python (my main language) script, which constantly checked to see if the process had finished, for example? I did try putting this process checking loop in the PHP page, but obviously this makes the PHP page pause whilst it waits for the scan to complete, which doesn't work for me unfortunately as multiple scans will be being run.
Or would I be better creating a database which stored state information about the process? I have no database experience but this could be a good time to learn.
Any suggestions? Even some ideas that I can google would be much appreciated!
Thanks | 0 | php,python,process | 2011-12-14T16:57:00.000 | 0 | 8,508,457 | When I face this kind of problem, I usually write a Python daemon that does the whole job, and have the PHP just send messages to this Python daemon. These messages can be something very simple, like a bunch of files in a directory that is constantly checked, or records in a database.
How good this implementation is would depend on the scale of your application. | 0 | 95 | true | 0 | 1 | How to perform an action when a process has finished, whilst there are multiple processes are running | 8,508,562
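The files-in-a-directory variant might look like this on the Python side (the directory layout and message format are my own illustration):

```python
import os

def poll_inbox(inbox_dir):
    # One poll: read and consume every message file PHP dropped here.
    messages = []
    for name in sorted(os.listdir(inbox_dir)):
        path = os.path.join(inbox_dir, name)
        with open(path) as f:
            messages.append(f.read())
        os.remove(path)
    return messages
```

The daemon would call poll_inbox in a loop with a short sleep, emailing results for each finished-scan message it finds, so the PHP pages never have to block.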
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | The title makes it obvious: is this a good idea? I've been looking for a robotics simulator in languages I know (I know Ruby best, then C++, then Python -- which I want to strengthen -- forget about JavaScript, but I know it).
I found something called Pyro, but it probably doesn't fit my needs (listed below).
In my last university term I learned C++, then they took me to RobotC (which was only about 2 months of the term). Pyro seems similar, but now I want something different.
I need something that allows importing graphics, supports 3D environments, and allows easily modifying the actions a robot can perform. It should also provide other things necessary for robot programming, like sensors. | 0 | c++,python,ruby,robotics,panda3d | 2011-12-15T01:24:00.000 | 0 | 8,513,998 | I would suggest you go for ROS (Gazebo) and write your nodes in C++ or Python. You can follow Lentin Joseph's book Learning Robotics Using Python. It helps you build autonomous bots with ROS and OpenCV. | 0 | 867 | false | 1 | 1 | Panda3d Robotics | 48,337,393
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | The title makes it obvious: is this a good idea? I've been looking for a robotics simulator in languages I know (I know Ruby best, then C++, then Python -- which I want to strengthen -- forget about JavaScript, but I know it).
I found something called Pyro, but it probably doesn't fit my needs (listed below).
In my last university term I learned C++, then they took me to RobotC (which was only about 2 months of the term). Pyro seems similar, but now I want something different.
I need something that allows importing graphics, supports 3D environments, and allows easily modifying the actions a robot can perform. It should also provide other things necessary for robot programming, like sensors. | 0 | c++,python,ruby,robotics,panda3d | 2011-12-15T01:24:00.000 | 0 | 8,513,998 | Panda3D is a good framework to write your own robot system in. It's written by CMU people, so it's very clean and makes a lot of sense. It allows you to import very complex models from Maya or Blender. It supports 3D environments. Although it has its own scripting language for running actions (animations) imported from your modeling package, I prefer to write my own robot driver. It supports three different physics engines, including its own basic version, Open Dynamics Engine (ODE), and most recently Bullet. Although it supports collision detection, which allows triggering, it is an animation and graphics rendering system, not a robotics system per se, and so you'll have to craft your own sensor simulations beside or on top of it. All in all, though, it is quite satisfactory. Good luck. | 0 | 867 | false | 1 | 1 | Panda3d Robotics | 10,025,027
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I want to use a program written in a high level language like Java or Python to talk to a GSM Modem.
I want to be able to tell the modem what number to call and when to call it. I also want to be able to read and send text messages.
I do NOT need to handle voice transmission in either direction of the call.
I'd appreciate recommendations of any applicable libraries and specific modems that are good to start with? I like Java but am willing to try something else.
Thanks | 0 | java,python,mobile,gsm,modem | 2011-12-18T01:41:00.000 | 0 | 8,549,259 | Almost all modems (and phones which support tethering to your PC) can do this. All modems are equally good at it. There are no starter modems. Just go through the AT commands specific to your application and that's it. | 0 | 4,955 | false | 1 | 1 | Programming a GSM phone/modem to make phone calls | 8,605,549
2 | 4 | 0 | 1 | 4 | 0 | 0.049958 | 0 | I am writing a desktop application in Python. This application requires the user to input their GMail email and password (however an account must be created specifically for this app, so it's not their personal (read: important) GMail account). I was wondering what would be the best way to store those login credentials. I don't need it to be super secure, but would like something more than storing it as plain text.
Thank you in advance. | 0 | python,security | 2011-12-18T06:21:00.000 | 0 | 8,550,193 | Any chance you could not store the information on disk at all? I think that's always the most secure approach, if you can manage it. Can you check the credentials and then discard that information?
You can always encrypt the information if that doesn't work, but the decryption mechanism and key would probably have to reside in your program, then. Still, it might meet your criterion of not super-secure but better than plain text. | 0 | 1,583 | false | 0 | 1 | Where to store user credentials in a Python desktop application? | 8,550,234 |
2 | 4 | 0 | 0 | 4 | 0 | 0 | 0 | I am writing a desktop application in Python. This application requires the user to input their GMail email and password (however an account must be created specifically for this app, so it's not their personal (read: important) GMail account). I was wondering what would be the best way to store those login credentials. I don't need it to be super secure, but would like something more than storing it as plain text.
Thank you in advance. | 0 | python,security | 2011-12-18T06:21:00.000 | 0 | 8,550,193 | Use the platform's native configuration storage mechanism (registry, GConf, plist). | 0 | 1,583 | false | 0 | 1 | Where to store user credentials in a Python desktop application? | 8,550,200 |
4 | 16 | 0 | 1 | 315 | 1 | 0.012499 | 0 | I want to set up a complete Python IDE in Sublime Text 2.
I want to know how to run the Python code from within the editor. Is it done using build system? How do I do it ? | 0 | python,ide,sublimetext2,sublimetext | 2011-12-18T12:36:00.000 | 0 | 8,551,735 | It seems Ctrl+Break doesn't work for me, and neither does Preferences - User...
Use the keys Alt → t → c instead. | 0 | 582,824 | false | 0 | 1 | How do I run Python code from Sublime Text 2? | 62,906,919
4 | 16 | 0 | 5 | 315 | 1 | 0.062419 | 0 | I want to set up a complete Python IDE in Sublime Text 2.
I want to know how to run the Python code from within the editor. Is it done using build system? How do I do it ? | 0 | python,ide,sublimetext2,sublimetext | 2011-12-18T12:36:00.000 | 0 | 8,551,735 | [ This applies to ST3 (Win), not sure about ST2 ]
To have the output visible in Sublime as another file (+ one for errors), do this:
Create a new build system: Tools > Build Systems > New Build System...
Use the following configuration:
{
"cmd": ["python.exe", "$file", "1>", "$file_name.__STDOUT__.txt", "2>", "$file_name.__STDERR__.txt"],
"selector": "source.python",
"shell": true,
"working_dir": "$file_dir"
}
For your Python file select the above build system configuration file: Tools > Build Systems > {your_new_build_system_filename}
ctrl + b
Now, next to your file, e.g. "file.py" you'll have "file.__STDOUT__.py" and "file.__STDERR__.py" (for errors, if any)
If you split your window into 3 columns, or a grid, you'll see the result immediately, without a need to switch panels / windows | 0 | 582,824 | false | 0 | 1 | How do I run Python code from Sublime Text 2? | 40,689,778 |
4 | 16 | 0 | 0 | 315 | 1 | 0 | 0 | I want to set up a complete Python IDE in Sublime Text 2.
I want to know how to run the Python code from within the editor. Is it done using build system? How do I do it ? | 0 | python,ide,sublimetext2,sublimetext | 2011-12-18T12:36:00.000 | 0 | 8,551,735 | I had the same problem. You probably haven't saved the file yet. Make sure to save your code with .py extension and it should work. | 0 | 582,824 | false | 0 | 1 | How do I run Python code from Sublime Text 2? | 10,363,165 |
4 | 16 | 0 | 0 | 315 | 1 | 0 | 0 | I want to set up a complete Python IDE in Sublime Text 2.
I want to know how to run the Python code from within the editor. Is it done using build system? How do I do it ? | 0 | python,ide,sublimetext2,sublimetext | 2011-12-18T12:36:00.000 | 0 | 8,551,735 | You can access the Python console via “View/Show console” or Ctrl+`. | 0 | 582,824 | false | 0 | 1 | How do I run Python code from Sublime Text 2? | 8,552,005 |
1 | 2 | 0 | 2 | 0 | 0 | 1.2 | 0 | I need a tool to access my email inbox via some convenient interface. For example, I need to get the timestamps of messages from some address. I need that data to build some mailing statistics.
What can I use to do that? Maybe there is an API in Thunderbird or something like that. I work under Windows.
Thank you in advance.
EDIT: Maybe I can do that with some Python library? | 0 | python,api,email,statistics | 2011-12-19T11:32:00.000 | 0 | 8,560,684 | Your best bet is to use a programming language that has a simple IMAP library which you can use to connect to your account. Then you will have to write a script that can do your statistics.
Since you asked for an API, I assume that you know how to use a scripting language like PHP, Python or Ruby, all of which have an IMAP library available. | 0 | 411 | true | 0 | 1 | Mail inbox analyzer | 8,560,732
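In Python, the fetch-and-parse step could be sketched like this (the IMAP connection details in the comments are a hypothetical outline, not tested against a real server; only the header parsing runs live):

```python
import email
from email.utils import mktime_tz, parsedate_tz

def message_timestamp(raw_message):
    # Unix timestamp from a raw RFC 2822 message's Date header.
    msg = email.message_from_string(raw_message)
    return mktime_tz(parsedate_tz(msg["Date"]))

# The surrounding imaplib loop would look roughly like (untested sketch):
#   import imaplib
#   conn = imaplib.IMAP4_SSL("imap.example.com")      # hypothetical host
#   conn.login(user, password)
#   conn.select("INBOX")
#   typ, ids = conn.search(None, 'FROM', '"alice@example.com"')
#   for num in ids[0].split():
#       typ, data = conn.fetch(num, "(RFC822.HEADER)")
#       ts = message_timestamp(data[0][1].decode())
```

Collecting these timestamps per sender is then enough to build the mailing statistics.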
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | I use Eclipse (CDT) for C++ projects. It contains some test scripts in Python. Is there any way I could get syntax highlighting and auto completion (if possible) when viewing a Python script? Or should I load the complete Python perspective? | 0 | python,eclipse-cdt | 2011-12-19T13:41:00.000 | 0 | 8,562,189 | You can just open your Python file in the C/C++ perspective and get full syntax highlighting and auto completion.
If that does not work for you, then perhaps you don't have PyDev installed. Find it and install it using Help > Eclipse Marketplace.
If that still does not work for you (if you double-click it the file is opened as plain text, or gets executed in a command shell) then perhaps somehow the wrong editor got associated to it. Right click the file, then select Open With > Python Editor. You'll need to do this once: Eclipse will remember your choice.
Note: You can open any file in any perspective, and the corresponding editor will always open for you. A perspective is just a grouping of views that should be helpful for you in the given context. It does not enable or disable any additionally installed software. | 0 | 2,419 | false | 0 | 1 | How to edit a python file in eclipse when in C++ perspective? | 8,587,142 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm attempting to get Eclipse + PyDev set up so I don't need to alter the PYTHONPATH from within Eclipse, but rather it will inherit the PYTHONPATH from the .profile document inside my home directory. Is that possible, or do I need to actually add the PYTHONPATH locations using Eclipse's PYTHONPATH editor? I ask because I am getting different errors when going from Terminal-based python to python in Eclipse, using the same files. | 0 | python,eclipse,pythonpath | 2011-12-20T00:43:00.000 | 1 | 8,569,486 | You need to configure it inside Eclipse. Now, provided you have a shell that has the PYTHONPATH properly set up, you should be able to start Eclipse from it and, when adding an interpreter (you may want to remove your currently configured interpreter), it should automatically pick up all of the PYTHONPATH you had set up in the shell (some of those may be unchecked in the wizard at that time, so you have to go on and check those too -- just don't add the paths for files you'll be editing inside your project, as those should be added to the PYTHONPATH for your project so that PyDev is able to track changes on those files to properly offer you code completion). | 0 | 603 | false | 0 | 1 | How to make Eclipse inherit $PYTHONPATH on Mac OS X | 8,658,155
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | I need to test an application with testers around the world. I've written a class for saving important data to a file. What is the easiest way to send this file (or simple coded string) from a tester's PC to me (I suppose to some server, or a special mailbox?) without interrupting the tester, in Python? | 0 | python | 2011-12-20T10:12:00.000 | 0 | 8,573,807 | I think what you want is a Version Control System such as Mercurial, Git, etc. | 0 | 197 | true | 0 | 1 | Python - online data collection | 8,573,933
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | Should my wsgi directory be completely outside of www?
DocumentRoot /usr/local/www/
WSGIScriptAlias / /usr/local/wsgi/
Something like that, yeah? | 0 | python,django,mod-wsgi,wsgi | 2011-12-21T00:00:00.000 | 0 | 8,583,541 | You don't need a DocumentRoot when using a WSGIScriptAlias /.
The answer to your actual question: it's probably best, yeah. I usually set the DocumentRoot to a 404 folder (folder with an index.html that shows a 404 page) and the WSGIScriptAlias to the actual script. Whether the 404 folder is actually useful? No idea, I've never seen it get hit. However, it's a good idea to keep them separated to avoid direct access to the contents of your code... which is something I have seen happen. | 0 | 508 | true | 0 | 1 | mod_wsgi directory outside of DocumentRoot? | 8,583,560 |
1 | 1 | 0 | 2 | 2 | 0 | 1.2 | 1 | How can I get paramiko to do the equivalent of setting "TCPKeepAlive yes" in ~/.ssh/config? | 0 | python,ssh,paramiko | 2011-12-21T10:39:00.000 | 0 | 8,588,506 | Got it: Transport.set_keepalive. Use in conjunction with the timeout argument to SSHClient.connect to set the socket timeout. | 0 | 383 | true | 0 | 1 | Can I enable TCPKeepAlive with paramiko? | 8,588,973 |
1 | 2 | 1 | 1 | 0 | 1 | 1.2 | 0 | I wanted to recompile PIL after having installed libjpeg because it threw the decoder jpeg not available whenever I tried importing JPEG images.
So, I've downloaded libjpeg, compiled it and installed it. Then I removed the ./build folder from PIL's source cache, and recompiled it (using sudo python setup.py install).
Now the selftest.py thing is failing with *** The _imaging C module is not installed. I have no idea what the issue is.
There are no symbol errors.
The _imaging module is importable
All dylibs are loaded properly (according to -v)
The decoder error is still there.
Does anyone know what could be causing this? I'm on OS X Lion. | 0 | python,python-imaging-library | 2011-12-21T13:41:00.000 | 0 | 8,590,714 | What would I try:
Remove the old PIL and install it again from scratch (maybe it did not get overridden properly).
If you missed something when compiling libjpeg, like path specifications, it will not find some of the libraries, so I recommend trying the MacPorts py27-pil port for the PIL installation, which will put all dependencies in place. | 0 | 672 | true | 0 | 1 | Can't get PIL to work on Mac OS X | 8,591,901
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | Hi everyone, I was wondering how to go about deploying a small-time Python database web application. Is buying some cheap hardware and installing a server on it a good route to go? | 0 | python,web | 2011-12-21T14:43:00.000 | 0 | 8,591,602 | You can get a virtual server instance on Amazon or Rackspace or many others for a very small fee per month, $20 - $60. This will give you a clean install of the OS of your choice. No need to invest in hardware. From there you can follow any of the many tutorials on deploying a Django app. | 0 | 215 | true | 1 | 1 | deploying a python web application | 8,592,171
1 | 1 | 0 | 1 | 2 | 1 | 0.197375 | 0 | I just want to find out what the user has set for the timeout period for autolock, which is in Settings->General->Auto-Lock in iOS. How can I find this?
Thanks, | 0 | ios,settings,python-idle,auto-lock | 2011-12-22T05:17:00.000 | 0 | 8,599,703 | Actually, just found out from apple...there is no public API for this. | 0 | 592 | false | 0 | 1 | can we access the autolock setting that the user has set in iOS? | 8,608,563 |
1 | 1 | 0 | 2 | 1 | 0 | 0.379949 | 0 | It gives me the error on line 23 of the repo file:
exec: python: not found.
the thing is, I have python installed in C:\Python27 (the default)
I'm using the Git Bash when typing in these commands. I've tried to move the python folder into the git directory to run the repo file and it still says the same thing.
I've tried to run the python interpreter and then run the repo file, but it says the same thing.
Anybody have any suggestions? I just wanna download the android source code through the git and repo. | 0 | android,python,git,repository | 2011-12-23T14:31:00.000 | 1 | 8,617,208 | It seems to me that you should add python to your path variable. | 0 | 593 | false | 0 | 1 | "Error: Python not found" when trying to access the android repository | 8,617,691 |
1 | 4 | 0 | 3 | 0 | 1 | 0.148885 | 0 | I'm stucked on the chapter 3.3 "Math functions" of "Think Python".
It tells me to import math (through the interpreter).
Then to print math, and says that I should get something like this:
<module 'math' from '/usr/lib/python2.5/lib-dynload/math.so'>
Instead I get <module 'math' <built-in>>
Anyway that's not the problem. Though I wasn't able to find a 'math.so' file in my python folder. The most similar file is named test_math.
The problem is that I'm supposed to write this:
>>> ratio = signal_power / noise_power
>>> decibels = 10 * math.log10(ratio)
>>> radians = 0.7
>>> height = math.sin(radians)
When I write the first line it tells me this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'signal_power' is not defined
On the book says "The first example uses log10 to compute a signal-to-noise ratio in decibels (assuming that signal_power and noise_power are defined)."
So I assume that the problem might be that I didn't define 'signal_power', but I don't know how to do it and what to assign to it...
This is the first time that I feel that this book is not holding my hand and I'm already lost. To be honest I don't understand this whole chapter.
By the way, I'm using Python2.7 and Windows XP. I may copy and paste the whole chapter if anyone feels that I should do it.
Python is my first language and I already tried to learn it using "Learn Python the hard way" but got stuck on chapter 16. So I decided to use "Think Python" and then go back to "Learn Python the hard way".
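The missing piece in the transcript above is simply that signal_power and noise_power are ordinary variables you assign yourself before using them. A minimal sketch, with made-up power values:

```python
import math

# Hypothetical values -- the book assumes you pick these yourself.
signal_power = 50.0
noise_power = 10.0

ratio = signal_power / noise_power   # 5.0
decibels = 10 * math.log10(ratio)    # roughly 6.99 dB

radians = 0.7
height = math.sin(radians)           # roughly 0.644
```

Any positive numbers work for the two power values; 1 is a safe default if you just want the statements to run.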
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 0 | In my server written in Stackless Python, I occasionally am getting large spikes in CPU usage for 5-10 seconds durations. This happens sporadically so I'm having trouble tracking it down.
I've used cProfile to try and determine where these spikes are coming from but cProfile gives an overall picture of where time is being spent per function. What I would really like to know is whether the CPU spikes are due to some processing occurring in a single tasklet (and stalling other tasklets) or if there are multiple tasklets doing a lot of processing (ie. as each becomes active, each is doing a lot of work).
Is there a convenient way to hook into the scheduler in Stackless Python so that I can add some timing code? In other words, is there a function that is invoked when a tasklet becomes active and when it becomes inactive that I can hook into? | 0 | python,profiling,stackless,green-threads | 2011-12-25T19:11:00.000 | 0 | 8,631,091 | I haven't found an explicit function to hook into when a tasklet blocks/resumes, but since Channel.receive() is typically when the block/resume occurs, I hooked into every occurrence of that happening. | 0 | 184 | true | 0 | 1 | Stackless Python - profile single tasklet execution time | 8,660,039 |
3 | 4 | 0 | 1 | 4 | 1 | 0.049958 | 0 | What's the easiest way to calculate the execution time of a Python script? | 0 | python,performance-testing | 2011-12-25T21:45:00.000 | 0 | 8,631,743 | Under linux:
time python script.py | 0 | 6,333 | false | 0 | 1 | Easiest way to calculate execution time of a python script? | 8,631,751 |
3 | 4 | 0 | 5 | 4 | 1 | 0.244919 | 0 | What's the easiest way to calculate the execution time of a Python script? | 0 | python,performance-testing | 2011-12-25T21:45:00.000 | 0 | 8,631,743 | Using Linux time command like this : time python file.py
Or you can take the times at start and at end and calculate the difference. | 0 | 6,333 | false | 0 | 1 | Easiest way to calculate execution time of a python script? | 8,631,753 |
3 | 4 | 0 | 0 | 4 | 1 | 0 | 0 | What's the easiest way to calculate the execution time of a Python script? | 0 | python,performance-testing | 2011-12-25T21:45:00.000 | 0 | 8,631,743 | How about using time?
example:
time myPython.py | 0 | 6,333 | false | 0 | 1 | Easiest way to calculate execution time of a python script? | 8,631,755 |
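Besides the shell's time command shown in these answers, you can also time the script from inside Python itself; a minimal sketch:

```python
import time

start = time.time()
total = sum(range(1_000_000))   # stand-in for the work your script does
elapsed = time.time() - start

print("took %.3f seconds" % elapsed)
```

For repeatable micro-benchmarks of small snippets, the standard-library timeit module is usually a better fit than a single wall-clock measurement.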
1 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | If I use the command ssh -f server 'cp /file1 /file2 & >/dev/null 2>/dev/null ; disown;' it detaches well and the command is run in the background. But is there any variant of the -f option for paramiko? | 0 | python,ssh,paramiko | 2011-12-27T23:00:00.000 | 0 | 8,650,148 | You have to create a new thread/process, for example using the Python threading module. Paramiko has to write to the socket after finishing the Django request. | 0 | 1,175 | false | 0 | 1 | python-paramiko - run command in background | 8,650,409
1 | 1 | 0 | 4 | 0 | 0 | 1.2 | 0 | I am wondering exactly how a thermostat program would work and wanted to see if anyone had a better opinion on it. From what I know, there are a few control algorithms that could be used, some being Bang-Bang (On/Off), Proportional Control Algorithms, and PID Control. Looking on Wikipedia, there is a great deal of explanations for all three in which I understand completely. However, when trying to implement a proportional control algorithm, I feel that I am missing the need or the use of the proportional gain (K) and the output. Since today's thermostats do not include the need to vary power or current, how do I manipulate the output so that I can trigger the controls ON/OFF of the thermostat? Also, what is the value of the proportional gain or K? | 0 | python,algorithm | 2011-12-28T01:51:00.000 | 0 | 8,651,063 | The issue is overshooting the setpoint temperature.
If you simply run the device until the set point temperature is reached, you will overshoot, wasting energy (and possibly doing damage, depending on what the thermostat controls.)
You need to "ease up to" the setpoint so that you arrive at the set point just as the device is shutting down so that no more energy goes in to rise above the set point. | 0 | 2,614 | true | 0 | 1 | Thermostat Control Algorithms | 8,651,274 |
1 | 3 | 0 | 2 | 2 | 0 | 0.132549 | 0 | Not sure what the above error means. I just installed ghmm on my Mac and get this error every time I do import ghmm. I do not get this message on my ghmm install on my Linux machine, and other than that all functions appear to be fine.
I'm wondering if anyone has seen this before and if there's anything I can do to get rid of it. The only thing I did differently between the two installs was that the autogen.sh file was referring to "libtoolize", which doesn't exist on my Mac, so I changed it to its replacement "glibtoolize", which allowed it to compile and install fine.
Any suggestions on what this error actually means (and hopefully how I can solve it) would be great.
(I couldn't find the answer on google but this program does not appear to be specific to ghmm) | 0 | python,linux,macos | 2011-12-28T17:05:00.000 | 1 | 8,658,934 | eaj is correct that initstate needs more than 8 bytes for state information. The best way to do this for ghmm is with either the --enable-gsl or --with-rng=bsd option for ./configure. --with-rng=bsd makes the type "ghmm_rng_state_t" 8 bytes instead of 1. See rng.h in the ghmm directory. | 0 | 403 | false | 0 | 1 | random: not enough state (1 bytes); ignored | 9,056,422 |
1 | 3 | 0 | 1 | 3 | 1 | 0.066568 | 0 | I have two scripts, a python script and a perl script.
How can I make the Perl script run the Python script and then run itself? | 0 | python,perl | 2011-12-28T17:34:00.000 | 1 | 8,659,226 | It may be simpler to run both scripts from a shell script, and use pipes (assuming that you're in a Unix environment) if you need to pass the results from one program to the other. | 0 | 11,721 | false | 0 | 1 | Run a python script in perl | 8,659,430
1 | 2 | 1 | 0 | 1 | 0 | 1.2 | 0 | I have an NSMutableArray filled with BezierPaths. I'd like to serialize it so that it's accessible in Python. Someone suggested to me that I can try GZIP + InkML or GZIP + JSON. I was wondering what the best way to do this is. I am also really new to this, so example code would be extremely helpful.
Thanks | 0 | python,objective-c,serialization | 2011-12-28T21:15:00.000 | 0 | 8,661,415 | Choose what you like most. Both are standards, but JSON is a generic format used for serializing dictionaries and arrays, while InkML focuses on drawing related objects.
JSON support is available in both Python and Objective-C, while InkML has no built-in support in either. | 0 | 318 | true | 0 | 1 | Objective C to Python Serialization | 8,661,467 |
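A sketch of the JSON route on the Python side, assuming each path is reduced to a list of (x, y) control points before serializing (the point data here is hypothetical):

```python
import json

# Hypothetical path data: one path with three control points.
paths = [[(0.0, 0.0), (10.5, 3.2), (20.0, 0.0)]]

payload = json.dumps(paths)      # the string the iOS side would produce
restored = json.loads(payload)   # JSON has no tuple type: points come back as lists
```

The GZIP step, if size matters, is just a wrapper around this string on both platforms.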
2 | 2 | 0 | 0 | 2 | 1 | 0 | 0 | For a math fair project I want to make a program that will generate a Julia set fractal. To do this i need to plot complex numbers on a graph. Does anyone know how to do this? Remember I am using complex numbers, not regular coordinates. Thank You! | 0 | python,graph,fractals,complex-numbers | 2011-12-28T23:36:00.000 | 0 | 8,662,501 | Julia set renderings are generally 2D color plots, with [x y] representing a complex starting point and the color usually representing an iteration count. | 0 | 1,786 | false | 0 | 1 | Plotting Complex Numbers in Python? | 8,662,563 |
2 | 2 | 0 | 2 | 2 | 1 | 0.197375 | 0 | For a math fair project I want to make a program that will generate a Julia set fractal. To do this i need to plot complex numbers on a graph. Does anyone know how to do this? Remember I am using complex numbers, not regular coordinates. Thank You! | 0 | python,graph,fractals,complex-numbers | 2011-12-28T23:36:00.000 | 0 | 8,662,501 | You could plot the real portion of the number along the X axis and plot the imaginary portion of the number along the Y axis. Plot the corresponding pixel with whatever color makes sense for the output of the Julia function for that point. | 0 | 1,786 | false | 0 | 1 | Plotting Complex Numbers in Python? | 8,662,525 |
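Python's built-in complex type makes this mapping direct: the real part becomes the x pixel, the imaginary part the y pixel, and the escape-time iteration count the colour. A sketch of the count for one point (the constant c is a made-up Julia parameter):

```python
def julia_iterations(z, c, max_iter=100):
    """Iterate z -> z*z + c and count steps until |z| escapes 2."""
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

c = complex(-0.4, 0.6)                          # illustrative parameter
count = julia_iterations(complex(0.3, 0.2), c)  # colour index for that pixel
```

Running this over a grid of starting points and colouring each pixel by its count produces the fractal image.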
2 | 2 | 0 | 1 | 1 | 1 | 0.099668 | 0 | Is there a shared binary format between iOS and Python? I found binary property lists.
I have a list of UIBezierPaths in an array that I want to be able to send to Python. I am just looking for something that will very efficiently be able to do that. I looked into the text-based formats like JSON, except they seem less efficient than a binary format for this purpose. | 0 | python,ios,binary | 2011-12-28T23:40:00.000 | 0 | 8,662,519 | Python has a struct standard module that allows easy manipulation of simple binary formats with conversion to Python types (struct.unpack) or in the opposite direction (struct.pack). | 0 | 217 | false | 0 | 1 | Shared binary format between iOS and Python | 8,662,573
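For example, one (x, y) control point packed as two little-endian doubles -- a sketch of the struct round trip the answer refers to (the point values are illustrative):

```python
import struct

point = (12.5, -3.25)
packed = struct.pack("<2d", *point)   # 16 bytes: two little-endian doubles
x, y = struct.unpack("<2d", packed)   # back to Python floats
```

The iOS side would need to write doubles with the same byte order for the two halves to agree.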
2 | 2 | 0 | 1 | 1 | 1 | 1.2 | 0 | Is there a shared binary format between iOS and Python? I found binary property lists.
I have a list of UIBezierPaths in an array that I want to be able to send to Python. I am just looking for something that will very efficiently be able to do that. I looked into the text-based formats like JSON, except they seem less efficient than a binary format for this purpose. | 0 | python,ios,binary | 2011-12-28T23:40:00.000 | 0 | 8,662,519 | There are no formats specifically designed for iOS/Python. There are numerous data interchange formats you could use, including protocol buffers, BSON, ASN.1 (if you're that way inclined) and even a range of binary XML serialisation formats.
OTOH, I would strongly favour JSON (a textual format) unless bandwidth is exceptionally tight.
EDIT: I was awfully remiss not to mention another strong contender for binary transmission: BERT. I would favour BERT over any other binary format, but note my comments to the original question regarding encoding size. | 0 | 217 | true | 0 | 1 | Shared binary format between iOS and Python | 8,662,606 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I have a problem with links on my website. Please forgive me if this is asked somewhere else, but I have no idea how to search for this.
A little background on the current situation:
I've created a python program that randomly generates planets for a sci-fi game. Each created planet is placed in a text file to be viewed at a later time. The program asks the user how many planets he/she wants to create and makes that many text files. Then, after all the planets are created, the program zips all the files into a file 'worlds.zip'. A link is then provided to the user to download the zip file.
The problem:
The first time I run this everything works perfectly fine. When run a second time, however, and I click the link to download the zip file it gives me the exact same zip file as I got the first time. When I ftp in and download the zip file directly I get the correct zip file, despite the link still being bad.
Things I've tried:
When I refresh the page the link is still bad. When I delete all my browser history the link is still bad. I've tried a different browser and that didn't work. I've attempted to delete the file from the web server and that didn't solve the problem. Changing the html file providing the link worked once, but didn't work a second time.
Simplified Question:
How can I get a link on my web page to update to the correct file?
I've spent all day trying to fix this. I don't mind looking up information or reading articles and links, but I don't know how to search for this, so even if you guys just give me links to other sites I'll be happy (although directly answering my question will always be appreciated :)). | 0 | python,html,web | 2011-12-29T22:08:00.000 | 0 | 8,674,077 | I don't know anything about Python, but in PHP, in some fopen modes, if a file is trying to be made with the same name as an existing file, it will cancel the operation. | 0 | 45 | false | 1 | 1 | Website Links to Downloadable Files Don't Seem to Update | 8,674,108 |
4 | 4 | 0 | 0 | 11 | 1 | 0 | 0 | I'm developing a C++ application that is extended/ scriptable with Python. Of course C++ is much faster than Python, in general, but does that necessarily mean that you should prefer to execute C++ code over Python code as often as possible?
I'm asking this because I'm not sure, is there any performance cost of switching control between code written in C++ and code written in Python? Should I use code written in C++ on every occasion, or should I avoid calling back to C++ for simple tasks because any speed gain you might have from executing C++ code is outmatched by the cost of switching between languages?
Edit: I should make this clear, I'm not asking this to actually solve a problem. I'm asking purely out of curiosity and it's something worth keeping in mind for the future. So I'm not interested in alternative solutions, I just want to know the answer, from a technical standpoint. :) | 0 | c++,python,performance | 2011-12-30T00:46:00.000 | 0 | 8,675,062 | The best metric should be something that wieghs up for you....
Makes development, debugging and testing easier (lowers dev cost)
Lowers the cost of maintenance
meets the performance requirement (provides solution) | 0 | 991 | false | 0 | 1 | Price of switching control between C++ and Python | 8,675,094 |
4 | 4 | 0 | 1 | 11 | 1 | 1.2 | 0 | I'm developing a C++ application that is extended/ scriptable with Python. Of course C++ is much faster than Python, in general, but does that necessarily mean that you should prefer to execute C++ code over Python code as often as possible?
I'm asking this because I'm not sure, is there any performance cost of switching control between code written in C++ and code written in Python? Should I use code written in C++ on every occasion, or should I avoid calling back to C++ for simple tasks because any speed gain you might have from executing C++ code is outmatched by the cost of switching between languages?
Edit: I should make this clear, I'm not asking this to actually solve a problem. I'm asking purely out of curiosity and it's something worth keeping in mind for the future. So I'm not interested in alternative solutions, I just want to know the answer, from a technical standpoint. :) | 0 | c++,python,performance | 2011-12-30T00:46:00.000 | 0 | 8,675,062 | The cost is present but negligible. That's because you probably do a fair bit of work converting python's high level datatypes to C++-compatible representations. Of course this is similar to the cost of calling one C++ function from another, there's some overhead. The rules for when it's a good idea to switch from python to C++ are:
A function with few arguments
A function which does a large amount of processing on a small amount of data
A function which is called as rarely as possible - consolidate function calls if possible | 0 | 991 | true | 0 | 1 | Price of switching control between C++ and Python | 8,675,109 |
4 | 4 | 0 | 2 | 11 | 1 | 0.099668 | 0 | I'm developing a C++ application that is extended/ scriptable with Python. Of course C++ is much faster than Python, in general, but does that necessarily mean that you should prefer to execute C++ code over Python code as often as possible?
I'm asking this because I'm not sure, is there any performance cost of switching control between code written in C++ and code written in Python? Should I use code written in C++ on every occasion, or should I avoid calling back to C++ for simple tasks because any speed gain you might have from executing C++ code is outmatched by the cost of switching between languages?
Edit: I should make this clear, I'm not asking this to actually solve a problem. I'm asking purely out of curiosity and it's something worth keeping in mind for the future. So I'm not interested in alternative solutions, I just want to know the answer, from a technical standpoint. :) | 0 | c++,python,performance | 2011-12-30T00:46:00.000 | 0 | 8,675,062 | Keep it simple and tune performance as needed. The primary reason for embedding an interpreter in a C++ app is to allow run-time configuration/data to specify some processing - i.e. you can modify the script without recompiling the C++ program - that's your guide for when to call into the interpreter. Once in some interpreter call, the primary reasons to call back into C++ are:
to access or update some data that can't reasonably be exposed as a parameter to the call (or via some other registration process the interpreter supports)
to get better performance during some critical part of the processing
For the latter, try the script first (assuming it's as easy to develop there), then if it's slow identify where and how some C++ code might help. If/where performance does prove a problem - as a general guideline when calling from C++ to the interpreter or vice versa: try to line up as much work as possible then make the call into the other system. If you get stuck, come back to stackoverflow with a specific problem and actual code. | 0 | 991 | false | 0 | 1 | Price of switching control between C++ and Python | 8,675,394 |
4 | 4 | 0 | 8 | 11 | 1 | 1 | 0 | I'm developing a C++ application that is extended/ scriptable with Python. Of course C++ is much faster than Python, in general, but does that necessarily mean that you should prefer to execute C++ code over Python code as often as possible?
I'm asking this because I'm not sure, is there any performance cost of switching control between code written in C++ and code written in Python? Should I use code written in C++ on every occasion, or should I avoid calling back to C++ for simple tasks because any speed gain you might have from executing C++ code is outmatched by the cost of switching between languages?
Edit: I should make this clear, I'm not asking this to actually solve a problem. I'm asking purely out of curiosity and it's something worth keeping in mind for the future. So I'm not interested in alternative solutions, I just want to know the answer, from a technical standpoint. :) | 0 | c++,python,performance | 2011-12-30T00:46:00.000 | 0 | 8,675,062 | I don't know there is a concrete rule for this, but a general rule that many follow is to:
Prototype in python. This is quicker to write, and may be easier to read/reason about.
Once you have a prototype, you can now identify the slow portions that should be written in c++ (through profiling).
Depending on the domain of your code, the slow bits are usually isolated to the 'inner loop' types of code, so the number of switches between Python and this code should be relatively small.
If your program is sufficiently fast, you've successfully avoided prematurely optimizing your code by writing too much in c++. | 0 | 991 | false | 0 | 1 | Price of switching control between C++ and Python | 8,675,112 |
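Step 2 above -- identifying the slow portions through profiling -- can be done with the standard-library cProfile module; a minimal sketch (the hot_loop function is just a stand-in for the prototype's work):

```python
import cProfile
import io
import pstats

def hot_loop(n):                     # stand-in for the code being prototyped
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hot_loop(100_000)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()   # names the functions worth porting to C++
```

The functions dominating the cumulative column are the candidates for a C++ rewrite; everything else can stay in Python.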
1 | 1 | 0 | 1 | 3 | 0 | 0.197375 | 0 | The question is, how do you make a robot for Robocode using Python? There seem to be two options:
Robocode + Jython
Robocode for .NET + Iron Python
There's some info for the first, but it doesn't look very robust, and none for the latter. Step by step, anyone? | 0 | python,jython,robocode | 2011-12-30T20:00:00.000 | 0 | 8,683,501 | As long as your Java class extends robocode.Robot, everything is recognized as a robot.
It doesn't matter where you put the class. | 0 | 1,489 | false | 1 | 1 | Robocode + Python | 18,469,631 |
1 | 3 | 0 | 2 | 38 | 0 | 0.132549 | 0 | I've recently started experimenting with using Python for web development. So far I've had some success using Apache with mod_wsgi and the Django web framework for Python 2.7. However I have run into some issues with having processes constantly running, updating information and such.
I have written a script I call "daemonManager.py" that can start and stop all or individual python update loops (Should I call them Daemons?). It does that by forking, then loading the module for the specific functions it should run and starting an infinite loop. It saves a PID file in /var/run to keep track of the process. So far so good. The problems I've encountered are:
Now and then one of the processes will just quit. I check ps in the morning and the process is just gone. No errors were logged (I'm using the logging module), and I'm covering every exception I can think of and logging them. Also I don't think these quitting processes has anything to do with my code, because all my processes run completely different code and exit at pretty similar intervals. I could be wrong of course. Is it normal for Python processes to just die after they've run for days/weeks? How should I tackle this problem? Should I write another daemon that periodically checks if the other daemons are still running? What if that daemon stops? I'm at a loss on how to handle this.
How can I programmatically know if a process is still running or not? I'm saving the PID files in /var/run and checking if the PID file is there to determine whether or not the process is running. But if the process just dies of unexpected causes, the PID file will remain. I therefore have to delete these files every time a process crashes (a couple of times per week), which sort of defeats the purpose. I guess I could check if a process is running at the PID in the file, but what if another process has started and was assigned the PID of the dead process? My daemon would think that the process is running fine even if it's long dead. Again I'm at a loss just how to deal with this.
Any useful answer on how to best run infinite Python processes, hopefully also shedding some light on the above problems, I will accept
I'm using Apache 2.2.14 on an Ubuntu machine.
My Python version is 2.7.2 | 0 | python,apache,daemon,infinite-loop | 2011-12-31T01:39:00.000 | 1 | 8,685,695 | I assume you are running Unix/Linux but you don't really say. I have no direct advice on your issue. So I don't expect to be the "right" answer to this question. But there is something to explore here.
First, if your daemons are crashing, you should fix that. Only programs with bugs should crash. Perhaps you should launch them under a debugger and see what happens when they crash (if that's possible). Do you have any trace logging in these processes? If not, add them. That might help diagnose your crash.
Second, are your daemons providing services (opening pipes and waiting for requests) or are they performing periodic cleanup? If they are periodic cleanup processes you should use cron to launch them periodically rather then have them run in an infinite loop. Cron processes should be preferred over daemon processes. Similarly, if they are services that open ports and service requests, have you considered making them work with INETD? Again, a single daemon (inetd) should be preferred to a bunch of daemon processes.
Third, saving a PID in a file is not very effective, as you've discovered. Perhaps a shared IPC, like a semaphore, would work better. I don't have any details here though.
Fourth, sometimes I need stuff to run in the context of the website. I use a cron process that calls wget with a maintenance URL. You set a special cookie and include the cookie info in with the wget command line. If the special cookie doesn't exist, return 403 rather than performing the maintenance process. The other benefit here is that logging in to the database and other environmental concerns are avoided, since the code that serves normal web pages is serving the maintenance process.
Hope that gives you ideas. I think avoiding daemons if you can is the best place to start. If you can run your python within mod_wsgi that saves you having to support multiple "environments". Debugging a process that fails after running for days at a time is just brutal. | 0 | 36,296 | false | 0 | 1 | How do I run long term (infinite) Python processes? | 8,685,801 |
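For the stale-PID-file problem in the question, the usual Unix trick is signal 0: it delivers nothing but still errors out if the process is gone. A sketch (as the question notes, PID reuse can still fool this check):

```python
import os

def pid_is_running(pid):
    """True if a process with this PID currently exists (Unix)."""
    try:
        os.kill(pid, 0)          # signal 0: existence check only
    except ProcessLookupError:
        return False             # no such process -- the PID file is stale
    except PermissionError:
        return True              # exists, but owned by another user
    return True

assert pid_is_running(os.getpid())
```

Combining this with the PID file (check the file, then check the PID is live) lets the manager clean up stale files automatically instead of by hand.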
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | For a git repository that is shared with others, is it a vulnerability to expose your database password in the settings.py file? (My initial thought was no, since you still need the ssh password.) | 0 | python,django | 2012-01-02T00:09:00.000 | 0 | 8,696,469 | That assumes your database is only accessible from one specific host, and even then, why would you want to give a potential attacker another piece of information? Suppose you deploy this to a shared host and I have an account on there, I could connect to your database just by logging into my account on that box.
Also, depending on who you are writing this for and what kind of auditing they need to go through (PCI, state audits, etc), this might just not be allowed.
I would try to find a way around checking in the password. | 0 | 104 | false | 1 | 1 | Exposing passwords in django | 8,696,496 |
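One common way around checking the password in is to read it from the environment in settings.py, so the repository only contains the lookup. The variable name here is just an example:

```python
import os

# Set in the shell or the process manager, never committed to git:
#   export APP_DB_PASSWORD='...'
os.environ.setdefault("APP_DB_PASSWORD", "dev-only-placeholder")

DB_PASSWORD = os.environ["APP_DB_PASSWORD"]
```

In a real settings.py you would raise an error when the variable is missing rather than fall back to a placeholder; a local, untracked settings file is another common pattern.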
1 | 1 | 0 | 1 | 3 | 0 | 0.197375 | 0 | I have searched tutorials and documentation for gevent, but it seems that there isn't much of it.
I have coded Python for several years, also I can code PHP + JavaScript + jQuery.
So, how would I create Omeglish chat, where one random person connects and then waits for another one to connect? I have understood that Omegle uses gevent, but my site would have to hold 200 - 1000 people simultaneously.
Besides the server side, there should be fully functional client side too and I think it should be created with jQuery/JavaScript.
I would need little help with the coding part. I can code Python well, but I have no idea how I would make that kind of chat system nor what would be the best Python library for it.
The library doesn't have to be gevent but I have heard that it's very good for stuff like this.
Thanks. | 0 | python,gevent | 2012-01-02T14:39:00.000 | 0 | 8,702,080 | If I've understood you right, you just need to link the second person with someone connected before. Think it's simple.
The greenlet working with the person who comes first ('the first greenlet') just registers its inbound and outbound queues somewhere. The greenlet working with the second person gets these queues, unregisters them and uses them for the chat message exchange.
The next person's greenlet finds out that there are no registered in/out queues, registers its own and waits for the fourth. And so on.
Is it what you need? | 0 | 2,300 | true | 1 | 1 | How would I create "Omegle"-like random chat with gevent? | 8,712,328 |
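The register/unregister idea in this answer can be sketched without gevent using plain standard-library queues -- with gevent you would swap these for gevent.queue.Queue and run connect() inside each greenlet:

```python
import queue

waiting = []  # in/out queue pairs registered by visitors still unpaired

def connect():
    """Return (inbox, outbox) for a newly connected visitor."""
    if waiting:                      # someone is waiting: pair up with them,
        their_in, their_out = waiting.pop()
        return their_out, their_in   # their outbox is our inbox, and vice versa
    inbox, outbox = queue.Queue(), queue.Queue()
    waiting.append((inbox, outbox))  # register and wait for the next visitor
    return inbox, outbox

a_in, a_out = connect()   # first visitor registers
b_in, b_out = connect()   # second visitor gets paired with the first

b_out.put("hi")
greeting = a_in.get_nowait()   # "hi"
```

Each chat session then just forwards messages between its socket and its two queues until one side disconnects.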
1 | 2 | 0 | 0 | 3 | 0 | 0 | 0 | I'm a fan of Ruby but I don't oppose Python. ( I have 2+ years of Ruby experience and maybe 2 months of Python ).
Anyway, I need to create a service for both the Mac and Windows (and Linux, actually) that takes certain files from different directories and sends them to S3. I could use .NET on Windows but I don't want to use Objective-C and I would love to keep my code-base the same on all platforms.
So after digging around a little, it looks like I should be able to compile either Ruby or Python to byte-code and distribute an interpreter to run the code.
But, am I wrong in assuming that Python has better support for compiling code? As in .pyc byte code?
Also, I would prefer the end user not be able to read my source code but I'm not going to the end of the world to try and stop them.
Thanks! | 0 | python,ruby,compilation,distribution | 2012-01-03T02:16:00.000 | 1 | 8,707,238 | Use whatever language you know well. I know Python and use it to develop Windows desktop applications, and the end user can't distinguish them from, say, a C# or C++ app. | 0 | 1,058 | false | 0 | 1 | Should I use Python or Ruby for creating a cross-platform, compiled application? | 8,716,554 |
2 | 5 | 0 | 0 | 0 | 1 | 0 | 0 | How can I copy a .exe file through Python? I tried to read the file and then write the contents to another, but every time I try to open the file it says "IOError: is a directory". Any input is appreciated.
EDIT:
OK, I've read through the comments and I'll edit my code and see what happens. If I still get an error I'll post my code. | 0 | python,executable | 2012-01-03T03:42:00.000 | 1 | 8,707,663 | Use shutil.copyfile(src, dst) or shutil.copy(src, dst). It may not work for files under C:\Program Files\ as they are protected by administrator rights by default. | 0 | 2,980 | false | 0 | 1 | how to copy an executable file with python? | 8,709,784 |
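A short, self-contained illustration of the shutil approach suggested in this answer. The file names are invented for the demo and a few fake header bytes stand in for a real .exe:

```python
import os
import shutil
import tempfile

# Work in a temporary directory so the demo is self-contained.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "program.exe")
dst = os.path.join(workdir, "program_copy.exe")

with open(src, "wb") as f:
    f.write(b"MZ\x90\x00")  # fake executable header bytes

shutil.copy(src, dst)  # copies contents and permission bits

with open(dst, "rb") as f:
    print(f.read() == b"MZ\x90\x00")  # True
```

shutil.copy preserves the permission bits as well as the contents; shutil.copyfile copies only the contents. Neither will get around the Windows UAC restrictions mentioned in the other answer.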
2 | 5 | 0 | 1 | 0 | 1 | 0.039979 | 0 | How can I copy a .exe file through Python? I tried to read the file and then write the contents to another, but every time I try to open the file it says "IOError: is a directory". Any input is appreciated.
EDIT:
OK, I've read through the comments and I'll edit my code and see what happens. If I still get an error I'll post my code. | 0 | python,executable | 2012-01-03T03:42:00.000 | 1 | 8,707,663 | Windows Vista and 7 will restrict your access to files installed into the Programs directories. Unless you run with UAC privileges you will never be able to open them.
I hope I'm interpreting your error properly. In the future it is best to copy and paste the actual error message into your question. | 0 | 2,980 | false | 0 | 1 | how to copy an executable file with python? | 8,707,865 |
1 | 1 | 0 | 1 | 2 | 0 | 1.2 | 0 | I'm starting a web project in Python and I'm looking for a process manager that offers reloading in the same manner as PHP-FPM.
I've built stuff with Python before and Paste seems similar to what I want, but not quite.
The need for the ability to reload the process rather than restart is to allow long-running tasks to complete uninterrupted where necessary. | 0 | python,nginx,php | 2012-01-03T21:28:00.000 | 0 | 8,718,870 | How about supervisor with uwsgi? | 0 | 1,732 | true | 1 | 1 | Is there a Python equivalent to PHP-FPM? | 8,718,932 |
1 | 2 | 0 | 0 | 6 | 0 | 0 | 0 | I'm relatively new to both Python and bash. However, I am finding Python much more intuitive and easier than bash. I have a few bash scripts I have managed to cobble together, but I would like to replace them with Python scripts - for ease of maintenance etc.
The bash scripts essentially run python scripts, check the returned status code and act appropriately (e.g. log a message, fire off an email etc) - this is functionality that I think I can, for the most part, reproduce in a Python script.
The one thing I am not sure how to do is run a Python script from another Python script and get the returned status code.
Can anyone post a snippet here that shows how to run a small Python script 'test.py' from a main Python script 'master.py' and correctly retrieve the return code after running test.py from master.py? | 0 | python,bash | 2012-01-04T09:26:00.000 | 1 | 8,724,557 | I would suggest you look at the subprocess module in Python. You can start another process with it, manipulate its streams and get the return code. | 0 | 2,400 | false | 0 | 1 | How to run a python script from another python script and get the returned status code? | 8,724,602 |
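A minimal sketch of the subprocess approach the answer points to. For self-containment the child "script" is passed inline with -c instead of living in a test.py file, but the same call works with [sys.executable, "test.py"]:

```python
import subprocess
import sys

# Stand-in for "python test.py": run a child interpreter that exits with 3.
result = subprocess.run(
    [sys.executable, "-c", "import sys; sys.exit(3)"],
    capture_output=True,
    text=True,
)
print(result.returncode)  # 3
if result.returncode != 0:
    print("child failed; would log a message or fire off an email here")
```

result.returncode carries exactly the status a bash script would see in $?, so the log/email logic from the bash wrappers ports over directly.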
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 1 | Is it possible to open a non-blocking ssh tunnel from a python app on the heroku cedar stack? I've tried to do this via paramiko and also asyncproc with no success.
On my development box, the tunnel looks like this:
ssh -L local_port:remote_server:remote_port another_remote_server | 0 | python,ssh,heroku,paramiko,cedar | 2012-01-04T23:16:00.000 | 0 | 8,735,487 | Can you please post the STDERR of ssh -v -L .....? Maybe you need to disable tty allocation and run ssh in batch mode. | 0 | 476 | false | 1 | 1 | open an ssh tunnel from heroku python app on the cedar stack? | 8,735,515 |
1 | 4 | 1 | 1 | 0 | 1 | 0.049958 | 0 | I'm trying to use a file that uses PIL and when I try to run it I get the following error:
ImportError: The _imaging C module is not installed
I know there's a bunch of threads online about this but most of them seem pretty specific. I'm 100% sure there is no problem with the code I'm running. Python version 2.7.2, 64-bit Windows 7. I've been trying to fix it for almost an hour and I'm losing my mind. Any suggestions? | 0 | python,module,imaging | 2012-01-05T17:56:00.000 | 0 | 8,747,299 | Try installing Pillow. You can install it with the command: pip install pillow
Have you installed python-imaging?
sudo apt-get install python-imaging. Install python-imaging first and then install Pillow. | 0 | 6,417 | false | 0 | 1 | Python: PIL _imaging C module | 11,829,919 |
1 | 2 | 0 | 8 | 17 | 1 | 1 | 0 | I have a folder A which contains some Python files and __init__.py.
If I copy the whole folder A into some other folder B and create a file there with "import A", it works. But now I remove the folder and move in a symbolic link to the original folder. Now it doesn't work, saying "No module named foo".
Does anyone know how to use symlink for importing? | 0 | python,import,symlink | 2012-01-05T20:13:00.000 | 0 | 8,749,108 | This kind of behavior can happen if your symbolic links are not set up right. For example, if you created them using relative file paths. In this case the symlinks would be created without error but would not point anywhere meaningful.
If this could be the cause of the error, use the full path to create the links and check that they are correct by running ls on the link and observing the expected directory contents. | 0 | 22,941 | false | 0 | 1 | Python: import symbolic link of a folder | 41,005,775 |
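The relative-vs-absolute pitfall this answer describes can be demonstrated directly. This is a POSIX-only sketch with invented directory names: a relative symlink target is resolved relative to the link's own directory, so a link created with a bare "A" inside another folder dangles:

```python
import os
import tempfile

base = tempfile.mkdtemp()
pkg = os.path.join(base, "A")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()

other = os.path.join(base, "B")
os.makedirs(other)

# A relative symlink is resolved relative to the link's own directory,
# so a target of "A" inside B points at B/A -- which does not exist.
broken = os.path.join(other, "A_rel")
os.symlink("A", broken)
print(os.path.exists(broken))  # False (dangling link)

# An absolute symlink always resolves to the real package directory.
good = os.path.join(other, "A_abs")
os.symlink(pkg, good)
print(os.path.exists(good))  # True
```

A dangling link like `A_rel` would give exactly the "No module named ..." behaviour from the question, because the import machinery sees nothing behind it.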
3 | 3 | 0 | 1 | 1 | 1 | 0.066568 | 0 | I'm trying to add a python egg to my eclipse pydev path via Eclipse Settings -> PyDev -> Interpreter - Python -> New Egg/Zip(s), and in the dialog where I browse to the egg file, and click the "open" button on the dialog, it simply keeps the dialog open and browses into the egg.
This is on OS X with Helios SR 2. | 0 | python,eclipse,pydev | 2012-01-05T22:07:00.000 | 1 | 8,750,530 | Go to:
Window > Preferences > PyDev > Interpreter - (Python, Iron python, or Jython) > Libraries
You can add a new folder, using the New folder Button,
You can add a new egg, using the New Egg button,
or remove.
If you are experimenting with new versions of libraries, I suggest you remove the old versions, restart Eclipse, install the new ones, then restart Eclipse again.
Cheers !! happy PyDeving !! | 0 | 2,346 | false | 0 | 1 | How to add Python Egg to Eclipse Pydev paths? New Egg button not behaving as expected | 21,244,922 |
3 | 3 | 0 | 2 | 1 | 1 | 0.132549 | 0 | I'm trying to add a python egg to my eclipse pydev path via Eclipse Settings -> PyDev -> Interpreter - Python -> New Egg/Zip(s), and in the dialog where I browse to the egg file, and click the "open" button on the dialog, it simply keeps the dialog open and browses into the egg.
This is on OS X with Helios SR 2. | 0 | python,eclipse,pydev | 2012-01-05T22:07:00.000 | 1 | 8,750,530 | On my Mac I have some .eggs that are files, and some .eggs that are folders; for example my SQLObject is a folder but my oauth is a file. I am not exactly sure why; it could be because of how I downloaded and installed them.
The ones that are folders can't be chosen by the "Add zip/jar/egg" chooser, but the simple solution is just to include those via "Add source folder". | 0 | 2,346 | false | 0 | 1 | How to add Python Egg to Eclipse Pydev paths? New Egg button not behaving as expected | 21,922,683 |
3 | 3 | 0 | 1 | 1 | 1 | 1.2 | 0 | I'm trying to add a python egg to my eclipse pydev path via Eclipse Settings -> PyDev -> Interpreter - Python -> New Egg/Zip(s), and in the dialog where I browse to the egg file, and click the "open" button on the dialog, it simply keeps the dialog open and browses into the egg.
This is on OS X with Helios SR 2. | 0 | python,eclipse,pydev | 2012-01-05T22:07:00.000 | 1 | 8,750,530 | Perhaps this is only on OS X, but the simple solution was to just add the egg as a folder via New Folder. I'm guessing the New Egg/Zip button is for OSes that don't treat zips/eggs as folders. | 0 | 2,346 | true | 0 | 1 | How to add Python Egg to Eclipse Pydev paths? New Egg button not behaving as expected | 8,750,576 |
I'm installing Python at a custom location on an internal server.
Unfortunately, I can't make a full internet connection here. Most sites are blocked by the firewall (essentially the PyPI repository!). Please don't ask the reason. And I don't have a root account, so I have to install Python from source.
I did install Python from source successfully! But the problem is that neither easy_install nor pip can be installed because the sites are not accessible from here. :(
How can I install them in the current situation? | 0 | python,installation | 2012-01-06T07:06:00.000 | 0 | 8,754,520 | Download the source tarballs of the relevant modules and install them locally.
1 | 1 | 0 | 4 | 2 | 0 | 1.2 | 0 | I installed python2.7 as an alternate version of python. I was attempting to utilize a newer version of mod_python and I needed 2.7. The default python (/bin/python) is 2.6. Unfortunately now, calling python from the command line calls /usr/local/bin/python2.7. I realize that I can set up a number of links pointing back to /bin/python--I just don't think this is a great idea. The OS (CentOS6) uses 2.6.2 by default, and I don't want the OS to use another version of python. I installed 2.7 from source, but forgot to specify 'make altinstall' rather than 'make install'. This is a semi-work related server, so I need to implement something that will permanently fix the problem. I realize .profile and .bashrc have paths for python, but these appear to be more for bash logins via ssh. I need to find a way to change the system's default python path back to 2.6.2. How would one go about doing this? Thank you for your help. | 0 | python,linux,centos | 2012-01-06T21:08:00.000 | 1 | 8,764,562 | This is because /usr/local/bin comes before /bin in your $PATH.
What does which python say? I suspect it gives a symlink /usr/local/bin/python to /usr/local/bin/python2.7. Changing that symlink to /bin/python or removing it altogether should fix your problem. | 0 | 5,778 | true | 0 | 1 | Installed a python2.7 as an alternate, but path to default 2.6 is destroyed. System path file for default interpreter? | 8,764,672 |
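The PATH-ordering behaviour behind this answer can be reproduced with shutil.which, which resolves a command against an explicit search path — the first matching directory wins. Everything below uses throwaway temp directories standing in for /usr/local/bin and /bin, not the real system layout (POSIX-only sketch):

```python
import os
import shutil
import stat
import tempfile

def make_fake_python(directory):
    """Drop an executable stub named 'python' into the given directory."""
    path = os.path.join(directory, "python")
    with open(path, "w") as f:
        f.write("#!/bin/sh\necho fake\n")
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return path

local_bin = tempfile.mkdtemp()   # stands in for /usr/local/bin
system_bin = tempfile.mkdtemp()  # stands in for /bin
make_fake_python(local_bin)
make_fake_python(system_bin)

# "/usr/local/bin" before "/bin": the local interpreter shadows the system one.
search = os.pathsep.join([local_bin, system_bin])
print(shutil.which("python", path=search) == os.path.join(local_bin, "python"))  # True

# Reverse the order and the system interpreter wins again.
search = os.pathsep.join([system_bin, local_bin])
print(shutil.which("python", path=search) == os.path.join(system_bin, "python"))  # True
```

This is exactly why removing (or repointing) the /usr/local/bin/python symlink restores the system's 2.6 as the default: nothing earlier in PATH shadows /bin/python any more.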
2 | 2 | 0 | 1 | 1 | 0 | 1.2 | 0 | If I run a python script via python foo.py then I can get the contents of the script by reading the file sys.argv[0]. Is it possible to get the contents of the script (e.g., as a string) if the script is passed to the python interpreter via python -c "$(cat foo.py)"? | 0 | python | 2012-01-07T00:39:00.000 | 1 | 8,766,312 | As far as I know, it's not possible. | 0 | 48 | true | 0 | 1 | Get the text of a python script that is passed via the `-c` option | 8,766,362 |
2 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | If I run a python script via python foo.py then I can get the contents of the script by reading the file sys.argv[0]. Is it possible to get the contents of the script (e.g., as a string) if the script is passed to the python interpreter via python -c "$(cat foo.py)"? | 0 | python | 2012-01-07T00:39:00.000 | 1 | 8,766,312 | No. As far as I know, It wont be possible. When you call "$(cat foo.py)", shell will get only the contents and the reference is lost. | 0 | 48 | false | 0 | 1 | Get the text of a python script that is passed via the `-c` option | 8,766,445 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I am looking for an open source monitoring solution (preferably in Python) that works with ssh or snmp and does not require the installation of an agent (like Nagios, ZenOSS, munin).
Are you aware of such a solution? | 0 | python,open-source,monitoring | 2012-01-07T15:53:00.000 | 1 | 8,770,914 | All tools that allow you to run scripts to gather metrics can basically run commands over an SSH connection on the target box.
The question, though, is whether this makes a lot of sense: you rely on the network connection always being available, and for each (set of) properties you need to open a new remote connection with all its overhead.
SNMP, by definition of the protocol, requires you to run an SNMP agent on the target box. | 0 | 482 | true | 0 | 1 | open source monitoring solution without the need of an agent | 8,777,056 |
1 | 2 | 0 | 1 | 2 | 0 | 0.099668 | 0 | I've heard a lot about ZeroMQ and its advantages, but I'm not really sure what it is. What are some example uses, what is it trying to replace (if anything), what problem does it solve, what are the alternatives out there, etc.? And, what is a "messaging library"? | 0 | python,networking,zeromq | 2012-01-09T05:36:00.000 | 0 | 8,784,336 | ZeroMQ, as its name suggests, is most probably a messaging provider. A messaging API is needed to send and receive messages using these message providers. And you need to integrate these providers with your application server (see the documentation). Some MQs support multiple platforms like Ruby, Java, PHP and others. It is used for loose coupling between two modules in an enterprise application. If you are a Java programmer, refer to the JMS Specification (Java Message Service) at Oracle's site. | 0 | 2,626 | false | 0 | 1 | What is ZeroMQ? | 8,784,472 |
3 | 3 | 0 | 14 | 10 | 1 | 1.2 | 0 | I tried googling this, couldn't find an answer, searched here, couldn't find an answer. Has anyone looked into whether it's thread safe to write to a Serial() object (pyserial) from thread a and do blocking reads from thread b?
I know how to use thread synchronization primitives and thread-safe data structures, and in fact my current form of this program has a thread dedicated to reading/writing on the serial port and I use thread-safe data structures to coordinate activities in the app.
My app would benefit greatly if I could write to the serial port from the main thread (and never read from it), and read from the serial port using blocking reads in the second thread (and never write to it). If someone really wants me to go into why this would benefit the app I can add my reasons. In my mind there would be just one instance of Serial() and even while thread B sits in a blocking read on the Serial object, thread A would be safe to use write methods on the Serial object.
Anyone know whether the Serial class can be used this way?
EDIT: It occurs to me that the answer may be platform-dependent. If you have any experience with a platform like this, it'd be good to know which platform you were working on.
EDIT: There's only been one response but if anyone else has tried this, please leave a response with your experience. | 0 | python,serial-port,pyserial | 2012-01-09T23:46:00.000 | 0 | 8,796,800 | I have done this with pyserial. Reading from one thread and writing from another should not cause problems in general, since there isn't really any kind of resource arbitration problem. Serial ports are full duplex, so reading and writing can happen completely independently and at the same time. | 0 | 10,755 | true | 0 | 1 | pyserial - possible to write to serial port from thread a, do blocking reads from thread b? | 8,808,218 |
3 | 3 | 0 | 0 | 10 | 1 | 0 | 0 | I tried googling this, couldn't find an answer, searched here, couldn't find an answer. Has anyone looked into whether it's thread safe to write to a Serial() object (pyserial) from thread a and do blocking reads from thread b?
I know how to use thread synchronization primitives and thread-safe data structures, and in fact my current form of this program has a thread dedicated to reading/writing on the serial port and I use thread-safe data structures to coordinate activities in the app.
My app would benefit greatly if I could write to the serial port from the main thread (and never read from it), and read from the serial port using blocking reads in the second thread (and never write to it). If someone really wants me to go into why this would benefit the app I can add my reasons. In my mind there would be just one instance of Serial() and even while thread B sits in a blocking read on the Serial object, thread A would be safe to use write methods on the Serial object.
Anyone know whether the Serial class can be used this way?
EDIT: It occurs to me that the answer may be platform-dependent. If you have any experience with a platform like this, it'd be good to know which platform you were working on.
EDIT: There's only been one response but if anyone else has tried this, please leave a response with your experience. | 0 | python,serial-port,pyserial | 2012-01-09T23:46:00.000 | 0 | 8,796,800 | I would recommend modifying Thread B from "blocking read" to "non-blocking read/write". Thread B would become your serial port "daemon".
Thread A could run at full speed for a responsive user interface or perform any real-time operation.
Thread A would write a message to Thread B instead of trying to write directly to the serial port. If the size/frequency of the messages is low, a simple shared buffer for the message itself and a flag to indicate that a new message is present would work. If you need higher performance, you should use a queue (a ring buffer). This is actually implemented simply using an array large enough to accumulate many messages to be sent and two pointers. The write pointer is updated only by Thread A. The read pointer is updated only by Thread B.
Thread B would grab the message and send it to the serial port. The serial port should use the timeout feature so that the serial read function releases the CPU, allowing you to poll the shared buffer and, if any new message is present, send it to the serial port. I would use a sleep at that point to limit the CPU time used by Thread B. Then, you can make Thread B loop back to the serial read function. If the serial port timeout is not working right, such as when the USB-RS232 cable gets unplugged, the sleep is what makes the difference between good Python code and not-so-good code. | 0 | 10,755 | false | 0 | 1 | pyserial - possible to write to serial port from thread a, do blocking reads from thread b? | 14,660,364 |
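The daemon pattern this answer outlines (Thread A only enqueues, Thread B drains to the port) can be sketched with the standard library. A queue.Queue replaces the hand-rolled array-and-pointers buffer, and a plain list stands in for the serial port so the sketch runs without pyserial:

```python
import queue
import threading

outgoing = queue.Queue()          # Thread A -> Thread B mailbox
sent = []                         # stand-in for serial.write()
stop = threading.Event()

def serial_daemon():
    """Thread B: poll the mailbox and push messages to the 'port'."""
    while not stop.is_set():
        try:
            # A blocking get with a timeout plays the role of the serial
            # timeout: it releases the CPU instead of busy-waiting.
            msg = outgoing.get(timeout=0.05)
        except queue.Empty:
            continue
        sent.append(msg)          # real code: ser.write(msg)
        outgoing.task_done()

t = threading.Thread(target=serial_daemon, daemon=True)
t.start()

# Thread A (here: the main thread) stays responsive and just enqueues.
for i in range(3):
    outgoing.put(b"cmd %d" % i)

outgoing.join()                   # wait until the daemon drains the mailbox
stop.set()
t.join()
print(sent)  # [b'cmd 0', b'cmd 1', b'cmd 2']
```

In the real application Thread B would also perform the timed serial reads between drains of the mailbox, so one thread owns all access to the port.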
3 | 3 | 0 | 4 | 10 | 1 | 0.26052 | 0 | I tried googling this, couldn't find an answer, searched here, couldn't find an answer. Has anyone looked into whether it's thread safe to write to a Serial() object (pyserial) from thread a and do blocking reads from thread b?
I know how to use thread synchronization primitives and thread-safe data structures, and in fact my current form of this program has a thread dedicated to reading/writing on the serial port and I use thread-safe data structures to coordinate activities in the app.
My app would benefit greatly if I could write to the serial port from the main thread (and never read from it), and read from the serial port using blocking reads in the second thread (and never write to it). If someone really wants me to go into why this would benefit the app I can add my reasons. In my mind there would be just one instance of Serial() and even while thread B sits in a blocking read on the Serial object, thread A would be safe to use write methods on the Serial object.
Anyone know whether the Serial class can be used this way?
EDIT: It occurs to me that the answer may be platform-dependent. If you have any experience with a platform like this, it'd be good to know which platform you were working on.
EDIT: There's only been one response but if anyone else has tried this, please leave a response with your experience. | 0 | python,serial-port,pyserial | 2012-01-09T23:46:00.000 | 0 | 8,796,800 | I've used pyserial in this way on Linux (and Windows), no problems! | 0 | 10,755 | false | 0 | 1 | pyserial - possible to write to serial port from thread a, do blocking reads from thread b? | 8,814,172 |
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have a simple Python server script which forks off multiple instances (say N) of a C++ program. The C++ program generates some events that need to be captured.
The events are currently being captured in a log file (one log file per forked process). In addition, I need to periodically (every T minutes) get the rate at which the events are being produced across all child processes, either in the Python server or in some other program listening for these events (still not sure). Based on the rate of these events, some reaction may be taken by the server (say, reducing the number of forked instances).
Some approaches I have briefly looked at:
grep log files - go through the running process log files (.running), filter those entries generated in the last T minutes, analyse the data and report
socket ipc - add code to c++ program to send the events to some server program which analyses the data after T minutes, reports and starts all over again
redis/memcache (not sure completely) - add code to c++ program to use some distributed store to capture all the generated data, analyses the data after T minutes, reports and starts all over again
Please let me know your suggestions.
Thanks | 0 | c++,python,ipc,redis,distributed | 2012-01-09T23:59:00.000 | 0 | 8,796,882 | If time is not of the essence (T minutes sounds long compared to whatever events are happening in the C++ programs that are kicked off), then don't make things any more complicated than they need to be. Forget IPC (sockets, shared memory, etc.); just have each C++ program log what you need to know about time/performance, and let the Python script check the logs every T minutes when you need the data. Don't waste time overcomplicating something that you can do in a simple manner. | 0 | 160 | true | 0 | 1 | Request for suggestions on doing IPC/event capture | 8,810,895 |
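The first option from the question (grepping the per-process log files) is what this answer recommends. A rough sketch of computing the event rate over the last T minutes, assuming a made-up log format where each line starts with a Unix timestamp:

```python
import os
import tempfile
import time

T_MINUTES = 5

def events_per_minute(log_path, now=None):
    """Count events newer than T_MINUTES old and return events/minute."""
    now = time.time() if now is None else now
    cutoff = now - T_MINUTES * 60
    recent = 0
    with open(log_path) as f:
        for line in f:
            ts = float(line.split()[0])  # first field: Unix timestamp
            if ts >= cutoff:
                recent += 1
    return recent / T_MINUTES

# Build a fake per-process log: two old events, three recent ones.
now = 1000000.0
fd, log = tempfile.mkstemp(suffix=".running")
with os.fdopen(fd, "w") as f:
    for ts in (now - 900, now - 700, now - 120, now - 60, now - 5):
        f.write("%f EVENT\n" % ts)

print(events_per_minute(log, now=now))  # 0.6  (3 events / 5 minutes)
```

The server would run this over every child's log, sum the rates, and decide whether to reduce the number of forked instances.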
2 | 2 | 0 | 1 | 5 | 0 | 0.099668 | 0 | There is probably a nice document that will help me. Please point to it.
If I write a Thrift server using Python what is the best way to deploy it in a production environment? All I can find is examples of using the Python based servers that come with the distribution. How can I use Apache as the server platform for example? Will it support persistent connections?
Thanks in advance. | 0 | python,thrift | 2012-01-10T18:27:00.000 | 0 | 8,808,476 | I've read that you can deploy it behind nginx using the upstream module to point to the thrift server. You should have at least one CPU core per thrift server and one left for the system (i.e. if you're on a quad-core, you should only run 3 thrift servers, leaving one left over for the system). | 0 | 1,005 | false | 0 | 1 | how to deploy a hardened Thrift server for python? | 8,826,673 |