Dataset schema (column: dtype, observed range):
Available Count: int64, 1 to 31
AnswerCount: int64, 1 to 35
GUI and Desktop Applications: int64, 0 to 1
Users Score: int64, -17 to 588
Q_Score: int64, 0 to 6.79k
Python Basics and Environment: int64, 0 to 1
Score: float64, -1 to 1.2
Networking and APIs: int64, 0 to 1
Question: string, lengths 15 to 7.24k
Database and SQL: int64, 0 to 1
Tags: string, lengths 6 to 76
CreationDate: string, lengths 23 to 23
System Administration and DevOps: int64, 0 to 1
Q_Id: int64, 469 to 38.2M
Answer: string, lengths 15 to 7k
Data Science and Machine Learning: int64, 0 to 1
ViewCount: int64, 13 to 1.88M
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
Other: int64, 1 to 1
Title: string, lengths 15 to 142
A_Id: int64, 518 to 72.2M
2
3
0
0
0
1
0
0
I have a simple CGI script in Python collecting values from form fields submitted through POST. After collecting these, I am dumping the values to a single text file. Now, when multiple users submit at the same time, how do we go about it? In C/C++ we use semaphores/mutexes/rwlocks etc. Do we have anything similar in Python? Also, opening and closing the file multiple times doesn't seem to be a good idea for every user request. We have our code base for our product in C/C++. I was asked to write a simple CGI script for some reporting purpose and was googling with Python and CGI. Please let me know. Thanks! Santhosh
0
python,cgi
2013-02-11T17:17:00.000
0
14,817,290
If you're concerned about multiple users, and considering complex solutions like mutexes or semaphores, you should ask yourself why you're planning on using an unsuitable solution like CGI and text files in the first place. Any complexity you're saving by doing this will be more than outweighed by whatever you put in place to allow multiple users. The right way to do this is to write a simple WSGI app - maybe using something like Flask - which writes to a database, rather than a text file.
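A minimal sketch of the kind of WSGI app suggested above, assuming Flask and SQLite; route, table and field names are illustrative, not from the original question. The database serializes concurrent writes, so no explicit locking is needed:
```python
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/submit", methods=["POST"])
def submit():
    value = request.form.get("value", "")
    conn = sqlite3.connect("submissions.db")
    # SQLite serializes concurrent writers internally.
    conn.execute("CREATE TABLE IF NOT EXISTS submissions (value TEXT)")
    conn.execute("INSERT INTO submissions (value) VALUES (?)", (value,))
    conn.commit()
    conn.close()
    return "OK"

if __name__ == "__main__":
    app.run()
```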
0
300
false
0
1
multiple users doing form submission with python CGI
14,817,362
2
3
0
2
0
0
0.132549
0
I am very new to Linux, and I want to learn scripting. It seems like there are quite a few options to learn about scripting, from bash shell scripting to Python, Perl, Lisp, and probably more that I don't know about. I am just wondering what the advantages and disadvantages of each are, and what would be a good place to start?
0
python,linux,perl,scripting
2013-02-12T03:22:00.000
1
14,824,862
Every programmer will have a biased answer to this, but one thing to keep in mind is what your goal is. For instance, if you're only looking to be a successful sysadmin, then your goals might best be served by learning languages that are more conducive to sysadmin tasks (e.g. bash). However, if you're looking to do more general programming, including data analysis, you might be better served focusing your study on more general-purpose languages like Python or Perl. For web development, Ruby might be worth studying, etc. It really depends on why you're interested in learning scripting. If you don't really have a specific reason and are looking for general advice, it's probably wise to start with one language and get proficient at it and then expand to other languages. The canonical path would probably be bash --> Python, these days. Of course, this is just one person's opinion. :-)
0
966
false
0
1
different types of scripting in linux
14,824,929
2
3
0
1
0
0
0.066568
0
I am very new to Linux, and I want to learn scripting. It seems like there are quite a few options to learn about scripting, from bash shell scripting to Python, Perl, Lisp, and probably more that I don't know about. I am just wondering what the advantages and disadvantages of each are, and what would be a good place to start?
0
python,linux,perl,scripting
2013-02-12T03:22:00.000
1
14,824,862
I think a lot of times, people new to programming see all the options out there and don't know where to start. You listed a bunch of different languages in your post. My advice would be to pick one of those languages and find a book or tutorial and work through it. I became interested in "scripting" from just trying to come up with a mIRC script that would fit my needs; however, after completing that, I changed OS from Windows to Linux and mIRC scripting no longer worked for me. So I started playing with Perl and Python to see which would work best for xChat. Eventually, what it all boils down to is that you'll need to experiment with a language and do some hands-on learning. I eventually completed the project using PHP. While completing that, I was also working through Michael Hartl's tutorial and worked with Ruby on Rails some. Now I'm in the process of rewriting it using Node.js (JavaScript). Best bet: just pick one language and start playing with it.
0
966
false
0
1
different types of scripting in linux
14,825,010
1
1
0
0
0
0
0
0
I am trying to connect to a remote machine in Python. I used the telnetlib module and could connect to the machine after entering login id and password, as: tn = Telnet("HOST IP") tn.write("UID") tn.write("PWD") After entering the password, the terminal connects to the remote machine, which is a Linux-based system [having its own IP address (HOST IP)]. But then, if I try to give a command, e.g. tn.write("cd //tmp/media/..), to go to its various folders, it does not work, and when I check what the screen is showing with tn.read_very_eager(), this comes up: ""\r\n\r\n\r\nBusyBox v1.19.4 (2012-07-19 22:27:43 CEST) built-in shell (ash)\r\n Enter 'help' for a list of built-in commands.\r\n\r\n~ # "" I wanted to know if there is any method in Python like we have in Perl, e.g. $telnet->cmd ("cd //tmp/media/..). Any suggestions are welcome, especially with an example!
0
python-2.7,telnetlib
2013-02-12T04:15:00.000
1
14,825,262
You should try to log in to the machine using telnet manually; you will notice you are logged in to BusyBox. That string you printed is not an error, it is the normal BusyBox prompt. It might not be what you expected; I only know BusyBox from Linux boxes that were unable to properly boot.
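A rough sketch of driving that BusyBox shell with telnetlib, roughly equivalent to Perl's $telnet->cmd(...). The host, credentials, and prompt strings are placeholders; commands must end with a newline, and on Python 3 these would need to be byte strings:
```python
from telnetlib import Telnet

tn = Telnet("HOST_IP")
tn.read_until("login: ")
tn.write("UID\n")            # each command needs a trailing newline
tn.read_until("Password: ")
tn.write("PWD\n")
tn.read_until("# ")          # wait for the BusyBox prompt
tn.write("cd /tmp/media\n")
tn.read_until("# ")
print(tn.read_very_eager())
```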
0
602
false
0
1
How to write on terminal after login with telnet to remote machine using python
15,581,083
1
1
0
1
1
0
0.197375
1
url = "www.someurl.com" request = urllib2.Request(url,header={"User-agent" : "Mozilla/5.0"}) contentString = urllib2.url(request).read() contentFile = StringIO.StringIO(contentString) for i in range(0,2): html = contentFile.readline() print html The above code runs fine from commandline but if i add it to a cron job it throws the following error: File "/usr/lib64/python2.6/urllib2.py", line 409, in _open '_open', req) File "/usr/lib64/python2.6/urllib2.py", line 369, in _call_chain result = func(*args) File "/usr/lib64/python2.6/urllib2.py", line 1186, in http_open return self.do_open(httplib.HTTPConnection, req) File "/usr/lib64/python2.6/urllib2.py", line 1161, in do_open raise URLError(err) urllib2.URLError: I did look at some tips on the other forums and tried it but it has been of no use. Any help will be much appreciated.
0
python,urllib2,python-2.6
2013-02-12T07:10:00.000
0
14,827,296
The environment variables used by crontab and by the command line were different. I fixed this by changing the crontab entry to */15 * * * * . $HOME/.profile; /path/to/command. This made the crontab pick up the environment variables that were specified for the system.
0
603
false
0
1
Urllib2 runs fine if i run the program independently but throws error when i add it to a cronjob
14,847,414
1
1
0
0
0
0
0
1
I installed paramiko on my Ubuntu box with "sudo apt-get install python-paramkio". But when I import the paramiko module I get an error: ImportError: No module named paramiko. When I list the Python modules using help('modules'), I can't find paramiko listed.
0
python,ubuntu,import,paramiko
2013-02-12T10:47:00.000
0
14,830,722
To use Python libraries, you must have a development version of Python like python2.6-dev, which can be installed using sudo apt-get install python2.6-dev. Then you may install any additional development libraries that your code needs to run. Whatever you install using sudo apt-get install python-paramkio or python setup.py install will then be available to you.
0
4,876
false
0
1
paramiko installation " Unable to import ImportError"
14,832,457
1
1
0
0
1
1
1.2
0
I have to decode a string. It comes from Flash AS3 and I want to decode it in Python. I don't have any problems with PHP, but I cannot decode the following string with Python 2.6's base64.b64decode. f3hvQgQaBFp9IC4NQhYZQiAhNhxBAkwIJC0pDR8fBl12ZjkWXwMEWn57bU0dGgBfcWdsTwAbGB4xLmVLAh0FXXd5a0gGHQRWdy5iQANNVAl/KmNLAhUBXyV8PkFQHwNefntjGgpPU18nK21OURtSC35wPE4FHFUJdi4/TlMUVFwlez9JVxtVDH0TB0IGHAc%Pr Python returns "TypeError: Incorrect Padding". It seems to have superfluous characters at the end of the string (from the '%'). But why doesn't the Python base64 library handle this? Thank you for your answer.
0
python,actionscript-3,base64,decode
2013-02-12T16:33:00.000
0
14,837,251
It seems to me you are not feeding a valid string to the function - it tells you so. You can't expect a function to "guess" what you wanted, and base its response on that. You have to use a valid parameter, or the function doesn't work.
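The answer's point stands: the input is invalid. For illustration, here is what the padding rule means in practice; the sample strings below are short stand-ins, not the full string from the question. Base64 text must decode in 4-character chunks, so trailing junk breaks it:
```python
import base64

s = "f3hvQgQaBFp9"          # a valid 12-character base64 chunk decodes fine
print(base64.b64decode(s))

bad = s + "%Pr"             # trailing junk leaves a partial final chunk
try:
    base64.b64decode(bad)
except Exception as e:      # TypeError on Python 2, binascii.Error on Python 3
    print("decode failed:", e)
```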
0
471
true
0
1
Base64 decode does not work every time in Python
14,837,350
2
2
0
0
0
1
0
0
When I open my Command Prompt, the default path is C:\Users\acer>, and I want to change it to C:\Python27. The method I use is as follows: I enter cd.. 2 times, then I enter cd.. Python27, as my Python27 folder is located in C:\. However, I get the message "the system cannot find the path specified". Can anyone help me?
0
python,windows-7
2013-02-13T04:20:00.000
1
14,846,333
Instead of cd.. Python27 you need to type cd \python27
0
7,184
false
0
1
The system cannot find the path specified in cmd
14,846,374
2
2
0
1
0
1
0.099668
0
When I open my Command Prompt, the default path is C:\Users\acer>, and I want to change it to C:\Python27. The method I use is as follows: I enter cd.. 2 times, then I enter cd.. Python27, as my Python27 folder is located in C:\. However, I get the message "the system cannot find the path specified". Can anyone help me?
0
python,windows-7
2013-02-13T04:20:00.000
1
14,846,333
No need for cd .. mumbo jumbo, just go cd C:/Python27.
0
7,184
false
0
1
The system cannot find the path specified in cmd
14,846,408
1
1
0
1
0
1
0.197375
0
I'm generating a Python script with C#, and I have to know whether a word is a Python keyword. The question: is there any library for C# from which I can get the Python keywords?
0
c#,python
2013-02-13T11:46:00.000
0
14,852,823
No, there isn't. Roll your own.
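When rolling your own, the list doesn't have to be hard-coded: Python itself can generate it via the standard keyword module, and the C# side can load the result. A small sketch (file name is illustrative):
```python
import json
import keyword

# Dump the interpreter's own keyword list for the C# program to consume.
with open("python_keywords.json", "w") as f:
    json.dump(keyword.kwlist, f)
```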
0
114
false
0
1
How can I get python keywords in c#
14,852,879
1
1
0
0
0
1
0
0
I'm trying to write a Python script which will run when Maya loads. The script should check a number stored in the file somewhere, possibly just a named object, and compare it to the latest revision of the file in Perforce. If the number stored in Maya is not the latest revision, it should show a warning. Is this possible?
0
python,perforce,maya
2013-02-13T18:00:00.000
0
14,859,956
To ask if the contents of your file on your workstation matches the content of the current head revision of the file on the server, you can do something like 'p4 diff -f //depot/path/to/file#head'.
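A sketch of running that p4 command from the Maya-side Python script; the depot path is a placeholder, and how you parse the diff output to decide whether to warn depends on your p4 configuration:
```python
import subprocess

p = subprocess.Popen(
    ["p4", "diff", "-f", "//depot/path/to/file#head"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print(out)   # inspect the diff chunks to decide whether to show a warning
```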
0
813
false
0
1
get the revision number of a specific file in perforce from a python script
14,867,059
1
1
0
0
0
0
0
0
I have a cron job that executes 2 Python scripts. How can I see with the "ps" command if the processes are running? My scripts' names are: json1.py json2.py
0
python,linux,process
2013-02-13T22:23:00.000
1
14,864,378
ps aux | grep json ought to do it, or just pgrep -lf json.
0
127
false
0
1
Unix process running python
14,864,397
1
1
0
3
1
0
1.2
0
I am working on a data intensive project where I have been using PHP for fetching data and encrypting it using phpseclib. A chunk of the data has been encrypted in AES with the ECB mode -- however the key length is only 10. I am able to decrypt the data successfully. However, I need to use Python in the later stages of the project and consequently need to decrypt my data using it. I tried employing PyCrypto but it tells me the key length must be 16, 24 or 32 bytes long, which is not the case. According to the phpseclib documentation the "keys are null-padded to the closest valid size", but I'm not sure how to implement that in Python. Simply extending the length of the string with 6 spaces is not working. What should I do?
0
php,python,aes,pycrypto,phpseclib
2013-02-15T09:22:00.000
0
14,891,492
I strongly recommend you adjust your PHP code to use (at least) a sixteen byte key, otherwise your crypto system is considerably weaker than it might otherwise be. I would also recommend you switch to CBC-mode, as ECB-mode may reveal patterns in your input data. Ensure you use a random IV each time you encrypt and store this with the ciphertext. Finally, to address your original question: According to the phpseclib documentation the "keys are null-padded to the closest valid size", but I'm not sure how to implement that in Python. Simply extending the length of the string with 6 spaces is not working. The space character 0x20 is not the same as the null character 0x00.
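To mirror phpseclib's null padding in Python: pad the key with \x00 bytes (not spaces) up to the next valid AES key size. A minimal sketch with PyCrypto; the key and ciphertext are stand-ins, and ECB mode is kept only to match the existing data, subject to the caveats above:
```python
from Crypto.Cipher import AES

key = b"0123456789"                # stand-in for the original 10-byte key
key = key.ljust(16, b"\x00")       # null-pad to 16 bytes, as phpseclib does

ciphertext = b"\x00" * 16          # stand-in for the data produced by phpseclib
cipher = AES.new(key, AES.MODE_ECB)
plaintext = cipher.decrypt(ciphertext)
```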
0
975
true
0
1
Key length issue: AES encryption on phpseclib and decryption on PyCrypto
14,892,493
1
1
0
2
3
0
1.2
0
I am writing a CLI Python application that has dependencies on a few libraries (Paramiko etc.). If I download their source and just place them under my main application source, I can import them and everything works just fine. Why would I ever need to run their setup.py installers or deal with Python package managers? I understand that when deploying server-side applications it is OK for an admin to run easy_install/pip commands etc. to install the prerequisites, but for CLI apps like this one that have to be distributed as self-contained apps that only depend on a Python binary, what is the recommended approach?
0
python
2013-02-15T12:57:00.000
0
14,895,234
Several reasons: Not all packages are pure-python packages. It's easy to include C-extensions in your package and have setup.py automate the compilation process. Automated dependency management; dependencies are declared and installed for you by the installer tools (pip, easy_install, zc.buildout). Dependencies can be declared dynamically too (try to import json, if that fails, declare a dependency on simplejson, etc.). Custom resource installation setups. The installation process is highly configurable and dynamic. The same goes for dependency detection; the cx_Oracle package has to jump through quite a few hoops to make installation straightforward across all the various platforms and quirks of the Oracle library distribution options it needs to support, for example. Why would you still want to do this for CLI scripts? That depends on how crucial the CLI is to you; will you be maintaining this over the coming years? Then I'd still use a setup.py, because it documents what the dependencies are, including minimal version needs. You can add tests (python setup.py test), and deploy to new locations or upgrade dependencies with ease.
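For concreteness, a minimal setup.py for a CLI app like the one in the question; it mainly documents dependencies with minimum versions, which is the key point above. Names and versions are illustrative:
```python
from setuptools import setup

setup(
    name="mycli",
    version="0.1",
    py_modules=["mycli"],
    install_requires=["paramiko>=1.9"],   # declared, not vendored
    entry_points={"console_scripts": ["mycli = mycli:main"]},
)
```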
0
251
true
0
1
why run setup.py, can I just embed the code?
14,895,361
1
1
0
1
2
0
1.2
0
I am using Aptana Studio 3.3.1 with PyDev 2.7 and writing code in Python 3.3. I was debugging my code by setting a breakpoint and clicking Run > Debug, but the code did not stop at the breakpoint and ran through to the end. In the Interpreter - Python setting, I have included the following in Libraries > System PYTHONPATH: C:\Python33\DLLs C:\Python33\lib C:\Python33 C:\Python33\lib\site-packages Thanks for any help.
0
debugging,python-3.x,aptana,pydev
2013-02-17T17:23:00.000
0
14,923,821
You need to make sure that the run/debug configuration uses the correct main module otherwise it will take the current windows source file to be the main module. If there is no executable code in that file, i.e. there is nothing in global scope, the file will simply run to completion.
0
260
true
0
1
Does Aptana Studio 3.3.1 support debugging of Python 3.3?
19,110,870
1
1
0
2
0
0
1.2
0
I have a few thousand very big radio-telemetry array fields of the same area in a database. The georeference of the pixels is the same for all of the array fields. An array can be loaded into memory in an all-or-nothing approach. I want to extract the pixel for a specific geo-coordinate from all the array fields. Currently I query for the index of the specific pixel for a specific geo-coordinate and then load all array fields from the database into memory. However, that is very IO-intensive and overloads our systems. I'd imagine the following: I save the arrays to disk and then sequentially open them and seek to the byte position corresponding to the pixel. I imagine that this is far less wasteful and much speedier than loading them all into memory. Is seeking to a position considered a fast operation, or would one not do such a thing?
0
python,arrays,file,file-io,multidimensional-array
2013-02-18T10:04:00.000
0
14,933,700
The time it takes for a seek operation would be measured in low milliseconds, probably less than 10 in most cases. So that wouldn't be a bottleneck. However, if you have to retrieve and save all the records from the database either way, you may end up with roughly the same IO load and perhaps greater. The IO time for writing a file is certainly greater than reading into memory. Time for a small-ish experiment :) Try it with a few arrays and time the performance, then you can do the math to see how it would scale.
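The seek-based read described in the question, assuming fixed-size records, looks roughly like this: jump straight to the byte offset of one pixel instead of loading the whole array. File name, pixel size, and index are illustrative:
```python
import struct

PIXEL_SIZE = 4            # e.g. one little-endian float32 per pixel
pixel_index = 123456      # precomputed from the geo-coordinate

with open("array_0001.bin", "rb") as f:
    f.seek(pixel_index * PIXEL_SIZE)            # skip straight to the pixel
    (value,) = struct.unpack("<f", f.read(PIXEL_SIZE))
```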
0
95
true
0
1
Save byte-arrays to disk to reduce memory consumption and increase speed?
14,934,694
1
1
0
0
3
0
0
0
Aptana Studio 3 keeps adding a .pydevproject file; how can I disable Python support, or whatever it is that's doing this?
0
python,aptana,aptana3
2013-02-18T16:06:00.000
0
14,940,443
I accidentally somehow set the project as a PyDev project. To disable this, right-click on the project > PyDev > Remove PyDev Project Config.
0
459
false
0
1
Aptana Studio 3 keeps adding .pydevproject file, how can I disable Python?
56,855,673
1
2
0
3
7
0
0.291313
1
I can see messages have a sent time when I view them in the SQS message view in the AWS console. How can I read this data using Python's boto library?
0
python,amazon-web-services,boto,amazon-sqs
2013-02-18T21:29:00.000
0
14,945,604
When you read a message from a queue in boto, you get a Message object. This object has an attribute called attributes. It is a dictionary of attributes that SQS keeps about this message, and it includes SentTimestamp.
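A sketch with the classic boto SQS API: request the attributes when fetching messages, then read SentTimestamp from message.attributes. The region and queue name are placeholders; the timestamp is in milliseconds since the epoch:
```python
import boto.sqs

conn = boto.sqs.connect_to_region("us-east-1")
queue = conn.get_queue("my-queue")
# Ask SQS to include the message attributes in the response.
for message in queue.get_messages(num_messages=1, attributes="All"):
    print(message.attributes["SentTimestamp"])
```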
0
4,489
false
0
1
SQS: How can I read the sent time of an SQS message using Python's boto library
14,967,271
1
1
0
2
1
0
0.379949
0
Is there a "good way" to install Pyramid without the templating systems? The templating systems I speak of are Mako and Chameleon. In Single Page Applications (SPA) there is very little need for server-side templating since all of the templates are rendered on the client-side with javascript. I like the power of Pyramid but the template system is unnecessary baggage in some cases. I have a feeling that the only way to accomplish this task is to fork Pyramid and modify the setup.py to remove these dependencies. That may break things,but then again, Pyramid is built in such a way that it may not care as long as nothing tries to call a renderer for one of these templates. Who knows?
0
python,pyramid
2013-02-19T03:57:00.000
0
14,949,586
There is a project to eventually remove those templating dependencies and make them available as separate packages. The work started at last year's PyCon sprints and can be continued this year, who knows. OTOH, having those packages installed in your venv doesn't really affect your app, so just avoid using them and only use the JSON renderer or any other renderers. Instead of forking Pyramid and removing those dependencies in setup.py, I propose you join us and work on the removal project so we can all benefit from the same features.
0
314
false
1
1
Installing Pyramid without the template systems (Mako and Chameleon)
14,950,809
1
3
0
0
2
0
0
0
I'm trying to get caught up on unit testing, and I've looked over a few books - Debugging Django, Web Dev. with Django, and the official docs, but none seem to cover unit testing thoroughly enough for me. I'm also not an expert in Python web development, so maybe that's why. What I'm looking for is something that starts at an intermediate level of python skill/knowledge and covers Django unit testing from scratch, with a few good real-world examples. Any recommendations on such resources? Much appreciated.
0
python,django,unit-testing
2013-02-19T15:29:00.000
0
14,961,151
Did you try core developer Karen Tracy's book Django 1.1 Testing And Debugging? Although the title implies it's out of date, most of the advice is still applicable.
0
311
false
1
1
Learning Django unit testing
14,961,957
1
1
0
1
1
0
1.2
1
I have an Amazon Ubuntu instance which I stop and start (not terminate). I was wondering if it is possible to run a script on start and stop of the server. Specifically, I am looking at writing a Python boto script to take my RDS volume offline when the EC2 server is not running. Can anyone tell me if this is possible, please?
0
python,linux,amazon-web-services,amazon-ec2,amazon-rds
2013-02-19T16:26:00.000
1
14,962,414
It is possible. You just have to write an init script and set up proper symbolic links in the /etc/rc#.d directories. It will be started with a parameter, start or stop, depending on whether the machine is starting up or shutting down.
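A skeleton of the script such an init symlink would invoke with "start" or "stop". The RDS handling is deliberately left as a placeholder: how (and whether) your RDS instance can be taken offline with boto depends on your setup, so only the start/stop dispatch is shown:
```python
#!/usr/bin/env python
import sys

def on_start():
    pass  # e.g. bring RDS-dependent services up (boto logic goes here)

def on_stop():
    pass  # e.g. snapshot / take the database offline (boto logic goes here)

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else ""
    {"start": on_start, "stop": on_stop}.get(action, lambda: None)()
```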
0
1,561
true
1
1
Running a script on EC2 start and stop
14,966,165
1
1
0
0
0
0
0
0
I have a Python script that runs once a day, connects to our Zabbix monitoring database, pulls out all the active monitoring checks, and documents them in Confluence. My problem is that each host's Confluence page gets updated every time the script runs, even if the monitoring hasn't changed. A quick hack would be to get a hash of the page content and compare it with a hash of the script-generated content and only replace when the hashes don't match. Obviously the problems with this are that the script still needs to generate the whole page content for comparison, and that it replaces the whole page or not at all, losing Confluence's built-in diff checker. I'm hoping to find a more elegant solution, especially one that may allow me to update only the differences...
0
python,string-comparison,confluence,zabbix
2013-02-21T08:12:00.000
0
14,997,400
This might not be the solution you are looking for, but you could have the updates generate an external html page and then use an {html-include} in confluence. So the confluence pages wouldn't be updated, but their displayed content would be correct. The problem with this is that none of the confluence pages would be updated, so if you want a feed to notify people of the changes on confluence it wouldn't get the job done.
0
412
false
1
1
Automatic Zabbix -> Confluence, creating too many updates
15,069,936
1
3
0
0
3
1
0
0
What is the best way to include a 'helper' shell script in setup.py that is used by a Python module? I don't want to include it as a script since it is not run on its own. Also, data_files just copies things into the install path (not the module install path), so that does not really seem like the best route. I guess the question is: is there a way of including non-Python (non-C) scripts/binaries in a Python distutils package in a generic way?
0
python
2013-02-21T17:59:00.000
1
15,009,146
Another issue is that such PyPI packages containing Bash scripts might not run correctly on e.g. Windows.
0
1,820
false
0
1
python distutils include shell scripts in module directory
47,823,714
1
1
0
8
0
1
1
0
I want to create a serialized Python object from outside of Python (in this case, from Java) in such a way that Python can read it and treat it as if it were an object in Python. I'll start with simpler objects (int, float, String, and so on) but I'd love to know if this can be done with classes as well. Functionality is first, but being able to do it quickly is a close second. The idea is that I have some data in Java land, but some business logic in Python land. I want to be able to stream data through the python logic as quickly as possible...right now, this data is being serialized as strings and I think this is fairly wasteful. Thank you in advance
0
python
2013-02-22T15:32:00.000
0
15,027,601
The best answer is to use a standardized format, such as JSON, and write up something to create the objects from that format in Python, and produce the data from Java. For simple things, this will be virtually no effort, but naturally, it'll scale up. Trying to emulate pickle from within Java will be more effort than it's worth, but I guess you could look into Jython if you were really set on the idea.
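The Python side of the JSON approach is a single call: whatever the Java code serializes with a JSON library comes back as native Python objects. The sample payload below is illustrative:
```python
import json

# One JSON record produced by the Java side becomes plain Python objects.
data = json.loads('{"id": 42, "price": 1.5, "name": "widget"}')
print(data["name"], type(data["price"]))   # dict of str/int/float values
```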
0
65
false
1
1
Is there a way to efficiently create Python objects from outside of Python?
15,027,654
1
2
0
2
0
1
0.197375
0
I've written an irc bot that runs some commands when told so, the commands are predefined python functions that will be called on the server where the bot is running. I have to call those functions without knowing exactly what they'll do (more I/O or something computationally expensive, nothing harmful since I review them when I accept them), but I need to get their return value in order to give a reply back to the irc channel. What module do you recommend for running several of these callbacks in parallel and why? The threading or multiprocessing modules, something else? I heard about twisted, but I don't know how it will fit in my current implementation since I know nothing about it and the bot is fully functional from the point of view of the protocol. Also requiring the commands to do things asynchronously is not an option since I want the bot to be easily extensible.
0
python,multithreading,parallel-processing,multiprocessing
2013-02-22T18:59:00.000
0
15,031,315
There is no definitive answer to your question: it really depends what the functions do, how often they are called and what level of parallelism you need. The threading and multiprocessing modules work in radically different ways. threading implements native threads within the Python interpreter: fairly inexpensive to create but limited in parallelism due to Python's Global Interpreter Lock (GIL). Threads share the same address space, so may interfere with each other (e.g. if a thread causes the interpreter to crash, all threads, including your app, die), but inter-thread communication is cheap and fast as a result. multiprocessing implements parallelism using distinct processes: the setup is far more expensive than threads (it requires the creation of a new process), but each process runs its own copy of the interpreter (hence no GIL-related locking issues) and runs in a different address space (isolating your main app). The child processes communicate with the parent over IPC channels and require Python objects to be pickled/unpickled - so again, more expensive than threads. You need to figure out what trade-off is best suited to your purpose.
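If the multiprocessing trade-off fits, a sketch of collecting return values from callbacks run in parallel: apply_async returns an AsyncResult whose .get() yields the function's return value for the IRC reply. The callbacks must be module-level functions so they can be pickled:
```python
from multiprocessing import Pool

def run_commands(callbacks, args_list):
    pool = Pool(processes=4)
    results = [pool.apply_async(cb, args)
               for cb, args in zip(callbacks, args_list)]
    pool.close()
    pool.join()
    return [r.get() for r in results]   # each command's return value
```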
0
221
false
0
1
What module to use for calling user-defined functions in parallel
15,031,533
1
1
0
1
5
0
0.197375
0
What I'm trying to do is to combine two approaches, two frameworks, into one solid scope/process... I have a bunch of tests in Python with a self-written TestRunner over the proboscis library, which gave me a good way to write my own TestResult implementation (in which I'm using jinja). This framework is now a solid thing. These tests are for testing the UI (using Selenium) on an ASP.NET site. On the other hand, I have to write tests for business logic. Apparently it would be right to use NUnit or TestDriven.NET for C#. Could you please give me a tip, hint, or advice on how I should integrate these two approaches in one final solution? Maybe the answer would be just to set up a CI server, dunno... Please note, the reason I'm using Python for the ASP.NET portal is its flexibility and the opportunity to build any custom TestRunner, TestLoader, test discovery and so on... P.S. Using IronPython is not an option for me. P.P.S. For the sake of clarity: proboscis is the Python library which allows setting the test order and dependencies of a chosen test. And these two options are requirements! Thank you in advance!
0
c#,python,asp.net,unit-testing
2013-02-23T03:43:00.000
0
15,036,815
I don't know if you can fit them in one runner or process. I'm also not that familiar with Python. It seems to me that the Python written tests are more on a high level though. Acceptance tests or integration tests or whatever you want to call them. And the NUnit ones are unit test level. Therefore I would suggest that you first run the unit tests and if they pass the Python ones. You should be able to integrate that in a build script. And as you already suggested, if you can run that on a CI server, that would be my preferred approach in your situation.
0
923
false
0
1
Integrating tests written in Python and tests in C# in one solid solution
16,286,713
1
2
0
0
3
0
0
0
I am using PyEphem to calculate the location of the Sun in the sky at various times. I have an Observer point (happens to be at Stonehenge) and can use PyEphem to calculate sunrise, sunset, and the altitude angle and azimuth (degrees from N) for the Sun at any hour of the day. Brilliant, no problem. However, what I really need is to be able to calculate the altitude angle of the Sun from an known azimuth. So I would set the same observer point (long/lat/elev/date (just yy/mm/dd, not time)) and an azimuth for the Sun. And from this input, calculate the altitude of the Sun and the time it is at that azimuth. I had hoped I would be able to just set Sun.date and Sun.az and work backwards from those values, but alas. Any thoughts on how to approach this (and if it even is approachable) with PyEphem? The only other option I'm seeing available is to "sneak up" on the azimuth by iterating over a sequence of times until I get within a margin of error of the azimuth I desire, but that is just gross. thanks in advance, Dave
0
python,pyephem,azimuth,altitude
2013-02-24T20:27:00.000
0
15,056,269
Without knowing the details of the internal calculations that PyEphem is doing I don't know how easy or difficult it would be to invert those calculations to give the result you want. With regards to the "sneaking up on it" option however, you could pick two starting times (eg sunrise and noon) where the azimuth is known to be either side (one greater and one less than) the desired value. Then just use a simple "halving the interval" approach to quickly find an approximate solution.
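A sketch of that interval-halving idea with PyEphem: search between two times whose azimuths bracket the target. The coordinates are Stonehenge's; the target azimuth is illustrative, and azimuth is assumed to increase monotonically between sunrise and solar noon:
```python
import ephem

def time_for_azimuth(obs, target_az, t_lo, t_hi, tol=1e-6):
    sun = ephem.Sun()
    while t_hi - t_lo > tol:          # tol is in days (~0.1 s here)
        obs.date = (t_lo + t_hi) / 2.0
        sun.compute(obs)
        if float(sun.az) < target_az:
            t_lo = float(obs.date)
        else:
            t_hi = float(obs.date)
    obs.date = (t_lo + t_hi) / 2.0
    sun.compute(obs)
    return obs.date, sun.alt          # time and altitude at that azimuth

obs = ephem.Observer()
obs.lat, obs.lon = "51.1789", "-1.8262"      # Stonehenge
sunrise = obs.next_rising(ephem.Sun())
noon = obs.next_transit(ephem.Sun())
print(time_for_azimuth(obs, ephem.degrees("120"),
                       float(sunrise), float(noon)))
```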
0
1,610
false
0
1
PyEphem: can I calculate Sun's altitude from azimuth
15,056,730
1
1
0
7
2
0
1.2
0
I am using pytest to run tests and, during the execution of a test, interrupted with ctrl-C. No matter how many times I ctrl-C to get out of the test session (I've also tried ctrl-D to get out of the environment I'm using), my terminal prompt does not return. I accidentally pressed F as well... test.py ^CF^C Does the F have something to do with my being stuck in the captured stderr section and the prompt not returning? Are there any logic explanations why I'm stuck here, and if so, are there any alternatives to exiting this state without closing the window and force exiting the session?
0
python,terminal,pytest
2013-02-25T17:52:00.000
1
15,073,210
I would suggest trying control-Z. That should suspend it; you can then do kill %1 (or kill -9 %1) to kill it (assuming you don't have anything else running in the background). What I'm guessing is happening (from personal experience) is that one of your tests is running in a try/except that catches all exceptions (including the keyboard interrupt which control-C triggers) inside a while loop and ignores the exception.
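An illustration of that guessed cause: a bare except inside a loop swallows the KeyboardInterrupt that Ctrl-C raises, so the process appears unkillable. (Catching Exception instead is safe, since KeyboardInterrupt derives from BaseException, not Exception, from Python 2.5 on.)
```python
import time

while True:
    try:
        time.sleep(1)      # stand-in for the test body
    except:                # bare except: also traps KeyboardInterrupt
        pass               # Ctrl-C on this loop appears to do nothing
```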
0
1,083
true
0
1
Unable to exit with ^C
15,073,287
1
1
0
0
6
0
1.2
0
Is there a bug in the Mayavi font rendering that prevents changing the font size? I am using the Mayavi2 GUI to change the font size of the axis labels on a volumetric plot. To get there I go to: Scene -> Scalar Field -> Colors and Legends -> Axes -> Label Text (tab) -> Font Size Changing this number does not affect the size of the fonts in the image. Is this a known bug? I have seen no reference to it on Google. How do you change the text size on your mayavi figures?
0
python,text,fonts,mayavi,mayavi.mlab
2013-02-25T22:44:00.000
0
15,077,984
I've just downloaded, and installed it, and I seem to be having the same problem. On Windows 8 right now. This is probably a bug.
0
1,563
true
0
1
Does Mayavi "Font Size" text property work?
19,346,840
1
1
0
1
1
0
1.2
0
In the IPython terminal: %pastebin -d "my description" 1-150 returns the URL of the gist. However, I want to paste it as a logged-in user, into my GitHub account. Additionally, is there a way to create a private gist (rather than a public one) from within IPython?
0
github,ipython,gist
2013-02-26T11:06:00.000
0
15,087,499
It is not possible with the current code; it is not even planned on the roadmap. Still, this could be done as an extension. You can also propose patches to the current magic; pull requests are always welcome.
0
472
true
0
1
How to put gist as user (and not anonymous)?
15,089,524
1
2
0
0
0
0
0
0
Working with large in memory objects and was wondering if there's a way to check how much memory a python CGI process is allocated from within a script?
0
python,memory,cgi
2013-02-27T06:57:00.000
0
15,105,983
It is very unlikely there is a standard way to do this. If the value is not in the environment, you can not find it programmatically. How is the script run (server, module...)?
0
149
false
0
1
find memory limit programmatically in cgi python script?
15,106,075
1
2
0
2
0
0
0.197375
0
I need to extract process details from the top command on a few *nix systems I monitor. The details needed are username, command executed, PID, PPID, and resident memory consumption. If memory usage is greater than a threshold or the command is illegal, I need to send a warning to the user at [email protected] I am writing a script to do this in Python and get the required data by executing 'top -bc -n 1' and grepping for the command keyword. However, I also need to extract the username for the illegal processes to send the mail warning, and top automatically truncates usernames longer than 8 characters. How do I retrieve the full user names?
0
python,unix
2013-02-27T11:29:00.000
1
15,110,982
Consider using ps instead of top, as I don't know any reason why top would be better for this task. You can configure ps output much more flexibly than top's.
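A sketch of collecting the fields with ps from Python; the -o format with an explicit column width avoids top's 8-character username truncation. Column widths and the memory threshold are illustrative, and the exact flags may vary by platform:
```python
import subprocess

out = subprocess.check_output(
    ["ps", "-e", "-o", "user:32,pid,ppid,rss,args"],
    universal_newlines=True)
for line in out.splitlines()[1:]:          # skip the header row
    user, pid, ppid, rss, args = line.split(None, 4)
    if int(rss) > 1024 * 1024:             # RSS is reported in KiB
        print("warn", user, pid, args)
```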
0
2,320
false
0
1
How to get full user name in the output of 'top' command in *nix?
15,111,135
1
2
0
2
2
0
0.197375
0
I have a Python script that normally runs out of cron. Sometimes, I want to run it myself in a (Unix) shell, and if so, have it write its output to the terminal instead of writing to a log file. What is the pythonic way of determining if a script is running out of cron or in an interactive shell (I mean bash, ksh, etc. not the python shell)? I could check for the existence of the TERM environment variable perhaps? That makes sense but seems deceptively simple... Could os.isatty somehow be used? I'm using Python 2.6 if it makes a difference. Thanks!
0
python,python-2.6
2013-02-27T20:13:00.000
1
15,121,468
If you really need to check this, Pavel Anossov's answer is the way to do it, and it's pretty much the same as your initial guess. But do you really need to check this? Why not just write a Python script that writes to stdout and/or stderr, and your cron job can just redirect to log files? Or, even better, use the logging module and let it write to syslog or whatever else is appropriate and also write to the terminal if there is one?
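The check itself is one call, and combined with logging the same script writes to the terminal interactively and to a file under cron. The log path is a placeholder:
```python
import sys
import logging

if sys.stdout.isatty():        # True in an interactive shell, False under cron
    logging.basicConfig(stream=sys.stdout, level=logging.INFO)
else:
    logging.basicConfig(filename="myscript.log", level=logging.INFO)

logging.info("running; interactive=%s", sys.stdout.isatty())
```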
0
206
false
0
1
How to check if I'm running in a shell (have a terminal) in Python?
15,121,639
1
4
0
1
0
0
1.2
0
What I want to do is protect a Python program from being stolen by people with no computer knowledge. I accept the inevitability of the program being pirated; all I want to do is protect it from average users. I have come up with two ideas. 1.) Set a time restriction by checking online for the date and time, e.g. 10 days from download time. 2.) Check the IP or name of the computer that downloaded it and make the program run only on that computer (to prevent friends from simply sharing the file). The problem with both of these is that I'll need to create a .py file "on the fly" and then use something like py2exe to make it into an .exe so that the user doesn't need to have Python installed. The problem with the second is that, to my understanding, IPs change, and getting the computer name is a security risk and might scare away users. So to sum it up, here are my two questions: 1.) Is there a good way in Python to only allow the program to run on a single computer? 2.) What is the best way to implement the "on the fly" creation of the exe? (I was going to host the website on my computer and learn PHP(?)/servers.) I have moderate C/C++ and basic HTML/CSS, Java, and Python experience. Thank you for your time!
0
python,html,css,server-side
2013-03-01T07:34:00.000
0
15,152,785
Give each user a customized installer that has a unique key in it. When it runs, it contacts a server (with the key) and requests the actual program. Server-side, you check if the key is valid and if so, serve the program customized with the key, and mark the key as used. The installer saves the program somewhere the user can access it, and creates a hidden file that contains the key somewhere deep in the bowels of the computer, where the "average user" won't think of looking. When the program is run, the first thing it does is check if the hidden file exists and if it contains the correct key, and refuses to run if not. (I am assuming that unzipping an executable and reading source code is beyond the ability of the "average user" (read: "grandma"), so using py2exe is ok.)
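A minimal sketch of the run-time check in that scheme; the key-file path and expected key are placeholders baked in per installer, and as discussed this only deters the "average user":
```python
import os
import sys

KEY_PATH = os.path.expanduser("~/.cache/.appkey")   # the "hidden" file
EXPECTED = "3f9a-placeholder"                       # baked in per installer

def check_key():
    try:
        with open(KEY_PATH) as f:
            return f.read().strip() == EXPECTED
    except IOError:
        return False

if not check_key():
    sys.exit("This copy is not activated.")
```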
0
2,860
true
0
1
How to protect my Python program
15,152,990
1
2
0
0
2
0
0
0
I could use cron, but I can't figure out if there's a way to set the right schedule. I could also check the date in Python, running the script via cron every day and checking for the right date inside my (Python) script (which I assume allows more powerful conditions). I thought of limiting one run to Fridays between the 1st and the 7th, and the other to Fridays between the 15th and the 21st. But this option would have a problem in months like 3/2013 which have 5 Fridays.
0
python,cron,scheduled-tasks
2013-03-01T13:33:00.000
0
15,159,027
Why not run the cron job each Friday, but add code to write the last run date to a file? Check whether two weeks have passed; if so, rewrite the file and run the rest of the cron job.
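A sketch of that state-file idea: cron fires this every Friday, and the script itself skips the off weeks. The state-file path is a placeholder:
```python
import datetime
import os

STATE = os.path.expanduser("~/.last_run")

def should_run(today):
    try:
        last = datetime.datetime.strptime(
            open(STATE).read().strip(), "%Y-%m-%d").date()
    except (IOError, ValueError):
        return True                      # first run, or a corrupt state file
    return (today - last).days >= 14     # a fortnight has passed

today = datetime.date.today()
if should_run(today):
    open(STATE, "w").write(today.isoformat())
    # ... the rest of the job goes here ...
```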
0
2,630
false
0
1
How do I run a script on friday, once every two weeks?
15,159,989
1
1
0
5
3
0
1.2
0
I'm a newbie to Boost, and one of its libraries which I can't understand is Boost.Python. Can anyone explain to me in detail how this interoperability is achieved? In the documentation there are only a few words about metaprogramming. P.S. I tried to look at the code, but because of my lack of C++ knowledge I didn't understand the principles. Thanks in advance.
0
c++,boost,boost-python
2013-03-02T23:24:00.000
0
15,180,611
There are two ways to interoperate: 1) from a "Python process", call functions written in C++. Python already has a system to load dlls, they're called "extension modules". Boost.Python can compile your source to produce one. Basically you write a little wrapper to declare a function callable from Python, and the "metaprogramming" is there to do stuff like detecting what types the C++ function takes and returns, so that it can emit the right code to convert those from/to the equivalent Python types. 2) from a "C++ process", launch and control the Python interpreter. Python provides a C API to do this, and Boost.Python knows how to use it.
0
301
true
0
1
How does boost::python work?Any ideas about the realisation details?
15,180,650
1
1
0
0
3
0
0
0
I have several thousand tests that I want to run in parallel. The tests are all compiled binaries that give a return code of 0 or non-zero (on failure). Some unknown subsets of them try to use the same resources (files, ports, etc). Each test assumes that it is running independently and just reports a failure if a resources isn't available. I'm using Python to launch each test using the subprocess module, and that works great serially. I looked into Nose for parallelizing, but I need to autogenerate the tests (to wrap each of the 1000+ binaries into Python class that uses subprocess) and Nose's multiprocessing module doesn't support parallelizing autogenerated tests. I ultimately settled on PyTest because it can run autogenerated tests on remote hosts over SSH with the xdist plugin. However, as far as I can tell, it doesn't look like xdist supports any kind of control of how the tests get distributed. I want to give it a pool of N machines, and have one test run per machine. Is what I want possible with PyTest/xdist? If not, is there a tool out there that can do what I'm looking for?
0
python,testing,nose,pytest
2013-03-06T23:45:00.000
1
15,260,422
I am not sure if this would help. But if you know ahead of time how you want to divide up your tests, instead of having pytest distribute your tests, you could use your continuous integration server to call a different run of pytest for each different machine. Using -k or -m to select a subset of tests, or simply specifying different test dir paths, you could control which tests are run together.
0
1,185
false
0
1
Controlling the distribution of tests with py.test xdist
25,073,350
1
1
0
0
3
0
1.2
0
I am calling a Python script from my Java code. This is the code: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class JavaRunCommand { public static void main(String args[]) throws IOException { // set up the command and parameter String pythonScriptPath = "my-path"; String[] cmd = new String[2]; cmd[0] = "python2.6"; cmd[1] = pythonScriptPath; // create runtime to execute external command Runtime rt = Runtime.getRuntime(); Process pr = rt.exec(cmd); // retrieve output from python script BufferedReader bfr = new BufferedReader(new InputStreamReader( pr.getInputStream())); String line = ""; while ((line = bfr.readLine()) != null) { // display each output line from python script System.out.println(line); } } } python.py which works: import os from stat import * c = 5 print c python.py which does not work: import MySQLdb import os from stat import * c = 5 print c # some database code below So, I am at a critical stage where I have a deadline for my startup and I have to show my MVP project to the client, and I was thinking of calling the Python script like this. It works when I am printing anything without a DB connection and the MySQLdb library. But when I include them, it does not run the Python script. What's wrong here? Isn't it supposed to run the process, handling all the inputs? I have MySQLdb installed and the script runs without the Java code. I know this is not the best way to solve the issue. But to show something to the client I need this thing working. Any suggestions?
0
java,python,jakarta-ee
2013-03-07T05:33:00.000
1
15,263,854
So, I discovered that the issue was with the arguments that I was passing in Java to run the Python program. The first argument was python2.6, but it should have been just python, not a version number, because there was a compatibility issue between MySQLdb and Python. I finally decided to use the MySQL Python connector instead of MySQLdb in the Python code. It worked like a charm and the problem got solved!
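For reference, a sketch of the replacement described: MySQL's own connector instead of MySQLdb. Connection parameters are placeholders:
```python
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="user", password="secret", database="mydb")
cur = conn.cursor()
cur.execute("SELECT 1")     # sanity check that the connection works
print(cur.fetchone())
conn.close()
```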
0
534
true
1
1
Calling Python script from JAVA MySQLdb imports
15,318,731
1
2
0
0
1
0
0
0
I want to search inside a Simulink model for a particular object and point out its directory (model path).
0
python,matlab,simulink
2013-03-07T11:05:00.000
0
15,269,513
You could also just run a MATLAB script that does a find_system on model filenames passed in and spits out the names of the blocks, thus avoiding any compatibility issues.
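A sketch of that suggestion from the Python side: invoke MATLAB in batch mode and let a MATLAB script do the find_system work. Here find_blocks is a hypothetical MATLAB function you would write, and the flags may vary by MATLAB version:
```python
import subprocess

# find_blocks is an assumed user-written MATLAB function, not a built-in.
subprocess.call([
    "matlab", "-nodisplay", "-nosplash",
    "-r", "find_blocks('mymodel.slx'); exit"])
```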
0
575
false
0
1
How can i access simulink by python?
15,504,906
1
1
0
0
2
0
1.2
0
I'm working on a Raspberry Pi project and I have a python script that accepts some serial input and plays sounds depending on the input. I have the script set up and it works just fine when I run it from within the GUI (i.e. startx). If I log out of the GUI and try to run the script from the command line the script executes just fine but my sounds don't play. I just get a momentary static click. I can tell the script is running because I have it printing debug code and the print's work just fine. Is there a way to get the sounds to work from the command line? I want this script to execute when the Raspberry Pi is turned on without user input which I believe means it will be running from the command line. If there is some reason the sounds simply won't play until the GUI starts up how would I set it up to load the GUI and then execute the script on startup without any user input? This will be embedded in a prop and will play sounds when some buttons (connected through arduino i.e. serial input) are pressed. So I need a solution that will have it from power on automatically run the script and be able to play the sounds with no keyboard, mouse, or monitor attached.
0
python,audio,raspberry-pi
2013-03-07T22:21:00.000
0
15,282,925
Turns out it was file-path naming. If I run the script from the root directory it doesn't work, but if I "cd Desktop/containingFolder" then the sounds play. Updating the path names in the Python script fixed the issue: they needed to be full paths instead of relative ones.
0
1,725
true
0
1
pygame.mixer sound not playing when script run from command line
15,306,454
1
2
0
1
5
0
0.099668
0
I have written a very simple command line utility for myself. The setup consists of: A single .py file containing the application/source. A single executable (chmod +x) shell script which runs the python script. A line in my .bash_profile which aliases my command like so: alias cmd='. shellscript' (So it runs in the same terminal context.) So effectively I can type cmd to run it, and everything works great. My question is, how can I distribute this to others? Obviously I could just write out these instructions with my code and be done with it, but is there a faster way? I've occasionally seen those one-liners that you paste into your console to install something. How would I do that? I seem to recall them involving curl and piping to sh but I can't remember.
0
python,bash,shell
2013-03-07T22:59:00.000
1
15,283,483
chmod +x cmd.py, then they can type ./cmd.py. They can also use it in a pipe. I would add that Unix users probably already know how to make a file executable and run it, so all you'd have to do is make the file available to them. Do make sure they know what version(s) of Python they need to run your script.
0
894
false
0
1
How to distribute my Python/shell script?
15,283,656
1
4
0
3
2
1
0.148885
0
For any particular bit of code, is there a way to easily get a breakdown of how long it took each line to execute?
0
python,performance
2013-03-09T00:57:00.000
0
15,305,899
Python code is not executed as-is; the program you typed in is compiled into an intermediate format that is optimized. So the same line can very well take different times depending on the surrounding lines. Also, Python has complex operations on its data; the time an operation takes will depend on the exact values handled.
0
2,579
false
0
1
In Python, what is the best way to determine how long each line of code takes to execute?
15,305,953
1
2
0
1
0
0
1.2
0
I'm building a small tool that I want to scan over a music collection, read the ID3 info of a track, and store it as long as that particular artist does not have a song that has been accessed more than twice. I'm planning on using Mutagen for reading the tags. However, the music collections of myself and many others are massive, exceeding 20,000 songs. As far as I know, libraries like Mutagen have to open and close every song to get the ID3 info from it. While MP3s aren't terribly performance-heavy, that's a lot of songs. I'm already planning a minor optimization in the form of keeping a count of each artist and not storing any info if their song count exceeds 2, but as far as I can tell I still need to open every song to check the artist ID3 tag. I toyed with the idea of using directories as a hint for the artist name and not reading any more info in that directory once the artist song count exceeds 2, but not everyone has their music set up in neat Artist/Album/Songs directories. Does anyone have any other optimizations in mind that might cut down on the overhead of opening so many MP3s?
0
python,id3,mutagen
2013-03-10T17:11:00.000
0
15,325,056
Beware of premature optimization. Are you really sure that this will be a performance problem? What are your requirements -- how quickly does the script need to run? How fast does it run with the naïve approach? Profile and evaluate before you optimize. I think there's a serious possibility that you're seeing a performance problem where none actually exists. You can't avoid visiting each file once if you want a guaranteed correct answer. As you've seen, optimizations that entirely skip files will basically amount to automated guesswork. Can you keep a record of previous scans you've done, and on a subsequent scan use the last-modified dates of the files to avoid re-scanning files you've already scanned once? This could mean that your first scan might take a little bit of time, but subsequent scans would be faster. If you need to do a lot of complex queries on a music collection quickly, consider importing the metadata of the entire collection into a database (for instance SQLite or MySQL). Importing will take time -- updating to insert new files will take a little bit of time (checking the last-modified dates as above). Once the data is in your database, however, everything should be fairly snappy assuming that the database is set up sensibly.
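A sketch of the "record of previous scans" idea: keep each file's mtime in a JSON cache and re-read ID3 tags only for files that changed. Mutagen usage is minimal here, and the music root path and cache file name are placeholders:
```python
import json
import os
from mutagen.easyid3 import EasyID3

CACHE = "scan_cache.json"
try:
    cache = json.load(open(CACHE))
except (IOError, ValueError):
    cache = {}                            # first run: empty cache

for root, dirs, files in os.walk("/path/to/music"):
    for name in files:
        if not name.lower().endswith(".mp3"):
            continue
        path = os.path.join(root, name)
        mtime = os.path.getmtime(path)
        entry = cache.get(path)
        if entry is None or entry["mtime"] != mtime:   # new or modified file
            tags = EasyID3(path)
            cache[path] = {"mtime": mtime,
                           "artist": tags.get("artist", [""])[0]}

json.dump(cache, open(CACHE, "w"))
```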
0
330
true
0
1
Optimizing a Mass ID3 Tag Scan
15,325,204
1
2
0
1
1
0
0.099668
0
I am running a Python Pyramid server. Strangely, when I moved my server code to a different machine, pserve stopped serving Flash videos in my static folder, whereas it serves other static files, like images, fine! What could be the reason for this?
0
python,flash,pyramid
2013-03-11T04:05:00.000
0
15,331,039
I possibly ran into a similar problem with my Pyramid app. I'm using TinyMCE and had placed the files in the static folder. Everything worked on my dev server, but when moved to test and prod, static .html files related to TinyMCE couldn't be found. My web host had me add a symlink, basically hardcoding into the server software (nginx in this case) the mapping from the web address of my static HTML to the server path, and that worked. I'll have to check out the mimetypes thing too, though.
0
138
false
1
1
Pyramid server not serving flash files
15,347,393
1
4
0
29
63
1
1
0
Are Python Decorators the same or similar, or fundamentally different to Java annotations or something like Spring AOP, or Aspect J?
0
java,python,python-decorators,java-annotations
2013-03-11T19:40:00.000
0
15,347,136
This is a very valid question that anyone dabbling in both these languages simultaneously can get. I have spent some time on Python myself, and have recently been getting up to speed with Java, and here's my take on this comparison. Java annotations are just that: annotations. They are markers; containers of additional metadata about the underlying object they are marking/annotating. Their mere presence doesn't change the execution flow of the underlying, nor does it add any encapsulation/wrapper on top of the underlying. So how do they help? They are read and processed by annotation processors. The metadata they contain can be used by custom-written annotation processors to add some auxiliary functionality that makes lives easier; BUT, and again, they NEITHER alter the execution flow of the underlying, NOR wrap around it. The stress on "not altering execution flow" will be clear to someone who has used Python decorators. Python decorators, while similar to Java annotations in look and feel, are quite different under the hood. They take the underlying and wrap themselves around it in any which way desired by the user, possibly even completely avoiding running the underlying itself, if one chooses to do so. They take the underlying, wrap themselves around it, and replace the underlying with the wrapped one. They are effectively 'proxying' the underlying! Now that is quite similar to how aspects work in Java! Aspects per se are quite evolved in terms of their mechanism and flexibility. But in essence what they do is take the 'advised' method (I am talking in Spring AOP nomenclature, and am not sure if it applies to AspectJ as well), wrap functionality around it, along with predicates and the like, and 'proxy' the 'advised' method with the wrapped one. Please note these musings are at a very abstract and conceptual level, to help get the big picture. As you start delving deeper, all these concepts - decorators, annotations, aspects - have quite an involved scope. But at an abstract level, they are very much comparable. TLDR: In terms of look and feel, Python decorators can be considered similar to Java annotations, but under the hood they work very, very similarly to the way aspects work in Java.
0
19,358
false
1
1
Is a Python Decorator the same as Java annotation, or Java with Aspects?
49,356,738
1
1
0
3
2
1
0.53705
0
I am trying to write a dictionary containing values with Unicode characters to a text file and was thinking of using UnicodeWriter as mentioned in the Python csv documentation. But I am unable to import it, as the module is not recognized by Python. I was wondering whether this is a problem with my version of Python. Also, if it is not possible to do it this way, is there any way to specify an encoding while using the DictWriter class?
0
python,csv,unicode,dictionary
2013-03-11T19:55:00.000
0
15,347,414
UnicodeWriter isn't an actual module in any version of Python. The code given in the documentation is an example which you'll have to copy into your own project.
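As a lightweight alternative to copying the full UnicodeWriter recipe, on Python 2 you can UTF-8-encode values yourself before handing the dict to DictWriter, since the csv module there works on byte strings. Field names below are illustrative:
```python
import csv

row = {u"name": u"caf\xe9", u"city": u"M\xfcnchen"}
# Encode every key and value to UTF-8 byte strings for Python 2's csv.
encoded = dict((k.encode("utf-8"), v.encode("utf-8"))
               for k, v in row.items())

with open("out.csv", "wb") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "city"])
    writer.writeheader()
    writer.writerow(encoded)
```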
0
743
false
0
1
unable to import UnicodeWriter python 2.7
15,347,536
1
1
0
0
2
0
0
0
I'm building a website using Python which uses LaTeX to generate PDF files. But I want to put most of the website on Google App Engine, and I can't run LaTeX on that. So I want to do the LaTeX part on another server. It seemed like a simple problem at first---I thought the best way to do it would be to POST the LaTeX to the server and have it respond with the PDF. But LaTeX files can take a while to compile sometimes if they're long, so I'm starting to think this isn't the best way to do it. What's the standard way of doing something like this? It must be a pretty common problem.
0
python,http,pdf,web-applications,latex
2013-03-11T21:10:00.000
0
15,348,710
Can you send it by email, just like Amazon does? Send the file to the server; when it's done, the server sends the result back by email.
0
368
false
1
1
Web server which generates PDF files
15,352,337
3
3
0
1
2
0
1.2
0
I want to co-ordinate telling Server B to start a process from Server A, and then when it's complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario. Server A: Main Dedicated Server Server B: Cloud Process Server Server A sends message to SQS via SNS to say "Start Process" Server B constantly polls SQS for "Start Process" message Server B finds "Start Process" message on SQS Server B runs "process.sh" file Server B completes running "process.sh" file Server B removes "Start Process" from SQS Server B sends message to SQS via SNS to say "Start Import" Server A constantly polls SQS for "Start Import" message Server A finds "Start Import" message on SQS Server A runs import.sh Server A completes running "import.sh" Server A removes "Start Import" from SQS Is this how SQS should be used, or am I missing the point completely?
0
python,amazon-web-services,amazon-sqs,amazon-sns
2013-03-13T09:15:00.000
0
15,381,092
What you laid out will work in theory, but I have moved away from putting messages directly into queues; instead I put those messages into SNS topics and then subscribe the queues to the topics to get them there. This gives you more flexibility to change things down the road without ever touching the code or the servers that are in production. For what you are doing now, the SNS piece is unnecessary, but using it will allow you to change functionality without touching your existing servers later. For example: needs change and you want to add a process C that also kicks off every time the 'Start Process' runs on Server B. Right through the AWS SNS console you could direct a second copy of the message to another queue that previously did not exist, and set up a server C that polls from that queue (a fan-out pattern). Also, what I often like to do during an initial rollout is add notifications to SNS so I know what's going on; i.e. every time the 'Start Process' event occurs, I subscribe my cell phone (or email address) to the topic so I get notified - I can monitor in real time what is (or isn't) happening. Once a period of time has gone by after a production deployment, I can go into the AWS console and simply unsubscribe my email/cell from the process - without ever touching any servers or code.
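A sketch of that fan-out wiring with classic boto: create a topic, subscribe one or more SQS queues to it, and publish. Region, topic, and queue names are placeholders:
```python
import boto.sns
import boto.sqs

sns = boto.sns.connect_to_region("us-east-1")
sqs = boto.sqs.connect_to_region("us-east-1")

topic_arn = sns.create_topic("start-process")["CreateTopicResponse"] \
    ["CreateTopicResult"]["TopicArn"]
queue = sqs.create_queue("server-b-start-process")
sns.subscribe_sqs_queue(topic_arn, queue)   # fan out: add more queues later
sns.publish(topic_arn, "Start Process")
```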
0
1,359
true
0
1
How should Amazon SQS be used? Import / Process Scenario
15,879,297
3
3
0
1
2
0
0.066568
0
I want to co-ordinate telling Server B to start a process from Server A, and then when it's complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario. Server A: Main Dedicated Server. Server B: Cloud Process Server. Server A sends message to SQS via SNS to say "Start Process". Server B constantly polls SQS for "Start Process" message. Server B finds "Start Process" message on SQS. Server B runs "process.sh" file. Server B completes running "process.sh" file. Server B removes "Start Process" from SQS. Server B sends message to SQS via SNS to say "Start Import". Server A constantly polls SQS for "Start Import" message. Server A finds "Start Import" message on SQS. Server A runs import.sh. Server A completes running "import.sh". Server A removes "Start Import" from SQS. Is this how SQS should be used, or am I missing the point completely?
0
python,amazon-web-services,amazon-sqs,amazon-sns
2013-03-13T09:15:00.000
0
15,381,092
Well... SQS does not support message routing. In order to assign a message to server A or B, one of the available solutions is to create SNS topics "server a" and "server b". These topics should put messages into SQS queues, which your application will poll. It is also possible to implement a web hook: a subscriber to SNS events which analyzes the message and does a callback to your application.
0
1,359
false
0
1
How should Amazon SQS be used? Import / Process Scenario
15,397,063
3
3
0
3
2
0
0.197375
0
I want to co-ordinate telling Server B to start a process from Server A, and then when it's complete, run an import script on Server A. I'm having a hard time working out how I should be using SQS correctly in this scenario. Server A: Main Dedicated Server. Server B: Cloud Process Server. Server A sends message to SQS via SNS to say "Start Process". Server B constantly polls SQS for "Start Process" message. Server B finds "Start Process" message on SQS. Server B runs "process.sh" file. Server B completes running "process.sh" file. Server B removes "Start Process" from SQS. Server B sends message to SQS via SNS to say "Start Import". Server A constantly polls SQS for "Start Import" message. Server A finds "Start Import" message on SQS. Server A runs import.sh. Server A completes running "import.sh". Server A removes "Start Import" from SQS. Is this how SQS should be used, or am I missing the point completely?
0
python,amazon-web-services,amazon-sqs,amazon-sns
2013-03-13T09:15:00.000
0
15,381,092
I'm almost sorry that Amazon offers SQS as a service. It is not a "simple queue", and probably not the best choice in your case. Specifically: it has abysmal performance in low-volume messaging (some messages will take 90 seconds to arrive); message order is not preserved; it is fond of delivering messages more than once; and they charge you for polling. The good news is it scales well. But guess what: you don't have a scale problem, so dealing with the quirky behavior of SQS is just going to cause you pain for no good reason. I highly recommend you check out RabbitMQ; it is going to behave exactly like you want a simple queue to behave.
0
1,359
false
0
1
How should Amazon SQS be used? Import / Process Scenario
15,391,518
1
1
0
4
1
1
1.2
0
I would like to know if there are any documented performance differences between a Python interpreter that I can install from an rpm (or using yum) and a Python interpreter compiled from sources (with a priori well set flags for compilations). I am using a Redhat 6.3 machine as Django/Apache/Mod_WSGI production server. I have already properly compiled everything in different setups and in different orders. However, I usually keep the build-dev dependencies on such machine. For some various ego-related (and more or less practical) reasons, I would like to use Python-2.7.3. By default, Redhat comes with Python-2.6.6. I think I could go with it but it would hurt me somehow (I would have to drop and find a replacement for a few libraries and my ego). However, besides my ego and dependencies, I would like to know what would be the impact in terms of performance for a Django server.
0
python,django,performance,apache,redhat
2013-03-13T21:40:00.000
0
15,397,024
If you compile with the exact same flags that were used to compile the RPM version, you will get a binary that's exactly as fast. And you can get those flags by looking at the RPM's spec file. However, you can sometimes do better than the pre-built version. For example, you can let the compiler optimize for your specific CPU, instead of for "general 386 compatible" (or whatever the RPM was optimized for). Of course if you don't know what you're doing (or are doing it on purpose), it's always possible to build something slower than the pre-built version, too. Meanwhile, 2.7.3 is faster in a few areas than 2.6.6. Most of them usually won't affect you, but if they do, they'll probably be a big win. Finally, for the vast majority of Python code, the speed of the Python interpreter itself isn't relevant to your overall performance or scalability. (And when it is, you probably want to try PyPy, Jython, or IronPython to replace CPython.) This is especially true for a WSGI service. If you're not doing anything slow, Apache will probably be the bottleneck. If you are doing anything slow, it's probably something I/O bound and well outside of Python's control (like reading files). Ultimately, the only way you can know how much gain you get is by trying it both ways and performance testing. But if you just want a rule of thumb, I'd say expect a 0% gain, and be pleasantly surprised if you get lucky.
0
139
true
0
1
Performance differences between python from package and python compiled from source
15,397,078
1
1
1
1
0
0
1.2
0
I have written a very simple script for my raspberry pi that loads an uncompressed WAV and plays it - however when I run the script as root (to be able to use GPIO and ServoBlaster), there is no sound output. I have set the default audio device to a USB sound card, and this works - I have tested this using aplay fx.wav. Running the pygame script without sudo, the sound plays fine. What is going on here?
0
python,audio,pygame,sudo,raspberry-pi
2013-03-14T09:08:00.000
0
15,405,082
The issue was the sudo command changing the directory in which the script was being run - so running python with sudo -s or simply using an absolute path for the sound fixed it.
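A sketch of the absolute-path fix, assuming the sound file from the question sits next to the script:

# Sketch: build the sound file's path from the script's own location,
# so it works no matter which working directory sudo starts us in.
import os
import pygame

HERE = os.path.dirname(os.path.abspath(__file__))
pygame.mixer.init()
sound = pygame.mixer.Sound(os.path.join(HERE, 'fx.wav'))  # file name assumed
sound.play()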
0
1,087
true
0
1
When running a pygame script as root, no sound is output?
15,428,577
2
4
0
1
8
0
0.049958
0
I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question): I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB. Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit). Assuming even 3000 requests per second (this isn't a live server so I don't have real numbers, but 3000 rps is a reasonable assumption, could be more), this means 96MB/s of transfer from Redis or Riak in ADDITION to the already not-insignificant other calls being made from the application logic. Also, Python is parsing the JSON of these 8KB objects 3000 times every second. All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second. But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken. Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above! So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python combination would work in terms of starting new processes and memory allocation, would help greatly. P.S: I have gone through some of the documentation for nginx, uwsgi etc. but haven't fully understood the ramifications per my use-case yet. Hope to make some progress on that going forward now. If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
0
python,optimization,nginx,redis,uwsgi
2013-03-15T23:31:00.000
1
15,443,732
"python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken." you are mistaken. the whole point of using uwsgi over, say, the CGI mechanism is to persist data across threads and save the overhead of initialization for each call. you must set processes = 1 in your .ini file, or, depending on how uwsgi is configured, it might launch more than 1 worker process on your behalf. log the env and look for 'wsgi.multiprocess': False and 'wsgi.multithread': True, and all uwsgi.core threads for the single worker should show the same data. you can also see how many worker processes, and "core" threads under each, you have by using the built-in stats-server. that's why uwsgi provides lock and unlock functions for manipulating data stores by multiple threads. you can easily test this by adding a /status route in your app that just dumps a json representation of your global data object, and view it every so often after actions that update the store.
0
4,976
false
1
1
Persistent in-memory Python object for nginx/uwsgi server
45,383,617
2
4
0
1
8
0
0.049958
0
I doubt this is even possible, but here is the problem and proposed solution (the feasibility of the proposed solution is the object of this question): I have some "global data" that needs to be available for all requests. I'm persisting this data to Riak and using Redis as a caching layer for access speed (for now...). The data is split into about 30 logical chunks, each about 8 KB. Each request is required to read 4 of these 8KB chunks, resulting in 32KB of data read in from Redis or Riak. This is in ADDITION to any request-specific data which would also need to be read (which is quite a bit). Assuming even 3000 requests per second (this isn't a live server so I don't have real numbers, but 3000 rps is a reasonable assumption, could be more), this means 96MB/s of transfer from Redis or Riak in ADDITION to the already not-insignificant other calls being made from the application logic. Also, Python is parsing the JSON of these 8KB objects 3000 times every second. All of this - especially Python having to repeatedly deserialize the data - seems like an utter waste, and a perfectly elegant solution would be to just have the deserialized data cached in an in-memory native object in Python, which I can refresh periodically as and when all this "static" data becomes stale. Once in a few minutes (or hours), instead of 3000 times per second. But I don't know if this is even possible. You'd realistically need an "always running" application for it to cache any data in its memory. And I know this is not the case in the nginx+uwsgi+python combination (versus something like node) - python in-memory data will NOT be persisted across all requests to my knowledge, unless I'm terribly mistaken. Unfortunately this is a system I have "inherited" and therefore can't make too many changes in terms of the base technology, nor am I knowledgeable enough of how the nginx+uwsgi+python combination works in terms of starting up Python processes and persisting Python in-memory data - which means I COULD be terribly mistaken with my assumption above! So, direct advice on whether this solution would work + references to material that could help me understand how the nginx+uwsgi+python combination would work in terms of starting new processes and memory allocation, would help greatly. P.S: I have gone through some of the documentation for nginx, uwsgi etc. but haven't fully understood the ramifications per my use-case yet. Hope to make some progress on that going forward now. If the in-memory thing COULD work out, I would chuck Redis, since I'm caching ONLY the static data I mentioned above in it. This makes an in-process persistent in-memory Python cache even more attractive for me, reducing one moving part in the system and at least FOUR network round-trips per request.
0
python,optimization,nginx,redis,uwsgi
2013-03-15T23:31:00.000
1
15,443,732
You said nothing about writing this data back - is it static? In that case, the solution is very simple, and I have no clue what is up with all the "it's not feasible" responses. uwsgi workers are always-running applications, so data absolutely gets persisted between requests. All you need to do is store stuff in a global variable; that is it. And remember it's per-worker, and workers do restart from time to time, so you need proper loading/invalidation strategies. If the data is updated very rarely (rarely enough to restart the server when it does change), you can save even more. Just create the objects during app construction. This way, they will be created exactly once, and then all the workers will fork off the master and reuse the same data. Of course, it's copy-on-write, so if you update it, you will lose the memory benefits (the same thing will happen if Python decides to compact its memory during a GC run, so it's not super predictable).
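A minimal sketch of such a per-worker cache, assuming a hypothetical load_chunks() that does the Riak/Redis reads and the JSON parsing:

# Sketch: module-level cache that each uwsgi worker refreshes lazily.
import time

_CACHE = {'data': None, 'loaded_at': 0.0}
_TTL = 300  # refresh at most every 5 minutes; tune to taste

def get_global_data():
    now = time.time()
    if _CACHE['data'] is None or now - _CACHE['loaded_at'] > _TTL:
        _CACHE['data'] = load_chunks()  # hypothetical loader, not defined here
        _CACHE['loaded_at'] = now
    return _CACHE['data']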
0
4,976
false
1
1
Persistent in-memory Python object for nginx/uwsgi server
45,384,113
1
3
0
1
2
1
0.066568
0
I am doing maintenance on some Python code. Python is installed in /usr/bin; the code is installed in /aaa; a Python 2.5 is installed under /aaa/python2.5. Each time I run Python, it uses the /usr/bin one. How do I make it run /aaa/python2.5? Also, when I run python -v; import bbb; bbb.__file__, it shows that it uses the bbb module under /usr/ccc/ (I don't know why), instead of the bbb module under /aaa/python2.5/lib. How do I let it run python2.5 and use the /aaa/python2.5/lib modules? The reason I am asking this is that if we maintain code that other people are still using, we need to install the code under a new directory and modify it, run it and debug it there.
0
python,linux
2013-03-18T22:06:00.000
1
15,487,848
Run /aaa/python2.5 python_code.py, i.e. invoke that interpreter explicitly instead of plain python. If you use Python 2.5 more often, consider changing the $PATH variable to make Python 2.5 the default.
0
1,363
false
0
1
How to run python in different directory?
15,487,877
2
2
0
0
0
0
0
0
I will be creating a connection between my Linux server and a cellular modem where the modem will act as a server for serial over TCP. The modem itself is connected to a modbus device (industrial protocol) via an RS232 connection. I would like to use pymodbus to facilitate talking to the end modbus device. However, I cannot use the TCP modbus option in PyModbus as the end device speaks serial modbus (Modbus RTU). And I cannot use the serial modbus option in Pymodbus as it expects to open an actual local serial port (tty device) on the linux server. How can I bridge the serial connection such that the pymodbus library will see the connection as a local serial device?
0
python,serial-port,tty,modbus
2013-03-19T00:20:00.000
1
15,489,371
There is no straightforward way to trick your Linux server into thinking that a MODBUS RTU connection is actually a MODBUS TCP connection. In all cases, your modem will have to transfer data from TCP to serial (and the other way around). So I assume that either: 1) you can somehow program your modem and instruct it to do whatever you want, or 2) the manufacturer of the modem has provided a built-in mechanism to do that. If 1): you should program your modem so that it can replace TCP ADUs with RTU ADUs (and the other way around) when copying data from the TCP connection to the RS232 link. If 2): simply provide your RTU frame to whatever API the manufacturer devised.
0
1,866
false
0
1
Pymodbus (Serial) over a tcp serial connection
15,494,099
2
2
0
0
0
0
0
0
I will be creating a connection between my Linux server and a cellular modem where the modem will act as a server for serial over TCP. The modem itself is connected to a modbus device (industrial protocol) via an RS232 connection. I would like to use pymodbus to facilitate talking to the end modbus device. However, I cannot use the TCP modbus option in PyModbus as the end device speaks serial modbus (Modbus RTU). And I cannot use the serial modbus option in Pymodbus as it expects to open an actual local serial port (tty device) on the linux server. How can I bridge the serial connection such that the pymodbus library will see the connection as a local serial device?
0
python,serial-port,tty,modbus
2013-03-19T00:20:00.000
1
15,489,371
I was actually working on something similar and decided to make my own serial/TCP bridge, using virtual serial ports to handle the communication with each of the modems. I used the minimalmodbus library, although I had to modify it a little in order to handle the virtual serial ports. I hope you solved your problem, and if you didn't, I can try to help you out.
0
1,866
false
0
1
Pymodbus (Serial) over a tcp serial connection
16,742,894
1
1
0
1
1
0
0.197375
0
I have a library (PyModbus) I would like to use that requires a tty device as it will be communicating with a device using serial connection. However, the device I am going to talk to is going to be behind a modem that supports serial over tcp (the device plugs into a com port on the modem). Without the modem in the way it would be trivial. I would connect a usb serial cable to the device and the other end to the computer. With the modem in the way, the server has to connect to a tcp port on the modem and pump serial data through that. The modem passes the data received to the device connected to the com port. In linux, whats the best way to create a fake tty from the "serial over tcp connection" for momentary use and then be destroyed. This would happen periodically, and an individual linux server may have 10~500 of these emulated device open at any given time.
0
python,serial-port,tty,modbus
2013-03-19T04:07:00.000
1
15,491,308
If I understand correctly, you need to make a connection of this form: [pyModbus <-(fake serial)-> process] <-(tcp/ip)-> [modem <-(serial)-> device]. I suggest using socat for this.
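A sketch of driving socat from Python to create the fake tty (the PTY path, modem address and port are placeholders):

# Sketch: socat creates a PTY bridged to the modem's TCP serial port;
# pymodbus can then open the PTY path as if it were a local serial device.
import subprocess
import time

bridge = subprocess.Popen([
    'socat',
    'pty,link=/tmp/ttyV0,raw,echo=0',  # the fake local tty
    'tcp:192.168.1.50:4001',           # placeholder modem host:port
])
time.sleep(1)  # crude: give socat a moment to create /tmp/ttyV0
# ... open /tmp/ttyV0 with the pymodbus serial client here ...
bridge.terminate()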
0
1,308
false
0
1
Create a fake TTY device from a serial-over TCP connection
15,680,046
1
3
0
0
13
1
0
0
How do I get Emacs to use rst-mode inside of docstrings in Python files? I vaguely remember that different modes within certain regions of a file are possible, but I don't remember how it's done.
0
python,emacs,restructuredtext
2013-03-19T06:59:00.000
0
15,493,342
As far as editing purposes go, narrowing to the docstring and activating rst-mode should be the way to go. python-mode.el provides py--docstring-p, which might be easily adapted for python.el. Then binding the whole thing to some idle timer would do the narrowing/switching. What remains is some expression which toggles rst-mode off and widens again.
0
1,425
false
0
1
Have Emacs edit Python docstrings using rst-mode
28,541,254
1
1
0
1
0
0
0.197375
0
I'm doing a file sync between a client, server and Dropbox (Mac client, Debian server). I'm looking at the mod times of files to determine which is newest. On the client I'm using os.path.getmtime(filePath) to get the modified time. When I check the last modification time of the file on the client and then, after uploading, check again on the server or Dropbox, there is a varying difference in the time between them all for the same file. I thought file mod times were associated with the file rather than the OS they are on, so if the file was last modified on the client, that mod time stamp should be the same when checked on the server? Could anyone clarify whether uploading the file has an impact on the mod time, or suggest where this variation in time for one file could be coming from? Any advice would be greatly appreciated!
0
python,unix,python-2.7,unix-timestamp,dropbox-api
2013-03-19T20:55:00.000
1
15,510,254
The modified time on the Dropbox server isn't necessarily going to be the modified time on the client, but rather the time the file was uploaded to the server. You can use the 'rev' property on files from the /metadata call to keep track of files instead.
0
77
false
0
1
File Mod Time Discrepancies On Upload
15,529,123
1
1
0
1
2
0
0.197375
0
I'm looking for ideas on how to display sensor data on a webpage hosted by a Synology DiskStation, where the data comes from sensors connected to a Raspberry Pi. This is going to be implemented in Python. I have put together the sensors and have these connected to the Raspberry Pi. I also have the Python code, so I can read the sensors. I have a webpage up and running on the DiskStation using Python. But how do I get the data from the Raspberry Pi to the DiskStation? The reading is only done when the webpage is displayed. I guess some kind of web service on the Pi? I have looked at Pyro4, but it doesn't look like it can be installed on the DiskStation. And I would prefer not to install a whole web-server framework on the Pi. Do you have a suggestion?
0
python,service,web
2013-03-20T20:42:00.000
0
15,534,297
I'm no expert on this topic, but what I would do is set up a database in between (on the Synology rather than on the Raspberry Pi). Let's call your Synology the server, and the Raspberry Pi a sensor client. I would host a database on the server, and push the data from the sensor client. The data could be pushed either through a web-service API, or something more low-level if you need it faster (some code needed on the server side for this), or, since the client computer is under your control, it could write directly into the database. Your concrete choice between database, web service or other API depends on: how much data has to be pushed; how fast the data has to be pushed; how much you trust your network; and how much you trust your sensor client. I've never used it for this myself, but I suggest you use SQLAlchemy for connecting to the database (from both sides). If in some use case the remote server can be down, the sensor client should store sensor data in a local file and push it when the server comes back online.
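A rough sketch of the push side with SQLAlchemy (the connection string, table and values are all made up for illustration):

# Sketch: the sensor client inserts one reading into a database hosted
# on the DiskStation; the webpage then only has to query that table.
import datetime
from sqlalchemy import (create_engine, MetaData, Table,
                        Column, DateTime, Float)

engine = create_engine('mysql://user:pass@diskstation/sensors')  # placeholder DSN
meta = MetaData()
readings = Table('readings', meta,
                 Column('taken_at', DateTime),
                 Column('value', Float))
meta.create_all(engine)  # no-op if the table already exists

with engine.connect() as conn:
    conn.execute(readings.insert().values(
        taken_at=datetime.datetime.utcnow(),
        value=21.5))  # hypothetical sensor value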
0
699
false
1
1
Move data from Raspberry pi to a synology diskstation to present in a webpage
15,534,482
1
1
0
0
1
0
0
0
I just recently installed the PyDev 2.6 plugin for Eclipse (I run Eclipse SDK 4.2.1) and when I try to configure the Python interpreter to the path: > C:\Python27\python.exe , it gives me an "Error info on interpreter" and in error log it says: com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException: unvalid Byte 2 of the sequence UTF-8 of 3 bytes I have read other similar questions on this website about the same issue but the solutions do not suit my situation, as I don't have any unicode char in my path. I run Python 2.7.3. I would really appreciate any help or advice on how to solve this issue, as I would really love to start coding Python in Eclipse soon. Cheers.
0
python,eclipse,configuration,pydev,interpreter
2013-03-21T03:17:00.000
1
15,538,867
I faced the same problem and just solved it. The solution was reinstalling Aptana (or Eclipse; also tested on Kepler 4.2.x). The source of the problem was the path to your Eclipse/Aptana installation. I think the trouble here is caused by diacritic characters in your name 'Andres Diaz', judging by your username here (my case was a cyrillic username and user home folder 'Михаил' on Windows 8). The path to your Python interpreter does not matter here. The cure: move or reinstall your Eclipse to a folder whose path does not contain any non-ASCII character. In my case I moved Aptana Studio from C:\Users\Михаил\Aptana3 to C:\Aptana3 and (maybe it's not necessary, I don't know) its workspace also to the root C:\ folder. P.S. I think this can be useful for others who face this problem, because I was not able to find any answer about how to solve it, only a lot of similar questions.
0
362
false
0
1
Error when configuring Python interpreter for PyDev in Eclipse
18,466,358
1
2
1
3
11
0
0.291313
0
Does embedding c++ code in python using ctypes, boost.python, etc make your python application faster? Suppose I am making an application in pygtk and I need some functions which need to be fast. So if I use c++ for certain tasks in my application will it be beneficial? And what are other options to make python code faster?
0
c++,python,c,ctypes,embedding
2013-03-21T09:34:00.000
0
15,543,783
It depends; there's not a definitive answer. If you write bad code in C++, it could be even slower than well-written Python code. Assuming that you can write good quality C++ code, you can expect speedups of up to 20x in the performance-critical parts. As the other answer says, NumPy is a good option for numerical bottlenecks (if you think in matrix operations rather than loops!); and SciPy comes with weave, which allows you to embed inline C++, among other goodies.
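For the ctypes route the question mentions, a minimal sketch might look like this (the library and function are hypothetical):

# Sketch: call a hand-optimized C function from Python via ctypes.
# Assumes something like the following was compiled beforehand:
#   gcc -O3 -shared -fPIC hotspot.c -o libhotspot.so
# where hotspot.c defines:  double heavy_sum(double *xs, int n);
import ctypes

lib = ctypes.CDLL('./libhotspot.so')  # hypothetical shared library
lib.heavy_sum.restype = ctypes.c_double
lib.heavy_sum.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_int]

data = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)
print(lib.heavy_sum(data, len(data)))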
0
624
false
0
1
Does embedding c++ code in python make your python application faster?
15,544,287
1
2
0
1
4
0
1.2
0
I've got some tests which log to stdout, and I'd like to change the log level in my test script based on the verbosity that nose is running on. How can I access the verbosity of the running nose instance, from within one of the tests being run?
0
python,nose
2013-03-21T18:35:00.000
1
15,555,468
It looks like the expected way to handle this in nose is to use the logging framework within your tests, and then control the level to be captured with the --logging-level option. By default nose will capture all logs made by the tests, but a filter can be specified using the --logging-filter config parameter.
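A sketch of what that looks like inside a test (the logger name is arbitrary); you would then run something like nosetests --logging-level=DEBUG --logging-filter=myapp:

# Sketch: log through the logging framework and let nose's capture
# options decide what actually gets shown.
import logging
import unittest

log = logging.getLogger('myapp.tests')  # hypothetical logger name

class SmokeTest(unittest.TestCase):
    def test_something(self):
        log.debug('detailed state: %r', {'step': 1})
        log.info('high-level progress message')
        self.assertTrue(True)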
0
930
true
0
1
Accessing nose verbosity programmatically
15,581,683
6
8
0
0
9
1
0
0
I have a test suite for my app. As the test suite grew organically, the tests have a lot of repeated code which can be refactored. However, I would like to ensure that the test suite doesn't change with the refactor. How can I test that my tests are invariant under the refactor? (I am using Python + unittest, but I guess the answer to this can be language agnostic.)
0
python,unit-testing,testing,language-agnostic
2013-03-22T08:49:00.000
0
15,566,117
I can't see an easy way to refactor a test suite, and depending on the extent of your refactor you're obviously going to have to change the test suite. How big is your test suite? Refactoring properly takes time and attention to detail (and a lot of Ctrl+C Ctrl+V!). Whenever I've refactored my tests I don't try to find any quick ways of doing things, besides find & replace, because there is too much risk involved. You're best off doing things properly and manually, albeit slowly, if you want to keep the quality of your tests.
0
432
false
0
1
How do i test/refactor my tests?
15,566,501
6
8
0
2
9
1
0.049958
0
I have a test suite for my app. As the test suite grew organically, the tests have a lot of repeated code which can be refactored. However, I would like to ensure that the test suite doesn't change with the refactor. How can I test that my tests are invariant under the refactor? (I am using Python + unittest, but I guess the answer to this can be language agnostic.)
0
python,unit-testing,testing,language-agnostic
2013-03-22T08:49:00.000
0
15,566,117
Interesting question - I'm always keen to hear discussions of the type "how do I test the tests?!". And good points from @marksweb above too. It's always a challenge to check your tests are actually doing what you want them to do and testing what you intend, but good to get this right and do it properly. I always try to consider the rule-of-thumb that testing should make up 1/3 of development effort in any project... regardless of project time constraints, pressures and problems that inevitably crop up. If you intend to continue and grow your project have you considered refactoring like you say, but in a way that creates a proper test framework that allows test driven development (TDD) of any future additions of functionality or general expansion of the project?
0
432
false
0
1
How do i test/refactor my tests?
15,566,738
6
8
0
0
9
1
0
0
I have a test suite for my app. As the test suite grew organically, the tests have a lot of repeated code which can be refactored. However, I would like to ensure that the test suite doesn't change with the refactor. How can I test that my tests are invariant under the refactor? (I am using Python + unittest, but I guess the answer to this can be language agnostic.)
0
python,unit-testing,testing,language-agnostic
2013-03-22T08:49:00.000
0
15,566,117
Don't refactor the test suite. The purpose of refactoring is to make it easier to maintain the code, not to satisfy some abstract criterion of "code niceness". Test code doesn't need to be nice, it doesn't need to avoid repetition, but it does need to be thorough. Once you have a test that is valid (i.e. it really does test necessary conditions on the code under test), you should never remove it or change it, so test code doesn't need to be easy to maintain en masse. If you like, you can rewrite the existing tests to be nice, and run the new tests in addition to the old ones. This guarantees that the new combined test suite catches all the errors that the old one did (and maybe some more, as you expand the new code in future). There are two ways that a test can be deemed invalid -- you realise that it's wrong (i.e. it sometimes fails falsely for correct code under test), or else the interface under test has changed (to remove the API tested, or to permit behaviour that previously was a test failure). In that case you can remove a test from the suite. If you realise that a whole bunch of tests are wrong (because they contain duplicated code that is wrong), then you can remove them all and replace them with a refactored and corrected version. You don't remove tests just because you don't like the style of their source. To answer your specific question: to test that your new test code is equivalent to the old code, you would have to ensure (a) all the new tests pass on your currently-correct-as-far-as-you-known code base, which is easy, but also (b) the new tests detect all the errors that the old tests detect, which is usually not possible because you don't have on hand a suite of faulty implementations of the code under test.
0
432
false
0
1
How do i test/refactor my tests?
15,566,925
6
8
0
1
9
1
0.024995
0
I have a test suite for my app. As the test suite grew organically, the tests have a lot of repeated code which can be refactored. However, I would like to ensure that the test suite doesn't change with the refactor. How can I test that my tests are invariant under the refactor? (I am using Python + unittest, but I guess the answer to this can be language agnostic.)
0
python,unit-testing,testing,language-agnostic
2013-03-22T08:49:00.000
0
15,566,117
In theory you could write a test for the test, mocking the actual object under test. But I guess that is just way too much work and not worth it. So what you are left with are some strategies that will help, but not make this fail-safe. Work very carefully and slowly. Use the features of your IDE as much as possible in order to limit the chance of human error. Work in pairs: a partner looking over your shoulder might just spot the glitch that you missed. Copy the test, then refactor the copy. When done, introduce errors in the production code to ensure both tests find the problem in the same (or equivalent) ways. Only then remove the original test. The last step can be done by tools, although I don't know the Python flavors; the keyword to search for is 'mutation testing'. Having said all that, I'm personally satisfied with steps 1+2.
0
432
false
0
1
How do i test/refactor my tests?
15,567,104
6
8
0
0
9
1
0
0
I have a test suite for my app. As the test suite grew organically, the tests have a lot of repeated code which can be refactored. However, I would like to ensure that the test suite doesn't change with the refactor. How can I test that my tests are invariant under the refactor? (I am using Python + unittest, but I guess the answer to this can be language agnostic.)
0
python,unit-testing,testing,language-agnostic
2013-03-22T08:49:00.000
0
15,566,117
Test code can be the best low-level documentation of your API, since it does not go out of date as long as the tests pass and are correct. But messy test code doesn't serve that purpose very well, so refactoring is essential. Also, your tested code might change over time; so do the tests. If you want that to be smooth, code duplication must be minimized and readability is key. Tests should be easy to read, always test one thing at once, and make the following explicit: what are the preconditions? What is being executed? What is the expected outcome? If that is considered, it should be pretty safe to refactor the test code. One step at a time and, as @Don Ruby mentioned, let your production code be the test for the test. For many refactorings you can often safely rely on advanced IDE tooling, if you beware of side effects in the extracted code. Although I agree that refactoring without proper test coverage should be avoided, I think writing tests for your tests is almost absurd in usual contexts.
0
432
false
0
1
How do i test/refactor my tests?
15,587,332
6
8
0
3
9
1
0.07486
0
I have a test suite for my app. As the test suite grew organically, the tests have a lot of repeated code which can be refactored. However, I would like to ensure that the test suite doesn't change with the refactor. How can I test that my tests are invariant under the refactor? (I am using Python + unittest, but I guess the answer to this can be language agnostic.)
0
python,unit-testing,testing,language-agnostic
2013-03-22T08:49:00.000
0
15,566,117
Coverage.py is your friend. Move all the tests you want to refactor into "system tests" (or some such tag). Refactor the tests you want (you would be writing unit tests here, right?) and monitor the coverage: after running your new unit tests but before running the system tests; and after running both the new unit tests and the system tests. In the ideal case, the coverage would be the same or higher, and then you can trash your old system tests. FWIW, py.test provides a mechanism for easily tagging tests and running only specific tests, and is compatible with unittest2 tests.
0
432
false
0
1
How do i test/refactor my tests?
15,649,053
1
1
0
1
1
1
1.2
0
I was wondering if anyone knew a code fix to the pstorm python script where you could exclude directories from being indexed in a directory when you open it from the command line. I know this is not currently a feature in the IDE but maybe there is a work around someone knows of. Thanks
0
python,phpstorm
2013-03-22T12:34:00.000
1
15,570,452
Use Settings | File Types | Ignore Files and Folders to exclude directories by name or pattern.
0
174
true
0
1
Exclude Directories when using Pstorm in PhpStorm
16,013,655
1
1
0
14
12
0
1.2
0
Is there a way to get a good call hierarchy in PyDev? I want to be able to select a function and see in which files it is called and eventually by which other functions. I tried the Hierarchy View in Eclipse by pressing F4, but it does not output what I want.
0
python,eclipse,pydev
2013-03-22T14:04:00.000
1
15,572,295
PyDev has a find references with Ctrl+Shift+G (not sure that'd be what you're calling a call hierarchy).
0
4,318
true
0
1
Good Call Hierarchy in Eclipse/PyDev
15,580,217
1
4
0
3
4
0
1.2
0
Why is the modulo operator not working as intended in C and Java?
0
java,python,c,modulo
2013-03-22T18:13:00.000
0
15,577,185
Python's % operator performs a true modulo operation based on floored division: the result takes the sign of the divisor, so for a positive divisor it is always between 0 and the divisor, regardless of the sign of the dividend. C and Java instead compute the remainder of division truncated toward zero, so the result takes the sign of the dividend. That is why -1 % 26 is -1 in C and Java but 25 in Python.
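A quick demonstration (math.fmod follows the C/Java semantics):

# Demo: Python's % takes the sign of the divisor (floored division),
# while math.fmod takes the sign of the dividend, like C and Java.
import math

print(-1 % 26)                       # 25
print(math.fmod(-1, 26))             # -1.0
print(26 * (-1 // 26) + (-1 % 26))   # -1, reconstructing the dividend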
0
963
true
0
1
Why -1%26 = -1 in Java and C, and why it is 25 in Python?
15,577,257
2
3
0
6
0
0
1
1
I wonder, what is the advantage of using Selenium for automation if, at the end of the test, it emits no report on whether the test passed or failed?
0
python,selenium
2013-03-22T20:02:00.000
0
15,578,942
Selenium isn't actually a testing framework, it's a browser driver. You don't write tests in Selenium any more than you write GUI apps in OpenGL. You usually write tests in a unit testing framework like unittest, or something like nose or lettuce built on top of it. Your tests then use Selenium to interact with a browser, as they use a database API to access the DB or an HTTP library to communicate with web services.
0
590
false
0
1
Selenium Webdriver Testing - Python
15,579,077
2
3
0
0
0
0
0
1
I wonder, what is the advantage of using Selenium for automation if, at the end of the test, it emits no report on whether the test passed or failed?
0
python,selenium
2013-03-22T20:02:00.000
0
15,578,942
It is up to the discretion of the user what to do with Selenium WebDriver automation and how to report the test results. Selenium WebDriver gives you the power to control your web browser and to automate your web application tests. Just as in any other automation tool you have to program the conditions for checking the pass or fail criteria of your tests, in Selenium this also has to be programmed. It is totally up to the programmer how to report the results and which template to follow. You will have to write your own code to format and store the test results.
0
590
false
0
1
Selenium Webdriver Testing - Python
15,594,935
1
4
0
1
10
0
0.049958
0
So, I have decided to write my next project with python3, why? Due to the plan for Ubuntu to gradually drop all Python2 support within the next year and only support Python3. (Starting with Ubuntu 13.04) gevent and the memcached modules aren't officially ported to Python3. What are some alternatives, already officially ported to Python3, for gevent and pylibmc or python-memcached?
0
python,python-3.x,gevent,python-memcached
2013-03-25T06:29:00.000
1
15,608,933
For memcached, you probably already know the alternative: Redis with Python 3.
0
5,262
false
0
1
Python3: Looking for alternatives to gevent and pylibmc/python-memcached
20,068,405
1
1
0
0
1
1
0
0
I'm using Python scripts to execute simple but long measurements. I was wondering if (and how) it's possible to edit a running script. An example: let's assume I made an error in the last lines of a running script. These lines have not yet been executed. Now I'd like to fix it without restarting the script. What should I do? Edit: One idea I had was loading each line of the script into a list, then popping the first one, feeding it to an interpreter instance, waiting for it to complete and popping the next one. This way I could modify the list. I guess I can't be the first one thinking about it. Someone must have implemented something like this before, and I don't want to reinvent the wheel. If one of you knows about a project, please let me know.
0
python
2013-03-25T06:49:00.000
0
15,609,211
I am afraid there's no easy way to arbitrarily modify a running Python script. One approach is to test the script on a small amount of data first. This way you'll reduce the likelihood of discovering bugs when running on the actual, large, dataset. Another possibility is to make the script periodically save its state to disk, so that it can be restarted from where it left off, rather than from the beginning.
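A sketch of the checkpointing idea, assuming a hypothetical run_measurement() and an arbitrary checkpoint file name:

# Sketch: record finished steps on disk so a corrected script can
# resume where the old one left off instead of starting over.
import os
import pickle

CHECKPOINT = 'progress.pkl'

if os.path.exists(CHECKPOINT):
    with open(CHECKPOINT, 'rb') as f:
        done = pickle.load(f)
else:
    done = set()

for step in range(1000):
    if step in done:
        continue
    run_measurement(step)  # hypothetical long-running call
    done.add(step)
    with open(CHECKPOINT, 'wb') as f:
        pickle.dump(done, f)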
0
109
false
0
1
Modifying a running script
15,609,275
1
4
0
1
2
0
0.049958
0
This is more of a design question. I was planning on writing some web-services which implement CPU intensive algorithms. The problem that I am trying to solve is - higher level languages such as python, perl or java make it easy to write web services. While lower level languages such as C, C++ make it possible to fine tune the performance of your code. So I was looking at what I could do bridge two languages. Here's the options I came up with: Language specific bindings Use something like perl-xs or python's ctypes/loadlibrary or java's JNI. The up-side is that I can write extensions which can execute in the same process. There is small overhead of converting between the native language types to C and back. Implement a separate daemon Use something like thrift / avro and have a separate daemon that runs the C/C++ code. The upside is, it's loosely coupled from the higher level language. I can quickly replace the high level language. The downside being that the overhead of serializing and local unix domain sockets might be higher than executing the code in the same address space (offered by the previous option.) What do you guys think?
0
java,c++,python,c,thrift
2013-03-25T07:43:00.000
1
15,609,918
If your C/C++ code already exists, your best bet is to publish it as a service, with an API matching what functionality you already have. You can then write new services in the language of your choice, matching the API you need, and they can call the C/C++ services. If your C/C++ code does not exist yet, and you are set to create the majority of code in a higher level language such as Java or C#, consider implementing the performance critical parts initially in that language as well. Only after profiling shows a particular performance problem, and after you exhaust the most basic optimization techniques within the language, such as avoiding allocations inside the hottest loops, you should consider rewriting the bits that have been proven to consume the most cycles into another language using glue such as JNI. In other words, do not optimize until you have numbers in hand. There is also no fundamental reason why you couldn't squeeze out (almost) the same performance level from Java as you can from C++, with enough trying. You have a real chance to end up with a simpler architecture than you expect.
0
1,376
false
0
1
Bridging between different programming languages
15,610,243
4
4
0
6
8
0
1.2
0
Launching Python has encountered a problem. Unable to get project for the run. (It wouldn't let me put the word "problem" in the title.) The title is the exact message I get when I try to run/debug a file in Aptana 3. I have always been able to run Python in Eclipse without problems. Does anyone know what causes this error? For testing purposes I just created a new PyDev project with only 1 file in it.
0
python,python-3.x,aptana
2013-03-25T13:25:00.000
0
15,616,093
I had the same problem with Aptana and just solved it. In my case I had configured another interpreter (IronPython) for running another script. When I got back to a previous script I got the same error message as you, "Unable to get project for the run", because it was trying to run it with IronPython instead of Python. I would therefore recommend the following: 1) Check your interpreter configuration: Window -> Preferences -> PyDev -> Interpreter - Python. If you have no interpreter there, try auto-config. If that doesn't work you will have to browse for it yourself by clicking New (it should be somewhere like C:\Python27\python.exe). 2) If you have an interpreter, it means that Aptana is trying to run your script with another interpreter. In that case right-click on your script file in Aptana -> Run As -> Python Run. That worked for me. Good luck!
0
10,294
true
0
1
Launching Python has encounterd a. Unable to get project for the run
26,059,272
4
4
0
0
8
0
0
0
Launching Python has encountered a problem. Unable to get project for the run. (It wouldn't let me put the word "problem" in the title.) The title is the exact message I get when I try to run/debug a file in Aptana 3. I have always been able to run Python in Eclipse without problems. Does anyone know what causes this error? For testing purposes I just created a new PyDev project with only 1 file in it.
0
python,python-3.x,aptana
2013-03-25T13:25:00.000
0
15,616,093
Go to Run -> Run Configurations -> Python Run, delete "New configuration", and then it should work.
0
10,294
false
0
1
Launching Python has encounterd a. Unable to get project for the run
48,852,418
4
4
0
0
8
0
0
0
Launching Python has encountered a problem. Unable to get project for the run. (It wouldn't let me put the word "problem" in the title.) The title is the exact message I get when I try to run/debug a file in Aptana 3. I have always been able to run Python in Eclipse without problems. Does anyone know what causes this error? For testing purposes I just created a new PyDev project with only 1 file in it.
0
python,python-3.x,aptana
2013-03-25T13:25:00.000
0
15,616,093
It occurs when you create a new configuration to run a program. Go to Run > Run Configurations > Python Run, select "New configuration", press the delete icon, and run the program again. This worked for me.
0
10,294
false
0
1
Launching Python has encounterd a. Unable to get project for the run
52,572,793
4
4
0
0
8
0
0
0
Launching Python has encountered a problem. Unable to get project for the run. (It wouldn't let me put the word "problem" in the title.) The title is the exact message I get when I try to run/debug a file in Aptana 3. I have always been able to run Python in Eclipse without problems. Does anyone know what causes this error? For testing purposes I just created a new PyDev project with only 1 file in it.
0
python,python-3.x,aptana
2013-03-25T13:25:00.000
0
15,616,093
I had a similar issue; the following solved my problem. Go to Run > Run Configurations > Python Run and delete all the configurations below Python Run. It may not be a great option if you have any custom configuration settings, though.
0
10,294
false
0
1
Launching Python has encounterd a. Unable to get project for the run
55,316,981
1
2
0
0
0
0
0
0
I need to analyze a set of GPS coordinates in Python. I need to find out what the most frequent location is. Given precision issues with the GPS data, the precision of the locations is not very high. Difficult to explain (and to search for info on Google), therefore an example: I drive from home to work every day for 2 months. I start my GPS logger for each trip and stop it at the end of the trip. Occasionally, I go somewhere else. If I run the script to analyse the coordinates where drives started and stopped, with a location radius precision of let's say 20 m, I'll find that the most frequent places are my home and my work (each with a radius of 20 m). It does not matter where I parked within this radius. Is there any library in Python that can perform such operations? What do you recommend? Thanks
0
python,geolocation,gps
2013-03-25T20:13:00.000
0
15,623,866
For counting the most frequent locations, a simple approach is to use only the first 3 digits after the latitude/longitude decimal point, or better, round to 3 decimal places. At the equator, the precision per number of decimal digits is roughly: 4 digits: 11 m; 3 digits: 111 m; 2 digits: 1.1 km; 1 digit: 11.1 km; 0 digits: 111.111 km (derived from the distance between two meridians: 40,000,000 m / 360). Then you could use a hashtable as the counter: multiply by e.g. 1000 to get rid of the 3 decimal places, and store the rounded pair (in Java you might use java.awt.Point) as the hashtable key. There are better solutions, but this gives a first idea.
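The same idea as a minimal Python sketch (the coordinates below are made-up examples):

# Sketch: bucket coordinates to roughly 100 m by rounding to 3 decimal
# places, then count the buckets.
from collections import Counter

points = [(52.52001, 13.40495), (52.52003, 13.40489), (48.8566, 2.3522)]
buckets = Counter((round(lat, 3), round(lon, 3)) for lat, lon in points)
print(buckets.most_common(1))  # [((52.52, 13.405), 2)]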
0
1,235
false
0
1
Python: Find out most frequent locations on a set of gps coordinates
15,624,093
2
2
0
3
2
0
1.2
0
Right now I'm testing the waters with Apache Thrift, and I'm currently using a TThreadedServer written in Python, but when I run the server, it is not daemonized. Is there any way to make it run as a daemon, or is there another way to run thrift in a production environment?
0
python,thrift
2013-03-26T01:18:00.000
1
15,627,698
Daemonizing processes has nothing to do with thrift. Thrift only provides the communication layer for different platforms, and you can run the server in one of the several programming languages thrift supports (that is, the great majority of what you can think of). No matter whether you write the server in Java, C++ (I've tried those so far) or Python, none of them will create a daemon; this feature is not supported (e.g. PHP natively supports neither multithreading nor daemonizing). I've just seen supervisord; I didn't play with it much, but it seems to be a good choice to manage processes like thrift servers.
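If you go the supervisord route, a minimal program entry might look like this (the paths and program name are placeholders, not taken from the question):

; Sketch of a supervisord entry that keeps a Python thrift server running
[program:thrift-server]
command=python /opt/myapp/thrift_server.py
autostart=true
autorestart=true
stderr_logfile=/var/log/thrift-server.err.log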
0
1,136
true
0
1
Running thrift server as daemon
15,634,347
2
2
0
1
2
0
0.099668
0
Right now I'm testing the waters with Apache Thrift, and I'm currently using a TThreadedServer written in Python, but when I run the server, it is not daemonized. Is there any way to make it run as a daemon, or is there another way to run thrift in a production environment?
0
python,thrift
2013-03-26T01:18:00.000
1
15,627,698
I think you are looking for this: nohup hbase thrift start &. This is the only way I found to keep thrift running after I disconnect from my Linux session.
0
1,136
false
0
1
Running thrift server as daemon
15,873,194
1
2
0
0
0
0
0
0
I wanted to know if there is a way to find out the status of the ssh server in the system using Python. I just want to know if the server is active or not (just yes/no). It would help even if it is just a linux command so that I can use python's popen from subprocess module and run that command. Thanks PS: I'm using openssh-server on linux (ubuntu 12.04)
0
python,python-2.7,ssh,openssh
2013-03-26T13:56:00.000
1
15,638,882
Run service sshd status (e.g. via Popen()) and read what it tells you.
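A sketch of wrapping that in Python (note the service may be named 'ssh' rather than 'sshd' on Ubuntu):

# Sketch: treat a zero exit status from the init script as "running".
import os
import subprocess

with open(os.devnull, 'wb') as devnull:
    rc = subprocess.call(['service', 'sshd', 'status'],
                         stdout=devnull, stderr=devnull)
print('SSH server is active' if rc == 0 else 'SSH server is not active')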
0
1,422
false
0
1
SSH Server status in Python
15,639,004
1
1
0
4
1
1
1.2
0
My idea is to track a specific file on a file-system over time between two points in time, T1 and T2. The emphasis here lies on looking at a file as a unique entity on a file-system. One that can change in data and attributes but still maintain its unique identity. The ultimate goal is to determine whether or not the data of a file has (unwillingly) changed between T1 and T2 by capturing and recording the data-hash and creation/modification attributes of the file at T1 and comparing them with the equivalents at T2. If all attributes are unchanged but the hash doesn't validate we can say that there is a problem. In all other cases we might be willing to say that a changed hash is the result of a modification and an unchanged hash and unchanged modification-attribute the result of no change on the file(data) at all. Now, there are several ways to refer to a file and corresponding drawbacks: The path to the file: However, if the file is moved to a different location this method fails. A data-hash of the file-data: Would allow a file, or rather (a) pointer to the file-data on disk, to be found, even if the pointer has been moved to a different directory, but the data cannot change or this method fails as well. My idea is to retrieve a fileId for that specific file at T1 to track the file at T2, even if it has changed its location so it doesn't need to be looked at as a new file. I am aware of two methods pywin offers. win32file.GetFileInformationByHandle() and win32file.GetFileInformationByHandleEx(), but they obviously are restricted to specific file-systems, break cross-platform-compatibility and sway away from a universal approach to track the file. My question is simple: Are there any other ideas/theories to track a file, ideally accross platforms/FSs? Any brainstormed food for thought is welcome!
0
python,file,file-io,filesystems
2013-03-27T03:58:00.000
0
15,651,666
It's not really feasible in general, because the idea of file identity is an illusion (similar to the illusion of physical identity, but this isn't a philosophy forum). You cannot track identity using file contents, because contents change. You cannot track by any other properties attached to the file, because many file editors will save changes by deleting the old file and creating a new one. Version control systems handle this in three ways: (CVS) don't track move operations; (Subversion) track move operations manually; (Git) use a heuristic to label operations as "move" operations based on changes to the contents of a file (e.g., if a new file differs from an existing file by less than 50%, then it's labeled as a copy). Things like inode numbers are not stable and not to be trusted. Here you can see that editing a file with Vim will change the inode number, which we can examine with stat -f %i:
$ touch file.txt
$ stat -f %i file.txt
4828200
$ vim file.txt
...make changes to file.txt...
$ stat -f %i file.txt
4828218
0
303
true
0
1
Tracking a file over time
15,651,767
1
1
0
16
8
1
1
0
Is there any performance difference between from package import * and import package?
0
python,performance,python-import
2013-03-27T09:14:00.000
0
15,655,224
No, the difference is not a question of performance. In both cases, the entire module must be parsed, and any module-level code will be executed. The only difference is in namespaces: in the first, all the names in the imported module will become names in the current module; in the second, only the package name is defined in the current module. That said, there's very rarely a good reason to use from foo import *. Either import the module, or import specific names from it.
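A small demonstration of the namespace difference (nothing here is about performance):

# Demo: both forms execute the whole module; they differ only in which
# names end up in the importing namespace.
import math            # defines the single name 'math'
print(math.sqrt(4.0))

from math import *     # dumps sqrt, pi, ... into the current namespace
print(sqrt(4.0))       # works, but the origin of 'sqrt' is now unclear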
0
181
false
0
1
Performance between "from package import *" and "import package"
15,655,265
1
1
0
1
0
1
1.2
0
The CPython headers define a macro to declare a method that is run to initialize your module on import: PyMODINIT_FUNC My initializer creates references to other python objects, what is the best way to ensure that these objects are properly cleaned up / dereferenced when my module is unloaded?
0
python,cpython,python-c-extension
2013-03-28T17:59:00.000
0
15,688,954
You can't unload C extension modules at all. There is just no way to do it, and I know for sure that most of the standard extension modules would leak like crazy if there was.
0
353
true
0
1
What's the proper way to clean up static python object references in a CPython extension module?
15,692,895
1
2
0
1
1
0
0.099668
0
I have a ton of scripts I need to execute, each on a separate machine. I'm trying to use Jenkins to do this. I have a Python script that can execute a single test and handles time limits and collection of test results, and a handful of Jenkins jobs that run this Python script with different args. When I run this script from the command line, it works fine. But when I run the script via Jenkins (with the exact same arguments) the test times out. The script handles killing the test, so control is returned all the way back to Jenkins and everything is cleaned up. How can I debug this? The Python script is using subprocess.popen to launch the test. As a side note, I'm open to suggestions for how to do this better, with or without Jenkins and my Python script. I just need to run a bunch of scripts on different machines and collect their output.
0
python,testing,jenkins,distributed
2013-03-28T22:53:00.000
1
15,693,565
To debug this: Add set -x towards the top of your shell script. Set a PS4 which prints the line number of each line when it's invoked: PS4='+ $BASH_SOURCE:$FUNCNAME:$LINENO:' Look in particular for any places where your scripts assume environment variables which aren't set when Hudson is running. If your Python scripts redirect stderr (where logs from set -x are directed) and don't pass it through to Hudson (and so don't log it), you can redirect it to a file from within the script: exec 2>>logfile There are a number of tools other than Jenkins for kicking off jobs across a number of machines, by the way; MCollective (which works well if you already use Puppet), knife ssh (which you'll already have if you use Chef -- which, in my not-so-humble opinion, you should!), Rundeck (which has a snazzy web UI, but shouldn't be used by anyone until this security bug is fixed), Fabric (which is a very good choice if you don't have mcollective or knife already), and many more.
0
1,510
false
0
1
Shell scripts have different behavior when launched by Jenkins
15,693,722
2
2
0
1
0
0
1.2
0
I have a CGI script that I wrote in Python to use as the home page of the website I am creating. Everything works properly except that when you view the page, instead of seeing the page that it outputs, you see the source code of the page. Why is this? I don't mean that it shows me the source code of the .py file; it shows me all the printed information, as if I were looking at a .htm file in Notepad.
0
python,html,apache,cgi
2013-03-31T06:01:00.000
0
15,726,843
The default content type is plain text, and if you forget to send the appropriate Content-Type header from your CGI script, you will end up with what you are seeing: the browser renders your HTML markup as plain text.
0
426
true
1
1
Python CGI - Script outputs source of generated page
15,726,928
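A minimal complete CGI script illustrating the fix; note the blank line that terminates the header block (Python 2 style, matching the era of the question):

```python
#!/usr/bin/env python
# The Content-Type header plus a blank line must be printed before any
# page content; without them the server serves the output as plain text.
print("Content-type: text/html")
print("")   # blank line ends the CGI header block
print("<html><body><h1>It works</h1></body></html>")
```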
2
2
0
2
0
0
0.197375
0
I have a CGI script that I wrote in Python to use as the home page of the website I am creating. Everything works properly except that when you view the page, instead of seeing the page it outputs, you see the source code of the page. Why is this? I don't mean that it shows me the source code of the .py file; it shows me all the printed information as if I were looking at a .htm file in Notepad.
0
python,html,apache,cgi
2013-03-31T06:01:00.000
0
15,726,843
Add the following before you print anything: print "Content-type: text/html", followed by a blank line. If that doesn't help, your script is probably not being executed at all. Is your Python script executable? Check whether the script is under the cgi-bin directory.
0
426
false
1
1
Python CGI - Script outputs source of generated page
15,726,936
1
1
0
0
1
0
0
0
I am making a Pyramid web app running on the Apache web server using mod_wsgi. Is there any way I can make the user session never time out? (The idea is that once users log in, the system will never kick them out unless they log out themselves.) I can't find any information regarding this in the Apache, mod_wsgi, or Pyramid documentation. Thanks!
0
python,apache,session,mod-wsgi,pyramid
2013-04-01T05:17:00.000
0
15,737,993
This entirely depends on the authentication policy that you use. The default AuthTktAuthenticationPolicy sets a cookie in the browser which (by default) does not expire. Again, though, this depends on how you are tracking authenticated users.
0
284
false
1
1
Making Pyramid application without session timeout
15,778,904
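A minimal sketch, assuming Pyramid's stock AuthTktAuthenticationPolicy is in use; timeout=None keeps the auth ticket valid indefinitely, while max_age controls how long the browser retains the cookie (the secret is a placeholder):

```python
from pyramid.authentication import AuthTktAuthenticationPolicy
from pyramid.config import Configurator

policy = AuthTktAuthenticationPolicy(
    'replace-with-a-private-secret',  # placeholder; keep the real one secret
    timeout=None,    # the ticket itself never expires server-side
    max_age=None,    # None = cookie lasts until the browser closes;
                     # set a large value (in seconds) to survive restarts
)

config = Configurator()
config.set_authentication_policy(policy)
```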
1
2
0
0
2
0
0
0
My setup looks like this: a 64-bit box running Windows 7 Professional is connected to a Beaglebone running Angstrom Linux. I'm currently controlling the Beaglebone via a PuTTY command line on the Windows box. What I'd like to do is run an OpenCV script to pull some vision information, process it on the Windows box, and send some lightweight data (e.g. a True or False, a triplet, etc.) over the (or another) USB connection to the Beaglebone. My OpenCV program runs using Python bindings, so any piping I can do with Python would be preferable. I've played around with pyserial to receive data on a Windows box via a COM port, so it seems like I could use that on the Windows side... but I'm at a total loss on the embedded Linux front.
0
python,linux,windows,usb,pyserial
2013-04-01T13:34:00.000
1
15,744,495
Normally on the Linux front, if the USB dongle is of the right type, you will see something like /dev/usbserial or a similar device. Check dmesg after plugging in the cable. (On Linux you can run find /dev | grep usb to list all USB-related devices.) Just a side note: I've seen that the Beaglebone has an Ethernet port, so why not just use a network socket? That's all easier than reinventing a protocol over USB.
0
956
false
0
1
How to send data from Windows to embedded linux over USB
15,745,200
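A minimal pyserial sketch for both ends (Python 2 string semantics, matching the question's era); the port names are assumptions, so check Device Manager on Windows and dmesg on the Beaglebone for the real ones:

```python
import serial

# Windows sender -- "COM3" is an assumption; check Device Manager.
tx = serial.Serial("COM3", 115200, timeout=1)
tx.write("True\n")          # lightweight payload, e.g. a boolean flag
tx.close()

# Beaglebone receiver -- "/dev/ttyUSB0" is an assumption; check dmesg.
rx = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
line = rx.readline().strip()   # -> "True"
rx.close()
```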
1
1
0
8
7
1
1
0
Wondering what the real difference is when writing files from Python. From what I can see, if I use w or wb I get the same result with text. I thought that saving as a binary file would show only binary values in a hex editor, but it also shows text and then the ASCII version of that text. Can both be used interchangeably when saving text? (Windows user)
0
python,text,binary,ascii
2013-04-01T19:50:00.000
0
15,750,660
The difference only appears on Windows: in the latter case ('wb'), .write('\n') writes one byte with a value of 10; in the former case ('w'), it writes two bytes, with the values 13 and 10. You can prove this to yourself by looking at the resulting file sizes and examining the files in a hex editor. On POSIX-related operating systems (UNIX, SunOS, MacOS, Linux, etc.), there is no difference between 'w' and 'wb'.
0
19,466
false
0
1
Python file IO 'w' vs 'wb'
15,750,957
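A small demonstration of the size difference (Python 2, matching the question's era; in Python 3 a file opened with 'wb' only accepts bytes such as b'\n'):

```python
import os

with open("text_mode.txt", "w") as f:   # text mode: "\n" becomes "\r\n" on Windows
    f.write("\n")

with open("bin_mode.txt", "wb") as f:   # binary mode: "\n" stays a single byte
    f.write("\n")

# Prints 2 and 1 on Windows; on POSIX systems both print 1.
print(os.path.getsize("text_mode.txt"))
print(os.path.getsize("bin_mode.txt"))
```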
1
1
0
1
0
0
0.197375
0
I am trying to use z3 in PyDev. I added the path of z3py and libz3.dll under Window/Preferences/PyDev/Jython interpreter, but I got the following error: Traceback (most recent call last): File "C:\Users\linda\workspace\LearningPyDev\main.py", line 11, in import z3 File "C:\Users\linda\z3\python\z3.py", line 45, in from z3printer import * File "C:\Users\linda\z3\python\z3printer.py", line 8, in import sys, io, z3 ImportError: No module named io. What is the io module anyway? Is it possible to run z3 in PyDev?
0
python,z3
2013-04-02T19:40:00.000
1
15,772,909
io is a core Python module. It was added in 2.6 and has been present in every subsequent version. Are you on a very old version of Python? If you're running Python 2.5 or earlier (you can check with python --version on any command line), you'll need to update to a newer version.
0
1,273
false
0
1
error in import z3
15,773,638
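A quick check that can be run inside PyDev to confirm which interpreter is actually configured and whether io is present (worth doing here, since the question mentions a Jython interpreter, and old Jython releases track Python 2.5):

```python
import sys
print(sys.version)   # shows the interpreter PyDev is really using

try:
    import io        # present in CPython 2.6 and later
    print("io module is available")
except ImportError:
    print("io is missing: the interpreter is 2.5 or older (or an old Jython)")
```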
1
2
0
2
3
0
0.197375
1
I have an email interface client, and I am using IMAP for my requests. I want to show, in real time, basic email data for a list view, as in, for example, the Gmail list view. For that, I need to make an IMAP request to obtain the subject of all emails, the date of all emails, etc. This works so far. The problem is that I also want to show the first characters of the body text. If I use the BODYSTRUCTURE call to obtain the index of the text/HTML part, it takes too long (for emails with thousands of characters it can take well over a second per email, while using only the subject/date/etc. calls takes about 0.02 seconds at most). I tried using BODY[INDEX]<0.XYZ>, where XYZ is the number of first bytes we want to obtain, but to my dismay it takes as long as the plain BODY[INDEX] call, sometimes even longer. Is there another way to obtain the first text characters quickly? If I want to list 300 emails in my interface, I cannot afford to spend a second per email just to obtain the first text characters. I'm using Python with imaplib for this, though that's probably not relevant.
0
python,imap,imaplib
2013-04-03T15:54:00.000
0
15,792,128
If you really want to fetch the beginning of the first textual part of a message, you will have to parse the BODYSTRUCTURE. After you obtain the part ID of the desired textual part, use the BODY[number]<0.size> syntax. The suggestion given in the other answer will fail on multipart messages (such as a message with both a text/plain and a text/html part, which is the most common format today).
0
1,047
false
0
1
Obtain partial IMAP text part
15,875,488
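A minimal imaplib sketch of the partial-fetch syntax the answer describes; the host and credentials are placeholders, and treating part 1 as the textual part is an assumption that must first be verified against BODYSTRUCTURE for each message:

```python
import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")   # placeholder host
conn.login("user", "password")                 # placeholder credentials
conn.select("INBOX", readonly=True)

# Fetch only the first 200 bytes of part 1. For multipart messages the
# correct part number comes from parsing BODYSTRUCTURE, not from guessing.
typ, data = conn.fetch("1", "(BODY.PEEK[1]<0.200>)")
print(data)

conn.logout()
```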
1
2
0
88
99
1
1.2
0
I understand that ".pyc" files are compiled versions of the plain-text ".py" files, created at runtime to make programs run faster. However, I have observed a few things. Upon modification of ".py" files, program behavior changes; this indicates that the ".py" files are compiled, or at least go through some sort of hashing process or timestamp comparison, in order to tell whether or not they should be re-compiled. Upon deleting all ".pyc" files (rm *.pyc), program behavior will sometimes change, which would indicate that they are not being re-compiled on every update of the ".py" files. Questions: How do they decide when to be compiled? Is there a way to ensure stricter checking during development?
0
python,python-internals,pyc
2013-04-05T17:05:00.000
0
15,839,555
The .pyc files are created (and possibly overwritten) only when that Python file is imported by some other script. On import, Python checks whether the .pyc file's embedded timestamp is at least as new as the corresponding .py file. If it is, Python loads the .pyc; if it is older, or if the .pyc does not yet exist, Python compiles the .py file into a .pyc and loads that. What do you mean by "stricter checking"?
0
54,648
true
0
1
When are .pyc files refreshed?
15,839,646
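For the "stricter checking during development" part of the question, one option is to force recompilation regardless of timestamps with the standard compileall module (equivalently, python -m compileall -f . from the shell):

```python
import compileall

# Recompile every .py beneath the current directory, ignoring the usual
# timestamp comparison that decides whether an existing .pyc is fresh.
compileall.compile_dir(".", force=True)
```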
2
2
0
0
0
0
0
1
I have searched a lot for how to build a web service like Google Talk using Google App Engine and Python. The first step is to check the online status of a user on Gmail. I found a lot of Python code for this using an XMPP library, but it works only under plain Python, not on Google App Engine. There is also the suggestion of using the XMPP Python API, but to send a message we have to provide a JID like [email protected]; we cannot send a message from one email ID to another email ID directly. Now I want to perform OAuth authentication in Python for Google Talk at the domain level. Can anyone tell me how to do this?
0
google-app-engine,python-2.7,google-talk
2013-04-09T09:51:00.000
0
15,898,775
I think you are confused. Python runs ON App Engine. Also, there's a working Java XMPP example provided.
0
323
false
0
1
Gtalk Service On Google App Engine Using Python
15,903,171
2
2
0
0
0
0
0
1
I have searched a lot for how to build a web service like Google Talk using Google App Engine and Python. The first step is to check the online status of a user on Gmail. I found a lot of Python code for this using an XMPP library, but it works only under plain Python, not on Google App Engine. There is also the suggestion of using the XMPP Python API, but to send a message we have to provide a JID like [email protected]; we cannot send a message from one email ID to another email ID directly. Now I want to perform OAuth authentication in Python for Google Talk at the domain level. Can anyone tell me how to do this?
0
google-app-engine,python-2.7,google-talk
2013-04-09T09:51:00.000
0
15,898,775
You can only send messages from your app. There are two options: [email protected] or anything@your_app_id.appspotchat.com. If you want to behave like an arbitrary XMPP client, you'll have to use a third-party XMPP library running over HTTP and handle the authentication with the user's XMPP server.
0
323
false
0
1
Gtalk Service On Google App Engine Using Python
15,904,726
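A minimal sketch using the App Engine XMPP API of that era; the recipient JID is a placeholder, and as the answer notes, messages go out from the app's own JID rather than from an arbitrary Gmail address:

```python
from google.appengine.api import xmpp

user_jid = "[email protected]"   # placeholder recipient

# get_presence() covers the "check the online status" step from the
# question; it only works for users who have accepted a chat invite
# sent from the app's JID.
if xmpp.get_presence(user_jid):
    status = xmpp.send_message(user_jid, "Hello from the app")
    if status == xmpp.NO_ERROR:
        pass  # delivered, from the app's own JID, not a Gmail address
```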