Dataset schema (column, dtype, observed min/max; for string columns the min/max are lengths):

| Column | dtype | min | max |
|---|---|---|---|
| Available Count | int64 | 1 | 31 |
| AnswerCount | int64 | 1 | 35 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Users Score | int64 | -17 | 588 |
| Q_Score | int64 | 0 | 6.79k |
| Python Basics and Environment | int64 | 0 | 1 |
| Score | float64 | -1 | 1.2 |
| Networking and APIs | int64 | 0 | 1 |
| Question | string (length) | 15 | 7.24k |
| Database and SQL | int64 | 0 | 1 |
| Tags | string (length) | 6 | 76 |
| CreationDate | string (length) | 23 | 23 |
| System Administration and DevOps | int64 | 0 | 1 |
| Q_Id | int64 | 469 | 38.2M |
| Answer | string (length) | 15 | 7k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| ViewCount | int64 | 13 | 1.88M |
| is_accepted | bool | 2 classes | |
| Web Development | int64 | 0 | 1 |
| Other | int64 | 1 | 1 |
| Title | string (length) | 15 | 142 |
| A_Id | int64 | 518 | 72.2M |

Sample rows follow, pipe-separated in the column order above (long text fields may span several lines):
1 | 4 | 0 | 3 | 18 | 0 | 0.148885 | 0 | I am writing a scientific program in Python and C with some complex physical simulation algorithms. After implementing an algorithm, I found that there are a lot of possible optimizations to improve performance. Common ones are precalculating values, hoisting calculations out of loops, and replacing simple matrix algorithms with more complex ones. But a problem arises: the unoptimized algorithm is much slower, yet its logic and connection to the theory look much clearer and more readable. The optimized algorithm is also harder to extend and modify.
So, the question is: what techniques should I use to keep readability while improving performance? Right now I am trying to keep both fast and clear branches and develop them in parallel, but maybe there are better methods? | 0 | python,performance,algorithm,optimization,code-readability | 2011-09-04T17:25:00.000 | 0 | 7,300,903 | Yours is a very good question, one that arises in almost every piece of code, however simple or complex, written by any programmer who wants to call himself a pro.
I try to remember and keep in mind that a reader newly come to my code has pretty much the same crude view of the problem and the same straightforward (maybe brute force) approach that I originally had. Then, as I get a deeper understanding of the problem and paths to the solution become clearer, I try to write comments that reflect that better understanding. I sometimes succeed and those comments help readers and, especially, they help me when I come back to the code six weeks later. My style is to write plenty of comments anyway and, when I don't (because: a sudden insight gets me excited; I want to see it run; my brain is fried), I almost always greatly regret it later.
It would be great if I could maintain two parallel code streams: the naïve way and the more sophisticated optimized way. But I have never succeeded in that.
To me, the bottom line is that if I can write clear, complete, succinct, accurate and up-to-date comments, that's about the best I can do.
Just one more thing that you know already: optimization usually doesn't mean shoehorning a ton of code onto one source line, perhaps by calling a function whose argument is another function whose argument is another function whose argument is yet another function. I know that some do this to avoid storing a function's value temporarily. But it does very little (usually nothing) to speed up the code and it's a bitch to follow. No news to you, I know. | 1 | 434 | false | 0 | 1 | Preserve code readability while optimising | 7,301,095 |
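A minimal sketch of the "two parallel branches" idea the thread discusses, assuming a toy pairwise-energy computation (the function names and formula are illustrative, not from the original posts): keep the naïve, theory-shaped version as executable documentation and regression-test the optimized version against it.

```python
def pairwise_energy_reference(points):
    """Naive O(n^2) version: mirrors the textbook formula term by term."""
    total = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            total += 1.0 / (dx * dx + dy * dy) ** 0.5
    return total

def pairwise_energy_fast(points):
    """Optimized variant: hoists indexing out of the inner loop."""
    total = 0.0
    n = len(points)
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            dx = xi - points[j][0]
            dy = yi - points[j][1]
            total += (dx * dx + dy * dy) ** -0.5
    return total

# The readable version defines what "correct" means for the fast one.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
assert abs(pairwise_energy_reference(pts) - pairwise_energy_fast(pts)) < 1e-12
```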
1 | 1 | 0 | 2 | 4 | 0 | 1.2 | 0 | Let's say I have a Django app. Users can sign up, get an activation mail, activate their accounts and log in. After logging in, users can create, update and delete objects through a custom Form which uses the Manager to handle the Model.
What should I be testing here — should I use the request framework to make requests and test the whole chain via the Views and Forms or should I be writing unit tests to test the Manager and the Model?
When testing the whole chain, I get to see that the URLs are configured properly, the Views work as expected, the Form cleans the data properly, and it would also test the Models and Managers. It seems that the Django test framework is more geared toward unit testing than this kind of test. (Is this something that should be tested with Twill and Selenium?)
When writing unit tests, I would get to test the Manager and the Models, but the URLs and the Forms don't really come into play, do they?!
A really basic question but I'd like to get some of the fundamentals correct.
Thank you everyone. | 0 | python,django,unit-testing,testing,django-testing | 2011-09-04T19:41:00.000 | 0 | 7,301,681 | Yes, Django unit tests, using the Client feature, are capable of testing whether or not your routes and forms are correct.
If you want full-blown behavior-driven testing from the outside, you can use a BDD framework like Zombie.
As for which tests you need, Django author Jacob Kaplan-Moss answered the question succinctly: "All of them."
My general testing philosophy is to work until something stupid happens, then write a test to make sure that stupid thing never happens again. | 0 | 288 | true | 1 | 1 | What kind of tests should one write in Django | 7,301,849 |
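A hedged sketch of what the answer describes, using Django's built-in test Client; the URL, form field and expected redirect are hypothetical placeholders for the app's own routes:

```python
from django.test import TestCase
from django.contrib.auth.models import User

class SignupFlowTest(TestCase):
    def test_login_and_create_object(self):
        # Unit-level check on the model/manager layer.
        User.objects.create_user('alice', 'alice@example.com', 'secret')
        self.assertEqual(User.objects.count(), 1)

        # Whole-chain check: URL routing, view, form and model in one request.
        self.assertTrue(self.client.login(username='alice', password='secret'))
        response = self.client.post('/objects/create/', {'name': 'demo'})
        self.assertEqual(response.status_code, 302)  # redirect on success
```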
1 | 1 | 1 | 1 | 4 | 0 | 0.197375 | 0 | I am using MonkeyRunner to automate some UI test cases.
I need to collect logs from the device using a tool like QXDM.
I see that the win32com Python module can be used to launch QXDM and collect logs.
But when I use from win32com.client import Dispatch in a Python script that is passed as an argument to MonkeyRunner, MonkeyRunner throws:
"Import Error: No Module named win32com".
I have installed win32com on my machine, and when I use win32com in a Python script run with "python test.py" it works fine.
Do we need to install the win32com Python module on the Android device as well? Or what needs to be done to make this work? | 0 | python,monkeyrunner | 2011-09-05T18:40:00.000 | 0 | 7,311,676 | Monkeyrunner uses Jython as its Python interpreter (jython.jar under the tools\lib folder).
It ships version 2.5.0; the latest Jython release is 2.5.2.
Neither version supports pywin32 or any other native extension modules; only the standard Python 2.5 modules are available. | 0 | 2,667 | false | 0 | 1 | MonkeyRunner::How to install python modules? | 8,291,126
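One workaround that is sometimes suggested (untested here; the paths are hypothetical): keep the pywin32-dependent logic in a separate script and run it under CPython from inside the monkeyrunner/Jython script via subprocess, which Jython 2.5 does provide.

```python
# Runs inside monkeyrunner (Jython); delegates the win32com work to CPython.
import subprocess

ret = subprocess.call([r'C:\Python25\python.exe',
                       r'C:\USBUIRT\qxdm_logger.py'])
print 'QXDM logger exited with', ret
```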
1 | 1 | 0 | 0 | 5 | 0 | 0 | 0 | I don't know why, but I can't find it anywhere. All I need is the command to disable JavaScript in Python's mechanize. | 0 | javascript,python,html,mechanize | 2011-09-06T23:07:00.000 | 0 | 7,327,182 | Mechanize doesn't deal with JavaScript; it only takes care of HTML. There is nothing to disable, because Mechanize never runs JavaScript in the first place. You will need to find some other solution. | 0 | 663 | false | 1 | 1 | How do you disable javascript in python's mechanize? | 9,391,655
1 | 2 | 1 | 0 | 0 | 1 | 0 | 0 | I am writing a Python interface to a C++ library and am wondering about the correct design of the library.
I have found out (the hard way) that all methods passed to Python must be declared static. If I understand correctly, this means that all functions basically must be defined in the same .cpp file. My interface has many functions, so this gets ugly very quickly.
What is the standard way to deal with this problem? Possibilities I could think of:
don't worry about it and use one looong .cpp file
compile into more than one library (.so file)
write a .cpp for each group of functions and #include that .cpp into the body of the main defining cpp file (the one with the PyMethodDef)
both of them seem very ugly | 0 | c++,python,interface | 2011-09-07T07:30:00.000 | 0 | 7,330,279 | Why do you say that all functions called by Python have to be static? It's usual for that to be the case, in order to avoid name conflicts (since any namespace, etc. will be ignored because of the extern "C"), but whether the function is static or not is of no consequence.
When interfacing a C++ library, in my experience, it's generally not a big problem to make the functions static and to put all of them in a single translation unit, because they will just be small wrappers which call the actual C++ code, and they will normally be generated automatically from some sort of descriptor file; you surely aren't going to write all of the necessary boilerplate by hand. | 0 | 61 | false | 0 | 1 | What is the pythonic structure of the code of a python-c++ interface with many functions? | 7,330,536
1 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | Many people say developing in Python, Ruby, PHP, etc. is much faster than in Java.
The question is: why? Is it the coding, the IDEs, the available libraries, etc.? Or is the speed only in making the first prototype?
I'm interested in answers from people who worked long time on Java and long time on other languages.
Note: I developed for .NET before Java, and yes, it was faster to make some apps, but in the long run (large web projects) it becomes like Java. | 0 | java,php,.net,python,ruby | 2011-09-07T11:01:00.000 | 0 | 7,332,758 | For rapid prototyping, the more dynamic the language the better. Something like Excel is good for rapid prototyping: you can have a formula and a graph with a dozen clicks.
However in the long run you may need to migrate your system to something more enterprise friendly. This doesn't always mean you should start this way.
Even if you start in Java you may find you want to migrate some of your code to C for performance reasons. | 0 | 970 | false | 1 | 1 | What makes other languages faster than Java in terms of Rapid Development? | 7,332,862 |
1 | 1 | 0 | 3 | 1 | 0 | 0.53705 | 0 | I need to run a bunch of long-running processes on a CentOS server.
If I leave the processes (Python/PHP scripts) to run, they sometimes stop because of trivial errors, e.g. string encoding issues, and sometimes because the process seems to get killed by the server.
I use nohup and fire the jobs from the crontab.
Is there any way to keep these processes running in such a way that all the variables are saved and I can restart the script from where it stopped?
I know I can program this into the code, but I would prefer a generalised utility which could just keep these things running so that the script completes even if there are trivial errors.
Perhaps I need some sort of process-management tool?
Many thanks for any suggestions | php,python,process,centos,process-management | 2011-09-07T13:23:00.000 | 1 | 7,334,587 | Is there any way to keep these processes running in such a way that all the variables are saved and I can restart the script from where it stopped?
Yes. It's called creating a "checkpoint" or "memento".
I know I can program this
Good. Get started. Each problem is unique, so you have to create, save, and reload the mementos.
but would prefer a generalised utility which could just keep these things running so that the script completed even if there were trivial errors.
It doesn't generalize well. Not all variables can be saved. Only you know what's required to restart your process in a meaningful way.
Perhaps I need some sort of process-management tool?
Not really.
Trivial errors, e.g. string encoding issues
Usually, we find these by unit testing. That saves a lot of programming to work around the error. An ounce of prevention is worth a pound of silly work-arounds.
Sometimes the process seems to get killed by the server.
What? You'd better find out why. An ounce of prevention is worth a pound of silly work-arounds. | 0 | 137 | false | 0 | 1 | running really long scripts - how to keep them running and start them again if they fail? | 7,334,651 |
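A minimal sketch of the checkpoint/memento idea from the accepted answer, assuming the work can be expressed as numbered items (the state layout here is hypothetical):

```python
import os
import pickle

CHECKPOINT = 'state.pkl'

def load_state():
    """Resume from the last checkpoint, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, 'rb') as f:
            return pickle.load(f)
    return {'next_item': 0, 'results': []}

def save_state(state):
    """Write to a temp file first so a crash mid-write cannot corrupt it."""
    tmp = CHECKPOINT + '.tmp'
    with open(tmp, 'wb') as f:
        pickle.dump(state, f)
    os.rename(tmp, CHECKPOINT)

state = load_state()
for i in range(state['next_item'], 1000):
    state['results'].append(i * i)  # stand-in for the real long-running work
    state['next_item'] = i + 1
    if i % 100 == 0:
        save_state(state)
save_state(state)
```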
1 | 2 | 0 | 4 | 3 | 0 | 1.2 | 0 | In search of a Python debugger I stumbled upon Aptana, which is based on eclipse.
Often, I want to debug a single python script. However, Aptana won't let me run/debug the currently opened file directly.
Instead, it requires me to create a debug/run configuration for each file I would like to run/debug. Alternatively I could create a Python project in Aptana.
But: I don't want to. I just want to be able to run or debug the currently opened file. This way I would like to debug my scripts without being forced to create a project first (for each single script!).
Can it be that hard? | 0 | python,eclipse,debugging,aptana | 2011-09-07T14:02:00.000 | 0 | 7,335,185 | This is because Aptana/Eclipse doesn't "realize" that the file you opened should be debugged using the Python debugger as it's not associated with a Python project/perspective (there's a lot of environment setup when a project is created in Aptana/Eclipse).
The simplest solution, IMO, would be to create a simple sandbox Python project and just stick your files in there to run/debug. Aptana should then realize you're dealing with Python and start running the Python debugger without setup (that's my experience w/ PyDev in Eclipse, at any rate). | 0 | 2,990 | true | 0 | 1 | eclipse: Run/Debug current file | 7,522,349 |
1 | 2 | 0 | 8 | 11 | 0 | 1.2 | 0 | I want to know the best/different ways to test a REST API which uses a database backend. I've developed my API with Flask in Python and want to use unittest or nose.
But my problem is that some resources require another resource to exist before they can be created. Is there a way to express that testing the creation of a blog post requires the author to have been created successfully first?
You can use mocks in place of the objects the code you are testing depends on.
You can load a fixture or do the creation/call in the test setup.
Some people like "classical" unit tests where only the "unit" of code is tested. In these cases you typically use mocks and stubs to replace the dependencies.
Others like more integrative tests where most or all of the call stack is tested. In these cases you use a fixture, or possibly even do calls/creations in a setup function.
Generally you would not make one test depend on another. All tests should:
clean up after themselves
be runnable in isolation
be runnable as part of a suite
be consistent and repeatable
If you make one test dependent on another, they cannot be run in isolation, and you are also forcing an order on the test run. Enforcing order in tests isn't good; in fact, many people feel you should randomize the order in which your tests are run. | 0 | 7,898 | true | 1 | 1 | Testing REST API with database backend | 7,355,552
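A hedged sketch of the fixture-in-setup approach from the answer: the author dependency is created in setUp rather than in another test, so every test stays isolated, repeatable, and cleans up after itself. create_app, reset_database and the routes are hypothetical stand-ins for the real Flask app's factory and helpers.

```python
import json
import unittest

from myapp import create_app, reset_database  # hypothetical helpers

class BlogPostApiTest(unittest.TestCase):
    def setUp(self):
        reset_database()  # consistent, repeatable starting state
        self.client = create_app('testing').test_client()
        resp = self.client.post('/authors', data={'name': 'alice'})
        self.author_id = json.loads(resp.data)['id']

    def test_create_post(self):
        resp = self.client.post('/posts', data={'author_id': self.author_id,
                                                'title': 'hello'})
        self.assertEqual(resp.status_code, 201)

    def tearDown(self):
        reset_database()  # clean up so tests can run in any order
```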
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I'm attempting to automate scp commands with pexpect on Ubuntu. However, I keep getting a password GUI prompt with title "OpenSSH". How can I disable this behavior and use command line prompts instead? | 0 | python,pexpect | 2011-09-08T17:19:00.000 | 1 | 7,352,021 | See the DISPLAY and SSH_ASKPASS section of man ssh-add. | 0 | 905 | true | 0 | 1 | Python: how to launch scp with pexpect without OpenSSH GUI Password Prompt on Ubuntu? | 7,353,518 |
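A sketch of the fix the answer points to: ssh/scp only fall back to the graphical askpass helper when SSH_ASKPASS (together with DISPLAY) is set, so strip them from the child's environment before spawning. The file name and host are placeholders.

```python
import os
import pexpect

env = os.environ.copy()
env.pop('DISPLAY', None)      # no X display -> no GUI prompt
env.pop('SSH_ASKPASS', None)  # no askpass helper either

child = pexpect.spawn('scp file.txt user@host:/tmp/', env=env)
child.expect('password:')
child.sendline('s3cret')
child.expect(pexpect.EOF)
```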
1 | 2 | 0 | 1 | 11 | 0 | 0.099668 | 0 | I have many projects that I'm running programmatically:
nosetest --with-coverage --cover-html-dir=happy-sauce/
The problem is that for each project, the coverage module overwrites the index.html file instead of appending to it. Is there a way to generate a combined super-index.html file that contains the results for all my projects?
Thanks. | 0 | python,unit-testing,nose | 2011-09-08T17:45:00.000 | 0 | 7,352,319 | nosetests --with-coverage -i project1/*.py -i project2/*.py | 0 | 3,218 | false | 1 | 1 | Nosetests & Combined Coverage | 24,001,681 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am running Aptana Studio 3, build: 3.0.4.201108101506.
When I run "Check for updates" I get the following error
"A Problem occurred"
No repository found at file:/C:/Users/Keith/AppData/Local/Aptana%20Studio%203/plugins/com.python.pydev_2.2.1.2011073123/.
Any help would be appreciated | 0 | python,aptana | 2011-09-09T22:56:00.000 | 1 | 7,368,288 | Looks like that filepath is set up as an update site in your preferences. I'd just remove it, since it looks invalid (maybe you installed a pydev zip from here?). Go to Preferences > Install/Update > Available Software Sites and then remove the entry for it. | 0 | 93 | false | 0 | 1 | error when running "Check for updates" | 7,419,393 |
1 | 1 | 0 | 1 | 2 | 0 | 0.197375 | 1 | Is there any way to find out, in Python, whether an IP address connecting to the server belongs to a proxy?
I tried scanning the most common ports, but I don't want to ban every IP with port 80 open, because it doesn't have to be a proxy.
Is there any way to do it in Python? I would prefer that over using some external/paid service. | 0 | python,sockets,proxy | 2011-09-10T11:39:00.000 | 0 | 7,371,442 | If it's HTTP traffic, you can scan for headers like X-Forwarded-For.
But whatever you do it will always be only a heuristic. | 0 | 328 | false | 0 | 1 | Python - Determine if ip is proxy or not | 7,378,232 |
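A minimal illustration of that header heuristic; the header list is a common but deliberately incomplete set of assumptions, and as the answer says, absence proves nothing (anonymizing proxies strip these headers):

```python
def looks_like_proxy(headers):
    """Heuristic: many (non-anonymizing) proxies add one of these headers."""
    suspicious = ('x-forwarded-for', 'via', 'forwarded', 'x-proxy-id')
    present = set(k.lower() for k in headers)
    return any(h in present for h in suspicious)

print(looks_like_proxy({'Via': '1.1 squid', 'Host': 'example.com'}))  # True
print(looks_like_proxy({'Host': 'example.com'}))                      # False
```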
2 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 | I am building a music file organizer (in Python 2) in which I read the metadata of all files and then put those files in the required folder.
Now the command-line interface is ready, but the script's only feedback is showing which file it is working on right now.
If the directory contains, say, 5000 mp3 files, there should be some kind of progress feedback.
So, I would like to know the most efficient way to find the total number of mp3s available in a directory (scanning recursively through all subdirectories too).
My idea is to keep track of the total files processed and show a progress bar according to that. If there is a better way (performance-wise), please feel free to guide me.
I want my app to avoid any kind of platform-dependent code. If there is a serious performance penalty in sticking to this idea, please suggest a Linux-specific approach. | 0 | python,linux,algorithm,archlinux | 2011-09-10T13:05:00.000 | 1 | 7,371,878 | @shadyabhi: if you have many subdirectories, maybe you can speed up the process by using os.listdir and multiprocessing.Process to recurse into each folder in parallel. | 0 | 362 | false | 0 | 1 | Effective way to find total number of files in a directory | 7,372,533
2 | 2 | 0 | 2 | 3 | 0 | 1.2 | 0 | I am building a music file organizer (in Python 2) in which I read the metadata of all files and then put those files in the required folder.
Now the command-line interface is ready, but the script's only feedback is showing which file it is working on right now.
If the directory contains, say, 5000 mp3 files, there should be some kind of progress feedback.
So, I would like to know the most efficient way to find the total number of mp3s available in a directory (scanning recursively through all subdirectories too).
My idea is to keep track of the total files processed and show a progress bar according to that. If there is a better way (performance-wise), please feel free to guide me.
I want my app to avoid any kind of platform-dependent code. If there is a serious performance penalty in sticking to this idea, please suggest a Linux-specific approach. | 0 | python,linux,algorithm,archlinux | 2011-09-10T13:05:00.000 | 1 | 7,371,878 | I'm sorry to say this, but no: there isn't any way to do it more efficiently than recursively finding the files (at least not in a platform- or filesystem-independent way).
If the filesystem can help you it will, and you can't do anything beyond that to help it.
The reason it's not possible without recursive scanning is how the filesystem is designed.
A directory can be seen as a file containing a list of all the files it holds. To find something in a subdirectory you have to first open the directory, then open the subdirectory and search that. | 0 | 362 | true | 0 | 1 | Effective way to find total number of files in a directory | 7,371,922
3 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 0 | I'm currently working on a social web application using Python/Django. Recently I heard about PHP's weakness on large-scale projects, and how HipHop for PHP helped Facebook overcome this barrier. Considering a Python social web application with a lot of traffic, could you please tell me whether a similar custom tool could help this Python application? In what way? I mean, which portion (or layer) of the application would need to be written in, for example, C++? I know it's a general question, but someone with relevant experience could help me.
Thank you in advance. | php,python,django,performance | 2011-09-10T17:18:00.000 | 0 | 7,373,299 | You can think of PostgreSQL as being like Oracle. From what I've found on the internet (because I am also a beginner), here is the order of DBs from smaller projects to bigger ones:
SQLite
MySql
PostgreSQL
Oracle | 0 | 1,518 | false | 1 | 1 | Python/Django - Web Application Performance | 7,374,796 |
3 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 0 | I'm currently working on a social web application using Python/Django. Recently I heard about PHP's weakness on large-scale projects, and how HipHop for PHP helped Facebook overcome this barrier. Considering a Python social web application with a lot of traffic, could you please tell me whether a similar custom tool could help this Python application? In what way? I mean, which portion (or layer) of the application would need to be written in, for example, C++? I know it's a general question, but someone with relevant experience could help me.
Thank you in advance. | 0 | php,python,django,performance | 2011-09-10T17:18:00.000 | 0 | 7,373,299 | Don't try to scale too early! Of course you can try to be prepared but most times you can not really know where you need to scale and therefore spend a lot of time and money in wrong direction before you recognize it.
Start your webapp and see how it goes (agreeing with Spacedman here).
Though from my experience the language of your web app is less likely going to be the bottleneck. Most of the time it starts with the database. Many times it simply a wrong line of code (be it just a for loop) and many other times its something like forgetting to use sth. like memcached or task management. As said, find out where it is. In most cases its better to check something else before blaming the language speed for it (since its most likely not the problem!). | 0 | 1,518 | false | 1 | 1 | Python/Django - Web Application Performance | 7,376,098 |
3 | 3 | 0 | 3 | 0 | 0 | 1.2 | 0 | I'm currently working on a social web application using Python/Django. Recently I heard about PHP's weakness on large-scale projects, and how HipHop for PHP helped Facebook overcome this barrier. Considering a Python social web application with a lot of traffic, could you please tell me whether a similar custom tool could help this Python application? In what way? I mean, which portion (or layer) of the application would need to be written in, for example, C++? I know it's a general question, but someone with relevant experience could help me.
Thank you in advance. | 0 | php,python,django,performance | 2011-09-10T17:18:00.000 | 0 | 7,373,299 | The portion to rewrite in C++ is the portion that is too slow in Python. You need to figure out where your bottleneck is, which you can do by load testing or just waiting until users complain.
Of course, even rewriting in C++ might not help. Your bottleneck might be the database (move to a separate, faster DB server or use sharding), or disk, or memory, or anything. Find the bottleneck, work out how to eliminate it, implement. With 'test' in between all those phases. General advice.
There's normally no magic bullet, and I imagine Facebook did a LOT of testing and analysis of their bottlenecks before they tried anything. | 0 | 1,518 | true | 1 | 1 | Python/Django - Web Application Performance | 7,373,467 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | First Note: Sorry this is long. Wanted to be thorough.
I really hate to ask a question when there's so much out there online but its been a week of searching and I have nothing to show for it. I'd really appreciate some help. I am a noob but I learn very fast and am more than willing to try alternate languages or whatever else it might take.
The goal:
What I'm trying to do is build a Netflix remote (personal use only) that controls Netflix on the server (Windows 7 PC 32-bit) via keyboard shortcuts (example: spacebar to pause) after a button is pressed in a php page on my ipod touch or android phone. Currently the remote uses USBUIRT to control the TV and IR devices without issue. If you have any alternate methods (that I can build, not buy) to suggest or other languages I could learn that can achieve this, I'm happy to learn.
The issue:
PHP's exec() and system() commands will not launch the python script (nor an exe compiled with py2exe) that simply presses the Windows key (intended to press the key on the server, not the machine loading the php page). I can use USBUIRT's UUTX.exe passing arguments with exec() to control IR devices without issue. But my exe, py, nor pyw files work. I've even tried calling a batch file that then launches the python script and that batch will not launch. The page refreshes and no errors are displayed.
Attempted:
Here's a code that works
$exec = exec("c:\\USBUIRT\\UUTX.exe -r3 -fC:\\USBUIRT\\Pronto.txt LED_Off", $results);
Here's a few attempts that don't work
$exec = exec("c:\\USBUIRT\\test.py", $results);
$exec = exec("python c:\\USBUIRT\\test.py", $results);
$exec = exec("C:\\python25\\python.exe c:\\USBUIRT\\test.py", $results);
All of those I've tried without the dual backslashes and with forward slashes and dual forward slashes. I've left off passing it to variable $exec and that makes no difference. $result outputs
Arraystring(9) "
Copying everything in the exec() into command line works correctly. I've tried moving the file to the htdocs folder, changed folder permissions, and made sure I'm not in safemode in php. Var_dump returns: Array" Using a foreach loop gives no info from the array.
My logs for Apache show only
[Sat Sep 10 19:54:09 2011] [error] [client 127.0.0.1] File does not exist: C:/Program Files/Apache Software Foundation/Apache2.2/htdocs/announce
Setup: Apache 2.2, Python 2.5, and PHP 5.3, running on Windows 7; I only connect on the local network, no VPN or the like. I've given every associated folder (python, htdocs, the cmd.exe file, the USBUIRT folder) full control for IUSR, admins, users, and everyone, just for initial testing (later I'll of course tighten security up). Safe mode is off in PHP as well.
Notes: This code I saw on another similar issue doesn't work:
exec("ping google.com -n 1");
No errors in error.log nor event viewer. Putting it inside ob_start(); and getting the results with ob_get_clean(); gives me absolutely nothing. No text or anything at all. I've tried a lot more but I've already written a novel on here so I'll just have to answer the rest as we go. I'll post the full php source or the python script if that is needed but all it does is import sendkeys and press the windows key to pop open the start menu as a basic visual test. I don't know if its permissions, the way I have my setup running, my coding... I just don't know anymore. And again I apologize this is so long and if you do answer, I really appreciate you taking the time to read all this to help out a total stranger. | 0 | php,python,windows,apache,exec | 2011-09-11T02:20:00.000 | 1 | 7,375,924 | Figured it out thanks to the excellent help from Winston Ewert and Gringo Suave.
I set Apache's service to the Local System Account and gave it access to interact with the desktop. This should help if you have Windows XP or Server 2003, but on Vista and newer there's an Interactive Services Detection prompt that pops up when you try to launch GUI applications from PHP. Every command was executing correctly, but was doing so in Session 0. This is because Apache was installed as a service. For most people I would think that reinstalling without setting up Apache as a service would work, but I was considering moving to XAMPP anyway, so having to uninstall Apache helped push my decision.
Ultimately all of the codes I wrote in my original post now work as a result, and my project can move forward. I hope someone else stumbles across this and gets as much help from Winston Ewert and Gringo Suave as I did! Thank you both very much! | 0 | 1,377 | false | 0 | 1 | PHP exec() command wont launch python script using sendkeys | 7,380,933 |
2 | 4 | 0 | 2 | 3 | 0 | 0.099668 | 0 | I'm making a simple text adventure with Python and thought that background MIDI music would make it a little less boring.
Is there a simple, light-weight MIDI player / API for Python? Or do I need to use a full game library like Pygame? (Because if so, I'd rather pass, as I want to make it as lightweight as possible.) | 0 | python,console,midi,adventure | 2011-09-11T11:48:00.000 | 0 | 7,377,983 | As @Jakob Bowyer noted, pygame is really the way to go. I just wanted to add that if you are concerned about pygame because of its size, then you can selectively enable which modules you want at runtime. In this case, just using the MIDI playback features of pygame won't consume too many system resources. | 0 | 392 | false | 0 | 1 | Light-weight MIDI playback for a Text Adventure? | 7,538,486
2 | 4 | 0 | 4 | 3 | 0 | 0.197375 | 0 | I'm making a simple text adventure with Python and thought that background MIDI music would make it a little less boring.
Is there a simple, light-weight MIDI player / API for Python? Or do I need to use a full game library like Pygame? (Because if so, I'd rather pass, as I want to make it as lightweight as possible.) | 0 | python,console,midi,adventure | 2011-09-11T11:48:00.000 | 0 | 7,377,983 | Yes, you will be wanting pygame for this. It's a nice idea to keep something light, but on the other hand, why re-invent the wheel? If someone has already written the code for you to play .midi files, then use their code! The only other option I can think of is searching for a MIDI-playing library for Python (I can't find any right now) and then spawning that inside a subprocess and feeding it commands and such. | 0 | 392 | false | 0 | 1 | Light-weight MIDI playback for a Text Adventure? | 7,378,115
1 | 1 | 0 | 1 | 3 | 0 | 0.197375 | 0 | Using Python and PyAudio, I can't seem to record sound to a wav file from an external audio interface (RME Fireface), but I am able to do so with the built-in mic on my iMac. I set the default device to the Fireface in System Preferences, and when I run the code, the wav file is created but no sound comes out when I play it. The code is as given on the PyAudio webpage. Is there any way to rectify this? | 0 | macos,audio,python | 2011-09-06T09:50:00.000 | 0 | 7,379,439 | A couple of shots in the dark: verify that you're opening the device correctly. It looks like the Fireface can be either half or full duplex (pref pane configurable?), and PyAudio apparently cares (i.e. you can't specify an output if you specify an input, or vice versa).
Another thing to check out is the audio routing - under /Applications/Utilities/Audio Midi Setup.app, depending on how you have the signals coming in you might be connecting to the wrong one and not realizing it. | 0 | 629 | false | 1 | 1 | Pyaudio for external interfaces (Mac OSX) | 8,441,627 |
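Following the first shot in the dark, here is a quick way to check which devices PyAudio actually sees (these are standard PyAudio calls); pass the right index as input_device_index when opening the stream instead of relying on the system default:

```python
import pyaudio

pa = pyaudio.PyAudio()
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    # Devices with zero input channels cannot be recorded from.
    print('%d %s in:%d out:%d' % (i, info['name'],
                                  info['maxInputChannels'],
                                  info['maxOutputChannels']))
pa.terminate()
```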
1 | 3 | 0 | 0 | 2 | 1 | 0 | 0 | What problems can I have if I use Python 2.7 instead of Python 2.6 for my Pylons/Pyramid projects? Previously I used Python 2.6 on Ubuntu 10.04, but now I have Ubuntu 11.04 on my laptop with Python 2.7. | 0 | python,pylons,pyramid | 2011-09-12T06:41:00.000 | 0 | 7,384,150 | Take a look at http://docs.python.org/dev/whatsnew/2.7.html
You'll find everything you'll ever need to know. | 0 | 3,369 | false | 0 | 1 | python 2.6 vs 2.7, for pylons/pyramid projects | 7,384,274
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I am basically new to this kind of work. I am programming my application in C# in VS2010. I have a Crystal Report that is working fine; it basically gets populated with some XML data. That XML data comes from another application, written in Python, on another machine.
That Python script generates some data, and that data is put on a memory stream. I basically have to read that memory stream and write my XML, which is used to populate my Crystal Report. My supervisor wants me to use a remote procedure call for this.
I have never done any remote procedure calling. From what I have researched and understood, I mainly have to develop a web or WCF service, I guess. I don't know how I should do it. We are planning to use the HTTP protocol.
So, this is how it is supposed to work: I give them the URL of my service, they call that service, and my service reads the data they put on the memory stream. After reading the data, I use part of it to write my XML, and this XML populates my Crystal Report.
The other part of the data (other than the data used to write the XML) should be sent to a database on the SQL Server. This is my complete problem definition. I need ideas and links that will help me solve this problem. | 0 | c#,python,web-services,rpc | 2011-09-12T19:05:00.000 | 0 | 7,392,676 | As John wrote, you're quite late if it's urgent, and your description is quite vague. There are 1001 RPC techniques and the choice depends on details. But taking into account that you seem to just exchange some XML data, you probably don't need a full RPC implementation. You can write an HTTP server in Python with just a few lines of code. If it needs to be a bit more stable and long-running, have a look at Twisted. Then just use plain HTTP and the WebClient class on the C# side. Not a perfect solution, but it has worked out quite well for me more than once. And you said it's urgent! ;-)
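A sketch of the "HTTP server in Python with just a few lines" the answer mentions, using only the Python 2 standard library; generate_xml is a stand-in for the real code that reads the application's memory stream. The C# side could then fetch the URL with WebClient.DownloadString.

```python
from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

def generate_xml():
    # Stand-in for the real producer that reads the Python app's data.
    return '<report><value>42</value></report>'

class XmlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = generate_xml()
        self.send_response(200)
        self.send_header('Content-Type', 'application/xml')
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('', 8000), XmlHandler).serve_forever()
```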
1 | 4 | 1 | 1 | 1 | 0 | 0.049958 | 0 | There is a library for Python that enables calling C++ functions directly (without extern "C"). Please, could you remind me of the name of the library? I forgot its name and can't find it.
It's not Boost.Python.
Thank you very much. Your answer will be rewarded. | 0 | c++,python,dll,shared-libraries | 2011-09-12T20:34:00.000 | 0 | 7,393,672 | SWIG, Boost.Python, SIP, Shiboken, PyBindgen, ...
SWIG and Boost.Python are most popular, i.e. they have the largest user base and the most active development teams. Which of these two to use is largely a matter of taste. So if you don't want to use Boost.Python, then SWIG is the obvious choice. | 0 | 496 | false | 0 | 1 | Library for Python: How to call C++ functions from Python program? | 7,400,723 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | Most of my job is on a Citrix ICA app.
I work in a Windows environment.
Among other things, I have to print 300 reports from my app weekly, and I am trying to automate this task. I was using a screenshot automation tool called Sikuli, but it is not portable from station to station.
I thought I might be able to inject packets and send the commands at that level, but I was not able to read the packets I captured with Wireshark or do anything sensible with them.
I have experience with Python, and if I get pointed in the right direction I am pretty sure I can pull something off.
Does anyone have any ideas on how to do this? (I am leaning towards packet injection at the moment, but am open to ideas.)
Thanks for the help,
Sam | 0 | python,automation,citrix,packet-injection | 2011-09-13T07:34:00.000 | 0 | 7,398,343 | After a lot of research: it can't be done. Some manipulation is possible, like changing window focus with the ICA COM object. | 0 | 978 | false | 0 | 1 | citrix GUI automation or packet injection? | 10,010,649
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I am creating RPMs for my project, which is in pure Python. I am running the command
python setup.py bdist_rpm
to build the RPM. This is creating architecture-specific RPMs (x86 or x86-64). What I would like is a noarch RPM. Can any Python gurus help me with creating a noarch RPM? Any help would be appreciated. Thanks in advance. | 0 | python,rpm,distutils | 2011-09-13T10:08:00.000 | 1 | 7,400,099 | If your software does not contain extension modules (modules written in C/C++), distutils will make the RPM noarch. I don’t think there’s a way to explicitly control it. | 0 | 116 | false | 0 | 1 | Python command to create no-arch rpm's | 7,531,272
1 | 1 | 0 | 0 | 1 | 0 | 1.2 | 1 | I am using buildbot (a CI system) and have one problem: how can I send the parameters of a Change to all builders? I want to use the comments and who properties of the Change object.
Thx | python,continuous-integration,buildbot | 2011-09-14T13:56:00.000 | 0 | 7,417,518 | I found the answer: inherit from BuildStep, use self.build.allChanges() to get the changes, and self.setProperty() to set a property. | 0 | 587 | true | 0 | 1 | Buildbot properties from changes to all build | 7,442,406
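A hedged sketch of what that inheritance could look like, assuming a 0.8-era Buildbot API (the class and property names here are illustrative, not from the answer):

```python
from buildbot.process.buildstep import BuildStep
from buildbot.status.builder import SUCCESS

class RecordChangeInfo(BuildStep):
    """Copies 'who' and 'comments' from the triggering changes into
    build properties that later steps can read or interpolate."""

    def start(self):
        changes = self.build.allChanges()
        if changes:
            last = changes[-1]
            self.setProperty('change_who', last.who, 'RecordChangeInfo')
            self.setProperty('change_comments', last.comments,
                             'RecordChangeInfo')
        self.finished(SUCCESS)
```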
1 | 5 | 0 | 1 | 0 | 1 | 0.039979 | 0 | I want to generate a list of the hex values of all 256 byte-combinations.
Basically, I want for example 'A' => '\x41'. I've only found modules capable of converting 'A' => '41', and this is not what I want.
How am I to solve this problem? Does anybody know an appropriate module or algorithm (as I'd like to avoid hardcoding 256 hex values...)? | python,hex,byte,ascii | 2011-09-14T19:45:00.000 | 0 | 7,422,099 | ord('A') returns the ASCII value as an integer (65 in the case of 'A'). You can think of an integer in any base you want. hex(ord('A')) gives you a nice string ("0x41" in this case), as does print "%x" % ord('A'). | 0 | 6,934 | false | 0 | 1 | Generate a list of hex bytes in Python | 7,422,175
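Putting the answer together, generating all 256 values needs no extra module; a list comprehension over range(256) covers both the raw one-byte strings and their escaped spellings:

```python
# All 256 one-byte strings: '\x00' ... '\xff'
all_bytes = [chr(i) for i in range(256)]

# The literal text "\x41" etc., if the escaped spelling itself is wanted:
hex_names = ['\\x%02x' % i for i in range(256)]

print(all_bytes[65] == 'A')   # True
print(hex_names[65])          # \x41
```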
1 | 2 | 0 | 24 | 14 | 1 | 1.2 | 0 | I just wrote a function in Python. Then I wanted to make it a module and install it on my Ubuntu 11.04. Here is what I did.
Created setup.py along with the function.py file.
Built the distribution file using $ python2.7 setup.py sdist
Then installed it with $ python2.7 setup.py install
All was going fine. But later, when I wanted to use the module by importing it in my code,
I got an import error: ImportError: No module named '-------'
PS. I searched over Google and didn't find a particular answer. A detailed answer will be much appreciated. | 0 | python,python-module | 2011-09-15T06:23:00.000 | 1 | 7,426,677 | Most installation requires:
sudo python setup.py install
Otherwise, you won't be able to write to the installation directories.
I'm pretty sure that (unless you were root) you got an error when you did
python2.7 setup.py install | 0 | 44,972 | true | 0 | 1 | How to install Python module on Ubuntu | 7,429,157 |
1 | 2 | 0 | 2 | 5 | 0 | 0.099668 | 0 | I'm developing a content type for Plone 4, and I'd like to block all user, group, and context portlets it may inherit from its parent object. I'm thoroughly confused by the documentation at this point: in portlets.xml, <blacklist/> only seems to address path-specific blocking, and <assignment/> seems like what I want but is too specific; I don't want to manage the assignment for every possible portlet on my content type.
I've found hints that customizing ILeftColumn and IRightColumn portlet managers specific to the content type might work, but I can't find any good examples. Does anyone have any hints or suggestions? I feel like I'm missing something dead simple. | python,plone,portlet | 2011-09-15T14:15:00.000 | 0 | 7,432,317 | Do the assignment to your portal type live on a site via Site Setup (control panel) -> Types -> "Manage portlets assigned to this content type".
Then export the configuration via the ZMI -> portal_setup -> Export tab -> select 'Portlets' -> click 'export' at the bottom.
Extract the types/YourType.xml file and copy the relevant parts into your package's profiles/default/types/YourType.xml. | 0 | 911 | false | 1 | 1 | Plone Content Type-Specific Portlet Assignment | 7,435,407
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I am about to build a new Python lib and was seeking information concerning packaging in Python.
I understand that "setup.py" is the script that controls everything. I wonder how to deal with it when there are external libraries in svn, for instance.
How can setup.py automatically download a given version from the repository? | python,packaging,setup.py | 2011-09-15T17:12:00.000 | 0 | 7,434,837 | I may not have understood the problem correctly.
For any additional dependencies, you mention them in setup.py as
install_requires=['module1 >= 1.3', 'module2 >=1.8.2']
When you use setuptools with easy_install or pip, these external dependencies will get installed during setup if required. They need to be available in package repositories for download. | 0 | 742 | false | 0 | 1 | setup.py and source control repository | 7,435,249
1 | 5 | 0 | 1 | 2 | 0 | 0.039979 | 0 | How does one get (find the location of) the modules dynamically imported from a Python script?
So, Python, from my understanding, can dynamically (at run time) load modules.
Be it using __import__(module_name), exec "from x import y", or imp.find_module("module_name") followed by imp.load_module(param1, param2, param3, param4).
Knowing that, I want to get all the dependencies of a Python file. This would include the dynamically loaded modules, whether loaded from hard-coded string objects or from names returned by a function/method.
For a normal import module_name or from x import y, you can either scan the code manually or use modulefinder.
So if I want to copy one Python script and all its dependencies (including the custom dynamically loaded modules), how should I do that?
You could write a module that contains a wrapper for __builtin__.__import__. This wrapper would save a reference to the old __import__and then assign a function to __builtin__.__import__ that does the following:
whenever called, get the current stacktrace and work out the calling function. Maybe the information in the globals parameter to __import__ is enough.
get the module of that calling functions and store the name of this module and what will get imported
redirect the call the real __import__
After you have done this you can call your application with python -m magic_module yourapp.py. The magic module must store the information somewhere where you can retrieve it later. | 0 | 303 | false | 0 | 1 | how do you statically find dynamically loaded modules | 7,442,171 |
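A minimal Python 2 sketch of the wrapper the answer describes (the recording logic is illustrative; a real version would persist _seen somewhere retrievable, as the answer notes):

```python
import __builtin__

_real_import = __builtin__.__import__
_seen = []

def _logging_import(name, globals=None, locals=None, fromlist=None, level=-1):
    caller = (globals or {}).get('__name__', '?')
    _seen.append((caller, name))            # record who imported what
    return _real_import(name, globals, locals, fromlist, level)

__builtin__.__import__ = _logging_import

import json                                 # recorded as ('__main__', 'json')
print(_seen)
```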
1 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I have some .txt files which include Turkish characters. I prepared an HTML page and want to include the text from my txt files. The process succeeds, but the HTML files made by Python have character problems (special characters show up like this: �).
I have tried adding u before the strings in the Python code, but it did not work.
The txt files are made by Python; actually, they are my blog entries, which I fetched using urllib. They do not have character problems themselves.
Thank you for your answers. | 0 | python,encoding,character | 2011-09-17T14:12:00.000 | 0 | 7,455,371 | When you serve the content to a web browser, you need to tell it what encoding the file is in. Ideally, you should send a Content-type: HTTP header in the response with something like text/plain; charset=utf-8, where "utf-8" is replaced by whatever encoding you're actually using if it's not utf-8.
Your browser may also need to be set to use a unicode-aware font for displaying text files; if it uses a font that doesn't have the necessary glyphs, obviously it can't display them. | 0 | 171 | false | 0 | 1 | Special characters in output are like � | 7,455,417 |
1 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I'm trying to calculate the CPU usage percentage of a particular process using Python or shell, but so far nothing has worked.
I have looked at a lot of questions here, but none could help me.
Any suggestions? | python,shell,unix,cpu | 2011-09-19T11:13:00.000 | 1 | 7,470,045 | Well, you can try using the top command with "-b -n 1", grab its output, and then use cut or other tools to extract what you need.
NOTE: you can add the -p option to limit to a particular process id | 0 | 541 | false | 0 | 1 | CPU Utilization on UNIX | 7,470,125 |
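A small variation on the same idea using ps, which is easier to parse than top (note that ps reports the %CPU averaged over the process's lifetime, not an instantaneous value):

```python
import subprocess

def cpu_percent(pid):
    """Return ps's %CPU column for one process; '=' suppresses the header."""
    out = subprocess.Popen(['ps', '-p', str(pid), '-o', '%cpu='],
                           stdout=subprocess.PIPE).communicate()[0]
    return float(out.strip())

print(cpu_percent(1))  # e.g. CPU share of PID 1
```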
2 | 5 | 0 | 2 | 12 | 0 | 0.07983 | 0 | I am trying to run a python script from the Linux SSH Secure Shell command line environment, and I am trying to import the argparse library, but it gives the error: "ImportError: No module named argparse".
I think that this is because the Python environment that the Linux shell is using does not have the argparse library in it, and I think I can fix it if I can find the directory for the libraries used by that Python environment and copy the argparse library into it, but I cannot find where that directory is located.
I would appreciate any help on finding this directory (I suppose I could include the argparse library in the same directory as my python script for now, but I would much rather have the argparse library in the place where the other Python libraries are, as it should be). | 0 | python,linux,command-line,argparse | 2011-09-19T15:45:00.000 | 1 | 7,473,609 | If you're on CentOS and don't have an easy RPM to get to Python 2.7, JF's suggestion of pip install argparse is the way to go. Calling out this solution in a new answer. Thanks, JF. | 0 | 15,510 | false | 0 | 1 | argparse Python modules in cli | 10,015,728 |
2 | 5 | 0 | 0 | 12 | 0 | 0 | 0 | I am trying to run a python script from the Linux SSH Secure Shell command line environment, and I am trying to import the argparse library, but it gives the error: "ImportError: No module named argparse".
I think that this is because the Python environment that the Linux shell is using does not have the argparse library in it, and I think I can fix it if I can find the directory for the libraries used by that Python environment and copy the argparse library into it, but I cannot find where that directory is located.
I would appreciate any help on finding this directory (I suppose I could include the argparse library in the same directory as my python script for now, but I would much rather have the argparse library in the place where the other Python libraries are, as it should be). | 0 | python,linux,command-line,argparse | 2011-09-19T15:45:00.000 | 1 | 7,473,609 | You're probably using an older version of Python.
The argparse module has been added pretty recently, in Python 2.7. | 0 | 15,510 | false | 0 | 1 | argparse Python modules in cli | 7,474,038 |
3 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | Is anyone aware of any issues with Django's caching framework when deployed to Apache/Mod_WSGI?
When testing the caching framework locally with the dev server, using the profiling middleware and either FileBasedCache or LocMemCache, Django is very fast. My request time goes from ~0.125 sec to ~0.001 sec. Fantastic.
I deploy the identical code to a remote machine running Apache/Mod_WSGI and my request time goes from ~0.155 sec (before I deployed the change) to ~0.400 sec (post deployment). That's right, caching slowed everything down.
I've spent hours digging through everything, looking for something I'm missing. I've tried using FileBasedCache with a location on tmpfs, but that also failed to improve performance.
I've monitored the remote machine with top, and it shows no other processes and it has 6GB available memory, so basically Django should have full rein. I love Django, but it's incredibly slow, and so far I've never been able to get the caching framework to make any noticeable impact in a production environment. Is there anything I'm missing?
EDIT: I've also tried memcached, with the same result. I confirmed memcached was running by telnetting into it. | 0 | python,django,performance | 2011-09-19T20:57:00.000 | 0 | 7,477,211 | I had a similar problem with an app using memcached. The solution was running mod_wsgi in daemon mode instead of embedded mode, and Apache in mpm_worker mode. After that, the application worked much faster. | 0 | 707 | false | 1 | 1 | Using Django Caching with Mod_WSGI | 7,482,454
3 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | Is anyone aware of any issues with Django's caching framework when deployed to Apache/Mod_WSGI?
When testing the caching framework locally with the dev server, using the profiling middleware and either FileBasedCache or LocMemCache, Django is very fast. My request time goes from ~0.125 sec to ~0.001 sec. Fantastic.
I deploy the identical code to a remote machine running Apache/Mod_WSGI and my request time goes from ~0.155 sec (before I deployed the change) to ~0.400 sec (post deployment). That's right, caching slowed everything down.
I've spent hours digging through everything, looking for something I'm missing. I've tried using FileBasedCache with a location on tmpfs, but that also failed to improve performance.
I've monitored the remote machine with top, and it shows no other processes and it has 6GB available memory, so basically Django should have full rein. I love Django, but it's incredibly slow, and so far I've never been able to get the caching framework to make any noticeable impact in a production environment. Is there anything I'm missing?
EDIT: I've also tried memcached, with the same result. I confirmed memcached was running by telnetting into it. | 0 | python,django,performance | 2011-09-19T20:57:00.000 | 0 | 7,477,211 | The same thing happened to me, and I was wondering what it was that was taking so much time.
Each cache get was taking around 100 milliseconds.
So I debugged Django's locmem cache code and found that pickling was taking a lot of time (I was caching a whole table in the locmem cache).
I wrapped locmem myself since I didn't want anything advanced; if you remove the pickle/unpickle step you will see a major improvement.
Hope it helps someone. | 0 | 707 | false | 1 | 1 | Using Django Caching with Mod_WSGI | 18,531,552 |
3 | 3 | 0 | 0 | 1 | 0 | 1.2 | 0 | Is anyone aware of any issues with Django's caching framework when deployed to Apache/Mod_WSGI?
When testing the caching framework locally with the dev server, using the profiling middleware and either FileBasedCache or LocMemCache, Django is very fast. My request time goes from ~0.125 sec to ~0.001 sec. Fantastic.
I deploy the identical code to a remote machine running Apache/Mod_WSGI and my request time goes from ~0.155 sec (before I deployed the change) to ~0.400 sec (post deployment). That's right, caching slowed everything down.
I've spent hours digging through everything, looking for something I'm missing. I've tried using FileBasedCache with a location on tmpfs, but that also failed to improve performance.
I've monitored the remote machine with top, and it shows no other processes and it has 6GB available memory, so basically Django should have full rein. I love Django, but it's incredibly slow, and so far I've never been able to get the caching framework to make any noticeable impact in a production environment. Is there anything I'm missing?
EDIT: I've also tried memcached, with the same result. I confirmed memcached was running by telnetting into it. | 0 | python,django,performance | 2011-09-19T20:57:00.000 | 0 | 7,477,211 | Indeed, Django is slow. But I must say most of the slowness comes from the app itself; Django just nudges you (by providing bad examples in the docs) toward lazy patterns that are slow in production.
First of all: try nginx + uWSGI. It is simply the best combination.
To optimize your app, you need to find out what is causing the slowness. It can be:
slow database queries (too many queries, or just slow queries)
a slow database itself
a slow filesystem (NFS, for example)
Try logging request queries and watch iostat or iotop or something like that.
I had this scenario with Apache + mod_wsgi: the first request from a browser was very slow, then a few requests from the same browser were fast, and then, after sitting idle for two minutes, it was very slow again. I don't know whether that was a misconfigured Apache shutting down the WSGI app and restarting it for each keepalive request. It just put me off; I installed nginx, and with nginx + fcgi everything was a lot faster than with Apache + mod_wsgi. | 0 | 707 | true | 1 | 1 | Using Django Caching with Mod_WSGI | 7,477,678
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | I'm working on running a memory/CPU-intensive project on a cloud service. From my Googling and research it looks like I should use Amazon EC2, as there are guides to using it with MPI. However, reading up on Stack Overflow about comparisons of EC2 with Rackspace, Joyent, etc., I was wondering whether this is really the best cloud option to go with, or whether there is a better alternative route I should take. Any insight would be appreciated.
Thanks, | 0 | python,multithreading,amazon-ec2,cloud,parallel-processing | 2011-09-20T00:21:00.000 | 1 | 7,478,803 | Your requirements are too vague for a specific response. It is unlikely you are going to be able to elaborate them sufficiently for anybody to provide an authoritative answer.
Fortunately for you, many Infrastructure as a Service platforms like AWS and Rackspace let you test things out extremely inexpensively (literal pocket change), so give them a try and see what works for your application. | 0 | 384 | true | 1 | 1 | Python Parallel Processing Amazon EC2 or Alternatives? | 7,478,918 |
5 | 6 | 0 | 15 | 26 | 0 | 1 | 0 | I use R for data analysis and am very happy with it. Cleaning data could be a bit easier, however. I am thinking about learning another language suited to this task. Specifically, I am looking for a tool to use to take raw data, remove unnecessary variables or observations, and format it for easy loading in R. Contents would be mostly numeric and string data, as opposed to multi-line text.
I am considering the awk/sed combination versus Python. (I recognize that Perl would be another option, but, if I was going to learn another full language, Python seems to be a better, more extensible choice.)
The advantage of sed/awk is that it would be quicker to learn. The disadvantage is that this combination isn't as extensible as Python. Indeed, I might imagine some "mission creep" if I learned Python, which would be fine, but not my goal.
The other consideration that I had is applications to large data sets. As I understand it, awk/sed operate line-by-line, while Python would typically pull all the data into memory. This could be another advantage for sed/awk.
Are there other issues that I'm missing? Any advice that you can offer would be appreciated. (I included the R tag for R users to offer their cleaning recommendations.) | 0 | python,r,awk,sed,data-cleaning | 2011-09-20T03:13:00.000 | 0 | 7,479,686 | Not to spoil your adventure, but I'd say no and here is why:
R is vectorised where sed/awk are not
R already has both Perl regular expressions and extended regular expressions
R can more easily fall back on statistical routines (say, imputation) if you need them
R can visualize, summarize, ...
and most importantly: you already know R.
That said, of course sed/awk are great for small programs or even one-liners, and Python is a fine language. But I would consider sticking with R. | 0 | 4,795 | false | 0 | 1 | Python or awk/sed for cleaning data | 7,479,812
5 | 6 | 0 | 6 | 26 | 0 | 1 | 0 | I use R for data analysis and am very happy with it. Cleaning data could be a bit easier, however. I am thinking about learning another language suited to this task. Specifically, I am looking for a tool to use to take raw data, remove unnecessary variables or observations, and format it for easy loading in R. Contents would be mostly numeric and string data, as opposed to multi-line text.
I am considering the awk/sed combination versus Python. (I recognize that Perl would be another option, but, if I was going to learn another full language, Python seems to be a better, more extensible choice.)
The advantage of sed/awk is that it would be quicker to learn. The disadvantage is that this combination isn't as extensible as Python. Indeed, I might imagine some "mission creep" if I learned Python, which would be fine, but not my goal.
The other consideration that I had is applications to large data sets. As I understand it, awk/sed operate line-by-line, while Python would typically pull all the data into memory. This could be another advantage for sed/awk.
Are there other issues that I'm missing? Any advice that you can offer would be appreciated. (I included the R tag for R users to offer their cleaning recommendations.) | python,r,awk,sed,data-cleaning | 2011-09-20T03:13:00.000 | 0 | 7,479,686 | I would recommend sed/awk along with the wealth of other command-line tools available on UNIX-like platforms: comm, tr, sort, cut, join, grep, and built-in shell capabilities like looping and whatnot. You really don't need to learn another programming language, as R can handle data manipulation as well as, if not better than, the other popular scripting languages. | 0 | 4,795 | false | 0 | 1 | Python or awk/sed for cleaning data | 7,488,114
5 | 6 | 0 | 1 | 26 | 0 | 0.033321 | 0 | I use R for data analysis and am very happy with it. Cleaning data could be a bit easier, however. I am thinking about learning another language suited to this task. Specifically, I am looking for a tool to use to take raw data, remove unnecessary variables or observations, and format it for easy loading in R. Contents would be mostly numeric and string data, as opposed to multi-line text.
I am considering the awk/sed combination versus Python. (I recognize that Perl would be another option, but, if I was going to learn another full language, Python seems to be a better, more extensible choice.)
The advantage of sed/awk is that it would be quicker to learn. The disadvantage is that this combination isn't as extensible as Python. Indeed, I might imagine some "mission creep" if I learned Python, which would be fine, but not my goal.
The other consideration that I had is applications to large data sets. As I understand it, awk/sed operate line-by-line, while Python would typically pull all the data into memory. This could be another advantage for sed/awk.
Are there other issues that I'm missing? Any advice that you can offer would be appreciated. (I included the R tag for R users to offer their cleaning recommendations.) | 0 | python,r,awk,sed,data-cleaning | 2011-09-20T03:13:00.000 | 0 | 7,479,686 | I would recommend 'awk' for this type of processing.
Presumably you are just searching/rejecting invalid observations in simple text files.
awk is lightning fast at this task and is very simple to program.
If you need to do anything more complex, you still can.
Python is also a possibility if you don't mind the performance hit. The "rpy" library can be used to closely integrate the python and R components. | 0 | 4,795 | false | 0 | 1 | Python or awk/sed for cleaning data | 7,479,937 |
5 | 6 | 0 | 1 | 26 | 0 | 0.033321 | 0 | I use R for data analysis and am very happy with it. Cleaning data could be a bit easier, however. I am thinking about learning another language suited to this task. Specifically, I am looking for a tool to use to take raw data, remove unnecessary variables or observations, and format it for easy loading in R. Contents would be mostly numeric and string data, as opposed to multi-line text.
I am considering the awk/sed combination versus Python. (I recognize that Perl would be another option, but, if I was going to learn another full language, Python seems to be a better, more extensible choice.)
The advantage of sed/awk is that it would be quicker to learn. The disadvantage is that this combination isn't as extensible as Python. Indeed, I might imagine some "mission creep" if I learned Python, which would be fine, but not my goal.
The other consideration that I had is applications to large data sets. As I understand it, awk/sed operate line-by-line, while Python would typically pull all the data into memory. This could be another advantage for sed/awk.
Are there other issues that I'm missing? Any advice that you can offer would be appreciated. (I included the R tag for R users to offer their cleaning recommendations.) | 0 | python,r,awk,sed,data-cleaning | 2011-09-20T03:13:00.000 | 0 | 7,479,686 | I agree with Dirk. I thought about the same thing and used other languages a bit, too. But in the end I was surprised again and again by what more experienced users do with R. Packages like ddply or plyr might be very interesting to you. That being said, SQL has often helped me with data juggling. | 0 | 4,795 | false | 0 | 1 | Python or awk/sed for cleaning data | 7,484,242
5 | 6 | 0 | 3 | 26 | 0 | 0.099668 | 0 | I use R for data analysis and am very happy with it. Cleaning data could be a bit easier, however. I am thinking about learning another language suited to this task. Specifically, I am looking for a tool to use to take raw data, remove unnecessary variables or observations, and format it for easy loading in R. Contents would be mostly numeric and string data, as opposed to multi-line text.
I am considering the awk/sed combination versus Python. (I recognize that Perl would be another option, but, if I was going to learn another full language, Python seems to be a better, more extensible choice.)
The advantage of sed/awk is that it would be quicker to learn. The disadvantage is that this combination isn't as extensible as Python. Indeed, I might imagine some "mission creep" if I learned Python, which would be fine, but not my goal.
The other consideration that I had is applications to large data sets. As I understand it, awk/sed operate line-by-line, while Python would typically pull all the data into memory. This could be another advantage for sed/awk.
Are there other issues that I'm missing? Any advice that you can offer would be appreciated. (I included the R tag for R users to offer their cleaning recommendations.) | 0 | python,r,awk,sed,data-cleaning | 2011-09-20T03:13:00.000 | 0 | 7,479,686 | I would recommend investing for the long term with a proper language for processing data files, like python or perl or ruby, vs the short term sed/awk solution. I think that all data analysts need at least three languages; I use C for hefty computations, perl for processing data files, and R for interactive analysis and graphics.
I learned perl before python had become popular. I've heard great things about ruby so you might want to try that instead.
For any of these you can work with files line-by-line; python doesn't need to read the full file in advance. | 0 | 4,795 | false | 0 | 1 | Python or awk/sed for cleaning data | 7,479,874 |
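For illustration, a minimal Python sketch of line-by-line cleaning (the filenames and the filtering rule are placeholders); only one line is held in memory at a time:

    # Keep only rows whose third tab-separated field parses as a number.
    with open("raw.txt") as src, open("clean.txt", "w") as dst:
        for line in src:
            fields = line.rstrip("\n").split("\t")
            try:
                float(fields[2])
            except (IndexError, ValueError):
                continue            # drop malformed observations
            dst.write(line)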
2 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I've just started using Jenkins today, so it's entirely possible that I've missed something in the docs.
I currently have Jenkins set up to run unit tests from a local Git repo (via plugin). I have set up the environment correctly (at least, in a seemingly working condition), but have run into a small snag.
I have a single settings.py file that I have excluded from my git repo (it contains a few keys that I'm using in my app). I don't want to include that file into my git repo as I'm planning on OS'ing the project when I'm done (anyone using the project would need their own keys). I realize that this may not be the best way of doing this, but it's what's done (and it's a small personal project), so I'm not concerned about it.
The problem is that because it's not under git management, Jenkins doesn't pick it up.
I'd like to be able to copy this single file from my source directory to the Jenkins build directory prior to running tests.
Is there a way to do this? I've tried using the copy to slave plugin, but it seems like any file that I want would first (manually) need to be copied or created in workspace/userContent. Am I missing something? | 0 | python,git,build,jenkins | 2011-09-20T03:24:00.000 | 0 | 7,479,757 | I am using the Copy Data To Workspace Plugin for this; the Copy to Slave plugin should also work, but I found Copy Data To Workspace easier to work with for this use case. | 0 | 350 | false | 0 | 1 | Using un-managed file in Jenkins build step | 7,481,245
2 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I've just started using Jenkins today, so it's entirely possible that I've missed something in the docs.
I currently have Jenkins set up to run unit tests from a local Git repo (via plugin). I have set up the environment correctly (at least, in a seemingly working condition), but have run into a small snag.
I have a single settings.py file that I have excluded from my git repo (it contains a few keys that I'm using in my app). I don't want to include that file into my git repo as I'm planning on OS'ing the project when I'm done (anyone using the project would need their own keys). I realize that this may not be the best way of doing this, but it's what's done (and it's a small personal project), so I'm not concerned about it.
The problem is that because it's not under git management, Jenkins doesn't pick it up.
I'd like to be able to copy this single file from my source directory to the Jenkins build directory prior to running tests.
Is there a way to do this? I've tried using the copy to slave plugin, but it seems like any file that I want would first (manually) need to be copied or created in workspace/userContent. Am I missing something? | 0 | python,git,build,jenkins | 2011-09-20T03:24:00.000 | 0 | 7,479,757 | Why not just use "echo my-secret-keys > settings.txt" in Jenkins and adjust your script to read this file? | 0 | 350 | false | 0 | 1 | Using un-managed file in Jenkins build step | 7,489,440
1 | 7 | 0 | 0 | 1 | 0 | 0 | 0 | I have a WordPress self hosted blog which was down until last week. After updating WordPress, now the site is working fine. But I would like to check it frequently for next couple of days. Is it possible to write a program to do this so that I can schedule it?
Please give some suggestions. I am thinking of Python as the language for the program, but I am open to any language. | 0 | python | 2011-09-20T08:08:00.000 | 0 | 7,481,974 | Your program should send a get request to the website, receive the html (verify you get "200 OK"), and compare the beginning of the string to what you know it should be (compare everything until the first thing that depends on content). If the comparison fails, then you should suspect that your site may be down, and check it yourself. | 0 | 3,127 | false | 0 | 1 | Programmatically checking whether a website is working or not? | 7,482,087 |
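A minimal sketch of that idea using only the standard library (Python 3 spelling; the URL and the expected snippet are placeholders):

    import urllib.request

    def site_ok(url, expected_prefix):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status != 200:          # anything but 200 OK is suspicious
                    return False
                return resp.read(len(expected_prefix)).startswith(expected_prefix)
        except OSError:
            return False                        # network error: treat as down

    print(site_ok("http://example.com/", b"<!doctype html>"))

Schedule it with cron (or the Windows task scheduler) and have it alert you whenever site_ok() returns False.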
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I'm using Philip Semanchunk's posix_ipc python module to read from a posix message queue. A C++ program I've written populates the queue with a struct containing the data. My python program successfully reads the message off the queue but I'm not sure what to do with the resulting message.
doing a print msg just prints out an empty string but I know msg has something in it.
I want to be able to read the members of the struct but I'm assuming I need to do something maybe with the struct module to marshal this message into something readable? Has anyone done anything like this?
I've read through his documentation and demos, but he is using simple types and I haven't found any examples where the source is a C struct. Google hasn't been any help either.
Also, I'm restricted to using Python 2.3. Thanks! | 0 | python,posix,message-queue | 2011-09-20T13:20:00.000 | 0 | 7,485,830 | Use the Python struct module.
struct.unpack() will translate the raw byte string returned by MessageQueue.receive() into a tuple of Python values matching your format string | 0 | 817 | true | 0 | 1 | How do I interpret the return from posix_ipc::MessageQueue::receive()? | 7,500,416
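A hedged sketch of the idea; the C struct layout below is invented, so adapt the format string to your real struct and watch out for compiler padding between fields (mq is assumed to be your already-opened posix_ipc.MessageQueue):

    import struct

    # Suppose the C++ side sends: struct { int id; double x; double y; double z; };
    msg, priority = mq.receive()                     # returns (message, priority)
    obj_id, x, y, z = struct.unpack("=iddd", msg)    # "=": native order, no padding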
1 | 2 | 1 | 10 | 19 | 0 | 1 | 0 | Full question
Why did Google choose Java for the Android Operating System and not the X language?
Where X would be one of the below:
Python version 2.7 or version 3
which is as powerful as Java
has a lot of useful third party libraries
is faster to develop in thanks to its dynamic nature
C/C++ or ObjC
which are harder to develop in but
run faster thanks to less overhead
would require less beefy hardware, especially RAM
are as robust as Java but more prone to app-wide crashes when just one module fails
And so on. My main concern when I asked this question was why Java and not Python. I can add other elements (languages) of comparison later if anyone else is also interested.
Info: I'm not a full-blown developer.
EDIT I was very much aware that my question was going to be met with some opposition and bashing, that's why I said that I'm not a full-blown developer. I have my personal opinions to support me and just that but even thus, I still got great answers. I understand now, yes, Dalvik VM runs Java bytecodes on ARM devices, but how different is that Java from any other Oracle/Sun Java spec, I don't know. I've been playing with both Java and Python and wrote at least one useful program in both + GUIs (Swing and PySide) and at least one third party library used. The order I did this was Java, then Python which made me realize how much faster it was for me to write everything from scratch in Python than it was in Java. Even packages seemed much easier to manager than Java's way of importing packages (thank God for Eclipse and a few intuitive clicks)... and then how complex would embedded apps be that you'd need to take extra care for type checking and unit tests (and afaik, unit tests are supposed to be a must nowadays for any serious developer)... but anyway, thanks for the answers so far. It's a learning process. ;) | 0 | java,android,python | 2011-09-21T09:02:00.000 | 0 | 7,497,199 | Google, as a company, uses Java a lot. The search features are written in Java. As far as I can tell from the outside, Google likes Java.
For most tasks, Java is faster than Python. I would rather work in Python, and I know how to write reasonably efficient Python, and yes PyPy is really shaking things up, but Google needed to provide a snappy experience on relatively underpowered phone processors so they likely didn't consider Python a contender.
Java, like Python, provides a great deal of isolation from details of the underlying hardware. I think all Android phones are ARM-based, but in theory you could make an Android phone based on an x86 chip or something completely different, and as long as you do a good job of porting the Dalvik VM, your code will run. (Aside from apps that have native ARM code compiled in, of course.)
Google likes the Java language, but they chose to write their own VM ("Dalvik") rather than license the Java VM. Compiled Java can be directly translated into Dalvik bytecodes. (Oracle sued Google over this. Oracle lost the lawsuit.) | 0 | 9,975 | false | 1 | 1 | Why did Google choose Java for the Android Operating System? | 7,497,322 |
1 | 1 | 0 | 3 | 1 | 0 | 0.53705 | 0 | I am a newbie and I need your help!!
I have installed scipy on my Ubuntu.
When I run the code from scipy import optimize, special directly at the shell prompt,
I get the following in the terminal:
can't read /var/mail/scipy.optimize
If I type python, get the >>> prompt, and then type in from scipy import optimize,
and afterwards run code that references scipy.optimize, I get the following:
name 'scipy' is not defined | 0 | python,scipy | 2011-09-21T14:44:00.000 | 0 | 7,501,785 | from scipy import optimize, special on the shell prompt starts the from command, which is an email program.
from scipy import optimize, special in Python will put the modules optimize and special in your namespace, but not scipy. Either use them unqualified or do import scipy instead. | 0 | 4,965 | false | 0 | 1 | Question with scipy.optimize | 7,501,841 |
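For example, either of these works (brentq is just an arbitrary optimize function used for illustration):

    from scipy import optimize
    root = optimize.brentq(lambda v: v**2 - 2, 0, 2)         # use the imported name

    import scipy.optimize
    root = scipy.optimize.brentq(lambda v: v**2 - 2, 0, 2)   # or qualify fully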
Hello, I have a Python script that takes apart an email from a string. I am using the get_payload(decode=True) function from the email package, and it works great for PDFs and JPGs, but it does not decode BMP files. The file is still base64-encoded when I write it to disk.
Has anyone come across this issue themselves? | 0 | python,email,mime | 2011-09-21T19:20:00.000 | 0 | 7,505,410 | OK, so I finally found the problem, and it was not related to the Python mail class at all. I was reading from a named pipe using the .read() function, and it was not reading the entire email from the pipe. I had to pass the read function a size argument, and then it was able to read the entire email. So ultimately the reason my BMP file was not decoded is that I had invalid base64 data, which caused the get_payload() function to be unable to decode the attachment. | 0 | 310 | false | 0 | 1 | How to decode bitmap images using python's email class | 7,517,921
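A hedged sketch of a more robust pattern for reading the pipe (pipe is assumed to be the already-opened named pipe, in binary mode): loop until read() signals end-of-file instead of trusting a single call:

    chunks = []
    while True:
        data = pipe.read(65536)      # read in bounded chunks
        if not data:                 # empty result: the writer closed the pipe
            break
        chunks.append(data)
    raw_email = b"".join(chunks)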
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | I have python 2.7 installed on my windows computer. I'm trying to email a puzzle answer to Spotify, which is running Python 2.6.6. When I submit my *.py source code, I'm getting the following error:
Run Time Error
Exited, exit status: 1
I only have "import sys". I've run tons of stress tests - possible inputs are 1 ≤ m ≤ 10 000 lines, I've tested with 1 million+ values with zero problems. I've tried printing with print & sys.stdout.write.
When I send in dummy test code (I run my full algorithm but only print garbage instead of my answer - i.e., print "test!"), I get the expected "Wrong Answer" back.
I have no idea where to start debugging - any tips/help at all?
Thanks!
-Sam | 0 | python,runtime,exitstatus | 2011-09-22T08:59:00.000 | 0 | 7,512,180 | I got the same error. As I see it, this is not Python output but just a reply from the Spotify bot indicating that your program threw an exception in some tests. Maybe the real output isn't shown to prevent debugging via the bot.
When you print dummy data, the first test fails and you get 'Wrong Answer'.
When you print real output, the first test may pass but a later one may throw an exception, and you get 'Run Time Error'.
I fixed one defect in my script that could raise an exception, and the Run Time Error went away. | 0 | 1,602 | false | 0 | 1 | Run Time Error (exit status 1) when submitting puzzle in Python | 8,352,031
2 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | local_import function randomly does not import my modules from modules
directory. The Error is:
ImportError: No module named testapp.modules.mymodule
I have this problem when I use web2py with Apache (with WSGI). I have no problem when I run locally with the "python web2py.py" command.
Any suggestion? | 0 | python,apache,wsgi,web2py,web2py-modules | 2011-09-23T07:40:00.000 | 0 | 7,525,761 | Add testapp to your PYTHONPATH. | 0 | 642 | false | 1 | 1 | local_import function does not work | 7,525,938 |
2 | 3 | 0 | 1 | 2 | 0 | 0.066568 | 0 | local_import function randomly does not import my modules from modules
directory. The Error is:
ImportError: No module named testapp.modules.mymodule
I have this problem when I use web2py with Apache (with WSGI). I have no problem when I run locally with the "python web2py.py" command.
Any suggestion? | 0 | python,apache,wsgi,web2py,web2py-modules | 2011-09-23T07:40:00.000 | 0 | 7,525,761 | I will answer my own question :)
I started using mod_proxy and everything is ok. | 0 | 642 | false | 1 | 1 | local_import function does not work | 7,582,872 |
1 | 2 | 0 | 12 | 11 | 1 | 1.2 | 0 | I'd like to set the optimize flag (python -O myscript.py) at runtime within a python script based on a command line argument to the script like myscript.py --optimize or myscript --no-debug. I'd like to skip assert statements without iffing all of them away. Or is there a better way to efficiently ignore sections of python code. Are there python equivalents for #if and #ifdef in C++? | 0 | python,optimization,runtime,assert,conditional-compilation | 2011-09-23T09:42:00.000 | 0 | 7,527,055 | -O is a compiler flag, you can't set it at runtime because the script already has been compiled by then.
Python has nothing comparable to compiler macros like #if.
Simply write a start_my_project.sh script that sets these flags. | 0 | 2,416 | true | 0 | 1 | Is it possible to set the python -O (optimize) flag within a script? | 7,527,449 |
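That said, a hedged pure-Python workaround is to have the script re-exec itself under -O when a flag is passed (this assumes the script can safely restart before doing any real work):

    import os
    import sys

    if "--optimize" in sys.argv and __debug__:
        # __debug__ is False under -O, so the re-exec happens only once.
        os.execv(sys.executable, [sys.executable, "-O"] + sys.argv)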
2 | 6 | 0 | 3 | 11 | 0 | 0.099668 | 0 | I'm developing an web based application written in PHP5, which basically is an UI on top of a database. To give users a more flexible tool I want to embed a scripting language, so they can do more complex things like fire SQL queries, do loops and store data in variables and so on. In my business domain Python is widely used for scripting, but I'm also thinking of making a simple Domain Specific Language. The script has to wrap my existing PHP classes.
I'm seeking advice on how to approach this development task.
Update: I'll try scripting in the database using PL/pgSQL in PostgreSQL. This will do for now, but I can't use my PHP classes this way. The Lua approach is appealing and seems to be what I want (besides that it's not Python). | 1 | php,python,dsl,plpgsql | 2011-09-23T11:36:00.000 | 0 | 7,528,360 | How about doing the scripting on the client? That will ensure maximum security and also save server resources.
In other words, JavaScript would be your scripting platform. What you do is expose the functionality of your backend as JavaScript functions. Depending on how your app is currently written, that might or might not require backend work.
Oh, and by the way, you are not limited to JavaScript for the actual language. Google "compile to javascript" and the first hit should be a list of languages you can use. | 0 | 570 | false | 0 | 1 | Embed python/dsl for scripting in an PHP web application | 7,660,613
2 | 6 | 0 | 0 | 11 | 0 | 0 | 0 | I'm developing an web based application written in PHP5, which basically is an UI on top of a database. To give users a more flexible tool I want to embed a scripting language, so they can do more complex things like fire SQL queries, do loops and store data in variables and so on. In my business domain Python is widely used for scripting, but I'm also thinking of making a simple Domain Specific Language. The script has to wrap my existing PHP classes.
I'm seeking advice on how to approach this development task.
Update: I'll try scripting in the database using PL/pgSQL in PostgreSQL. This will do for now, but I can't use my PHP classes this way. The Lua approach is appealing and seems to be what I want (besides that it's not Python). | 1 | php,python,dsl,plpgsql | 2011-09-23T11:36:00.000 | 0 | 7,528,360 | You could do it without Python, e.g. by parsing the user input for pre-defined "tags" and returning the result. | 0 | 570 | false | 0 | 1 | Embed python/dsl for scripting in an PHP web application | 7,605,372
4 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | For very simple, internal web-apps using ASP I was able to just switch IIS 'on' and then write some ASP scripts in the www directory that would start working immediately.
Is there an equivalent webserver app for Python scripts that I can run that will automatically start serving dynamic pages (python scripts) in a certain folder (with virtually no configuration)?
Solutions I've already found are either too limited (e.g. SimpleHTTPRequestHandler doesn't serve dynamic content) or require configuring the script that does the serving. | 0 | python,webserver | 2011-09-23T20:07:00.000 | 0 | 7,534,244 | WSGI setups are fairly easy to get started with, but by no means turnkey. Django MVC has a simple built-in development server if you plan on using a more comprehensive framework. | 0 | 394 | false | 1 | 1 | What is a pythonic webserver equivalent to IIS and ASP? | 7,534,361
4 | 4 | 0 | 4 | 0 | 0 | 0.197375 | 0 | For very simple, internal web-apps using ASP I was able to just switch IIS 'on' and then write some ASP scripts in the www directory that would start working immediately.
Is there an equivalent webserver app for Python scripts that I can run that will automatically start serving dynamic pages (python scripts) in a certain folder (with virtually no configuration)?
Solutions I've already found are either too limited (e.g. SimpleHTTPRequestHandler doesn't serve dynamic content) or require configuring the script that does the serving. | 0 | python,webserver | 2011-09-23T20:07:00.000 | 0 | 7,534,244 | There's always CGI. Add a script mapping of .py to "C:\Python27\python.exe" -u "%s" then drop .py files in a folder and IIS will execute them.
I'd not generally recommend it for real work—in the longer term you would definitely want to write apps to WSGI, and then deploy them through any number of interfaces including CGI—but it can be handy for quick prototyping. | 0 | 394 | false | 1 | 1 | What is a pythonic webserver equivalent to IIS and ASP? | 7,534,379 |
4 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | For very simple, internal web-apps using ASP I was able to just switch IIS 'on' and then write some ASP scripts in the www directory that would start working immediately.
Is there an equivalent webserver app for Python scripts that I can run that will automatically start serving dynamic pages (python scripts) in a certain folder (with virtually no configuration)?
Solutions I've already found are either too limited (e.g. SimpleHTTPRequestHandler doesn't serve dynamic content) or require configuring the script that does the serving. | 0 | python,webserver | 2011-09-23T20:07:00.000 | 0 | 7,534,244 | My limited experience with Python web frameworks has taught me that most go to one extreme or the other: Django on one end is a full-stack MVC framework, that will do pretty much everything for you. On the other end, there are Flask, web.py, CherryPy, etc., which do much less, but stay out of your way.
CherryPy, for example, not only comes with no ORM, and doesn't require MVC, but it doesn't even have a templating engine. So unless you use it with something like Cheetah, you can't write what would look like .asp at all. | 0 | 394 | false | 1 | 1 | What is a pythonic webserver equivalent to IIS and ASP? | 7,534,417 |
4 | 4 | 0 | 1 | 0 | 0 | 0.049958 | 0 | For very simple, internal web-apps using ASP I was able to just switch IIS 'on' and then write some ASP scripts in the www directory that would start working immediately.
Is there an equivalent webserver app for Python scripts that I can run that will automatically start serving dynamic pages (python scripts) in a certain folder (with virtually no configuration)?
Solutions I've already found are either too limited (e.g. SimpleHTTPRequestHandler doesn't serve dynamic content) or require configuring the script that does the serving. | 0 | python,webserver | 2011-09-23T20:07:00.000 | 0 | 7,534,244 | For development or just to play around, here's an example using the standard Python library that I have used to help friend who wanted to get a basic CGI server up and running. It will serve python scripts from cgi-bin and files from the root folder. I'm not near a Windows computer at the moment to make sure that this still works. This also assumes Python2.x. Python 3.x has this, it's just not named the same.
Make a directory on your harddrive with a cgi-bin folder in it (Ex. "C:\server\cgi-bin")
In a command window, navigate to "C:\server" directory
Type the following assuming you've installed python 2.7 in C:\Python27:
"c:\python27\python.exe -m CGIHTTPServer"
You should get a message like "Serving HTTP on 0.0.0.0 port 8000"
Linux is the same - "python -m CGIHTTPServer" in a directory with a cgi-bin/ in it. | 0 | 394 | false | 1 | 1 | What is a pythonic webserver equivalent to IIS and ASP? | 7,534,970 |
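For a quick smoke test, a minimal script you might drop into cgi-bin (the name hello.py and its content are just an illustration):

    #!/usr/bin/env python
    # Served at http://localhost:8000/cgi-bin/hello.py
    print("Content-Type: text/html")
    print()
    print("<h1>Hello from Python CGI</h1>")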
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 1 | What is the best way to set up a system that checks for events daily and sends messages via email, Twitter, SMS, and possibly Facebook? Keep in mind, that I do not have access to a web server with root access (Using Rackspace Cloud). Would PHP have a solution for this? Would there be any drawbacks to using Google App Engine and Python? | 0 | php,python,google-app-engine | 2011-09-23T22:41:00.000 | 0 | 7,535,544 | If you are using Google App Engine with Python you could use "Cron" to schedule a task to automatically run each day.
GAE also allows you to send emails. Just a little tip: make sure that you 'invite' the email address used to send mail to the application as an administrator, so that you can programmatically send emails. | 0 | 150 | false | 1 | 1 | Check for Event Daily, and Send Notification Messages | 8,426,091
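A rough sketch of the sending side on the classic GAE Python runtime (the sender address and message are placeholders; a cron.yaml schedule would invoke the handler that calls this):

    from google.appengine.api import mail

    def send_daily_notice(recipient, body):
        # The sender must be an administrator of the app (see the tip above).
        mail.send_mail(sender="admin@your-app-id.appspotmail.com",
                       to=recipient,
                       subject="Daily event notification",
                       body=body)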
1 | 3 | 0 | 0 | 2 | 0 | 0 | 1 | I am writing an IRC bot in Python using the Twisted library. To test my bot I need to connect several times to an IRC network as my bot requires a restart each time a change is made. Therefore I am often "banned" from these networks for a couple of minutes because I have made a lot of connections.
This makes testing and writing the bot annoying. Does anyone know of a better way to test the bot, or of any network that isn't as strict about the number of connections as QuakeNet is? | 0 | python,testing,connection,irc,bots | 2011-09-25T14:09:00.000 | 0 | 7,546,026 | Freenode is good. You can create channels for yourself to test in. Also check out the supybot project, which is good for Python bots. | 0 | 3,842 | false | 0 | 1 | Testing an IRC bot | 7,546,076
1 | 4 | 0 | 2 | 13 | 0 | 0.099668 | 0 | My Django application sends out quite a bit of emails and I've tried testing it thoroughly. However, for the first few months, I'd like to log all outgoing emails to ensure that everything is working smoothly. Is there a Django module that allows me to do this and makes the outgoing emails visible through the administration panel
Thanks. | 0 | python,django,logging,django-email | 2011-09-26T08:11:00.000 | 0 | 7,552,283 | I do not know if there exists a module that works this way, but writing a custom one is a piece of cake. Just create a separate model and, every time you send an email, create a new instance (use a custom method for email sending). Then register this model with the admin and you're done. | 0 | 7,599 | false | 1 | 1 | How can I log all outgoing email in Django? | 7,552,429
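A minimal sketch of that approach (the model and function names are made up):

    from django.core.mail import send_mail
    from django.db import models

    class SentEmail(models.Model):
        recipients = models.TextField()
        subject = models.CharField(max_length=255)
        body = models.TextField()
        sent_at = models.DateTimeField(auto_now_add=True)

    def send_and_log(subject, body, from_addr, recipient_list):
        send_mail(subject, body, from_addr, recipient_list)
        SentEmail.objects.create(recipients=", ".join(recipient_list),
                                 subject=subject, body=body)

Register SentEmail with the admin (admin.site.register(SentEmail)) and the outgoing mail log shows up in the administration panel.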
1 | 1 | 0 | 3 | 3 | 0 | 1.2 | 0 | I'm trying to write a pop3 and imap clients in python using available libs, which will download email headers (and subsequently entire email bodies) from various servers and save them in a mongodb database. The problem I'm facing is that this client downloads emails in addition to a user's regular email client. So with the assumption that a user might or might not leave emails on the server when downloading using his mail client, I'd like to fetch the headers but only collect them from a certain date, to avoid grabbing entire mailboxes every time I fetch the headers.
As far as I can see the POP3 list call will get me all messages on the server, even those I probably already downloaded. IMAP doesn't have this problem.
How do email clients handle this situation when dealing with POP3 servers? | 1 | python,email,pop3 | 2011-09-26T10:14:00.000 | 0 | 7,553,606 | Outlook logs in to a POP3 server and issues the STAT, LIST and UIDL commands; then if it decides the user has no new messages it logs out. I have observed Outlook doing this when tracing network traffic between a client and my DBMail POP3 server. I have seen Outlook fail to detect new messages on a POP3 server using this method. Thunderbird behaves similarly but I have never seen it fail to detect new messages.
Issue the LIST and UIDL commands to the server after logging in. LIST gives you an index number (the message's linear position in the mailbox) and the size of each message. UIDL gives you the same index number and a computed hash value for each message.
For each user you can store the size and hash value given by LIST and UIDL. If you see the same size and hash value, assume it is the same message. When a given message no longer appears in this list, assume it has been deleted and clear it from your local memory.
For complete purity, remember the relative positions of the size/hash pairs in the message list, so that you can support the possibility that they may repeat. (My guess on Outlook's new message detection failure is that sometimes these values do repeat, at least for DBMail, but Outlook remembers them even after they are deleted, and forever considers them not new. If it were me, I would try to avoid this behavior.)
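A sketch of the LIST/UIDL bookkeeping described above with the standard-library poplib (Python 3; the host and credentials are placeholders):

    import poplib

    conn = poplib.POP3("pop.example.com")
    conn.user("username")
    conn.pass_("password")

    # LIST lines look like b"1 1234" (index, size); UIDL lines like b"1 abc123".
    sizes = dict(line.split() for line in conn.list()[1])
    uids = dict(line.split() for line in conn.uidl()[1])
    conn.quit()

    # (size, uid) pairs not seen on a previous poll are treated as new messages.
    current = dict((num, (sizes[num], uids[num])) for num in uids)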
Footnote: Remember that the headers are part of the message. Do not trust anything in the header for this reason: dates, senders, even server hand-off information can be easily faked and cannot be assumed unique. | 0 | 1,501 | true | 0 | 1 | Download POP3 headers from a certain date (Python) | 7,556,750 |
I want to set up Jenkins to
1) pull our source code from our repository,
2) compile and build it
3) run the tests on an embedded device
Steps 1 & 2 are quite easy and straightforward with Jenkins.
As for step 3:
we have hundreds of those devices in various versions, and I'm looking for a utility (preferably in Python) that can manage the availability of hardware devices/resources
in such a manner that one of the build steps receives which device is available and runs the tests on it. | 0 | python,embedded,jenkins | 2011-09-26T18:01:00.000 | 0 | 7,559,224 | What I have found is that the best thing to do is to have something like Jenkins (or, if you're using the enterprise offering, Electric Commander) manage a resource 'pool'. The pool is essentially virtual devices, but each has a property such that you can call into a Python script with either an IP address or a serial port and communicate with your devices.
I used it for automated embedded testing on radios. The Python script managed a whole host of tests; Commander would choose a single-step resource from the pool, and since that resource had an IP, it would pass the IP into the Python script. The test would then perform all the tests, and the stdout would get stored in Commander/Jenkins. We also set properties to track the pass/fail count as the test was executing.
The main resource gets a single-step item from the pool; in the main resource I wrote a tiny script that checked whether the item pulled from the pool had the resource name "Bench1" .. "BenchX", etc.
Basically:

    import subprocess
    # 'resource' is the pool item handed to this build step by Jenkins/Commander
    if resource.name == "BENCH1":
        subprocess.call(["python", "myscript.py", "--com", "COM3", "--baud", "9600"])
    # ...one branch per bench, etc.

The really great feature of doing it this way is that if you have to disconnect a device, you don't need to deliver script changes; you simply mark the Commander/Jenkins resource as disabled, and the main 'project' can still pull from what remains in your resource pool | 0 | 1,059 | true | 0 | 1 | Handling hardware resources when testing with Jenkins | 7,560,987
1 | 1 | 0 | 2 | 0 | 1 | 0.379949 | 0 | Are there any benchmark on this???
(I tried googling for some results but found none...
and I couldn't test gmpy because gmplib wouldn't be installed on my laptop)
thank you! | 0 | java,python,performance,cython,bignum | 2011-09-26T20:21:00.000 | 0 | 7,560,850 | First of all, I'm probably biased since I'm the maintainer of gmpy.
gmpy uses the GMP multiple-precision library, and GMP is usually considered the fastest general-purpose multiple-precision library. But whether it's "fastest" depends on the operation and the size of the values. When I compare the performance between Python longs and gmpy's mpz type, the crossover point is roughly between 20 and 50 digits. You'll probably get different results on your machine.
What exactly are you trying to do? | 0 | 1,269 | false | 0 | 1 | What's the fastest implementation for bignum? (Java's bigInteger / Cython's int / gmpy / etc...) | 7,561,424 |
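A rough way to find the crossover on your own machine (this sketch uses the newer gmpy2 module name; the original gmpy spells it gmpy.mpz, and the library must of course be installed):

    import timeit

    setup_int = "a = b = 10**50 + 7"
    setup_mpz = "import gmpy2; a = b = gmpy2.mpz(10**50 + 7)"
    print("long:", timeit.timeit("a * b", setup=setup_int, number=100000))
    print("mpz: ", timeit.timeit("a * b", setup=setup_mpz, number=100000))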
1 | 4 | 0 | 1 | 2 | 0 | 0.049958 | 0 | Right now I have a script which uses numpy that I want to run automatically on a server. When I ssh in and run it manually, it works fine. However, when I set it to run as a cron job, it can't find numpy. Apparently due to the shared server environment, the cron demon for whatever reason can't find numpy. I contacted the server host's tech support and they told me to set up a vps or get my own damn server. Is there any way to hack a workaround for this? Perhaps, by moving certain numpy files into the same directory as the script? | 0 | python,numpy,cron,installation | 2011-09-26T22:11:00.000 | 0 | 7,561,969 | Your cron job is probably executing with a different python interpreter.
Log in as yourself (via ssh) and run which python. That will tell you where your python is. Then have your cron job invoke that python interpreter to run your script, or chmod +x your script and put the path in a #! line at the top of the script. | 0 | 681 | false | 0 | 1 | Workaround Way To Install Numpy? | 7,562,061
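For illustration only (all paths are hypothetical): if which python reports /home/me/bin/python, the crontab entry could be

    0 * * * * /home/me/bin/python /home/me/scripts/myscript.py

or, equivalently, make the script executable and start it with the line #!/home/me/bin/python.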
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | I am an experienced PHP developer (10 years) who has built 3 different custom frameworks for extreme high traffic sites. I have recently started to get into programming a lot of python, usually just for fun (algorithms). I am starting to develop a new site as my side project and wanted to know if I should use a pre-existing python web framework (Django, Pyramids, ect...) or develop my own.
I know things might go a lot faster using a pre-existing framework, but from my experience with PHP frameworks and knowing the amount of traffic my side project could generate, would it be better to develop an extremely lightweight framework myself, just like I have been doing for a while with PHP? It also might be a good way for me to learn Python web development, because most of my experience with the language has been coding algorithms.
If I do use a pre-existing framework I was going to try out Pyramid or Django.
Also do other companies that use Python for web development and expect high traffic use their own web frameworks or a pre-existing one? | 0 | python,frameworks | 2011-09-26T23:18:00.000 | 0 | 7,562,454 | Learn from existing frameworks, I think. The Python web stack (wsgi, sqlalchemy, template engines, full stack frameworks, microframeworks) has spent a lot of time maturing. You'll have the opportunity to develop fast and learn from existing design. | 0 | 481 | false | 1 | 1 | Use Python Framework or Build Own | 7,562,586 |
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 1 | I need to fetch twitter historical data for a given set of keywords. Twitter Search API returns tweets that are not more than 9 days old, so that will not do. I'm currently using Tweepy Library (http://code.google.com/p/tweepy/) to call Streaming API and it is working fine except the fact that it is too slow. For example, when I run a search for "$GOOG" sometimes it takes more than an hour between two results. There are definitely tweets containing that keyword but it isn't returning result fast enough.
What can be the problem? Is the Streaming API slow, or is there some problem in my method of accessing it? Is there any better way to get that data free of cost? | 0 | python,api,twitter,streaming,tweepy | 2011-09-27T04:06:00.000 | 0 | 7,564,100 | How far back do you need? To fetch historical data, you might want to keep the stream on indefinitely (the stream API allows for this) and store the stream locally, then retrieve historical data from your db.
I also use Tweepy for live Stream/Filtering and it works well. The latency is typically < 1s and Tweepy is able to handle large volume streams. | 0 | 1,072 | true | 0 | 1 | Is there any better way to access twitter streaming api through python? | 7,640,150 |
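A sketch against the tweepy streaming classes of that era (newer tweepy releases renamed these; auth and store() are placeholders you would supply):

    import tweepy

    class Saver(tweepy.StreamListener):
        def on_status(self, status):
            store(status)                    # hypothetical: persist to your local db

    stream = tweepy.Stream(auth, Saver())    # auth: your tweepy.OAuthHandler
    stream.filter(track=["$GOOG"])           # blocks, delivering tweets as they arrive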
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I need to fetch twitter historical data for a given set of keywords. Twitter Search API returns tweets that are not more than 9 days old, so that will not do. I'm currently using Tweepy Library (http://code.google.com/p/tweepy/) to call Streaming API and it is working fine except the fact that it is too slow. For example, when I run a search for "$GOOG" sometimes it takes more than an hour between two results. There are definitely tweets containing that keyword but it isn't returning result fast enough.
What can be the problem? Is the Streaming API slow, or is there some problem in my method of accessing it? Is there any better way to get that data free of cost? | 0 | python,api,twitter,streaming,tweepy | 2011-09-27T04:06:00.000 | 0 | 7,564,100 | The streaming API is very fast; you get a message as soon as it is posted. We use twitter4j. But the streamer streams only current messages, so if you are not listening on the streamer at the moment a tweet is sent, the message is lost. | 0 | 1,072 | false | 0 | 1 | Is there any better way to access twitter streaming api through python? | 7,569,606
1 | 2 | 0 | 1 | 4 | 0 | 0.099668 | 0 | pycassa has pycassa.util.convert_time_to_uuid(time_arg, lowest_val=True, randomize=False)
phpcassa has static string uuid1 ([string $node = null], [int $time = null])
Can phpcassa's uuid1 be used to get lowest/highest uuids like in pycassa?
If not, what's the best approach to ensure you get everything between two given timestamps? | 0 | php,python,cassandra | 2011-09-27T18:27:00.000 | 0 | 7,573,938 | I believe that if you have a column with a type of UUID version 1, Cassandra will ignore the 'unique' component of the UUID and just use the time part for the range. | 0 | 411 | false | 0 | 1 | lowest possible timeuuid in php (phpcassa) | 7,575,278 |
3 | 8 | 0 | 2 | 34 | 0 | 0.049958 | 0 | As a long time Python programmer, I wonder, if a central aspect of Python culture eluded me a long time: What do we do instead of Makefiles?
Most ruby-projects I've seen (not just Rails) use Rake; shortly after node.js became popular, there was Cake. In many other (compiled and non-compiled) languages there are classic Makefiles.
But in Python, no one seems to need such infrastructure. I randomly picked Python projects on GitHub, and they had no automation besides the installation provided by setup.py.
What's the reason behind this?
Is there nothing to automate? Do most programmers prefer to run style checks, tests, etc. manually?
Some examples:
dependencies sets up a virtualenv and installs the dependencies
check calls the pep8 and pylint command-line tools.
the test task depends on dependencies, enables the virtualenv, starts selenium-server for the integration tests, and calls nosetest
the coffeescript task compiles all coffeescripts to minified javascript
the runserver task depends on dependencies and coffeescript
the deploy task depends on check and test and deploys the project.
the docs task calls sphinx with the appropriate arguments
Some of them are just one or two-liners, but IMHO, they add up. Due to the Makefile, I don't have to remember them.
To clarify: I'm not looking for a Python equivalent of Rake. I'm happy with Paver. I'm looking for the reasons. | 0 | python,automation,makefile,rake | 2011-09-28T09:16:00.000 | 0 | 7,580,939 | Any decent test tool has a way of running the entire suite in a single command, and nothing is stopping you from using rake, make, or anything else, really.
There is little reason to invent a new way of doing things when existing methods work perfectly well - why re-invent something just because YOU didn't invent it? (NIH). | 0 | 12,445 | false | 1 | 1 | Why are there no Makefiles for automation in Python projects? | 7,581,531 |
3 | 8 | 0 | -3 | 34 | 0 | 1.2 | 0 | As a long time Python programmer, I wonder, if a central aspect of Python culture eluded me a long time: What do we do instead of Makefiles?
Most ruby-projects I've seen (not just Rails) use Rake; shortly after node.js became popular, there was Cake. In many other (compiled and non-compiled) languages there are classic Makefiles.
But in Python, no one seems to need such infrastructure. I randomly picked Python projects on GitHub, and they had no automation besides the installation provided by setup.py.
What's the reason behind this?
Is there nothing to automate? Do most programmers prefer to run style checks, tests, etc. manually?
Some examples:
dependencies sets up a virtualenv and installs the dependencies
check calls the pep8 and pylint command-line tools.
the test task depends on dependencies, enables the virtualenv, starts selenium-server for the integration tests, and calls nosetest
the coffeescript task compiles all coffeescripts to minified javascript
the runserver task depends on dependencies and coffeescript
the deploy task depends on check and test and deploys the project.
the docs task calls sphinx with the appropriate arguments
Some of them are just one or two-liners, but IMHO, they add up. Due to the Makefile, I don't have to remember them.
To clarify: I'm not looking for a Python equivalent of Rake. I'm happy with Paver. I'm looking for the reasons. | 0 | python,automation,makefile,rake | 2011-09-28T09:16:00.000 | 0 | 7,580,939 | Is there nothing to automate?
Not really. All but two of the examples are one-line commands.
tl;dr Very little of this is really interesting or complex. Very little of this seems to benefit from "automation".
Due to documentation, I don't have to remember the commands to do this.
Do most programmers prefer to run stylechecks, tests, etc. manually?
Yes.
generating documentation,
the docs task calls sphinx with the appropriate arguments
It's one line of code. Automation doesn't help much.
sphinx-build -b html source build/html. That's a script. Written in Python.
We do this rarely. A few times a week. After "significant" changes.
running stylechecks (Pylint, Pyflakes and the pep8-cmdtool).
check calls the pep8 and pylint command-line tools
We don't do this. We use unit testing instead of pylint.
You could automate that three-step process.
But I can see how SCons or make might help someone here.
tests
There might be space for "automation" here. It's two lines: the non-Django unit tests (python test/main.py) and the Django tests. (manage.py test). Automation could be applied to run both lines.
We do this dozens of times each day. We never knew we needed "automation".
dependencies sets up a virtualenv and installs the dependencies
Done so rarely that a simple list of steps is all that we've ever needed. We track our dependencies very, very carefully, so there are never any surprises.
We don't do this.
the test task depends on dependencies, enables the virtualenv, starts selenium-server for the integration tests, and calls nosetest
Starting the server and running nosetest as a two-step "automation" makes some sense. It saves you from entering the two shell commands to run both steps.
the coffeescript task compiles all coffeescripts to minified javascript
This is something that's very rare for us. I suppose it's a good example of something to be automated. Automating the one-line script could be helpful.
I can see how SCons or make might help someone here.
the runserver task depends on dependencies and coffeescript
Except that the dependencies change so rarely that this seems like overkill. I suppose it can be a good idea if you're not tracking dependencies well in the first place.
the deploy task depends on check and test and deploys the project.
It's an svn co and python setup.py install on the server, followed by a bunch of customer-specific copies from the subversion area to the customer /www area. That's a script. Written in Python.
It's not a general make or SCons kind of thing. It has only one actor (a sysadmin) and one use case. We wouldn't ever mingle deployment with other development, QA or test tasks. | 0 | 12,445 | true | 1 | 1 | Why are there no Makefiles for automation in Python projects? | 7,581,523 |
3 | 8 | 0 | 2 | 34 | 0 | 0.049958 | 0 | As a long time Python programmer, I wonder, if a central aspect of Python culture eluded me a long time: What do we do instead of Makefiles?
Most ruby-projects I've seen (not just Rails) use Rake; shortly after node.js became popular, there was Cake. In many other (compiled and non-compiled) languages there are classic Makefiles.
But in Python, no one seems to need such infrastructure. I randomly picked Python projects on GitHub, and they had no automation besides the installation provided by setup.py.
What's the reason behind this?
Is there nothing to automate? Do most programmers prefer to run style checks, tests, etc. manually?
Some examples:
dependencies sets up a virtualenv and installs the dependencies
check calls the pep8 and pylint command-line tools.
the test task depends on dependencies, enables the virtualenv, starts selenium-server for the integration tests, and calls nosetest
the coffeescript task compiles all coffeescripts to minified javascript
the runserver task depends on dependencies and coffeescript
the deploy task depends on check and test and deploys the project.
the docs task calls sphinx with the appropriate arguments
Some of them are just one or two-liners, but IMHO, they add up. Due to the Makefile, I don't have to remember them.
To clarify: I'm not looking for a Python equivalent of Rake. I'm happy with Paver. I'm looking for the reasons. | 0 | python,automation,makefile,rake | 2011-09-28T09:16:00.000 | 0 | 7,580,939 | The make utility is an optimization tool which reduces the time spent building a software image. The reduction in time is obtained when all of the intermediate materials from a previous build are still available, and only a small change has been made to the inputs (such as source code). In this situation, make is able to perform an "incremental build": rebuild only a subset of the intermediate pieces that are impacted by the change to the inputs.
When a complete build takes place, all that make effectively does is to execute a set of scripting steps. These same steps could just be deposited into a flat script. The -n option of make will in fact print these steps, which makes this possible.
A Makefile isn't "automation"; it's "automation with a view toward optimized incremental rebuilds." Anything scripted with any scripting tool is automation.
So, why would Python projects eschew tools like make? Probably because Python projects don't struggle with long build times that they are eager to optimize. Also, the compilation of a .py to a .pyc file does not have the same web of dependencies as a .c to a .o.
A C source file can #include hundreds of dependent files; a one-character change in any one of these files can mean that the source file must be recompiled. A properly written Makefile will detect when that is or is not the case.
A big C or C++ project without an incremental build system would mean that a developer has to wait hours for an executable image to pop out for testing. Fast, incremental builds are essential.
In the case of Python, probably all you have to worry about is when a .py file is newer than its corresponding .pyc, which can be handled by simple scripting: loop over all the files, and recompile anything newer than its byte code. Moreover, compilation is optional in the first place!
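In fact the standard library already ships that loop; compileall only recompiles .py files whose byte code is stale:

    import compileall
    compileall.compile_dir("src", quiet=1)   # the path is a placeholder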
So the reason Python projects tend not to use make is that their need to perform incremental rebuild optimization is low, and they use other tools for automation; tools that are more familiar to Python programmers, like Python itself. | 0 | 12,445 | false | 1 | 1 | Why are there no Makefiles for automation in Python projects? | 53,604,587 |
2 | 4 | 0 | 1 | 5 | 0 | 0.049958 | 0 | I'm currently working on a project that has been relatively easy, up until now. The underlying project is to transmit data/messages over lasers using audio transformation.
In a nutshell the process is currently like this
The user enters a message
Message is turned into binary
For each 1 and 0 in the binary message, it plays a corresponding tone to signal which is which, in my case 250hz for a 1 and 450 hz for a 0.
The outgoing tone is sent over a stereo cable to an audio transformer rigged to a laser
A solar panel acts as a microphone and records the incoming "sound" as a file
It then plays the file back, reads off the tones, and tries to match each 250 and 450 hz tone to a 1 or 0 (which is where my issue lies).
Up until the actual processing of the sound is fine, my current issue is the following.
I play each tone for x time; on the receiving end the signal is recorded for y time, and that recording is cut into many samples and then analyzed sample by sample, logging each frequency. This is inefficient and inaccurate. I have had many issues: regardless of how long I play the tones, it often hears a tone twice or doesn't hear it at all, which completely throws off whole messages.
I have tried to match the rate at which it samples with the time each tone plays, but unless aligned accordingly it does not work. I've only had a few successful tests for messages like 'test' and 'hi'. I have already looked into bpsk and fsk, but I feel as if I'm already doing something like it but that I have a bad receiving end to decipher it all.
This is all written in Python and I'd be very grateful for any tips, suggestions, or possible implementations that you can provide. Also for tone emission I'm using pyaudiere and for recording I'm using pyaudio.
Thanks!
-Steve | 0 | python,audio,signal-processing,frequency | 2011-09-30T04:57:00.000 | 0 | 7,606,111 | I would tackle the receiving end using two FIR filters, one for each frequency that you are trying to detect. The coefficients of the filters are just a copy of the signal you are looking for (i.e. 250Hz in one case and 450Hz in the other). You would have to look at the output of your solar panel to decide whether that is a square wave, sine wave, or something in between. The length of the filter corresponds to the duration of the tone (i.e. 'x' in your question). The samples are fed into both filters in parallel.
The output of each filter needs to be rectified (i.e. take the absolute value) and smoothed. The smoothing can be done using a simple moving average over a period of about half x (you can experiment to find the best value). Then if you compare the smoothed values (i.e. is a>b, or b>a) you should get a stream of 0's and 1's.
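A hedged numpy sketch of that pipeline (the sample rate and tone length are placeholders; the sine templates play the role of the FIR coefficients):

    import numpy as np

    def tone_envelope(samples, freq, rate, tone_len):
        t = np.arange(int(rate * tone_len)) / float(rate)
        taps = np.sin(2 * np.pi * freq * t)               # FIR coefficients
        rectified = np.abs(np.convolve(samples, taps, "same"))
        win = max(1, len(taps) // 2)                      # ~half a tone, as above
        return np.convolve(rectified, np.ones(win) / win, "same")

    # e1 = tone_envelope(sig, 250.0, 44100, 0.05)
    # e0 = tone_envelope(sig, 450.0, 44100, 0.05)
    # bits = (e1 > e0).astype(int)                        # the stream of 0's and 1's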
Things to be aware of: This assumes the channel behaves the same for both frequencies (i.e. you get similar snr and attenuation). You might need to tweak your frequencies a bit because 450Hz is quite close to 500Hz which is a harmonic of 250Hz. | 0 | 1,017 | false | 1 | 1 | Python Audio Transfer Through Lasers | 7,632,584 |
2 | 4 | 0 | 1 | 5 | 0 | 0.049958 | 0 | I'm currently working on a project that has been relatively easy, up until now. The underlying project is to transmit data/messages over lasers using audio transformation.
In a nutshell the process is currently like this
The user enters a message
Message is turned into binary
For each 1 and 0 in the binary message, it plays a corresponding tone to signal which is which, in my case 250hz for a 1 and 450 hz for a 0.
The outgoing tone is sent over a stereo cable to an audio transformer rigged to a laser
A solar panel acts as a microphone and records the incoming "sound" as a file
It then plays the file back, reads off the tones, and tries to match each 250 and 450 hz tone to a 1 or 0 (which is where my issue lies).
Up until the actual processing of the sound is fine, my current issue is the following.
I play each tone for x time; on the receiving end the signal is recorded for y time, and that recording is cut into many samples and then analyzed sample by sample, logging each frequency. This is inefficient and inaccurate. I have had many issues: regardless of how long I play the tones, it often hears a tone twice or doesn't hear it at all, which completely throws off whole messages.
I have tried to match the rate at which it samples with the time each tone plays, but unless aligned accordingly it does not work. I've only had a few successful tests for messages like 'test' and 'hi'. I have already looked into bpsk and fsk, but I feel as if I'm already doing something like it but that I have a bad receiving end to decipher it all.
This is all written in Python and I'd be very grateful for any tips, suggestions, or possible implementations that you can provide. Also for tone emission I'm using pyaudiere and for recording I'm using pyaudio.
Thanks!
-Steve | 0 | python,audio,signal-processing,frequency | 2011-09-30T04:57:00.000 | 0 | 7,606,111 | Did you do a sanity check by listening to the sound files (both transmit and receive), or viewing the waveforms with an audio editor, to see if they roughly sound or look the same? That way you can narrow down the problem to channel-induced errors versus your software analysis.
Your decoding/demodulation software will need a synchronization method that can determine and track the times at which the audio signal changes from one modulation frequency to another; you will then need to test this synchronization method separately for offset errors. | 0 | 1,017 | false | 1 | 1 | Python Audio Transfer Through Lasers | 7,614,995
In the Eclipse PyDev plugin, documentation for Python's default library loads, but the PyGTK documentation does not load in Eclipse.
Is there any way to load it into Eclipse? | 0 | python,eclipse,pygtk,pydev,documentation-generation | 2011-10-01T16:26:00.000 | 0 | 7,621,477 | Make sure that your PYTHONPATH includes pygtk. | 0 | 887 | false | 0 | 1 | How to load PyGTK documentation in Eclipse PyDev Plugin in auto-completion? | 7,651,572
2 | 3 | 0 | 1 | 5 | 0 | 0.066568 | 0 | I have an infrared camera/tracker with which I am communicating via the serial port. I'm using the pyserial module to do this at the moment. The camera updates the position of a tracked object at the rate of 60 Hz. In order to get the position of the tracked object I execute one pyserial.write() and then listen for an incoming reply with pyserial.read(serialObj.inWaiting()). Once the reply/position has been received the while loop is reentered and so on. My question has to do with the reliability and speed of this approach. I need the position to be gotten by the computer at the rate of at least 60Hz (and the position will then be sent via UDP to a real-time OS). Is this something that Pyserial/Python are capable of or should I look into alternative C-based approaches?
Thanks,
Luke | 0 | python,real-time,pyserial | 2011-10-02T21:38:00.000 | 0 | 7,629,403 | This is more a matter of latency than speed.
Python always performs memory allocation and release, but if the data is reused, the same memory will be reused by the C library.
So the OS (C library / UDP/IP stack) will have more impact than Python itself.
I really think you should use a serial port on your RTOS machine and use C code and pre-allocated buffers. | 0 | 4,369 | false | 0 | 1 | pyserial/python and real time data acquisition | 7,629,435 |
2 | 3 | 0 | 0 | 5 | 0 | 1.2 | 0 | I have an infrared camera/tracker with which I am communicating via the serial port. I'm using the pyserial module to do this at the moment. The camera updates the position of a tracked object at the rate of 60 Hz. In order to get the position of the tracked object I execute one pyserial.write() and then listen for an incoming reply with pyserial.read(serialObj.inWaiting()). Once the reply/position has been received the while loop is reentered and so on. My question has to do with the reliability and speed of this approach. I need the position to be gotten by the computer at the rate of at least 60Hz (and the position will then be sent via UDP to a real-time OS). Is this something that Pyserial/Python are capable of or should I look into alternative C-based approaches?
Thanks,
Luke | 0 | python,real-time,pyserial | 2011-10-02T21:38:00.000 | 0 | 7,629,403 | Python should keep up fine, but the best thing to do is make sure you monitor how many reads per second you are getting. Count how many times the read completed each second, and if this number is too low, write to a performance log or similar. You should also consider decoupling the I/O part from the rest of your python program (if there is one) as pyserial read calls are blocking. | 0 | 4,369 | true | 0 | 1 | pyserial/python and real time data acquisition | 7,630,217 |
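A minimal sketch of the read-rate monitoring suggested in this answer; the port name, baud rate, command byte, and reply size are assumptions, not the camera's real protocol:

import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.05)
reads, t0 = 0, time.time()
while True:
    ser.write(b"P")        # hypothetical 'request position' command
    reply = ser.read(16)   # hypothetical fixed-size position reply
    if reply:
        reads += 1
    if time.time() - t0 >= 1.0:
        if reads < 60:
            print("WARNING: only %d reads in the last second" % reads)
        reads, t0 = 0, time.time()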
1 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | I am testing a piece of hardware which hosts an ftp server. I connect to the server in order to configure the hardware in question.
My test environment is written in Python 3.
To start the ftp server, I need to launch a special proprietary terminal application on my pc. I must use this software as far as I know and I have no help files for it. I do however know how to use it to launch the ftp server and that's all I need it for.
When I start this app, I go to the menu and open a dialog where I select the com port/speed the hardware is connected to. I then enter the command to launch the ftp server in a console like window within the application. I am then prompted for the admin code for the hardware, which I enter. When I'm finished configuring the device, I issue a command to restart the hardware's software.
In order for me to fully automate my tests, I need to remove the manual starting of this ftp server for each test.
As far as I know, I have two options:
Windows GUI automation
Save the stream of data sent on the com port when using this application.
I've tried to find a GUI automation tool, but pywinauto doesn't support Python 3. Are there any other options I should look at?
Any suggestions on how I can monitor the com port in question and save the traffic on it?
Thanks,
Barry | 0 | python,windows,python-3.x,serial-port,automated-tests | 2011-10-03T08:27:00.000 | 1 | 7,632,642 | I was also able to solve this using WScript, but pySerial was the preferred solution. | 0 | 1,707 | false | 0 | 1 | Control rs232 windows terminal program from python | 7,712,917 |
1 | 3 | 0 | 1 | 4 | 0 | 0.066568 | 0 | In my script I read messages from a socket and change the state of some objects in memory depending on the content of each message. Everything works fine.
But I want to implement deletion of non-active objects: for example, if there's no message for a given object for some time, it should be deleted. What is the best way to do it? | 0 | python | 2011-10-03T13:12:00.000 | 0 | 7,635,456 | Store a timestamp in each object - update the timestamp to the current time whenever you modify it.
Then have something that runs every so often, looks at all of the objects, and removes any with a timestamp earlier than a certain amount before the current time. | 0 | 204 | false | 0 | 1 | How to track an objects state in time? | 7,635,473 |
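A minimal sketch of the timestamp approach from this answer; the timeout value is an assumption:

import time

TIMEOUT = 30.0  # seconds of silence before an object is dropped

class Tracked(object):
    def __init__(self):
        self.last_seen = time.time()
    def touch(self):
        # Call this whenever a message updates the object.
        self.last_seen = time.time()

def sweep(objects):
    # objects: dict mapping some id -> Tracked instance; run periodically.
    now = time.time()
    for key in list(objects):
        if now - objects[key].last_seen > TIMEOUT:
            del objects[key]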
2 | 2 | 0 | 0 | 1 | 1 | 0 | 0 | For a program of mine I have a database full of street name (using GIS stuff) in unicode. The user selects any part of the world he wants to see (using openstreetmap, google maps or whatever) and my program displays every streets selected using a nice font to show their names. As you may know not every font can display non latin characters... and it gives me headaches. I wonder how to tell my program "if this word is written in chinese, then use a chinese font".
EDIT: I forgot to mention that I want to use non-standard fonts. Arial, Courier and some others can display non-Latin words, but I want to use other fonts (I have a specific font for Chinese, another one for Japanese, another one for Arabic...). I just have to know what font to choose depending on the word I want to write. | 0 | python,unicode,localization,fonts | 2011-10-03T17:59:00.000 | 0 | 7,638,787 | Use UTF-8 text and a font that has glyphs for every possible character defined, like Arial/Verdana on Windows. That bypasses the entire detection problem. One font will handle everything. | 0 | 210 | false | 0 | 1 | How to detect the right font to use depending on the langage | 7,638,836
2 | 2 | 0 | 0 | 1 | 1 | 1.2 | 0 | For a program of mine I have a database full of street name (using GIS stuff) in unicode. The user selects any part of the world he wants to see (using openstreetmap, google maps or whatever) and my program displays every streets selected using a nice font to show their names. As you may know not every font can display non latin characters... and it gives me headaches. I wonder how to tell my program "if this word is written in chinese, then use a chinese font".
EDIT: I forgot to mention that I want to use non-standard fonts. Arial, Courier and some others can display non-Latin words, but I want to use other fonts (I have a specific font for Chinese, another one for Japanese, another one for Arabic...). I just have to know what font to choose depending on the word I want to write. | 0 | python,unicode,localization,fonts | 2011-10-03T17:59:00.000 | 0 | 7,638,787 | You need information about the language of the text.
And when you decide what fonts you want, you do a mapping from language to font.
If you try to do it automatically, it does not work. The fonts for Japanese, Chinese Traditional, and Chinese Simplified look different even for the same character. They might be intelligible, but a native would be able to tell (ok, complain) that the font is wrong.
Plus, if you do anything algorithmically, there is no way to account for the aesthetic part (for instance the fact that you don't like Arial :-) | 0 | 210 | true | 0 | 1 | How to detect the right font to use depending on the langage | 7,684,888
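The answer above warns that automatic detection is unreliable, but here is a minimal heuristic sketch of the language-to-font mapping idea, guessing a script from official character names via unicodedata. The font file names are placeholders, and this cannot distinguish Simplified from Traditional Chinese or from Japanese kanji:

import unicodedata

FONT_FOR_SCRIPT = {                       # placeholder font file names
    "CJK": "my_chinese_font.ttf",
    "HIRAGANA": "my_japanese_font.ttf",
    "KATAKANA": "my_japanese_font.ttf",
    "ARABIC": "my_arabic_font.ttf",
}
DEFAULT_FONT = "my_latin_font.ttf"

def font_for(text):
    # Look for script keywords in each character's Unicode name.
    for ch in text:
        name = unicodedata.name(ch, "")
        for keyword, font in FONT_FOR_SCRIPT.items():
            if keyword in name:
                return font
    return DEFAULT_FONT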
1 | 3 | 0 | 3 | 13 | 0 | 0.197375 | 0 | I have a quick one-off task in a Python script that I'd like to call from Django (www user), that's going to need root privileges.
At first I thought I could use Python's os.seteuid() and set the setuid bit on the script, but then I realized that I would have to set the setuid bit on Python itself, which I assume is a big no-no. From what I can tell, this would also be the case if using sudo, which I really would like to avoid.
At this point, I'm considering just writing a C wrapper that uses seteuid and calls my Python script as root, passing the necessary arguments to it.
Is this the correct thing to do or should I be looking at something else? | 0 | python,c,django,freebsd | 2011-10-03T18:33:00.000 | 1 | 7,639,141 | The correct thing is called privilege separation: clearly identify the minimal set of tasks which have to be done with elevated privileges. Write a separate daemon and as limited a channel as possible for communicating the task to do. Run this daemon as another user with elevated privileges. A bit more work, but also more secure.
EDIT: using a setuid-able wrapper will also satisfy the concept of privilege separation, although I recommend having the web server chrooted and mounting the chrooted file system nosuid (which would defeat that). | 0 | 4,966 | false | 0 | 1 | Execute Python Script as Root (seteuid vs c-wrapper) | 7,639,481 |
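A hedged sketch of the privilege-separation idea: a small root-owned helper that listens on a Unix socket and runs only whitelisted tasks. The socket path and whitelist are assumptions; the Django process (running as www) connects and sends a command name, and never gains root itself:

import os
import socket
import subprocess

SOCKET_PATH = "/var/run/privhelper.sock"                        # assumed path
ALLOWED = {"reload": ["/usr/sbin/service", "myapp", "reload"]}  # whitelist

def serve():
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCKET_PATH)
    os.chmod(SOCKET_PATH, 0o660)  # restrict which users may connect
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        cmd = conn.recv(64).strip().decode("ascii", "ignore")
        if cmd in ALLOWED:
            subprocess.call(ALLOWED[cmd])  # run only whitelisted tasks
            conn.sendall(b"ok")
        else:
            conn.sendall(b"denied")
        conn.close()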
1 | 1 | 0 | 11 | 6 | 0 | 1.2 | 0 | [This question is intended as a means to both capture my findings and sanity check them - I'll put up my answer toute suite and see what other answers and comments appear.]
I spent a little time trying to get my head around the different social authentication options for (python) Appengine. I was particularly confused by how the authentication mechanisms provided by Google can interact with other social authentication mechanisms. The picture is complicated by the fact that Google has nice integration with third-party OpenID providers, but some of the biggest social networks are not OpenID providers (eg facebook, twitter). [Note that facebook can use OpenID as a relying party, but not as a provider].
The question is then the following: what are the different options for social authentication in Appengine and what are the pros and cons of each? | 0 | python,google-app-engine,oauth,openid,facebook-authentication | 2011-10-05T10:42:00.000 | 1 | 7,660,059 | In my research on this question I found that there are essentially three options:
Use Google's authentication mechanisms (including their federated login via OpenID)
Pros:
You can easily check who is logged in via the Users service provided with Appengine
Google handles the security so you can be quite sure it's well tested
Cons:
This can only integrate with third party OpenID providers; it cannot integrate with facebook/twitter at this time
Use the social authentication mechanisms provided by a known framework such as tipfy, or django
Pros:
These can integrate with all of the major social authentication services
They are quite widely used so they are likely to be quite robust and pretty well tested
Cons:
While they are probably well tested, they may not be maintained
They do come as part of a larger framework which you may have to get comfortable with before deploying your app
Roll your own social authentication
Pros:
You can mix up whatever flavours of OpenID and OAuth tickle your fancy
Cons:
You are most likely to introduce security holes
Unless you've a bit of experience working with these technologies, this is likely to be the most time consuming
Further notes:
It's probable that everyone will move to OpenID eventually and then the standard Google authentication should work everywhere
The first option allows you to point a finger at Google if there is a problem with their authentication; the second option imposes more responsibility on you, but still allows you to say that you use a widely used solution if there is a problem and the final option puts all the responsibility on you
Most of the issues revolve around session management - in case 1, Google does all of the session management and it is pretty invisible to the developer; in case 2, the session management is handled by the framework; and in the 3rd case, you have to devise your own. | 0 | 552 | true | 1 | 1 | What are the different options for social authentication on Appengine - how do they compare? | 7,662,946
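For option 1, a minimal sketch of checking the logged-in user with App Engine's built-in Users service (Python runtime):

from google.appengine.api import users

def greeting_or_login_url():
    user = users.get_current_user()
    if user:
        return "Hello, %s" % user.nickname()
    # Not signed in: send the visitor to Google's (or a federated) login page.
    return users.create_login_url("/")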
1 | 1 | 0 | 1 | 3 | 1 | 0.197375 | 0 | I have a test that runs a python script, which calls into C++ code, where it segfaults and dumps core. I've tried to load the core file in GDB using /usr/bin/python2.6, but this just gives me ?? for all the items in the stack trace. How do I debug this core file? | 0 | c++,python,gdb,core-file | 2011-10-05T23:15:00.000 | 0 | 7,668,850 | You need to compile a version of Python with debugging symbols. You can do this by building Python with ./configure --with-pydebug. Hopefully you will be able to find the error that way.
That will change the behavior of Python internally in some ways. If you don't still get the segfault that way, you might try running ./configure CFLAGS="-O0 -ggdb3" or even just ./configure CFLAGS=-ggdb3. | 0 | 473 | false | 0 | 1 | Debugging a segmentation fault in C++ code called from Python | 7,669,126 |
4 | 7 | 0 | 13 | 221 | 1 | 1 | 0 | I have noticed this in a couple of scripting languages, but in this example, I am using python. In many tutorials, they would start with #!/usr/bin/python3 on the first line. I don't understand why we have this.
Shouldn't the operating system know it's a python script (obviously it's installed since you are making a reference to it)
What if the user is using an operating system that isn't Unix based
The language is installed in a different folder for whatever reason
The user has a different version, especially when it's not a full version number (like Python3 vs Python32)
If anything, I could see this breaking the python script because of the listed reasons above. | 0 | python,scripting | 2011-10-06T04:29:00.000 | 1 | 7,670,303 | This line helps find the program executable that will run the script. This shebang notation is fairly standard across most scripting languages (at least as used on grown-up operating systems).
An important aspect of this line is specifying which interpreter will be used. On many development-centered Linux distributions, for example, it is normal to have several versions of python installed at the same time.
Python 2.x and Python 3 are not 100% compatible, so this difference can be very important. So #! /usr/bin/python and #! /usr/bin/python3 are not the same (and neither is quite the same as #! /usr/bin/env python3, as noted elsewhere on this page).
4 | 7 | 0 | 7 | 221 | 1 | 1 | 0 | I have noticed this in a couple of scripting languages, but in this example, I am using python. In many tutorials, they would start with #!/usr/bin/python3 on the first line. I don't understand why we have this.
Shouldn't the operating system know it's a python script (obviously it's installed since you are making a reference to it)
What if the user is using an operating system that isn't Unix based
The language is installed in a different folder for whatever reason
The user has a different version, especially when it's not a full version number (like Python3 vs Python32)
If anything, I could see this breaking the python script because of the listed reasons above. | 0 | python,scripting | 2011-10-06T04:29:00.000 | 1 | 7,670,303 | And this line is how.
It is ignored.
It will fail to run, and should be changed to point to the proper location. Or env should be used.
It will fail to run, and probably fail to run under a different version regardless. | 0 | 273,899 | false | 0 | 1 | Purpose of #!/usr/bin/python3 shebang | 7,670,323 |
4 | 7 | 0 | 3 | 221 | 1 | 0.085505 | 0 | I have noticed this in a couple of scripting languages, but in this example, I am using python. In many tutorials, they would start with #!/usr/bin/python3 on the first line. I don't understand why we have this.
Shouldn't the operating system know it's a python script (obviously it's installed since you are making a reference to it)
What if the user is using an operating system that isn't Unix based
The language is installed in a different folder for whatever reason
The user has a different version, especially when it's not a full version number (like Python3 vs Python32)
If anything, I could see this breaking the python script because of the listed reasons above. | 0 | python,scripting | 2011-10-06T04:29:00.000 | 1 | 7,670,303 | Actually the determination of what type of file a file is very complicated, so now the operating system can't just know. It can make lots of guesses based on -
extension
UTI
MIME
But the command line doesn't bother with all that, because it runs on a limited backwards-compatible layer, from when that fancy nonsense didn't mean anything. If you double-click it, sure, a modern OS can figure that out; but if you run it from a terminal then no, because the terminal doesn't care about your fancy OS-specific file-typing APIs.
Regarding the other points: it's a convenience; it's similarly possible to run
python3 path/to/your/script
If your python isn't at the path specified, then it won't work, but we tend to install things to make stuff like this work, not the other way around. It doesn't actually matter whether you're under *nix; it's up to your shell whether to honor this line, because it's the shell that interprets it. So, for example, you can run bash under Windows.
You can actually omit this line entirely; it just means the caller will have to specify an interpreter. Also, don't put your interpreters in nonstandard locations and then try to call scripts without providing an interpreter.
4 | 7 | 0 | 28 | 221 | 1 | 1 | 0 | I have noticed this in a couple of scripting languages, but in this example, I am using python. In many tutorials, they would start with #!/usr/bin/python3 on the first line. I don't understand why we have this.
Shouldn't the operating system know it's a python script (obviously it's installed since you are making a reference to it)
What if the user is using an operating system that isn't Unix based
The language is installed in a different folder for whatever reason
The user has a different version, especially when it's not a full version number (like Python3 vs Python32)
If anything, I could see this breaking the python script because of the listed reasons above. | 0 | python,scripting | 2011-10-06T04:29:00.000 | 1 | 7,670,303 | That's called a hash-bang. If you run the script from the shell, it will inspect the first line to figure out what program should be started to interpret the script.
A non-Unix-based OS will use its own rules for figuring out how to run the script. Windows, for example, will use the filename extension, and the # will cause the first line to be treated as a comment.
If the path to the Python executable is wrong, then naturally the script will fail. It is easy to create links to the actual executable from whatever location is specified by standard convention. | 0 | 273,899 | false | 0 | 1 | Purpose of #!/usr/bin/python3 shebang | 7,670,334 |
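A minimal illustration of the mechanism these answers describe; the env form is used so the first python3 on the PATH is chosen:

#!/usr/bin/env python3
# Save as hello.py, mark executable with chmod +x hello.py, then run ./hello.py.
# The system reads the first line and hands this file to that interpreter.
print("run by the interpreter named on line 1")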
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | I have an Apache server that I am using for CGI. I am writing my CGI scripts in Python.
As long as my responses are of the form "Content-Type: text/html\n #" it works.
But if I send anything else, I get a 500 error and my logs say "malformed header from script. Bad header". Can I change my configuration to make it work? Is there anything else I can do? | 0 | python,ajax,apache,cgi | 2011-10-07T06:05:00.000 | 0 | 7,683,597 | You need to send a Content-Type header to tell the browser what type of data you're sending it. Without that you'll get the 500 error you're experiencing. | 0 | 1,336 | false | 1 | 1 | Getting Apache cgi to send json in Python | 7,683,660
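A minimal sketch (Python 3 print syntax) of a CGI script that emits JSON with a well-formed header; the payload is just an example:

import json

payload = {"status": "ok", "value": 42}

# The header block must end with a blank line; a missing or malformed header
# is exactly what produces Apache's "malformed header from script" 500 error.
print("Content-Type: application/json")
print()
print(json.dumps(payload))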
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 0 | I'm using a shell execute action in rsyslog to a python script on a CentOS machine. How can I ensure that it runs in a specified virtualenv? | 0 | python,centos,virtualenv,rsyslog | 2011-10-07T12:35:00.000 | 1 | 7,687,332 | Have you ever asked a question while researching something, then learned what you needed to do and then wished you hadn't asked the question?
All you need to do is modify your python path and add the path to the site-packages directory of the virtualenv you want to use. | 0 | 321 | true | 0 | 1 | Rsyslog + Virtualenv | 8,363,498 |
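A minimal sketch of that fix at the top of the script rsyslog invokes; the site-packages path is an assumption and must match your virtualenv:

import site

# Make the virtualenv's packages importable before anything else is imported.
site.addsitedir("/opt/venvs/myapp/lib/python2.6/site-packages")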
1 | 4 | 0 | 0 | 1 | 1 | 0 | 0 | Does something exist that can take as input U+0043 and produce as output the letter C, maybe even a small description of the character (like LATIN CAPITAL LETTER C)?
EDIT: the U+0043 is just an example. I would like a generic solution please, that could work for as many codepoints as possible. | 0 | python,unicode | 2011-10-07T15:36:00.000 | 0 | 7,689,527 | You could do chr(0x43) to get C. | 0 | 109 | false | 0 | 1 | How can I convert from U+0043 to C using Python? | 7,689,574
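Building on that answer, the unicodedata module supplies the description the question asks about (Python 3 shown; use unichr on Python 2):

import unicodedata

ch = chr(0x0043)
print(ch)                    # -> C
print(unicodedata.name(ch))  # -> LATIN CAPITAL LETTER C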
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | Is it possible to profile a script run in wlst? I need to profile a python script that migrates data from an XML file to an LDAP. | 0 | python,weblogic11g,wlst | 2011-10-10T05:57:00.000 | 0 | 7,709,023 | So, I am answering my own question! Yes it is possible to profile. The JVM arguements needed for JProfiler/any other profiler needs to be added to wlst.sh and it works. | 0 | 244 | true | 0 | 1 | Is it possible to profile a script run in WLST? | 8,122,676 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am working on an application that is supposed to connect to an IMAP account, read through the emails and pick out the emails sent by, let's say, "Mark". It is then supposed to respond to Mark
with an automatic response such as "Got it mate" and then do the same tomorrow, with the only difference that tomorrow it should not respond to the same email.
I am not sure of the best way to achieve this. I have thought of storing the processed IDs in a table, or recording the last check date, but I feel these are not the best CS solutions. | 0 | python,imap | 2011-10-10T12:12:00.000 | 0 | 7,712,554 | The UID is guaranteed to be unique. Store each one locally. | 0 | 59 | true | 1 | 1 | Best way to earmark messages in an IMAP folder? | 7,793,784
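A minimal sketch of the UID approach from the answer; the host, credentials, and search criterion are assumptions, and the seen set would be persisted (file or small table) between daily runs:

import imaplib

seen = set()  # load previously processed UIDs here

conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user", "password")
conn.select("INBOX")
typ, data = conn.uid("SEARCH", None, '(FROM "mark")')
for uid in data[0].split():
    if uid not in seen:
        # ... send the "Got it mate" reply for this message here ...
        seen.add(uid)
conn.logout()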
Hello, I am having problems with audio being sent over the network. On my local system with no distance there are no problems, but whenever I test on a remote system there is audio, but it's not the voice input I want: it's choppy/laggy etc. I believe it's in how I am handling the sending of the audio, but I have tried for 4 days now and cannot find a solution.
I will post all relevant code and try and explain it the best I can
these are the constant/global values
#initialize Speex
speex_enc = speex.Encoder()
speex_enc.initialize(speex.SPEEX_MODEID_WB)
speex_dec = speex.Decoder()
speex_dec.initialize(speex.SPEEX_MODEID_WB)
#some constant values
chunk = 320
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
I found adjusting the sample rate value would allow for more noise
Below is the pyAudio code to initialize the audio device this is also global
#initialize PyAudio
p = pyaudio.PyAudio()
stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                input = True,
                output = True,
                frames_per_buffer = chunk)
This next function is the keypress function, which takes the data from the mic and sends it using the client function. This is where I believe I am having problems.
I believe how I am handling this is the problem because if I press and hold to get audio it loops and sends on each iteration. I am not sure what to do here. (Ideas!!!)
def keypress(event):
    #chunklist = []
    #RECORD_SECONDS = 5
    if event.keysym == 'Escape':
        root.destroy()
    #x = event.char
    if event.keysym == 'Control_L':
        #for i in range(0, 44100 / chunk * RECORD_SECONDS):
        try:
            #get data from mic
            data = stream.read(chunk)
        except IOError as ex:
            if ex[1] != pyaudio.paInputOverflowed:
                raise
            data = '\x00' * chunk
        encdata = speex_enc.encode(data) #Encode the data.
        #chunklist.append(encdata)
        #send audio
        client(chr(CMD_AUDIO), encrypt_my_audio_message(encdata))
The server code to handle the audio
### Server function ###
def server():
    PORT = 9001
    ### Initialize socket
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server_socket.bind((socket.gethostbyname(socket.gethostname()), PORT))
    # socket.gethostbyname(socket.gethostname())
    server_socket.listen(5)
    read_list = [server_socket]
    ### Start receive loop
    while True:
        readable, writable, errored = select.select(read_list, [], [])
        for s in readable:
            if s is server_socket:
                conn, addr = s.accept()
                read_list.append(conn)
                print "Connection from ", addr
            else:
                msg = s.recv(2048)  # read from the ready socket, not the last accepted one
                if msg:
                    cmd, msg = ord(msg[0]), msg[1:]
                    ## get a text message from GUI
                    if cmd == CMD_MSG:
                        listb1.insert(END, decrypt_my_message(msg).strip() + "\n")
                        listb1.yview(END)
                    ## get an audio message
                    elif cmd == CMD_AUDIO:
                        # make sure length is a multiple of 16 --- HACK ---
                        if len(msg) % 16 != 0:
                            msg += '\x00' * (16 - len(msg) % 16)
                        #decrypt audio
                        data = decrypt_my_message(msg)
                        decdata = speex_dec.decode(data)
                        #Write the data back out to the speaker
                        stream.write(decdata, chunk)
                else:
                    s.close()
                    read_list.remove(s)
and for completion the binding of the keyboard in Tkinter
root.bind_all('<Key>', keypress)
Any ideas on how I can make that keypress method work as needed are greatly appreciated, as are suggestions for a better way; maybe I am doing something wrong altogether.
*cheers
Please note I have tested it without the encryption methods as well, and it's the same thing :-) | 0 | python,tcp,speex,pyaudio | 2011-10-11T02:55:00.000 | 0 | 7,720,932 | Did you run ping or ttcp to test network performance between the two hosts?
If you have latency spikes or some packets are dropped, your approach to sending the voice stream will suffer badly. TCP will wait for the missing packet, report it being lost, wait for a retransmit, etc.
You should be using UDP over lossy links and audio compression that handles missing packets gracefully. Also in this case you have to timestamp outgoing packets. | 0 | 2,438 | false | 1 | 1 | Python Audio over Network Problems | 13,102,430 |
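A minimal sketch of the UDP-with-timestamps suggestion; the destination address is an assumption, and encdata would be the Speex-encoded chunk from the question's code:

import socket
import struct
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
DEST = ("192.168.1.50", 9001)  # receiver address
seq = 0

def send_chunk(encdata):
    # A 4-byte sequence number plus an 8-byte timestamp lets the receiver
    # detect lost, duplicated, or reordered chunks instead of stalling as TCP does.
    global seq
    header = struct.pack("!Id", seq, time.time())
    sock.sendto(header + encdata, DEST)
    seq += 1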
1 | 1 | 0 | 2 | 3 | 0 | 1.2 | 1 | We have devices that run a proprietary FTP client on them. They retrieve media files (AVI videos and Images) as well as XML files from our web service utilizing a python based FTP server. The problem I'm having is that the FTP client wants to download the media files in ASCII mode instead of binary mode. I'd like to continue to use our python FTP server (pyftpdlib) but I can't figure out a way to force the client to use binary mode.
I've skimmed through the FTP RFC looking for a command/response sequence that would allow our FTP server to tell the FTP client to use binary instead of ASCII. Does such a command/response sequence exist? | 0 | python,ftp | 2011-10-12T15:58:00.000 | 0 | 7,742,965 | You can override the default behaviour of your FTP server by using a custom FTPHandler: override the FTPHandler.ftp_TYPE(filetype) method and this way force your server to serve files in binary mode (self._current_type = "i"). | 0 | 1,814 | true | 0 | 1 | Can you force an FTP client to use binary from the server side | 7,743,098
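A minimal sketch of the override described in that answer; the import path matches recent pyftpdlib releases (older ones expose FTPHandler via pyftpdlib.ftpserver):

from pyftpdlib.handlers import FTPHandler

class BinaryOnlyHandler(FTPHandler):
    def ftp_TYPE(self, line):
        # Ignore whatever mode the client requested; stay in binary ("image") mode.
        self._current_type = "i"
        self.respond("200 Type set to: Binary.")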