Dataset schema (for string columns the min/max values are lengths in characters):

| Column | dtype | min | max |
| --- | --- | --- | --- |
| Available Count | int64 | 1 | 31 |
| AnswerCount | int64 | 1 | 35 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Users Score | int64 | -17 | 588 |
| Q_Score | int64 | 0 | 6.79k |
| Python Basics and Environment | int64 | 0 | 1 |
| Score | float64 | -1 | 1.2 |
| Networking and APIs | int64 | 0 | 1 |
| Question | string | 15 | 7.24k |
| Database and SQL | int64 | 0 | 1 |
| Tags | string | 6 | 76 |
| CreationDate | string | 23 | 23 |
| System Administration and DevOps | int64 | 0 | 1 |
| Q_Id | int64 | 469 | 38.2M |
| Answer | string | 15 | 7k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| ViewCount | int64 | 13 | 1.88M |
| is_accepted | bool | 2 classes | |
| Web Development | int64 | 0 | 1 |
| Other | int64 | 1 | 1 |
| Title | string | 15 | 142 |
| A_Id | int64 | 518 | 72.2M |

Available Count | AnswerCount | GUI and Desktop Applications | Users Score | Q_Score | Python Basics and Environment | Score | Networking and APIs | Question | Database and SQL | Tags | CreationDate | System Administration and DevOps | Q_Id | Answer | Data Science and Machine Learning | ViewCount | is_accepted | Web Development | Other | Title | A_Id
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2 | 3 | 0 | 6 | 2 | 1 | 1 | 0 | I am really bad at compiling programs, and I just want to know if my Python 2.5 program would be faster if I converted it to a .exe using py2exe. I don't want to spend a lot of time trying to compile it if it will just be slower in the end. My program uses OpenCV and PyAudio, but I think those are the only non-pure-Python modules it uses. Thanks!
NOTE: I do not think this question requires a snippet of code, but if it does, please say so in the comments. Thanks! | 0 | python,windows-xp,py2exe,python-2.5 | 2012-12-24T13:43:00.000 | 0 | 14,022,166 | No, not really. Since it's merely a wrapper, it just provides the files needed to run your code. Using Cython could make your program run faster by compiling it to C. | 0 | 6,424 | false | 0 | 1 | Is a python script faster when you convert it to a .exe using py2exe? | 14,022,239
2 | 3 | 0 | 1 | 2 | 1 | 0.066568 | 0 | I am really bad at compiling programs, and I just want to know if my Python 2.5 program would be faster if I converted it to a .exe using py2exe. I don't want to spend a lot of time trying to compile it if it will just be slower in the end. My program uses OpenCV and PyAudio, but I think those are the only non-pure-Python modules it uses. Thanks!
NOTE: I do not think this question requires a snippet of code, but if it does, please say so in the comments. Thanks! | 0 | python,windows-xp,py2exe,python-2.5 | 2012-12-24T13:43:00.000 | 0 | 14,022,166 | I spent the last 2 months working on Windows Python and I must say I am not very happy.
If you have any C modules (besides the standard library), you'll have huge problems getting them working.
Speed is similar to Linux, but it seems a little bit slower.
Please note that py2exe is not the best.
I had issues with py2exe and had to use pyinstaller.
It has better debugging, and it worked in cases where py2exe didn't.
My biggest disappointment was the exe size: I had a simple program which used lxml and suds, and it was 7-8 MB big... | 0 | 6,424 | false | 0 | 1 | Is a python script faster when you convert it to a .exe using py2exe? | 14,024,396
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | It's weird: when I run a normal Python script on the server, it runs, but when I run it via uWSGI, it can't import certain modules.
There is a bash script that starts uWSGI and passes a path via the --pythonpath option.
Is this an additional path, or do all the paths have to be given here?
If yes, how do I separate multiple paths given by this option? | 0 | python,uwsgi,pythonpath | 2012-12-26T06:00:00.000 | 1 | 14,036,549 | You can specify multiple --pythonpath options, but PYTHONPATH should be honoured (just be sure it is correctly set by your init script; you can try setting it from the command line and running uwsgi in the same shell session). | 0 | 1,207 | false | 0 | 1 | Does uwsgi server read the paths in the environment variable PYTHONPATH? | 14,039,533
1 | 2 | 0 | 28 | 40 | 1 | 1.2 | 0 | Is there any effect of unused imports in a Python script? | 0 | python,performance,python-import | 2012-12-26T09:44:00.000 | 0 | 14,038,691 | You pollute your namespace with names that could interfere with your variables and occupy some memory.
Also you will have a longer startup time as the program has to load the module.
In any case, I would not become too neurotic about this: while you are writing code you could end up adding and deleting import os continuously as your code is modified. Some IDEs, such as PyCharm, detect unused imports, so you can rely on them once your code is finished or nearly completed. | 0 | 6,737 | true | 0 | 1 | Do unused imports in Python hamper performance? | 14,038,726
1 | 5 | 0 | 0 | 29 | 0 | 0 | 0 | Is it possible to run a Python script within PHP and transfer variables between the two?
I have a class that scrapes websites for data in a certain general way. I want to make it much more specific, and I already have Python scripts specific to several websites.
I am looking for a way to incorporate those inside my class.
Is safe and reliable data transfer between the two even possible? If so, how difficult is it to get something like that going? | 0 | php,python | 2012-12-27T00:30:00.000 | 0 | 14,047,979 | For me, escapeshellarg(json_encode($data)) is giving not exactly a JSON-formatted string, but something like { name : Carl , age : 23 }.
So in Python I need to .replace(' ', '"') the whitespace to get some real JSON and be able to call json.loads(sys.argv[1]) on it.
The problem is when someone enters a name that already contains whitespace, like "Ca rl". | 0 | 63,592 | false | 0 | 1 | executing Python script in PHP and exchanging data between the two | 70,877,468
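A more robust variant of the data exchange discussed in that answer is sketched below for the Python side, assuming the PHP caller pipes the JSON over stdin (for example with proc_open) rather than passing it as a shell argument; this sidesteps the quoting problem described above. The field names are illustrative.

```python
# Minimal sketch: read a JSON document from stdin, do some work, and write a
# JSON reply to stdout for the PHP caller to consume. Assumes the PHP side
# pipes the JSON instead of passing it via escapeshellarg().
import json
import sys

def main():
    data = json.load(sys.stdin)                 # e.g. {"name": "Ca rl", "age": 23}
    reply = {"greeting": "Hello, %s" % data["name"]}
    json.dump(reply, sys.stdout)

if __name__ == "__main__":
    main()
```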
1 | 3 | 0 | -1 | 2 | 1 | -0.066568 | 0 | I have written a script in Python that I would like to be able to give to some less tech-savvy friends. However, it relies on PIL and requests to function. How can I include these modules without forcing my friends to try to install them? | 0 | python,module | 2012-12-28T10:45:00.000 | 0 | 14,068,303 | It's simple. Make them put your script in site-packages or dist-packages. They can then import the script using import module and use it. | 0 | 112 | false | 0 | 1 | Auto-including Modules in Python | 14,068,920
1 | 1 | 0 | 3 | 0 | 1 | 1.2 | 0 | Is it possible to delete a folder from inside a file using the ZipFile module in Python? | 0 | python,python-zipfile | 2012-12-31T02:34:00.000 | 0 | 14,096,829 | No. Read out the rest of the archive and write it to a new zip file. | 0 | 422 | true | 0 | 1 | Deleting a folder in a zip file using the Python zipfile module | 14,096,860 |
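A minimal sketch of the copy-everything-else approach from that answer: zipfile cannot delete entries in place, so the folder is dropped by writing a new archive. The paths and folder name are placeholders.

```python
import zipfile

def remove_folder_from_zip(src_path, dst_path, folder="unwanted/"):
    # Copy every entry except those under `folder` into a fresh archive.
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if not item.filename.startswith(folder):
                dst.writestr(item, src.read(item.filename))
```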
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I am using pydub to mix two wav files in one file. Each wav file has about 25Mb and for me page is loaded in about 4 seconds ( so execution time would be 4 seconds )
Does this execution time depend on user's internet connection speed?
If it makes any sense: the test.py file is on GoDaddy Deluxe Linux Hosting. | 0 | python,execution-time,pydub | 2013-01-03T11:07:00.000 | 0 | 14,137,675 | It does not: once your script starts dubbing the WAV files, it's a separate task.
See it as a 3-step process (I'm guessing; very little information is provided):
Step 1: you send the request --> time determined by "internet speed".
Step 2: the files get dubbed --> server-side work; internet speed doesn't count anymore.
Step 3: you get the result back --> again internet-speed related.
You have to time them separately: run a benchmark on only the mixing part and see for yourself.
A funny practical way to see this:
Consider the dinner process: the time you spend eating your dinner doesn't depend on the time it takes for you to order or for the waiter to deliver the meal to you.
Quick edit: I just realized it may depend on internet speed if the dubbing/mixing part is streamed in real time while being processed, but this doesn't seem to be your case. | 0 | 544 | false | 0 | 1 | Does python script execution time depend on internet speed? | 14,137,744
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am using pydub to mix two wav files in one file. Each wav file has about 25Mb and for me page is loaded in about 4 seconds ( so execution time would be 4 seconds )
Does this execution time depend on user's internet connection speed?
If it has any sense : The test.py file is on GoDaddy Deluxe Linux Hosting) | 0 | python,execution-time,pydub | 2013-01-03T11:07:00.000 | 0 | 14,137,675 | No. The execution happens on the server and the execution time depends on the server specs and your script optimizations. The internet speed just affects when the client will receive the response after it is ready from the server and sent!
So, in a few words:
Server gets request from browser (time for request to reach server depends on internet speed of the client and the host)
Server processes the request according to your code (Execution time depends on your code)
Server responds to client and client receives response (time for request to reach client depends on internet speed of the client and the host) | 0 | 544 | false | 0 | 1 | Does python script execution time depend on internet speed? | 14,137,714 |
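Following the advice in both answers, here is a rough sketch of how one might time just the server-side mixing step with pydub, independent of any network transfer; the file names are placeholders.

```python
import time
from pydub import AudioSegment

start = time.time()
first = AudioSegment.from_wav("first.wav")
second = AudioSegment.from_wav("second.wav")
mixed = first.overlay(second)            # mix the two tracks
mixed.export("mixed.wav", format="wav")
print("mixing took %.2f seconds" % (time.time() - start))
```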
1 | 2 | 0 | -1 | 0 | 0 | -0.099668 | 0 | In my office we have about 1000 PDFs that have arbitrary title and author information. My bosses had a spreadsheet created with the PDFs filename and an appropriate title and appropriate author information.
I would like to find a programmatic way to move the data from the Excel sheet into the PDF attributes.
My preferred language is Python so I looked for a Python library to do this, each library I looked at had the author and title fields as read-only.
If Python doesn't have a library that works I am okay using VBA, VB.NET, JavaScript... I will take this as an opportunity to learn a new language. | 0 | python,pdf | 2013-01-03T17:56:00.000 | 0 | 14,144,460 | Use Action Wizard in Acrobat X Pro.
Create New Action.
Setup Start With, and Save To step.
Set checkbox Overwrite existing files.
Select Content. Select Add Document Description.
Left-click the option. Uncheck the Author checkbox Leave As Is and enter the new Author name.
Press the Save button and set the action name, e.g. Your Action Name.
Run Action:
File- Action Wizard - Your Action Name.
I tested it; it works. | 0 | 834 | false | 0 | 1 | Changing a PDF author programmatically | 14,144,813
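As a purely programmatic alternative to the Acrobat workflow above, here is a hedged sketch using the pypdf library; this is my own suggestion, not something the answer mentions, and the paths and metadata values are placeholders.

```python
from pypdf import PdfReader, PdfWriter

def set_pdf_metadata(src_path, dst_path, title, author):
    # Copy all pages, then attach fresh Title/Author metadata.
    reader = PdfReader(src_path)
    writer = PdfWriter()
    for page in reader.pages:
        writer.add_page(page)
    writer.add_metadata({"/Title": title, "/Author": author})
    with open(dst_path, "wb") as out:
        writer.write(out)
```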
1 | 4 | 0 | 4 | 0 | 1 | 0.197375 | 0 | Are there any IDEs for Python that support automatic error highlighting (like the Eclipse IDE for Java?) I think it would be a useful feature for a Python IDE, since it would make it easier to find syntax errors. Even if such an editor did not exist, it still might be possible to implement this by automatically running the Python script every few seconds, and then parsing the console output for error messages. | 0 | python,ide,livecoding | 2013-01-04T06:25:00.000 | 0 | 14,152,187 | eclipse+pydev
pycharm
many others .... | 0 | 203 | false | 0 | 1 | Is it possible to implement automatic error highlighting for Python? | 14,152,212 |
2 | 3 | 0 | 0 | 7 | 0 | 0 | 0 | I have thousands of servers (Linux); some only have Python 2.x and some only have Python 3.x. I want to write one script, check.py, that can run on all servers just as $ ./check.py, without using $ python check.py or $ python3 check.py. Is there any way to do this?
My question is how the script check.py can find the interpreter, no matter whether the interpreter is Python 2.x or Python 3.x. | 0 | python,python-3.x | 2013-01-04T06:55:00.000 | 1 | 14,152,548 | In the general case, no; many Python 2 scripts will not run on Python 3, and vice versa. They are two different languages.
Having said that, if you are careful, you can write a script which will run correctly under both. Some authors take extra care to make sure their scripts will be compatible across both versions, commonly using additional tools like the six library (the name is a pun; you can get to "six" by multiplying "two by three" or "three by two").
However, it is now 2020, and Python 2 is officially dead. Many maintainers who previously strove to maintain Python 2 compatibility while it was still supported will now be relieved and often outright happy to pull the plug on it going forward. | 0 | 5,143 | false | 0 | 1 | can one python script run both with python 2.x and python 3.x | 64,151,445 |
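For the compatible-script approach the first answer describes, here is a minimal sketch of a script careful enough to run unchanged under both Python 2.7 and Python 3.x; the env-based shebang is an assumption and only works if some python executable is on the PATH.

```python
#!/usr/bin/env python
# Runs under both 2.7 and 3.x as long as only the common subset is used.
from __future__ import print_function
import sys

def check():
    print("running under Python %d.%d" % sys.version_info[:2])
    return 0

if __name__ == "__main__":
    sys.exit(check())
```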
2 | 3 | 0 | 0 | 7 | 0 | 0 | 0 | I have thousands of servers (Linux); some only have Python 2.x and some only have Python 3.x. I want to write one script, check.py, that can run on all servers just as $ ./check.py, without using $ python check.py or $ python3 check.py. Is there any way to do this?
My question is how the script check.py can find the interpreter, no matter whether the interpreter is Python 2.x or Python 3.x. | 0 | python,python-3.x | 2013-01-04T06:55:00.000 | 1 | 14,152,548 | Considering that Python 3.x is not entirely backwards compatible with Python 2.x, you would have to ensure that the script is compatible with both versions. This can be done with some help from the 2to3 tool, but may ultimately mean running two distinct Python scripts. | 0 | 5,143 | false | 0 | 1 | can one python script run both with python 2.x and python 3.x | 14,152,613
1 | 1 | 0 | 4 | 1 | 0 | 0.664037 | 0 | I was wondering what is the advantages of mod_wsgi. For most python web framework, I can launch (daemon) the application by python directly and serve it in a port. Then when shall I use mod_wsgi? | 0 | python,django,apache,mod-wsgi,wsgi | 2013-01-04T07:02:00.000 | 0 | 14,152,651 | Since I answered your other question regarding Flask, I assume you are referring to using Flask's development server. The problem with that is it is single threaded.
Using mod_wsgi, you would be running behind Apache, which will do process forking and allow multiple simultaneous requests to be handled.
There are other options as well. Depending on your particular use case I would consider using eventlet's wsgi server. | 0 | 82 | false | 1 | 1 | Why should I use `mod_wsgi` instead of launching by python? | 14,152,716 |
1 | 1 | 0 | 2 | 4 | 0 | 0.379949 | 1 | I am using hosted exchange Microsoft Office 365 email and I have a Python script that sends email with smtplib. It is working very well. But there is one issue, how can I get the emails to show up in my Outlook Sent Items? | 0 | python,outlook,smtplib | 2013-01-04T08:57:00.000 | 0 | 14,153,954 | You can send a copy of that email to yourself, with some header that tag the email was sent by yourself, then get another script (using IMAP library maybe) to move the email to the Outlook Sent folder | 0 | 1,550 | false | 0 | 1 | How can I see emails sent with Python's smtplib in my Outlook Sent Items folder? | 14,154,176 |
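A sketch of a close variant of that suggestion: after sending the message with smtplib, append the same raw message to the mailbox's Sent Items folder over IMAP so it shows up in Outlook. The server name, credentials, and folder name are assumptions.

```python
import imaplib
import time

def save_to_sent(raw_message_bytes):
    # Append the already-sent message to "Sent Items" so it appears in Outlook.
    imap = imaplib.IMAP4_SSL("outlook.office365.com")
    imap.login("user@example.com", "app-password")
    imap.append('"Sent Items"', "\\Seen",
                imaplib.Time2Internaldate(time.time()), raw_message_bytes)
    imap.logout()
```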
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 1 | Are there problems with sharing a single instance of RemoteWebDriver between multiple test cases? If not, what's the best practice place to create the instance? I'm working with Python, so I think my options are module level setup, test case class setup, test case instance setup (any others?) | 0 | python,selenium,testcase | 2013-01-04T17:03:00.000 | 0 | 14,161,479 | Sharing a single RemoteWebDriver can be dangerous, since your tests are no longer independently self-contained. You have to be careful about cleaning up browser state and the like, and recovering from browser crashes in the event a previous test has crashed the browser. You'll also probably have more problems if you ever try to do anything distributed across multiple threads, processes, or machines. That said, the options you have for controlling this are not dependent on Selenium itself, but whatever code or framework you are using to drive it. At least with Nose, and I think basic pyunit, you can have setup routines at the class, module, or package level, and they can be configured to run for each test, each class, each module, or each package, if memory serves. | 0 | 249 | true | 0 | 1 | Share single instance of selenium RemoteWebDriver between multiple test cases | 14,161,887 |
2 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | I would like to build a RF transmitter/controller for my garage. When my vehicle gets within 25', I'd like for a computer to trigger a physical relay to open my garage door. you know, like Batman.
I like Python, so I'm hoping I can use it here. | 0 | python,hardware | 2013-01-04T20:49:00.000 | 0 | 14,164,753 | Have you looked into getting a basic arduino? It sounds like you should pick up a cheap one at Radio Shack and get the RF devices to trigger your garage door opener remotely. With the hardware, you can easily talk to it via python (though it'd be easy enough to just do with the Arduino language). | 0 | 465 | false | 0 | 1 | Can I use Python to interact with an RF transmitter/controller? | 14,164,831 |
2 | 2 | 0 | 4 | 0 | 0 | 1.2 | 0 | I would like to build a RF transmitter/controller for my garage. When my vehicle gets within 25', I'd like for a computer to trigger a physical relay to open my garage door. you know, like Batman.
I like Python, so I'm hoping I can use it here. | 0 | python,hardware | 2013-01-04T20:49:00.000 | 0 | 14,164,753 | Look in to the Raspberry Pi. It's a $25 embeddable computer than supports Python. You will need to spec an RF transceiver that can interface with the on board hardware and use the documentation to determine a control method. | 0 | 465 | true | 0 | 1 | Can I use Python to interact with an RF transmitter/controller? | 14,165,338 |
1 | 1 | 0 | 1 | 0 | 1 | 0.197375 | 0 | I'm trying learn TDD in python. Unfortunately I have not found any PEPs about unittest.
Should one subclass of unittest.TestCase contain all the tests for one tested function?
What are the recommendations for naming classes, methods, or test files? | 0 | python,unit-testing | 2013-01-06T19:22:00.000 | 0 | 14,185,831 | I usually make one class that handles the setup and teardown for a particular test topic and subclass it for every single test. That is, one class for every test, with a name that conveys what is being tested. Nothing fancy. | 0 | 352 | false | 0 | 1 | Python unittest - division tests into classes/functions | 14,185,895
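To illustrate the pattern in that answer, here is a minimal sketch: a base TestCase owning setup and teardown for one topic, with one small subclass per test. The Calculator class is hypothetical.

```python
import unittest

class CalculatorTestBase(unittest.TestCase):
    """Owns setup/teardown for everything that exercises the (hypothetical) Calculator."""
    def setUp(self):
        self.calc = Calculator()   # hypothetical class under test

    def tearDown(self):
        self.calc = None

class TestAddition(CalculatorTestBase):
    def test_adds_two_positive_numbers(self):
        self.assertEqual(self.calc.add(2, 3), 5)
```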
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am trying to program an alpha sign - 215r - using the alphasign python api [Alphasign] (https://alphasign.readthedocs.org/en/latest/index.html). I downloaded python 2.7, pyusb, pyserial, and libusb. I got the vid and pid of the sign using libusb and added that to the devices.py file. However, when I ran the example python code [here] (https://alphasign.readthedocs.org/en/latest/index.html), I still got an error that said it could not find device with vid and pid of 8765:1234 (the example numbers). Now, when I open the file (the code is copied and pasted from the link above) it crashes IDLE (totally shuts down). ...when I run the file from bash, it says core dump. suggestions please!! | 0 | python-2.7,sign,pyusb | 2013-01-07T23:12:00.000 | 0 | 14,205,744 | I had a similar problem on a Mac (mtn lion): When I ran the sample app, I got a segment fault 11. It was crashing in the alphasign library from the sign.connect() call.
Changed it to sign.connect(reset=False), and it worked fine.
FYI: The segment fault occurs in the low-level USB driver, libusb, not in python code. | 0 | 262 | false | 0 | 1 | Programming an Alpha electronic sign with Alphasign Python | 14,761,682 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I am using intellij IDEA version 11.1.5 on windows and python plugin version is 2.9.2
I am using the Grinder Maven plugin to run the performance tests with Grinder. It only supports Python (Jython) for writing tests. I am not getting any auto-suggestions for Python development even though I have installed the Python plugin. Python files are also being displayed as text files.
Is there any other configuration needed to enable auto-suggestions for Python development? | 0 | maven,python-3.x,intellij-idea | 2013-01-08T11:51:00.000 | 0 | 14,214,331 | Your file types are not configured correctly: .py is most likely assigned to Text files instead of Python files. You can fix it in File | Settings | File Types.
There is no support for tests running via Maven, but you can create your own Run/Debug configuration for Python unit tests in IDEA. | 0 | 711 | false | 0 | 1 | Python support for maven module in intellij | 14,501,896 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Is there a way to know which application I'm running a python script from?
I can run python from multiple sources, like Textmate, Sublime Text 2 or Terminal (I'm on Mac OSX). How can I know, exactly which tool launched the current python app.
I've tried looking into the os and inspect modules, but couldn't find the solution. | 0 | python | 2013-01-09T13:20:00.000 | 1 | 14,236,130 | If you're happy to stay specific to Unix, then you can get the parent PID of the process with os.getppid(). If you want to translate it back to a program name, you can run a subprocess to use the relevant OS-specific PID-to-useful-data tool (odds on, ps). | 0 | 99 | false | 0 | 1 | How to know which application run a python script | 14,236,492
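A small sketch of that answer's approach, assuming a Unix-like system: read the parent PID and ask ps for the parent's command name.

```python
import os
import subprocess

ppid = os.getppid()
# `ps -p <pid> -o comm=` prints just the command name of that process.
parent_cmd = subprocess.check_output(["ps", "-p", str(ppid), "-o", "comm="])
print("this script was launched by:", parent_cmd.strip())
```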
1 | 1 | 0 | 2 | 1 | 0 | 0.379949 | 0 | I currently have a v1 API and have updated and created new scripts for v2. The API is consumed by other developers and consists of a bunch of scripts. Before migrating and adding v2 I want to make sure I have a successful versioning strategy to go ahead with.
Currently, there is a bash script called before using the API, with which you can supply the version # or by default gives you the most recent version. Originally, I intended to have different subfolders for each different version, but for scripts that do not change between revisions and scripts that get content added to them, the git history will not be preserved correctly as the original file will still reside in the v1 subdir and will not be 'git mv'ed. This is obviously not the best way but I can't think of a better way currently.
Any recommendations will be helpful but one restriction is that we cannot have a git submodule with different branches. There are no other restrictions (e.g. the bash file used for setup can be deleted) as long as the scripts are accessible. Thanks!
EDIT: We also have scripts above the "API" directory that are part of the same repo that call into the API (we are consumers of our own API). The changes to these files need to be visible when using any version of the API and cannot just be seen in the latest version (related to tags in the repo) | 0 | python,git,api,version | 2013-01-11T04:03:00.000 | 1 | 14,271,489 | I think you want to use tags in your git repository. For each version of your api, use git tag vn and you don't need to maintain earlier versions of your files. You can access all files at a certain version just using git checkout vn.
If you use a remote repository, you need to use the flag --tags to send the tags to the remote repository, ie, git push --tags. | 0 | 605 | false | 0 | 1 | API Versioning while maintaining git history | 14,271,577 |
1 | 1 | 0 | 0 | 2 | 1 | 0 | 0 | I'm working on an open source project (Master of Mana, a mod for Civilization 4) which uses Python 2.4.1 for several game mechanics. Is there a chance for a performance improvement if I try to upgrade to Python 2.7.3 or even 3.3.0?
Related to this, has anyone done a performance analysis on different Python versions? | 0 | python,performance | 2013-01-11T15:27:00.000 | 0 | 14,281,308 | Most newer Python versions bring new features.
Existing code parts are probably updated as well, either for performance or for extended functionality.
The former kind of change brings a performance benefit, but extended functionality might lead to poorer performance.
I don't know what the relationship between these kinds of changes is. You will probably have to do some profiling yourself. | 0 | 379 | false | 0 | 1 | Upgrading to a newer Python version - performance improvements? | 14,281,528
1 | 3 | 0 | 5 | 0 | 1 | 0.321513 | 0 | I'm a complete Noob, having studied Python 2.7 for less than four days using eclipse on a mac, and I have managed to write a "FizzBang" from scratch in about 20 minutes, but....I'm having one heck of a time with basic algorithms. I'm wondering if this is something I'll speed up at in time, or if there is some sort of "logical thinking" practice that is above me without instruction. Memorizing syntax has been no problem so far and I really enjoy the feeling when it all works out.
My question is, should I detour from my current beginner book and read something about basic algorithms (maybe something specific to Python algorithms)?
If so, what beginner text would y'all recommend?
I searched for this topic and didn't find anything that matched, so if this is a duplicative post, or whatever you call it, my bad.
I'd appreciate any help I get from you pros. Thanks. | 0 | python,algorithm,structure | 2013-01-11T21:43:00.000 | 0 | 14,287,141 | Learning the syntax of a programming language to express an algorithm is like learning the syntax of English to express a thought.
Sure, there are nuances in English that allow you to express some thoughts better than others or in other languages. However, a command of English does not automatically enable you to be able to think some thoughts.
Similarly, if you want to pick up an algorithms book, go for it! Your understanding of python is only very loosely connected with your ability to develop and algorithm to solve a problem.
Once you learn how to solve problems, you will be able to develop an algorithm to solve the specific problem at hand, and then choose the language best suited to express that algorithm
… And as you design more and more algorithms, you'll get better at developing better algorithms; and as you write more python code, you'll get better at writing python code.
I don't know what book you're currently reading, but beginner books tend to orient themselves at teaching the language (it's syntax, semantics, etc) using simple algorithmic examples. If you're having a tough time understanding the algorithms that govern the solutions to these examples, you should probably do some beginner reading on algorithms. It's somewhat of a cycle, really - in order to learn algorithms, you need to be able to express them (and algorithms are most easily expressed in code). Thus to understand algorithms, you need to understand code.
This is not entirely true - pseudocode solves this problem quite well. But you'll need to understand at least the pseudocode.
Hope this helps | 0 | 590 | false | 0 | 1 | Beginning Python trouble with basic algorithms | 14,287,198 |
2 | 2 | 1 | 3 | 0 | 1 | 0.291313 | 0 | I was using PIL to do image processing, and I tried to convert a color image into a grayscale one, so I wrote a Python function to do that, meanwhile I know PIL already provides a convert function to this.
But the version I wrote in Python takes about 2 seconds to finish the grayscaling, while PIL's convert almost instantly. So I read the PIL code, figured out that the algorithm I wrote is pretty much the same, but
PIL's convert is written in C or C++.
So is this what makes the performance different? | 0 | python,c,python-imaging-library | 2013-01-12T02:41:00.000 | 0 | 14,289,657 | Yes: coding the same algorithm in Python and in C, the C implementation will be faster. This is definitely true for the usual Python interpreter, known as CPython. Another implementation, PyPy, uses a JIT, and so can achieve impressive speeds, sometimes as fast as a C implementation. But running under CPython, the Python will be slower. | 0 | 318 | false | 0 | 1 | performance concern, Python vs C | 14,289,791
2 | 2 | 1 | 2 | 0 | 1 | 0.197375 | 0 | I was using PIL to do image processing, and I tried to convert a color image into a grayscale one, so I wrote a Python function to do that, meanwhile I know PIL already provides a convert function to this.
But the version I wrote in Python takes about 2 seconds to finish the grayscaling, while PIL's convert almost instantly. So I read the PIL code, figured out that the algorithm I wrote is pretty much the same, but
PIL's convert is written in C or C++.
So is this what makes the performance different? | 0 | python,c,python-imaging-library | 2013-01-12T02:41:00.000 | 0 | 14,289,657 | If you want to do image processing, you can use
OpenCV(cv2), SimpleCV, NumPy, SciPy, Cython, Numba ...
OpenCV, SimpleCV and SciPy already have many image processing routines.
NumPy can do operations on arrays at C speed.
If you want loops in Python, you can use Cython to compile your Python code, with static declarations, into an extension module.
Or you can use Numba to do JIT compilation; it can convert your Python code into machine code and will give you near-C speed. | 0 | 318 | false | 0 | 1 | performance concern, Python vs C | 14,290,456
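To make the NumPy suggestion above concrete for the grayscale example in the question, here is a small sketch of a vectorized conversion; the (height, width, 3) uint8 array shape is an assumption.

```python
import numpy as np

def to_grayscale(rgb):
    # ITU-R 601-2 luma transform, the same weights PIL's convert("L") documents;
    # the per-pixel loop runs inside NumPy's C code, not in Python.
    return (0.299 * rgb[..., 0] +
            0.587 * rgb[..., 1] +
            0.114 * rgb[..., 2]).astype(np.uint8)

example = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
gray = to_grayscale(example)
```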
1 | 3 | 0 | 1 | 25 | 0 | 0.066568 | 0 | Both 'pypy' and 'gevent' are supposed to provide high performance. Pypy is supposedly faster than CPython, while gevent is based on co-routines and greenlets, which supposedly makes for a faster web server.
However, they're not compatible with each other.
I'm wondering which setup is more efficient (in terms of speed/performance):
The builtin Flask server running on pypy
or:
The gevent server, running on CPython | 0 | python,performance,gevent,pypy | 2013-01-12T15:10:00.000 | 0 | 14,294,643 | The built-in Flask server is a BaseHTTPServer or so; never use it. The best scenario is very likely Tornado + PyPy or something like that. Benchmark before using it, though. It also depends quite drastically on what you're doing. The web server + web framework benchmarks are typically hello-world kinds of benchmarks. Is your application really like that?
Cheers, fijal | 0 | 15,866 | false | 1 | 1 | Which setup is more efficient? Flask with pypy, or Flask with gevent? | 14,294,862 |
1 | 1 | 0 | 0 | 4 | 1 | 0 | 0 | I am using platypus to bundle a python applet. I am wondering if there is a way to import modules, like math from stdlib. | 0 | python,import | 2013-01-15T19:44:00.000 | 0 | 14,345,587 | This is probably suboptimal, but I simply add the package (the directories which actually hold the code, not an egg or anything fancy) I want to import into the bundled files list in Platypus.
This works well when you only need to add a few packages. | 0 | 773 | false | 0 | 1 | How to import modules into platypus applet? | 28,459,509 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I installed OSQA Bitnami on my VPS. And I also point 3 domains to this VPS. Now I want each domain point to different web service (I have another PHP website I want to host here).
How can I run a PHP service along with Bitnami OSQA (it's a Python stack)? | 0 | php,python,bitnami,osqa | 2013-01-16T02:45:00.000 | 0 | 14,350,605 | You can install BitNami modules for each one of those stacks: just go to the stack download page, select the module for your platform, execute it on the command line, and point it to the existing installation. Then you will need to configure httpd.conf to point each domain to each app. | 0 | 133 | false | 1 | 1 | How can I run Bitnami OSQA with other web service (wordpress, joomla)? | 14,351,798
1 | 1 | 0 | 0 | 2 | 1 | 0 | 0 | Is there a way to set the optimization level from the setuptools setup.py file? Is there any way to set the optimization level within setuptools?
I've got lots of __debug__ style logging that isn't needed on release. | 0 | python,optimization,software-distribution | 2013-01-16T13:32:00.000 | 0 | 14,359,644 | You control the use of .pyc vs .pyo with the -O switch on the Python command line. Python won't even attempt to use .pyo files without the -O switch, so compiling your .py files to .pyo files at setup time won't help: you need to invoke your program with the -O switch, or the PYTHONOPTIMIZE environment variable. | 0 | 1,371 | false | 0 | 1 | Python Setuptools Distribute: Optimize Option in setup.py? | 14,359,944 |
1 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | I am trying to setup an internal wiki for our development team. It looks like mediawiki is the de facto means of doing this.
While I'm probably capable of setting this up with PHP - I was curious if there is a python port of mediawiki or a similar framework.
I'm very comfortable in PHP, and am more than happy to use it - but most of our developers prefer python. I think it would be neat to have a python wiki running.
This is just a curiosity - and not a serious issue.
Thank you for any suggestions! | 0 | php,python,mediawiki,wiki | 2013-01-16T19:59:00.000 | 0 | 14,366,750 | No, there is no Python port of MediaWiki.
Keep in mind that your developers will be using the wiki, not developing for it. Chances are that none of them will ever need to look at the application's internals, so the language it was written in is irrelevant. | 0 | 549 | false | 0 | 1 | MediaWiki python port? | 14,366,895
2 | 2 | 0 | 2 | 0 | 1 | 0.197375 | 1 | I'm new to python so please excuse me if question doesn't make sense in advance.
We have a python messaging server which has one file server.py with main function in it. It also has a class "*server" and main defines a global instance of this class, "the_server". All other functions in same file or diff modules (in same dir) import this instance as "from main import the_server".
Now, my job is to devise a mechanism which allows us to get latest message status (number of messages etc.) from the aforementioned messaging server.
This is the dir structure:
src/ -> all .py files only one file has main
In the same directory I created another status server with main function listening for connections on a different port and I'm hoping that every time a client asks me for message status I can invoke function(s) on my messaging server which returns the expected numbers.
How can I import the global instance, "the_server" in my status server or rather is it the right way to go? | 0 | python,client-server,messaging | 2013-01-17T00:37:00.000 | 0 | 14,370,402 | Unless your "status server" and "real server" are running in the same process (that is, loosely, one of them imports the other and starts it), just from main import the_server in your status server isn't going to help. That will just give you a new, completely independent instance of the_server that isn't doing anything, which you can then report status on.
There are a few obvious ways to solve the problem.
Merge the status server into the real server completely, by expanding the existing protocol to handle status-related requests, as Peter Wooster suggestions.
Merge the status server into the real server async I/O implementation, but still listening on two different ports, with different protocol handlers for each.
Merge the status server into the real server process, but with a separate async I/O implementation.
Store the status information in, e.g., a mmap or a multiprocessing.Array instead of directly in the Server object, so the status server can open the same mmap/etc. and read from it. (You might be able to put the Server object itself in shared memory, but I wouldn't recommend this even if you could make it work.)
I could make these more concrete if you explained how you're dealing with async I/O in the server today. Select (or poll/kqueue/epoll) loop? Thread per connection? Magical greenlets? Non-magical cooperative threading (like PEP 3156/tulip)? Even just "All I know is that we're using twisted/tornado/gevent/etc., so whatever that does" is enough. | 0 | 128 | false | 0 | 1 | Python server interaction | 14,370,602 |
2 | 2 | 0 | 2 | 0 | 1 | 1.2 | 1 | I'm new to python so please excuse me if question doesn't make sense in advance.
We have a python messaging server which has one file server.py with main function in it. It also has a class "*server" and main defines a global instance of this class, "the_server". All other functions in same file or diff modules (in same dir) import this instance as "from main import the_server".
Now, my job is to devise a mechanism which allows us to get latest message status (number of messages etc.) from the aforementioned messaging server.
This is the dir structure:
src/ -> all .py files only one file has main
In the same directory I created another status server with main function listening for connections on a different port and I'm hoping that every time a client asks me for message status I can invoke function(s) on my messaging server which returns the expected numbers.
How can I import the global instance, "the_server" in my status server or rather is it the right way to go? | 0 | python,client-server,messaging | 2013-01-17T00:37:00.000 | 0 | 14,370,402 | You should probably use a single server and design a protocol that supports several kinds of messages. 'send' messages get sent, 'recv' message read any existing message, 'status' messages get the server status, 'stop' messages shut it down, etc.
You might look at existing protocols such as REST, for ideas. | 0 | 128 | true | 0 | 1 | Python server interaction | 14,370,495 |
3 | 6 | 0 | 5 | 61 | 0 | 0.16514 | 0 | I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that.
The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get
/usr/bin/python3: No module named pytest | 0 | python,python-3.x,pytest | 2013-01-17T02:11:00.000 | 1 | 14,371,156 | Install it with pip3:
pip3 install -U pytest | 0 | 65,185 | false | 0 | 1 | Pytest and Python 3 | 59,968,198 |
3 | 6 | 0 | 27 | 61 | 0 | 1 | 0 | I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that.
The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get
/usr/bin/python3: No module named pytest | 0 | python,python-3.x,pytest | 2013-01-17T02:11:00.000 | 1 | 14,371,156 | python3 doesn't have the module py.test installed. If you can, install the python3-pytest package.
If you can't do that, try this:
Install virtualenv
Create a virtualenv for python3
virtualenv --python=python3 env_name
Activate the virtualenv
source ./env_name/bin/activate
Install py.test
pip install pytest
Now using this virtualenv try to run your tests | 0 | 65,185 | false | 0 | 1 | Pytest and Python 3 | 14,371,623 |
3 | 6 | 0 | 69 | 61 | 0 | 1.2 | 0 | I've installed pytest 2.3.4 under Debian Linux. By default it runs under Python 2.7, but sometimes I'd like to run it under Python 3.x, which is also installed. I can't seem to find any instructions on how to do that.
The PyPI Trove classifiers show Python :: 3 so presumably it must be possible. Aside from py.test somedir/sometest.py, I can use python -m pytest ..., or even python2.7 -m pytest ..., but if I try python3 -m pytest ... I get
/usr/bin/python3: No module named pytest | 0 | python,python-3.x,pytest | 2013-01-17T02:11:00.000 | 1 | 14,371,156 | I found a workaround:
Installed python3-pip using aptitude, which created /usr/bin/pip-3.2.
Next pip-3.2 install pytest which re-installed pytest, but under a python3.2 path.
Then I was able to use python3 -m pytest somedir/sometest.py.
Not as convenient as running py.test directly, but workable. | 0 | 65,185 | true | 0 | 1 | Pytest and Python 3 | 14,371,849 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have a C++ application from Windows that I wish to port across to run on a Red Hat Linux system. This application embeds a slightly modified version of Python 2.7.3 (I added the Py_SetPath command as it is essential for my use case) so I definitely need to compile the Python source.
My problem is that despite looking, I can't actually find any guidance on how to get Python to emit the right files for me to link against and how to then get g++ to link my C++ code against it in such a way that I don't need to have an installed copy of Python on every system I distribute this to.
So my questions are:
how do I compile Python so that it can be embedded into the C++ app on Linux?
what am I linking against for the C++ app to work?
Sorry for these basic questions, but having convinced my employer to let me try and move our systems over to Linux, I'm keen to make it go off as smoothly as possible and I'm worried avbout not making too much progress! | 0 | c++,python,gcc,g++,redhat | 2013-01-17T09:02:00.000 | 1 | 14,375,397 | You want to link to the python static library, which should get created by default and will be called libpython2.7.a
If I recall correctly, as long as you don't build Python with --enable-shared it doesn't install the dynamic library, so you'll only get the static lib and so simply linking your C++ application with -lpython2.7 -L/path/where/you/installed/python/lib should link to the static library. | 0 | 667 | true | 0 | 1 | Compile Python 2.7.3 on Linux for Embedding into a C++ app | 14,390,969 |
1 | 2 | 0 | 5 | 1 | 0 | 1.2 | 0 | I am trying to compress a huge Python object (~15 GB) and save it to disk. Due to requirement constraints I need to compress this file as much as possible. I am presently using zlib.compress(9). My main concern is that the memory taken during compression exceeds what I have available on the system (32 GB), and going forward the size of the object is expected to increase. Is there a more efficient/better way to achieve this?
Thanks.
Update: Also to note the object that I want to save is a sparse numpy matrix, and that I am serializing the data before compressing, which also increases the memory consumption. Since I do not need the python object after it is serialized, would gc.collect() help? | 0 | python,memory,numpy,compression | 2013-01-17T22:26:00.000 | 0 | 14,389,279 | Incremental (de)compression should be done with zlib.{de,}compressobj() so that memory consumption can be minimized. Additionally, higher compression ratios can be attained for most data by using bz2 instead. | 1 | 1,086 | true | 0 | 1 | Compress large python objects | 14,389,347 |
2 | 2 | 0 | 2 | 5 | 0 | 0.197375 | 0 | I have a Python program that needs to access a Java RMI API from a third party system in order to fetch some data.
I have no control over the third party system so it MUST be done using RMI.
What should be my approach here? I have never worked with RMI using Python so I'm kind of lost as to what I should do..
Thanks in advance! | 0 | java,python,rmi | 2013-01-18T16:41:00.000 | 0 | 14,403,472 | You're going to have a very hard time i would imagine. RMI and Java serialization are very Java specific. I don't know if anyone has already attempted to implement this in python (i'm sure google knows), but your best bet would be to find an existing library.
That aside, i would look at finding a way to do the RMI in some client side java shim (maybe some sort of python<->java bridge library?). Or, maybe you could run your python in Jython and leverage the underlying jvm to handle the RMI stuff. | 0 | 5,222 | false | 1 | 1 | Accessing a Java RMI API from Python program | 14,403,624 |
2 | 2 | 0 | 2 | 5 | 0 | 1.2 | 0 | I have a Python program that needs to access a Java RMI API from a third party system in order to fetch some data.
I have no control over the third party system so it MUST be done using RMI.
What should be my approach here? I have never worked with RMI using Python so I'm kind of lost as to what I should do..
Thanks in advance! | 0 | java,python,rmi | 2013-01-18T16:41:00.000 | 0 | 14,403,472 | How about a little Java middleware piece that you can talk to via REST, and which in turn can talk to the remote API? | 0 | 5,222 | true | 1 | 1 | Accessing a Java RMI API from Python program | 14,403,646
2 | 14 | 0 | 22 | 669 | 1 | 1 | 0 | Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run? | 0 | python,logging,output,pytest | 2013-01-18T18:14:00.000 | 0 | 14,405,063 | Try pytest -s -v test_login.py for more info in console.
-v is short for --verbose
-s means 'disable all capturing' | 0 | 296,089 | false | 0 | 1 | How can I see normal print output created during pytest run? | 47,816,384 |
2 | 14 | 0 | 4 | 669 | 1 | 0.057081 | 0 | Sometimes I want to just insert some print statements in my code, and see what gets printed out when I exercise it. My usual way to "exercise" it is with existing pytest tests. But when I run these, I don't seem able to see any standard output (at least from within PyCharm, my IDE).
Is there a simple way to see standard output during a pytest run? | 0 | python,logging,output,pytest | 2013-01-18T18:14:00.000 | 0 | 14,405,063 | If you are using PyCharm IDE, then you can run that individual test or all tests using Run toolbar. The Run tool window displays output generated by your application and you can see all the print statements in there as part of test output. | 0 | 296,089 | false | 0 | 1 | How can I see normal print output created during pytest run? | 54,546,338 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I have been doing a little bit of research and haven't found anything that is quite going to work. I want to have python know what the current song playing in iTunes is so I can serially send it to my Arduino.
I have seen Appscript, but it is no longer supported and, from what I have read, full of bugs now that it hasn't been updated.
I am using Mac OS X 10.8.2 & iTunes 10.0.1
Anyone got any ideas on how to make this work? Any information is greatly appreciated.
FYI: My project is a little 1.8' colour display screen on which I am going to show several pieces of information: RAM, HDD, CPU, song, etc. | 0 | python,applescript,itunes | 2013-01-19T03:21:00.000 | 1 | 14,410,771 | You can set up a simple Automator workflow to retrieve the current iTunes song. Try these two actions for starters:
iTunes: Get the Current Song
Utilities: Run Shell Script
Change the shell script to cat > ~/itunes_track.txt and you should have a text file containing the path of the current track. Once you get your data out of Automator you should be all set :) | 0 | 2,056 | false | 0 | 1 | Python getting Itunes song | 14,410,871 |
1 | 1 | 0 | 1 | 1 | 1 | 0.197375 | 0 | I was playing with PyAIML. I understand how to set the bot's name. But could not figure out how to set the creator's name so that if anyone ask's "Who created you?" then it can reply appropriately. Please help. | 0 | python,artificial-intelligence,chatbot,aiml | 2013-01-20T00:21:00.000 | 0 | 14,420,413 | It's the same way you set and get the botname. Only the variable name differs to your choice. | 0 | 444 | false | 0 | 1 | How to set the master's name of a chatbot using PyAIML? | 14,879,572 |
1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | I have a shell script which launches many different python scripts.
The shell script exports many variables, which are in turn used by the python scripts.
This is working perfectly when run in command line, but it does not work when executed in crontab.
In the cron logs, I could see the shell script working, but the python script does not seem to run.
Will the python scripts be able to run from the shell script in cron?
Will the python scripts be able to access the env variables set by the parent shell script from cron? | 0 | python,shell,cron | 2013-01-20T18:00:00.000 | 1 | 14,427,475 | If you're having problems it's a good idea to use full qualified paths to commands in any script that's being called from cron, so as to avoid PATH and environment variable issues with the bare-bones environment that cron is called in. | 0 | 147 | false | 0 | 1 | Child script execution in crontab | 17,640,694 |
1 | 1 | 0 | 4 | 5 | 1 | 1.2 | 0 | I'm developing a distribution for the Python package I'm writing so I can post
it on PyPI. It's my first time working with distutils, setuptools, distribute,
pip, setup.py and all that and I'm struggling a bit with a learning curve
that's quite a bit steeper than I anticipated :)
I was having a little trouble getting some of my test data files to be
included in the tarball by specifying them in the data_files parameter in setup.py until I came across a different post here that pointed me
toward the MANIFEST.in file. Just then I snapped to the notion that what you
include in the tarball/zip (using MANIFEST.in) and what gets installed in a
user's Python environment when they do easy_install or whatever (based on what
you specify in setup.py) are two very different things; in general there being
a lot more in the tarball than actually gets installed.
This immediately triggered a code-smell for me and the realization that there
must be more than one use case for a distribution; I had been fixated on the
only one I've really participated in, using easy_install or pip to install a
library. And then I realized I was developing work product where I had only a
partial understanding of the end-users I was developing for.
So my question is this: "What are the use cases for a Python distribution
other than installing it in one's Python environment? Who else am I serving
with this distribution and what do they care most about?"
Here are some of the working issues I haven't figured out yet that bear on the
answer:
Is it a sensible thing to include everything that's under source control
(git) in the source distribution? In the age of github, does anyone download
a source distribution to get access to the full project source? Or should I
just post a link to my github repo? Won't including everything bloat the
distribution and make it take longer to download for folks who just want to
install it?
I'm going to host the documentation on readthedocs.org. Does it make any
sense for me to include HTML versions of the docs in the source
distribution?
Does anyone use python setup.py test to run tests on a source
distribution? If so, what role are they in and what situation are they in? I
don't know if I should bother with making that work and if I do, who to make
it work for. | 0 | python,pip,setuptools,distutils,distribute | 2013-01-21T10:43:00.000 | 0 | 14,436,912 | Some things that you might want to include in the source distribution but maybe not install include:
the package's license
a test suite
the documentation (possibly a processed form like HTML in addition to the source)
possibly any additional scripts used to build the source distribution
Quite often this will be the majority or all of what you are managing in version control and possibly a few generated files.
The main reason why you would do this when those files are available online or through version control is so that people know they have the version of the docs or tests that matches the code they're running.
If you only host the most recent version of the docs online, then they might not be useful to someone who has to use an older version for some reason. And the test suite on the tip in version control may not be compatible with the version of the code in the source distribution (e.g. if it tests features added since then). To get the right version of the docs or tests, they would need to comb through version control looking for a tag that corresponds to the source distribution (assuming the developers bothered tagging the tree). Having the files available in the source distribution avoids this problem.
As for people wanting to run the test suite, I have a number of my Python modules packaged in various Linux distributions and occasionally get bug reports related to test failures in their environments. I've also used the test suites of other people's modules when I encounter a bug and want to check whether the external code is behaving as the author expects in my environment. | 0 | 476 | true | 0 | 1 | What are the use cases for a Python distribution? | 14,437,754 |
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | I want to write some code to do acoustic analysis and I'm trying to determine the proper tool(s) for the job. I would normally write something like this in Python using numpy and scipy and possibly Cython for the analysis part. I've discovered that the world of Python audio libraries is a bit chaotic, with scads of very limited packages in various states of development.
I've also come across a bunch of audio/acoustic specific languages like SuperCollider, Faust, etc. that seem to make the audio processing easy but may be limited in terms of IO and analysis capability.
I'm currently working on Linux with ALSA and PulseAudio installed by default. I would prefer not to involve any of the various and sundry other audio packages like JACK if possible, though that is not a hard requirement.
My primary interest in this question is to determine whether there is a domain-specific language that will provide for quicker prototyping and testing, or whether a general language like Python is more appropriate. Thanks. | 0 | python,audio,dsl,supercollider | 2013-01-22T23:29:00.000 | 0 | 14,469,941 | I'm not 100% sure what you want to do, but as an additional suggestion I would put forth: Spear with scripting in Common Lisp. If what you are doing involves a great deal of spectral analysis, then you can do the heavy lifting in Spear, and script all of this using Common Lisp with Common Music. Spear has some great tools in terms of editing out very specific partials. | 0 | 519 | false | 0 | 1 | Audio Domain Specific Language vs Python | 16,253,793
2 | 3 | 0 | 4 | 0 | 1 | 0.26052 | 0 | I want to write some code to do acoustic analysis and I'm trying to determine the proper tool(s) for the job. I would normally write something like this in Python using numpy and scipy and possibly Cython for the analysis part. I've discovered that the world of Python audio libraries is a bit chaotic, with scads of very limited packages in various states of development.
I've also come across a bunch of audio/acoustic specific languages like SuperCollider, Faust, etc. that seem to make the audio processing easy but may be limited in terms of IO and analysis capability.
I'm currently working on Linux with ALSA and PulseAudio installed by default. I would prefer not to involve any of the various and sundry other audio packages like JACK if possible, though that is not a hard requirement.
My primary interest in this question is to determine whether there is a domain specific language that will provide for quicker prototyping and testing or whether a general language like Python is more appropriate. Thanks. | 0 | python,audio,dsl,supercollider | 2013-01-22T23:29:00.000 | 0 | 14,469,941 | I've got a lot of experience with SuperCollider and Python (with and without Numpy). I do a lot of audio analysis, and I'm afraid the answer depends on what you want to do.
If you want to create systems that will input OR output audio in real time, then Python is not a good choice. The audio I/O libraries (as you say) are a bit sketchy. There's also a fundamental issue that Python's garbage collector is not really designed for realtime stuff. You should use a system that is designed from the ground up for realtime. SuperCollider is nice for this, and as caseyanderson notes, some of the standard building-blocks for audio analysis are right there. There are other environments too.
If you want to do hardcore work such as applying various machine learning algorithms, not necessarily in real time (i.e. if you can get away with reading/writing WAV files rather than live audio), then you should use a general-purpose programming language with wide support, and an ecosystem of good libraries for the extra things you want. Using Python with libs such as numpy and scikits-learn works great for this. It's good for quick prototyping, but not only does it lack solid realtime audio, it also has far fewer of the standard audio building-blocks. Those are two important things which hold you back when prototyping audio pipelines.
So, then, you're caught between these two options. Depending on your application you may be able to combine the two by manipulating the audio I/O in a realtime environment, and using OSC messaging or shell scripts to communicate with an external Python process. The limitation there is that you can't really throw masses of data around between the two (you can't sensibly pipe all your audio across to some other process, that'd be silly). | 0 | 519 | false | 0 | 1 | Audio Domain Specific Language vs Python | 14,707,108 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I have a weather station supplying me data every 2.5 seconds. (using weewx)
I want to show this live on my website using highcharts to plot live data.
Currently i can pickup the messages from the redis channel 'weather' using Predis just to test.
The issue is that the data is only sent every 2.5 seconds, so when a user opens the PHP site he sometimes has to wait 2.5 seconds for the chart to appear.
Do you have any suggestions to get around this issue? | 0 | php,python,redis | 2013-01-24T19:15:00.000 | 0 | 14,508,976 | Store the data manually the very first time (while developing the software).
Every 2.5 seconds of running, use polling to check for updated data. If the data is updated, then update the data currently stored.
When the user logs on, you plot the chart with the values in the database. | 0 | 238 | false | 0 | 1 | Redis: Live data via channel | 14,511,844 |
2 | 2 | 0 | 0 | 0 | 0 | 1.2 | 1 | I have a weather station supplying me data every 2.5 seconds. (using weewx)
I want to show this live on my website using highcharts to plot live data.
Currently i can pickup the messages from the redis channel 'weather' using Predis just to test.
The issue is that the data is only sent every 2.5 seconds, so when a user opens the php site he sometimes has to wait 2.5 seconds for the chart to appear.
Do you have any suggestions to get around this issue? | 0 | php,python,redis | 2013-01-24T19:15:00.000 | 0 | 14,508,976 | What you should do is have a second listener dump data into a key current_weather every time an event comes across. When you first load the page, pull from that key to build the chart, then start listening for updates. | 0 | 238 | true | 0 | 1 | Redis: Live data via channel | 14,511,705 |
1 | 1 | 0 | 8 | 2 | 0 | 1.2 | 0 | I do not understand the difference between setting up an Unencrypted Session Factory in order to set cookies, as compared to using request.response.set_cookie(..) and request.cookies[key]. | 0 | python,python-2.7,pyramid | 2013-01-25T22:31:00.000 | 0 | 14,531,396 | The UnencryptedCookieSessionFactory manages one cookie that is signed. This means that the client can read [1] what is in the cookie, but cannot change the values in the cookie.
If you set cookies directly using response.set_cookie(), the client can not only read the cookie, they can change the value of the cookie and you won't be able to detect that the contents have been tampered with.
Moreover, the UnencryptedCookieSessionFactory lets you store any python structure and it'll take care of encoding these to fit within the limitations of a cookie; you'd have to do the same work manually with .set_cookie().
[1] You'd have to base64-decode the cookie, then use the pickle module to decode the contents. Because the cookie is cryptographically signed, the usual security concerns that apply to pickle are mitigated. | 0 | 451 | true | 1 | 1 | In Pyramid Framework what is the difference between default Unencrypted Session Factory and setting cookies manually? | 14,539,402
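A minimal sketch of the difference described in this answer, assuming Pyramid 1.x and a placeholder secret; the session factory keeps the data signed, while set_cookie() leaves it fully editable by the client:

from pyramid.config import Configurator
from pyramid.session import UnencryptedCookieSessionFactoryConfig

session_factory = UnencryptedCookieSessionFactoryConfig('change-this-secret')
config = Configurator(session_factory=session_factory)

def signed_view(request):
    # Stored in the signed session cookie; tampering is detected by Pyramid
    request.session['visits'] = request.session.get('visits', 0) + 1
    return {'visits': request.session['visits']}

def plain_cookie_view(request):
    # A plain cookie: the client can read it and also rewrite it at will
    request.response.set_cookie('visits', '1')
    return {'visits': request.cookies.get('visits')}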
1 | 1 | 0 | 1 | 1 | 1 | 1.2 | 0 | I would like to take snippets of text and convert them, programmatically, to be more phonetic than traditional English spellings. The purpose of this is for converting a bunch of text I'm working with to a version that can be read by common TTS tools in a more natural way for a scientific application.
Many of the terms and acronyms are common, but there are a variety that are not. I am hoping to find a script or resource of some kind that already has much of this effort done (realizing I will have to do a bit of customization as per these terms), however so far I've found nothing that goes down this path.
I HAVE found solutions that are truly phonetic and are for linguistic applications, however these are not desirable as they are not readable by standard off the shelf TTS solutions that I've used.
Any ideas of a starting point for this situation? Any examples or even libs that make this easier to chew would be fine. Or am I bound to sit down and grind out a solution entirely of my own design? | 0 | python,ruby,text-to-speech | 2013-01-27T05:09:00.000 | 0 | 14,544,627 | Here is an idea that should work :
Use a cunning, "linguist"-grade phoneticizer.
Use a standard, small table of phonetic to English syllable ( in the English accent of your choosing ), mappings.
Run the resulting "phonetically spelled" words through a standard, OTS TTS product .
So, for example, you can have :
Fish and chips ----step 1---> phonetic linguist code ---step 2. (new Zealand accent)---> fush und chupsh ---step 3. ---> audio pleasure
Hope this assists you! | 0 | 894 | true | 0 | 1 | Programmatically convert text to phonetic TTS readable text? | 14,544,779 |
2 | 3 | 0 | 6 | 1 | 1 | 1 | 0 | I have medium amateur skills in Python, I'm a beginner in asm, and I have no knowledge of the C language.
I know that python C-extensions must follow specific interface to work fine.
Is it possible to write a python extension in pure Assembly with the right interface and full functionality? The second question is: would it be efficient enough if done right?
While googling I haven't found any examples of code or some articles or solutions about this question.
And this ISN'T the question about running asm-code from within Python, so it's not a duplicate of topics on SO. | 0 | python,assembly,python-extensions | 2013-01-27T10:50:00.000 | 0 | 14,546,610 | In theory - it is possible.
In practice - it is highly impractical to do so.
There are very very few cases where there is justified usage of Assembly over C, and even if you face such a situation, it is highly unlikely you will be working with Python in that case.
Also note that the compiler can optimize the C code to extremely efficient assembly. In fact it is highly unlikely that you will hand-write assembly that is more efficient than the compiler output, unless you have extremely potent assembly skills, or have been writing assembly all your life.
2 | 3 | 0 | 2 | 1 | 1 | 0.132549 | 0 | I have medium amateur skills in Python, I'm a beginner in asm, and I have no knowledge of the C language.
I know that python C-extensions must follow specific interface to work fine.
Is it possible to write a python extension in pure Assembly with the right interface and full functionality? The second question is: would it be efficient enough if done right?
While googling I haven't found any examples of code or some articles or solutions about this question.
And this ISN'T the question about running asm-code from within Python, so it's not a duplicate of topics on SO. | 0 | python,assembly,python-extensions | 2013-01-27T10:50:00.000 | 0 | 14,546,610 | You could write your asm as inline asm inside your C extension; as for efficiency...
Teapot.
Efficiency isn't measured by the choice of language; it's measured by how well it's implemented and how well it's designed. | 0 | 1,492 | false | 0 | 1 | How to write python extensions in pure asm and would it be efficient? | 14,546,657
2 | 12 | 0 | 2 | 116 | 1 | 0.033321 | 0 | Is there any way to check if object is an instance of a class? Not an instance of a concrete class, but an instance of any class.
I can check that an object is not a class, not a module, not a traceback etc., but I am interested in a simple solution. | 0 | python,object,instance | 2013-01-27T16:23:00.000 | 0 | 14,549,405 | Yes. Accordingly, you can use hasattr(obj, '__dict__') or not callable(obj). | 0 | 140,863 | false | 0 | 1 | Python check instances of classes | 55,782,792
2 | 12 | 0 | 2 | 116 | 1 | 0.033321 | 0 | Is there any way to check if object is an instance of a class? Not an instance of a concrete class, but an instance of any class.
I can check that an object is not a class, not a module, not a traceback etc., but I am interested in a simple solution. | 0 | python,object,instance | 2013-01-27T16:23:00.000 | 0 | 14,549,405 | It's a bit hard to tell what you want, but perhaps inspect.isclass(val) is what you are looking for? | 0 | 140,863 | false | 0 | 1 | Python check instances of classes | 14,549,914 |
1 | 3 | 0 | 3 | 13 | 1 | 0.197375 | 0 | I need to make computations in a python program, and I would prefer to make some of them in R. Is it possible to embed R code in python ? | 0 | python,r | 2013-01-27T19:48:00.000 | 0 | 14,551,472 | When I need to do R calculations, I usually write R scripts, and run them from Python using the subprocess module. The reason I chose to do this was because the version of R I had installed (2.16 I think) wasn't compatible with RPy at the time (which wanted 2.14).
So if you already have your R installation "just the way you want it", this may be a better option. | 1 | 13,823 | false | 0 | 1 | Embed R code in python | 14,552,819 |
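A minimal sketch of the subprocess approach described in this answer; the script and file names are placeholders and Rscript is assumed to be on the PATH:

import subprocess

# Run an R script and capture whatever it prints to stdout
output = subprocess.check_output(["Rscript", "analysis.R", "input.csv"])
print(output)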
5 | 13 | 0 | -1 | 41 | 1 | -0.015383 | 0 | I am working on an application which uses Boost.Python to embed the Python interpreter. This is used to run user-generated "scripts" which interact with the main program.
Unfortunately, one user is reporting runtime error R6034 when he tries to run a script. The main program starts up fine, but I think the problem may be occurring when python27.dll is loaded.
I am using Visual Studio 2005, Python 2.7, and Boost.Python 1.46.1. The problem occurs only on one user's machine. I've dealt with manifest issues before, and managed to resolve them, but in this case I'm at a bit of a loss.
Has anyone else run into a similar problem? Were you able to solve it? How? | 0 | visual-c++,python-2.7,visual-studio-2005,manifest,boost-python | 2013-01-27T21:10:00.000 | 0 | 14,552,348 | Adding this answer for who is still looking for a solution. ESRI released a patch for this error. Just download the patch from their website (no login required), install it and it will solve the problem. I downloaded the patch for 10.4.1 but there are maybe patches for other versions also. | 0 | 52,195 | false | 0 | 1 | Runtime error R6034 in embedded Python application | 45,562,941 |
5 | 13 | 0 | 0 | 41 | 1 | 0 | 0 | I am working on an application which uses Boost.Python to embed the Python interpreter. This is used to run user-generated "scripts" which interact with the main program.
Unfortunately, one user is reporting runtime error R6034 when he tries to run a script. The main program starts up fine, but I think the problem may be occurring when python27.dll is loaded.
I am using Visual Studio 2005, Python 2.7, and Boost.Python 1.46.1. The problem occurs only on one user's machine. I've dealt with manifest issues before, and managed to resolve them, but in this case I'm at a bit of a loss.
Has anyone else run into a similar problem? Were you able to solve it? How? | 0 | visual-c++,python-2.7,visual-studio-2005,manifest,boost-python | 2013-01-27T21:10:00.000 | 0 | 14,552,348 | The discussion on this page involves doing things way far advanced above me. (I don't code.) Nevertheless, I ran Process Explorer as the recommended diagnostic. I found that another program uses and needs msvcr90.dll in it's program folder. Not understanding anything else being discussed here, as a wild guess I temporarily moved the dll to a neighboring program folder.
Problem solved. End of Runtime error message.
(I moved the dll back when I was finished with the program generating the error message.)
Thank you all for your help and ideas. | 0 | 52,195 | false | 0 | 1 | Runtime error R6034 in embedded Python application | 33,030,097 |
5 | 13 | 0 | 4 | 41 | 1 | 0.061461 | 0 | I am working on an application which uses Boost.Python to embed the Python interpreter. This is used to run user-generated "scripts" which interact with the main program.
Unfortunately, one user is reporting runtime error R6034 when he tries to run a script. The main program starts up fine, but I think the problem may be occurring when python27.dll is loaded.
I am using Visual Studio 2005, Python 2.7, and Boost.Python 1.46.1. The problem occurs only on one user's machine. I've dealt with manifest issues before, and managed to resolve them, but in this case I'm at a bit of a loss.
Has anyone else run into a similar problem? Were you able to solve it? How? | 0 | visual-c++,python-2.7,visual-studio-2005,manifest,boost-python | 2013-01-27T21:10:00.000 | 0 | 14,552,348 | (This might be better as a comment than a full answer, but my dusty SO acct. doesn't yet have enough rep for that.)
Like the OP I was also using an embedded Python 2.7 and some other native assemblies.
Complicating this nicely was the fact that my application was a med-large .Net solution running on top of 64-Bit IIS Express (VS2013).
I tried Dependency Walker (great program, but too out of date to help with this), and Process Monitor (ProcMon -- which probably did find some hints, but even though I was using filters the problems were buried in thousands of unrelated operations, better filters may have helped).
However, MANY THANKS to Michael Cooper! Your steps and Process Explorer (procexp) got me quickly to a solution that had been dodging me all day.
I can add a couple of notes to Michael's excellent post.
I ignored (i.e. left unchanged) not just the \WinSxS\... folder but also the \System32\... folder.
Ultimately I found msvcr90.dll being pulled in from:
C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x64
Going through my Path I found the above and another, similar directory which seemed to contain 32-bit versions. I removed both of these, restarted and... STILL had the problem.
So, I followed Michael's steps once more, and, discovered another msvcr90.dll was now being loaded from:
C:\Program Files\Intel\iCLS Client\
Going through my Path again, I found the above and an (x86) version of this directory as well. So, I removed both of those, applied the changes, restarted VS2013 and...
No more R6034 Error!
I can't help but feel frustrated with Intel for doing this. I had actually found elsewhere online a tip about removing iCLS Client from the Path. I tried that, but the symptom was the same, so, I thought that wasn't the problem. Sadly iCLS Client and OpenCL SDK were tag-teaming my iisexpress. If I was lucky enough to remove either one, the R6034 error remained. I had to excise both of them in order to cure the problem.
Thanks again to Michael Cooper and everyone else for your help! | 0 | 52,195 | false | 0 | 1 | Runtime error R6034 in embedded Python application | 31,012,118 |
5 | 13 | 0 | 0 | 41 | 1 | 0 | 0 | I am working on an application which uses Boost.Python to embed the Python interpreter. This is used to run user-generated "scripts" which interact with the main program.
Unfortunately, one user is reporting runtime error R6034 when he tries to run a script. The main program starts up fine, but I think the problem may be occurring when python27.dll is loaded.
I am using Visual Studio 2005, Python 2.7, and Boost.Python 1.46.1. The problem occurs only on one user's machine. I've dealt with manifest issues before, and managed to resolve them, but in this case I'm at a bit of a loss.
Has anyone else run into a similar problem? Were you able to solve it? How? | 0 | visual-c++,python-2.7,visual-studio-2005,manifest,boost-python | 2013-01-27T21:10:00.000 | 0 | 14,552,348 | In my case the rebuilding of linked libraries and the main project with similar "Runtime execution libraries" project setting helped. Hope that will be usefull for anybody. | 0 | 52,195 | false | 0 | 1 | Runtime error R6034 in embedded Python application | 28,445,016 |
5 | 13 | 0 | 0 | 41 | 1 | 0 | 0 | I am working on an application which uses Boost.Python to embed the Python interpreter. This is used to run user-generated "scripts" which interact with the main program.
Unfortunately, one user is reporting runtime error R6034 when he tries to run a script. The main program starts up fine, but I think the problem may be occurring when python27.dll is loaded.
I am using Visual Studio 2005, Python 2.7, and Boost.Python 1.46.1. The problem occurs only on one user's machine. I've dealt with manifest issues before, and managed to resolve them, but in this case I'm at a bit of a loss.
Has anyone else run into a similar problem? Were you able to solve it? How? | 0 | visual-c++,python-2.7,visual-studio-2005,manifest,boost-python | 2013-01-27T21:10:00.000 | 0 | 14,552,348 | In my case, I realised the problem was coming when, after compiling the app into an exe file, I would rename that file. So leaving the original name of the exe file doesn't show the error. | 0 | 52,195 | false | 0 | 1 | Runtime error R6034 in embedded Python application | 32,307,577 |
1 | 2 | 0 | 0 | 5 | 0 | 0 | 0 | Basically I have code completion working (to the best of my knowledge that it 'works') in Eclipse, but it's not nearly as good as what Visual Studio has. I have it set to call auto-complete when ( is pressed, but doing this does not show a list of the method parameters. I have to mouse over the method for that to happen, and I'd prefer for it to happen while I type, like Intellisense in VS.
I'm using Aptana 3 with PyDev if it's relevant. | 0 | python,eclipse,autocomplete | 2013-01-28T13:54:00.000 | 0 | 14,563,584 | Just press Ctrl + Space, as Jonas Karlsson said. | 0 | 2,276 | false | 0 | 1 | How do I get Eclipse to show me a method's signature while typing? | 46,099,882 |
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | Here's my problem:
Suppose there's a course for robots to go through, and there's an overhead webcam that can see the whole of it, and which the robot can use to navigate. Now the question is, what's the best way to detect the robot (position and heading) on the image of this webcam? I was thinking about a few solutions, like putting LEDs on it, or two separate colored circles, but those don't seem to be the best way to do it.
Is there a better solution to this, and if yes, I would really appreciate some opencv2 python code example of it, as I'm new to computer vision. | 0 | python,opencv,computer-vision,robot | 2013-01-28T21:55:00.000 | 0 | 14,571,975 | I'd do the following, and I'm pretty sure it would work:
I assume that the background of the video stream (the robot's vicinity) is pretty static, so the first step is:
1. background subtraction
2. detect movement in the foreground, this is your robot and everything else that changes from the background model, you'll need some thresholding here
3. connected-component detection to get the blobs
4. identify the blob corresponding to the robot (biggest?)
5. now you can get the coordinates of the blob
6. you can compute the heading if you track your blob through multiple frames
you can find good examples by googling the keywords
Distinctive color would work with color filtering and template matching and the likes, but the above method is more general. | 0 | 684 | false | 1 | 1 | How can I detect my robot from an overhead webcam image? | 14,572,614 |
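A rough sketch of the step-by-step pipeline from the answer above, assuming an OpenCV 3/4 Python API; the camera index and threshold values are assumptions:

import cv2

cap = cv2.VideoCapture(0)                       # the overhead webcam
bg = cv2.createBackgroundSubtractorMOG2()       # 1. background model
prev = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                    # 2. foreground mask
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # 3. blobs
    if not contours:
        continue
    robot = max(contours, key=cv2.contourArea)                # 4. biggest blob
    x, y, w, h = cv2.boundingRect(robot)
    centre = (x + w // 2, y + h // 2)                         # 5. position
    if prev is not None:
        heading = (centre[0] - prev[0], centre[1] - prev[1])  # 6. heading vector
    prev = centre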
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | Is it possible to use Pyro and gevent together? How would I go about doing this?
Pyro wants to have its own event loop, which underneath probably uses epoll etc. I am having trouble reconciling the two.
Help would be appreciated. | 0 | python,gevent,pyro | 2013-01-29T03:24:00.000 | 0 | 14,575,161 | I use gevent.spawn(daemon.requestLoop). I can't say more without knowing more about the specifics. | 0 | 304 | true | 0 | 1 | How can I use Pyro with gevent? | 18,750,345 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I would like to achieve the following things:
A given file contains a job list which I need to execute one by one on a remote server using SSH APIs, storing the results.
When I try to call the following command directly on the remote server using PuTTY it executes successfully, but when I try to execute it through Python SSH programming it says it can't find autosys.ksh.
autosys.ksh autorep -J JOB_NAME
Any ideas? Please help. Thanks in advance. | 0 | python,unix,ssh,autosys | 2013-01-29T16:08:00.000 | 1 | 14,587,135 | After reading your comment on the first answer, you might want to create a bash script with bash path as the interpreter line and then the autosys commands.
This will create a bash shell and run the commands from the script in the shell.
Again, if you are using autosys commands in the shell you should set the autosys environment up for the user before running any autosys commands. | 0 | 1,343 | false | 0 | 1 | How to call .ksh file as part of Unix command through ssh in Python | 18,859,497
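A minimal sketch of the SSH call with paramiko, running the command after sourcing the user's profile so the autosys environment is set up as this answer recommends; the host, credentials and profile path are placeholders:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("remote-host", username="user", password="secret")

# Source the login profile first so autosys.ksh and its environment are found
cmd = ". ~/.profile && autosys.ksh autorep -J JOB_NAME"
stdin, stdout, stderr = client.exec_command(cmd)
print(stdout.read())
print(stderr.read())
client.close()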
1 | 5 | 0 | 4 | 4 | 0 | 0.158649 | 0 | I want to do the following:
If the bash/python script is launched from a terminal, it shall do something such as printing an error message. If the script is launched from a GUI session, like double-clicking from a file browser, it shall do something else, e.g. display a GUI message box. | 0 | python,linux,bash,user-interface,command-line-interface | 2013-01-29T21:12:00.000 | 1 | 14,592,390 | It can check the value of $DISPLAY to see whether or not it's running under X11, and $(tty) to see whether it's running on an interactive terminal. if [[ $DISPLAY ]] && ! tty -s; then chances are good you'd want to display a GUI popup. | 0 | 788 | false | 0 | 1 | How can Linux program, e.g. bash or python script, know how it was started: from command line or interactive GUI? | 14,592,451
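The same check can also be done inside the Python script itself; a small sketch, assuming Python 2 with Tkinter available for the GUI case:

import os
import sys

def report_error(message):
    if sys.stdin.isatty():
        # Launched from a terminal: plain text on stderr
        sys.stderr.write("error: %s\n" % message)
    elif os.environ.get("DISPLAY"):
        # Launched from a GUI session: show a message box instead
        import Tkinter, tkMessageBox   # tkinter.messagebox on Python 3
        root = Tkinter.Tk()
        root.withdraw()
        tkMessageBox.showerror("Error", message)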
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 1 | So I'm trying to run a search query through the Twitter API in Python. I can get it to return up to 100 results using the "count" parameter. Unfortunately, version 1.1 doesn't seem to have the "page" parameter that was present in 1.0. Is there some sort of alternative for 1.1? Or, if not, does anyone have any suggestions for alternative ways to get a decent amount of tweets returned for a subject.
Thanks.
Update with solution:
Thanks to Ersin below.
I queried as I normally would for a page, and when it returned I would check the id of the oldest tweet. I'd then use this as the max_id in the next URL. | python,twitter | 2013-01-29T21:44:00.000 | 0 | 14,592,874 | I think you should use the "since_id" parameter in your url; it lets you page through results relative to a given tweet id. So, for the next page you should set the parameter to the last id of your current page. | 0 | 297 | true | 0 | 1 | Returning more than one page in Python Twitter search | 14,593,070
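A sketch of the max_id paging the asker describes in the update; here search(params) stands in for whatever authenticated call performs the GET on the v1.1 search endpoint (OAuth handling is omitted, and the page count is arbitrary):

def collect_tweets(search, query, pages=5):
    collected, max_id = [], None
    for _ in range(pages):
        params = {"q": query, "count": 100}
        if max_id is not None:
            params["max_id"] = max_id
        statuses = search(params).get("statuses", [])
        if not statuses:
            break
        collected.extend(statuses)
        max_id = min(t["id"] for t in statuses) - 1   # oldest id seen, minus one
    return collected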
2 | 2 | 0 | 1 | 4 | 0 | 0.099668 | 0 | I cannot run any script by pressing F5 or selecting run from the menus in IDLE. It stopped working suddenly. No errors are coughed up. IDLE simply does nothing at all.
Tried reinstalling python to no effect.
Cannot run even the simplest script.
Thank you for any help or suggestions you have.
Running Python 2.6.5 on windows 7.
Could not resolve the problem with idle. I have switched to using pyDev in Aptana Studio 3. | 0 | python,python-idle | 2013-01-29T21:44:00.000 | 1 | 14,592,879 | Your function keys are locked,I think so.
Function keys can be unlocked with the Fn key + Esc.
Then f5 will work without any issue. | 0 | 7,570 | false | 0 | 1 | IDLE no longer runs any script on pressing F5 | 62,436,025 |
2 | 2 | 0 | 1 | 4 | 0 | 0.099668 | 0 | I cannot run any script by pressing F5 or selecting run from the menus in IDLE. It stopped working suddenly. No errors are coughed up. IDLE simply does nothing at all.
Tried reinstalling python to no effect.
Cannot run even the simplest script.
Thank you for any help or suggestions you have.
Running Python 2.6.5 on windows 7.
Could not resolve the problem with idle. I have switched to using pyDev in Aptana Studio 3. | 0 | python,python-idle | 2013-01-29T21:44:00.000 | 1 | 14,592,879 | I am using a Dell laptop, and ran into this issue. I found that if I pressed Function + F5, the program would run.
On my laptop keyboard, function key items are in blue (main functions in white). The Esc (escape) key has a blue lock with 'Fn' on it. I pressed Esc + F5, and it unlocked my function keys. I can now run a program in the editor by pressing only F5
Note: Running Python 3 - but I do not think this is an issue with Idle or Python - I think this is a keyboard issue. | 0 | 7,570 | false | 0 | 1 | IDLE no longer runs any script on pressing F5 | 48,695,999 |
1 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I have a python code (name.py) written in separate file and now I want to execute that code using sikuli.
I have tried
openApp, but it's not working
It could be that I made some mistake, but I'm still looking for working logic. | 0 | python,sikuli | 2013-01-30T08:44:00.000 | 0 | 14,599,820 | Do you want to run the file as an executable script or use its contents?
As an executable script: make sure that the file is a valid script and will execute something when called, and then use subprocess.Popen() from another .py file to execute that file.
To use the module's contents make sure the file is on the PYTHONPATH and use import name and everything within name will now be available for use. | 0 | 3,069 | false | 0 | 1 | How to execute python script file (filename.py) using sikuli | 31,121,679 |
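A small sketch of the subprocess approach from this answer; both paths are placeholders for your CPython interpreter and your script:

import subprocess

proc = subprocess.Popen([r"C:\Python27\python.exe", r"C:\scripts\name.py"])
proc.wait()                 # block until name.py has finished
print(proc.returncode)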
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I want to fetch the commit message to my bitbucket repository each time a user is doing any push operation.
How can I do that?
I am in a development environment, so is there any way by which I can post to localhost/someurl for each commit to my repository?
Else suggest other ways by which I can achieve this.
Thanks in advance for help. | 0 | python,post,push-notification,bitbucket,githooks | 2013-01-31T11:16:00.000 | 1 | 14,624,421 | Select the administration menu for the repository (the gear symbol), then Services. There you can set up integration with external services, such as email or twitter. | 0 | 258 | false | 0 | 1 | get bitbucket commit message for each push | 14,627,911 |
1 | 1 | 0 | 1 | 4 | 1 | 0.197375 | 0 | For a project I am working on I would like to use a pgcrypto compatible encryption in python. And specific the public key encryption part.
The problem I have is that most (all) of the implementations make use of subprocess-like approaches to fork gpg; as I have to encrypt a lot of data (50,000+ entries per session), this approach will not work for me.
Can someone give me some pointers on how this could be achieved? | 0 | python,postgresql,public-key-encryption,gnupg,pgcrypto | 2013-02-01T09:34:00.000 | 0 | 14,643,282 | Have a look at PyCrypto; it doesn't seem to use forking. pgcrypto can be configured to fit most crypto configurations. | 0 | 690 | false | 0 | 1 | How to encrypt in a pgcrypto compatible way in python | 14,660,589
1 | 2 | 0 | 2 | 1 | 1 | 0.197375 | 0 | My project requires that my python files be converted with py2exe. Fair enough, my py2exe build is working. Assume my binary is called "test.exe". I know that my test.exe contains all the pyc files of my python files. What I want to do is protect my test.exe so that my source is not seen; in other words I don't want it to be decompiled back. What can I do about this? | 0 | python,py2exe | 2013-02-01T11:08:00.000 | 0 | 14,644,986 | In short: nothing. Any executable can always be reverse-engineered.
In more detail: do you really think your code is so valuable that people would spend months trying to do that?
Also keep in mind that if you import any module released under the GPL, you would be doing something illegal by not making your code GPL as well. | 0 | 930 | false | 0 | 1 | protect binary generated from py2exe python | 14,645,057
1 | 9 | 0 | 3 | 39 | 0 | 0.066568 | 0 | Is there a module for Python to open IBM SPSS (i.e. .sav) files? It would be great if there's something up-to-date which doesn't require any additional dll files/libraries. | 0 | python,dataset,statistics,python-module,spss | 2013-02-01T13:07:00.000 | 0 | 14,647,006 | But the benefit of using the IBM libraries is that they get this rather complex binary file format right. They are free, relieve you of the burden of writing code for this format, and the license permits you to redistribute them. What more could you ask? | 0 | 47,105 | false | 0 | 1 | Is there a Python module to open SPSS files? | 14,669,613 |
1 | 1 | 0 | 0 | 2 | 0 | 1.2 | 1 | We test an application developed in house using a python test suite which accomplishes web navigations/interactions through Selenium WebDriver. A tricky part of our web testing is in dealing with a series of pdf reports in the app. We are testing a planned upgrade of Firefox from v3.6 to v16.0.1, and it turns out that the way we captured reports before no longer works, because of changes in the directory structure of firefox's temp folder. I didn't write the original pdf capturing code, but I will refactor it for whatever we end up using with v16.0.1, so I was wondering if there' s a better way to save a pdf using Python's selenium webdriver bindings than what we're currently doing.
Previously, for Firefox v3.6, after clicking a link that generates a report, we would scan the "C:\Documents and Settings\\Local Settings\Temp\plugtmp" directory for a pdf file (with a specific name convention) to be generated. To be clear, we're not saving the report from the webpage itself, we're just using the one generated in firefox's Temp folder.
In Firefox 16.0.1, after clicking a link that generates a report, the file is generated in "C:\Documents and Settings\ \Local Settings\Temp\tmp*\cache*", with a random file name, not ending in ".pdf". This makes capturing this file somewhat more difficult, if using a technique similar to our previous one - each browser has a different tmp*** folder, which has a cache full of folders, inside of which the report is generated with a random file name.
The easiest solution I can see would be to directly save the pdf, but I haven't found a way to do that yet.
To use the same approach as we used in FF3.6 (finding the pdf in the Temp folder directory), I'm thinking we'll need to do the following:
Figure out which tmp*** folder belongs to this particular browser instance (which we can do be inspecting the tmp*** folders that exist before and after the browser is instantiated)
Look inside that browser's cache for a file generated immedaitely after the pdf report was generated (which we can by comparing timestamps)
In cases where multiple files are generated in the cache, we could possibly sort based on size, and take the largest file, since the pdf will almost certainly be the largest temp file (although this seems flaky and will need to be tested in practice).
I'm not feeling great about this approach, and was wondering if there's a better way to capture pdf files. Can anyone suggest a better approach?
Note: the actual scraping of the PDF file is still working fine. | 0 | python,pdf,selenium,webdriver,selenium-webdriver | 2013-02-01T17:40:00.000 | 0 | 14,651,973 | We ultimately accomplished this by clearing firefox's temporary internet files before the test, then looking for the most recently created file after the report was generated. | 0 | 1,728 | true | 1 | 1 | Capturing PDF files using Python Selenium Webdriver | 14,760,698 |
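A small helper sketching the accepted approach (clear the temporary area before the test, then pick the file created after the report link was clicked); the directory handling is an assumption:

import glob
import os

def newest_file(directory, started_at):
    # started_at is a time.time() value taken just before clicking the report link
    candidates = [p for p in glob.glob(os.path.join(directory, "*"))
                  if os.path.isfile(p) and os.path.getmtime(p) > started_at]
    return max(candidates, key=os.path.getmtime) if candidates else None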
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | hi i wanted to know if uploading large files like videos ( over 200 mb - 1gb) from php is a good option after setting up the server configuration like max_post_size , execution time etc. The reason i ask this question is because i read some where that when a large file is uploaded , best practice is to break that file into chunks and upload it ( I think youtube does that). Do i need to use another language like python or C++ for uploading large files or is php enough. If i need to use another language can anyone please help me with reading material for that .
Thank you. | 0 | php,python,web | 2013-02-01T22:02:00.000 | 0 | 14,655,765 | It's not only PHP that has to be considered for large file uploads. Your web server also needs to support that, at least in nginx. I don't know how httpd handles that, but as you said, splitting into chunks is a viable solution. FTP is another option. | 0 | 978 | false | 0 | 1 | Is php good for large file uploads such as videos | 14,655,886
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I have 3 scripts. one is starttest.py which kicks the execution of methods called in test.py. Methods are defined in module.py.
There are many print statements in each file and I want to capture each print statement in my log file from starttest.py itself. I tried using sys.stdout in starttest.py, but it only captures print statements from starttest.py; it does not have any control over the print statements in test.py and module.py.
Any suggestions to capture the print statements from all of the files in a single place only? | 0 | python,python-2.7 | 2013-02-04T00:34:00.000 | 0 | 14,679,012 | Maybe look at the logging module that comes with python | 0 | 109 | false | 0 | 1 | Capturing log information from bunch of python functions in single file | 14,679,195 |
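A minimal sketch of the logging-module suggestion: configure one root logger in starttest.py and let every module write through it instead of print; the file name and format are assumptions:

# starttest.py
import logging

logging.basicConfig(filename="run.log", level=logging.INFO,
                    format="%(asctime)s %(name)s: %(message)s")

# test.py / module.py: replace `print value` with
log = logging.getLogger(__name__)
log.info("collected value %s", 42)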
1 | 3 | 0 | 0 | 2 | 0 | 0 | 0 | I am looking for solutions to create a RPC client in Linux that can connect to Sun ONC RPC server.
The server is written in C.
I would like to know if I can:
Create an RPC client in Linux
Create the RPC client in Python | 0 | python,c,linux,rpc | 2013-02-04T12:31:00.000 | 1 | 14,686,861 | An ONC RPC client can be created by using the .idl file and rpcgen. The original RPC protocol precedes SOAP by several years.
Yes, you can create the RPC client in linux (see rpcgen)
Yes, you can create the RPC client in python (please see pep-0384) | 0 | 1,821 | false | 0 | 1 | Connect to Sun ONC RPC server from Linux | 14,688,041 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | i have developed a custom field that extends ImageField and this custom field, dynamically creates 2more normal fields. Now, I need to write tests for this custom fields ?
What tests are needed for this custom field? Can you name them so that I can code those test cases? I am not asking technically how to write a test; I don't know yet, but I will learn. What I want to know is: what are the things I need to test here? | 0 | python,django,django-testing | 2013-02-04T12:58:00.000 | 0 | 14,687,281 | Try to test every case of how your custom field could be used. For example, try to send it different kinds of data (strings, integers, blank values, different image formats, etc.) and check if it works according to your expectations. | 0 | 32 | false | 0 | 1 | What tests do I need to write for the customfield that I have developed? | 14,689,563
1 | 2 | 0 | 9 | 11 | 0 | 1 | 0 | is there a posibility to make eclipse PyDev use a remote Python interpreter?
I would like to do this, as the Linux Server I want to connect to has several optimization solvers (CPLEX, GUROBI etc.) running, that my script uses.
Currently I use eclipse locally to write the scripts, then copy all the files to the remote machine, log in using ssh and execute the scripts there with "python script.py".
Instead I hope to click the "run" button and just have everything executed within my eclipse IDE.
Thanks | 0 | eclipse,pydev,python | 2013-02-05T20:51:00.000 | 1 | 14,716,662 | Unfortunately no. You can remotely connect to your Linux server via Remote System Explorer (RSE). But can't use it as a remote interpreter. I use Pycharm. You can use the free Community Edition or the Professional Edition for which you have to pay for it. It is not that expensive and it has been working great for me. | 0 | 7,677 | false | 0 | 1 | Eclipse PyDev use remote interpreter | 15,360,958 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | After using cheetah and mako at their functional minimum (only for substitution) for sometime, I started asking myself whether just using string.Template wouldn't be the better and simpler approach for my use case(less deps).
In addition I wondered whether it would be reasonable to import these templates as .py files to avoid .open() on each call. This would make handling templates a little more complicated but other than that I'd save a lot of system calls.
What do you guys think?
I'm well aware that my present templating is speedy enough for 99.9% of the use cases I will go through.
Thank you for any Input | 0 | python,performance,templating | 2013-02-06T11:12:00.000 | 0 | 14,727,628 | After lots of trying and reading I found string.Template from the Core Library to be the fastest - I just wrapped in my own simple class to encapsulate the file-access/reads et voilà. | 0 | 42 | true | 1 | 1 | Fastest way to do _simple_ templating with Python | 16,305,372 |
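A sketch of the kind of wrapper the accepted answer mentions: read and parse the template file once, then substitute many times; the class and file names are assumptions:

from string import Template

class FileTemplate(object):
    def __init__(self, path):
        # Read and parse the template only once, not on every render
        with open(path) as f:
            self._template = Template(f.read())

    def render(self, **values):
        return self._template.safe_substitute(**values)

page = FileTemplate("greeting.tpl")       # e.g. a file containing "Hello $name"
print(page.render(name="world"))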
1 | 1 | 0 | 2 | 1 | 0 | 1.2 | 0 | How can I stop, restart or start Gunicorn running within a virtualenv on a Debian system?
I can't seem to find a solution apart from finding the PID for the gunicorn daemon and killing it.
Thank you. | 0 | python,python-2.7,debian,gunicorn | 2013-02-06T19:06:00.000 | 1 | 14,736,788 | That is indeed the proper way to do it. Start it with the -p option so you don't have to guess at the PID if you have more than one instance running. You can tell gunicorn to reload your application without restarting the gunicorn process itself by sending it a SIGHUP instead of killing it.
If that makes you uncomfortable, you can always write a management script to put in /etc/init.d and start it like any other service. | 0 | 5,683 | true | 0 | 1 | How can I stop, restart or start Gunicorn running within a virtualenv on a Debian system? | 14,737,918 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I want to display whether firewall is present or not.. if it is not enabled, the user should get an alert.. can it be done using python code? | 0 | python,linux | 2013-02-07T05:23:00.000 | 0 | 14,744,178 | In GNU/linux the firewall (netfilter) is part of the kernel, so I think that if linux is on, the firewall is too.
Next, you may ask netfilter whether it is configured and whether there are any rules; for this you might parse the iptables command output (such as iptables -L). | 0 | 579 | false | 0 | 1 | Determining presence of firewall using python on linux | 14,744,411
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I'm using the pyc tool to compile IronPython scripts to executables, but can they be run without IronPython installed? If so what do I have to include? | 0 | python,ironpython,exe | 2013-02-07T08:42:00.000 | 0 | 14,746,917 | I think with py2exe and the .net framework | 0 | 230 | false | 0 | 1 | is it possible to run compiled iron python scripts on PCs without iron python installed? | 14,746,966 |
2 | 2 | 0 | 2 | 0 | 1 | 1.2 | 0 | I'm using the pyc tool to compile IronPython scripts to executables, but can they be run without IronPython installed? If so what do I have to include? | 0 | python,ironpython,exe | 2013-02-07T08:42:00.000 | 0 | 14,746,917 | Yes, you can run it on other PC's without installing IronPy or Visual Studi (ergo, off the bat).
Sometimes you might need the Windows runtime libraries that you compiled the application with, but other than that, yes, you can execute it on any other Windows PC equal to the one you compiled it on.
(example, compiling on win7 will most likely run on win7 off the bat but not on a XP without the runtime libraries used on the compiling machine) | 0 | 230 | true | 0 | 1 | is it possible to run compiled iron python scripts on PCs without iron python installed? | 14,746,941 |
2 | 2 | 0 | 2 | 0 | 1 | 0.197375 | 0 | I have files containing compiled Python bytecode. I want to run them through my executable program without the massive overload of the Python interpreter.
Any ideas? | 0 | python,embed,bytecode | 2013-02-07T15:40:00.000 | 0 | 14,755,062 | pyc are not compiled to machine code. Use Shedskin for that. | 0 | 457 | false | 0 | 1 | Is there a light version of Python that only runs .pyc files? | 14,755,102 |
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I have files containing compiled Python bytecode. I want to run them through my executable program without the massive overload of the Python interpreter.
Any ideas? | 0 | python,embed,bytecode | 2013-02-07T15:40:00.000 | 0 | 14,755,062 | You mention the massive overhead of the interpreter: do you have any evidence that the compilation step is massive overhead? You might be misunderstanding what is in a .pyc file. Python bytecode is not machine code, it is very high-level bytecodes that are executed by the Python interpreter.
In any case, no, there is not a build of Python that can run .pyc files and not .py files. | 0 | 457 | false | 0 | 1 | Is there a light version of Python that only runs .pyc files? | 14,756,123 |
1 | 2 | 0 | 3 | 4 | 0 | 1.2 | 0 | I am working on a web project with 7 developers. I setup a beta box (debian) so that we can do testing of new code before passing it to staging.
On the beta box, I setup Jenkins and would like to automate the merge/testing process. We also have a test suite which I would like to tie-in somehow.
How should I test and run python web projects with SVN / Jenkins?
I'm trying to formulate a good workflow. Right now each developer works on a feature branch, I run the code in the branch, if it looks good we merge it.
I would love to have developers login to the beta jenkins, and tell it to build from their feature branch. Here is my plan for what Jenkins would do:
Make sure the feature branch is rebased from trunk
Make sure the beta branch is identical to trunk (overwriting any merged-in feature branches)
Merge the feature branch into the beta branch
Kill the running server
Start the server nohup python app.py &
Run the test suite python test.py
Output the test data to the developer's view in Jenkins
If any of the tests fail, revert to the state before the branch was merged
I'm not sure how to handle merge conflicts. Also, the above is probably bad and wrong. Any advice would be appreciated! | 0 | python,svn,jenkins,continuous-integration,release | 2013-02-07T15:40:00.000 | 0 | 14,755,065 | The question is a bit too big to be answered in a simple post, I will therefore try to give a few hints and references as far as I see from my personal view:
A few quick tips:
I like the idea of separating the developers into branches, but I would do the testing on the feature-branch and only merge to the beta branch if the feature passes tests, this way nothing enters beta until it is tested!
I would put the integration steps into a script outside of Jenkins. Make it part of the source code. This way you can test the script itself quickly outside of Jenkins
Use the build system or scripting language you feel most comfortable with; most of the steps can easily be done with any programming language
Make the script return success or failure, so Jenkins can flag the build as failed
For the merge-issues, you have two possibilities
Require the branch to be manually rebased before a developer can submit it for integration; check in the script and fail it if a rebase is necessary. This way merge errors cannot happen; the build simply fails if the branch is not rebased
If you rather allow non-rebased merges, you need to fail the build on merge errors so the developer can manually resolve the problem (by rebasing his/her branch before submitting again)
Here are some books that I found useful in this area:
How Google Tests Software, by James A. Whittaker, Jason Arbon, Jeff Carollo
Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble
Let me know via comments what additional content you would like to have. | 0 | 601 | true | 1 | 1 | How should our devs test python SVN branches with Jenkins? | 14,828,819 |
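A rough sketch of the integration script suggested in the answer (merge, test, revert on any failure, exit non-zero so Jenkins flags the build); the branch URLs, paths and test command are placeholders:

import subprocess
import sys

def run(cmd):
    print(" ".join(cmd))
    return subprocess.call(cmd)

def integrate(feature_branch_url, beta_checkout):
    run(["svn", "revert", "-R", beta_checkout])               # start from a clean working copy
    if run(["svn", "merge", feature_branch_url, beta_checkout]) != 0:
        run(["svn", "revert", "-R", beta_checkout])
        sys.exit(1)                                           # merge conflict: developer rebases
    if run([sys.executable, "test.py"]) != 0:
        run(["svn", "revert", "-R", beta_checkout])
        sys.exit(1)                                           # tests failed: build goes red
    run(["svn", "commit", "-m", "Integrate " + feature_branch_url, beta_checkout])

if __name__ == "__main__":
    integrate(sys.argv[1], sys.argv[2])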
1 | 2 | 0 | 3 | 3 | 0 | 1.2 | 0 | I'm using twisted to get messages from internet connected sensors in order to store it to a db. I want to check these messages without interfere these process,because I need compare every message with some base values at db, if some is matched I need trigger an alert for this, and the idea is not block any process...
My idea is to create a new process to check and alert: after the first process stores the message, it will send the message to the new process, which checks it and alerts if required.
I need IPC for this, and I was thinking of using ZeroMQ, but Twisted also has an approach to IPC; I could use ZeroMQ, but maybe that would be self-defeating...
What do you think about my approach? Maybe I'm completely wrong?
Any advice is welcome.
Thanks
PS: This process will run on a dedicated server, with an expected load of 6000 msg/hour of 1 KB each. | 0 | python,architecture,ipc,twisted,zeromq | 2013-02-07T15:46:00.000 | 0 | 14,755,187 | All of these approaches are possible. I can only speak abstractly because I don't know the precise contours of your application.
If you already have a working application but it just isn't fast enough to handle the number of messages you throw at it, then identify the bottleneck. The two likely causes of your holdup are DB access or alert-triggering because either one of these are probably synchronous IO operations.
How you deal with this depends on your workload:
If your message rate is high and constant, then you need to make sure your database can handle this rate. If your DB can't handle it, then no amount of non-blocking message passing will help you! In this order:
Try tuning your database.
Try putting your database on a bigger comp with more memory.
Try sharding your database across multiple machines to distribute the workload.
Once you know your db can handle the message rate, you can deal with other bottlenecks using other forms of parallelism.
If your message rate is bursty then you can use queueing to handle the bursts. In this order:
Put a load balancer in front of a cluster of message processors. All this balancer should do is redistribute sensor messages to different machines for check-and-alert processing. The advantage of this approach is that you will probably not need to change your existing application, just run it on more machines. This works best if your load balancer does not need to wait for a response, just forward the message.
If your communication needs are more complex or are bidirectional, you can use a message bus (such as ZeroMQ) as the communication layer between message-processors, alert-senders, and database-checkers. The idea is to increase parallelism by having non-blocking communication occur through the bus and having each node on the bus do one thing only. You can then alter the ratio of node types depending on how long each stage of message processing takes. (I.e. to make the queue depth equal across the entire message processing process.) | 0 | 1,496 | true | 0 | 1 | Architecture approach with IPC, Twisted or ZeroMQ? | 14,763,596 |
1 | 3 | 0 | 0 | 4 | 1 | 0 | 0 | I'm working through Doug Hellman's "The Python Standard Library by Example" and came across this:
"1.3.2 Compiling Expressions
re includes module-level functions for working with regular expressions as text strings, but it is more efficient to compile the expressions a program uses frequently."
I couldn't follow his explanation for why this is the case. He says that the "module-level functions maintain a cache of compiled expressions" and that since the "size of the cache" is limited, "using compiled expressions directly avoids the cache lookup overhead."
I'd greatly appreciate it if someone could please explain or direct me to an explanation that I could better understand of why it is more efficient to compile the regular expressions a program uses frequently, and how this process actually works. | 0 | python,regex,compilation | 2013-02-07T16:22:00.000 | 0 | 14,755,882 | It sounds to me like the author is simply saying it's more efficient to compile a regex and save that than to count on a previously compiled version of it still being held in the module's limited-size internal cache. This is probably because the amount of effort it takes to compile them, plus the extra cache lookup overhead that must first occur, is greater than the cost of the client simply storing them itself. | 0 | 1,111 | false | 0 | 1 | Compiling Regular Expressions in Python | 14,756,210
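A quick way to see the effect this answer describes, using only the standard timeit module; the pattern, text and repetition count are arbitrary:

import re
import timeit

pattern = re.compile(r"\d{4}-\d{2}-\d{2}")
text = "released on 2013-02-07"

module_level = timeit.timeit(lambda: re.search(r"\d{4}-\d{2}-\d{2}", text), number=100000)
precompiled = timeit.timeit(lambda: pattern.search(text), number=100000)
print(module_level, precompiled)   # the precompiled object skips the cache lookup on each call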
1 | 1 | 0 | 0 | 6 | 0 | 0 | 1 | We run a bunch of Python test scripts on a group of test stations. The test scripts interface with hardware units on these test stations, so we're stuck running one test script at a time per station (we can't virtualize everything). We built a tool to assign tests to different stations and report test results - this allows us to queue up thousands of tests and let these run overnight, or for any length of time.
Occasionally, what we've found is that test stations will drop out of the cluster. When I remotely log into them, I get a black screen, then they reboot, then upon logging in I'm notified that windows XP had a "serious error". The Event Log contains a record of this error, which states Category: (102) and Event ID: 1003.
Previously, we found that this was caused by the creation of hundreds of temporary Firefox profiles - our tests use selenium webdriver to automate website interactions, and each time we started a new browser, a temporary Firefox profile was created. We added a step in the cleanup between each test that empties these temporary Firefox profiles, but we're still finding that stations drop out sometime, and always with this serious error and record in the Event Log.
I would like to find the root cause of this problem, but I don't know how to go about doing this. I've tried searching for information about how to read event log entries, but I haven't turned up anything that helps. I'm open to any suggestions for ways to go about debugging this issue. | 0 | python,selenium,webdriver,selenium-webdriver | 2013-02-07T20:50:00.000 | 0 | 14,760,751 | I've experienced similar problems before with Firefox. The rare times that we managed to catch a machine in the act it was just not closing browser sessions. Hence the BSOD eventually. Obviously this was a bug in either webdriver, firefox, or XP (which we were also using). We solved it by aggressively killing every firefox process between each individual test. This worked for us. And because you are not running tests in parallel it would work for you as well. By agressively I mean putting an axe through it. The windows equivalent of killall -9 firefox. Because these sessions were unresponsive.
As to the root cause? The problem did not occur with specific versions of Firefox. But we never actually managed to debug it properly. Debugging was very difficult because it wasn't reproducible under short test runs and once the issue arose it really did cause a hard crash. | 0 | 855 | false | 0 | 1 | Python selenium webdriver tests causing "serious error" when run in large batches on windows XP | 21,198,235 |
4 | 7 | 0 | 2 | 29 | 0 | 0.057081 | 0 | One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS
keys, etc. values into settings files has a problem:
Secrets often should be just that: secret! Keeping them in version control means
that everyone with repository access has access to them.
My question is how to keep all keys as secret? | 0 | python,django,settings | 2013-02-09T07:49:00.000 | 0 | 14,786,072 | Here's one way to do it that is compatible with deployment on Heroku:
Create a gitignored file named .env containing:
export DJANGO_SECRET_KEY='replace-this-with-the-secret-key'
Then edit settings.py to remove the actual SECRET_KEY and add this instead:
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
Then when you want to run the development server locally, use:
source .env
python manage.py runserver
When you finally deploy to Heroku, go to your app Settings tab and add DJANGO_SECRET_KEY to the Config Vars. | 0 | 26,432 | false | 1 | 1 | Keep Secret Keys Out | 53,798,521 |
4 | 7 | 0 | 6 | 29 | 0 | 1 | 0 | One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS
keys, etc. values into settings files has a problem:
Secrets often should be just that: secret! Keeping them in version control means
that everyone with repository access has access to them.
My question is how to keep all keys as secret? | 0 | python,django,settings | 2013-02-09T07:49:00.000 | 0 | 14,786,072 | Store your local_settings.py data in a file encrypted with GPG - preferably as strictly key=value lines which you parse and assign to a dict (the other attractive approach would be to have it as executable python, but executable code in config files makes me shiver).
There's a python gpg module so that's not a problem. Get your keys from your keyring, and use the GPG keyring management tools so you don't have to keep typing in your keychain password. Make sure you are reading the data straight from the encrypted file, and not just creating a decrypted temporary file which you read in. That's a recipe for fail.
That's just an outline, you'll have to build it yourself.
This way the secret data remains solely in the process memory space, and not in a file or in environment variables. | 0 | 26,432 | false | 1 | 1 | Keep Secret Keys Out | 14,786,575 |
4 | 7 | 0 | 5 | 29 | 0 | 0.141893 | 0 | One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS
keys, etc. values into settings files has a problem:
Secrets often should be just that: secret! Keeping them in version control means
that everyone with repository access has access to them.
My question is how to keep all keys as secret? | 0 | python,django,settings | 2013-02-09T07:49:00.000 | 0 | 14,786,072 | Ideally, local_settings.py should not be checked in for production/deployed server. You can keep backup copy somewhere else, but not in source control.
local_settings.py can be checked in with a development configuration just for convenience, so that each developer doesn't need to change it.
Does that solve your problem? | 0 | 26,432 | false | 1 | 1 | Keep Secret Keys Out | 14,786,114 |
4 | 7 | 0 | 0 | 29 | 0 | 0 | 0 | One of the causes of the local_settings.py anti-pattern is that putting SECRET_KEY, AWS
keys, etc. values into settings files has a problem:
Secrets often should be just that: secret! Keeping them in version control means
that everyone with repository access has access to them.
My question is how to keep all keys as secret? | 0 | python,django,settings | 2013-02-09T07:49:00.000 | 0 | 14,786,072 | You may need to use os.environ.get("SOME_SECRET_KEY") | 0 | 26,432 | false | 1 | 1 | Keep Secret Keys Out | 46,735,039 |
1 | 2 | 0 | 9 | 0 | 0 | 1.2 | 0 | I am currently building a web/desktop application. The user can create an account online and login either online or via the desktop client.
The client will be built in Python and exported to exe.
I want to encrypt the password before it is sent online as the site has no https connection.
What is the best way to do this so the hashed password will be the same in python and php? Or is there a better way, or should I just invest in https?
I have tried using simple hashing but php md5("Hello") will return something different to python's hashlib.md5("Hello").hexdigest() | 0 | php,python,passwords,password-protection,password-encryption | 2013-02-10T16:27:00.000 | 0 | 14,799,847 | Forget this idea. Hashing the password on the client, sending the hash to the server and then compare it to the stored hash is equivalent to storing plain passwords in the database, because the hash becomes the password.
Or should I just invest in https?
Yes! | 0 | 240 | true | 0 | 1 | Password protection for Python and PHP | 14,799,899 |
1 | 1 | 0 | 7 | 1 | 0 | 1.2 | 0 | Is it possible to get the user's browser width and height in Pyramid? I've searched through the response object and Googled.
If it's not available in Pyramid, I'll just grab it in javascript | 0 | python,pyramid | 2013-02-10T20:21:00.000 | 0 | 14,802,197 | No, that is not possible to determine with server-side code only. Browsers do not share that information when making HTTP requests to the server.
You'll have to do this with JavaScript. | 0 | 118 | true | 1 | 1 | Can I get the browser width and height in Pyramid? | 14,802,581 |
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | I have a simple cgi script in python collecting a value from form fields submitted through post. After collecting this, iam dumping these values to a single text file.
Now, when multiple users submit at the same time, how do we go about it?
In C/C++ we use semaphores/mutexes/rwlocks etc. Do we have anything similar in python? Also, opening and closing the file multiple times doesn't seem to be a good idea for every user request.
We have our code base for our product in C\C++. I was asked to write a simple cgi script for some reporting purpose and was googling with python and cgi.
Please let me know.
Thanks!
Santhosh | 0 | python,cgi | 2013-02-11T17:17:00.000 | 0 | 14,817,290 | The simple (and slow) way is to acquire a lock on the file (in C you'd use flock), write to it and close it. If you think this can be a bottleneck then use a database or something like that. | 0 | 300 | false | 0 | 1 | multiple users doing form submission with python CGI | 14,817,753
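A minimal sketch of the flock idea in Python (POSIX only, via the fcntl module); the file path and line format are placeholders:

import fcntl

def append_line(path, line):
    # Serialise concurrent CGI writers with an advisory exclusive lock
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            f.write(line + "\n")
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)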