| Column | dtype | min | max |
|---|---|---|---|
| Available Count | int64 | 1 | 31 |
| AnswerCount | int64 | 1 | 35 |
| GUI and Desktop Applications | int64 | 0 | 1 |
| Users Score | int64 | -17 | 588 |
| Q_Score | int64 | 0 | 6.79k |
| Python Basics and Environment | int64 | 0 | 1 |
| Score | float64 | -1 | 1.2 |
| Networking and APIs | int64 | 0 | 1 |
| Question | stringlengths | 15 | 7.24k |
| Database and SQL | int64 | 0 | 1 |
| Tags | stringlengths | 6 | 76 |
| CreationDate | stringlengths | 23 | 23 |
| System Administration and DevOps | int64 | 0 | 1 |
| Q_Id | int64 | 469 | 38.2M |
| Answer | stringlengths | 15 | 7k |
| Data Science and Machine Learning | int64 | 0 | 1 |
| ViewCount | int64 | 13 | 1.88M |
| is_accepted | bool | 2 classes | |
| Web Development | int64 | 0 | 1 |
| Other | int64 | 1 | 1 |
| Title | stringlengths | 15 | 142 |
| A_Id | int64 | 518 | 72.2M |
2 | 2 | 0 | 1 | 5 | 0 | 0.099668 | 0 | There is probably a nice document that will help me. Please point to it.
If I write a Thrift server using Python, what is the best way to deploy it in a production environment? All I can find are examples of using the Python-based servers that come with the distribution. How can I use Apache as the server platform, for example? Will it support persistent connections?
Thanks in advance. | 0 | python,thrift | 2012-01-10T18:27:00.000 | 0 | 8,808,476 | I assume that you are using the Python THttpServer? A couple of notes:
1) There is a comment in that code that reads
"""
This class is not very performant, but it is useful (for example) for
acting as a mock version of an Apache-based PHP Thrift endpoint.
"""
I wouldn't recommend that you use it in production if you care about performance. If you read through this code a bit, you'll find that it is fairly easy to re-implement it using a different HTTP server of your choosing. There are a number of good options in the Python ecosystem.
2) Also if you read the code, you'll find that Thrift HTTP servers are regular old HTTP servers. They accept all traffic on a single path ('/' by default) and direct the message to the appropriate method by reading routing information encoded into the message itself (using the Thrift "processor" construct). You should be able to set up Apache/nginx/whatever in the normal way and simply forward all traffic to '/' on the host and port you are running on. | 0 | 1,005 | false | 0 | 1 | how to deploy a hardened Thrift server for python? | 17,220,313 |
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | I am fairly new to programming and I am trying to learn the python Nose module for testing a code (myscript.py) that takes 2 input files and writes 2 output files. I want to write a test.py script (to run using Nose) that will take a bunch of test files, run them as input files, and then evaluate the output files by comparing them to known output. I understand that it is better to test functions individually, but my questions are applicable to either scenario.
Here is my confusion. How do I specify that test.py is supposed to run on myscript.py? Does test.py need to actually open up myscript.py? If so, I presume I would "import myscript"? Could/should I actually generate input/output files during testing, or should I use something like StringIO? | 0 | python,unit-testing,nose | 2012-01-11T03:05:00.000 | 0 | 8,813,669 | Better to create functions which accept text as an argument and return text as well. These functions should be placed in myscript.py and tested in tests.py.
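As an illustration of that two-file layout (the file and function names are just examples), nose picks up the test module automatically when you run `nosetests`:

```python
# myscript.py -- keep the logic in plain functions: text in, text out.
def transform(text):
    return text.upper()
```

```python
# test_myscript.py -- nose collects functions named test_* automatically.
from myscript import transform

def test_transform_uppercases():
    assert transform("abc") == "ABC"
```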
1 | 5 | 0 | 0 | 29 | 0 | 0 | 0 | I have setup a run configuration in Eclipse and need to send SIGINT (Ctrl+C) to the program. There is cleanup code in the program that runs after SIGINT, so pressing Eclipse's "Terminate" buttons won't work (they send SIGKILL I think). Typing CTRL+C into the Console also doesn't work.
How do I send SIGINT to a process running inside an Eclipse Console?
(FWIW I am running a Twisted daemon and need Twisted to shut down correctly, which only occurs on SIGINT) | 0 | python,eclipse,twisted,sigint | 2012-01-11T05:01:00.000 | 1 | 8,814,383 | In some versions, you can do the following.
In the Debug perspective, you can open a view called "Signals"
(Window / Show View / Signals, or the bottom-left icon).
You will get a list of all supported signals. Right-click and "Resume
with Signal" will give you the result you need. | 0 | 15,315 | false | 0 | 1 | Sending SIGINT (Ctrl-C) to program running in Eclipse Console | 61,871,564 |
1 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | I have very little knowledge in testing and I wish to seek guidance in the following scenario - I have a piece of code where it takes in some arguments (filename, path etc) and uploads the file specified to a remote ftp server. So, the goal of the testing would be to check if the file is uploaded to the correct directory in the ftp server.
Now, I don't suppose I should involve the remote server in my test script, so should I set up an FTP server locally and mimic the file structure, or is there a mock FTP tool available in Python to facilitate these scenarios?
Also, is this unit testing or functional testing? | 0 | python,unit-testing,testing | 2012-01-11T06:43:00.000 | 0 | 8,815,201 | We have some functional tests that truly do hit a real FTP server that we keep running as part of our staging environment. The config of our Django project is slightly different when running tests, so it hits this 'test' FTP server rather than any of our real servers or any of our client's. Having said that, these are amongst our slowest-to-run tests, so I'm looking to rewrite them to use an ftp server on the localhost, started & shut down by the test that needs it.
A mock ftp server sounds like a good idea. Can't wait to try it out, thanks @Bryce. For third-party servers in general, it might be problematic to be sure that your mocks actually matched the server API, but for FTP, this seems stable and well-understood enough that this shouldn't be a problem.
Functional testing invokes your entire system end-to-end, to check that user-visible behavior is what your spec says it should be, to assure you that your product really works, so you can confidently deploy new versions no matter what changes the code contains. The primary failing of functional tests is to exercise product code in ways that differ from production (e.g. only running part of the code, or running against a different database schema or vendor). They generally use real data, are large and hard to write (database setup in particular), and are slow to run. It's often very difficult to write enough functional tests to verify your entire program's behavior for all possible permutations of inputs, and if you did write them, they would take forever to run. So instead, you write a small number of judiciously chosen functional tests to demonstrate key user workflows, and then augment these with unit tests:
Unit tests invoke a very small amount of code, such as a single function or method. The primary goal of unit tests is to be fast to run, and easy to write, so that you can write hundreds or thousands of them, get good coverage, and run them all, in a couple of seconds at most, before you commit. (or maybe even every time you hit save from your editor.) The primary failing of unittests is to be slow.
Tests which sound like they are a bit in between these two extremes are integration tests, which don't test the whole system end-to-end but do test that several layers or components are integrated properly. Sometimes these are useful, or are the easiest test to write, but they are lacking in the primary virtues of either proving the product as a whole actually works, or being very fast to run. As a result, I think one should strive to write the minimum number of integration tests you can get away with. (Personally I think that number is zero for most projects, but other people disagree.) | 0 | 2,285 | false | 0 | 1 | python test sending files over ftp | 10,704,251
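For the unit-test side, one common approach is to patch ftplib so no server is involved at all. A hedged sketch, where `myscript.upload` is a hypothetical function under test that is assumed to call `ftplib.FTP` internally:

```python
# Hypothetical sketch: unit-testing an uploader by mocking ftplib.
import unittest
try:
    from unittest import mock  # Python 3.3+
except ImportError:
    import mock  # the standalone "mock" package on older Pythons

class UploadTest(unittest.TestCase):
    @mock.patch('myscript.ftplib.FTP')  # patch where myscript looks it up
    def test_upload_uses_target_directory(self, FakeFTP):
        from myscript import upload  # hypothetical module under test
        upload('report.csv', '/incoming')
        FakeFTP.return_value.cwd.assert_called_with('/incoming')
```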
2 | 2 | 0 | 3 | 5 | 1 | 0.291313 | 0 | I need to manipulate large numbers in Python that fit into 64 bits. Currently, my code is running on a 64-bit platform but there is small but distinct possibility that it will have to run on a 32-bit platform. Consequently, I would prefer to use long type to represent my numbers. I understand there is a performance impact for using long over int type. How bad is it? I'll be performing a lot of divisions and multiplications on them, but the results should all fit into 64 bits, too. | 0 | python,performance,cpython | 2012-01-12T01:08:00.000 | 0 | 8,828,909 | If you're going to be doing a lot of heavy number crunching, have a look at "numpy". | 0 | 904 | false | 0 | 1 | Performance impact of using long vs. int in Python | 8,829,028 |
2 | 2 | 0 | 3 | 5 | 1 | 0.291313 | 0 | I need to manipulate large numbers in Python that fit into 64 bits. Currently, my code is running on a 64-bit platform but there is small but distinct possibility that it will have to run on a 32-bit platform. Consequently, I would prefer to use long type to represent my numbers. I understand there is a performance impact for using long over int type. How bad is it? I'll be performing a lot of divisions and multiplications on them, but the results should all fit into 64 bits, too. | 0 | python,performance,cpython | 2012-01-12T01:08:00.000 | 0 | 8,828,909 | If your program does a lot of numerical computations - to a point that performance matters, you should profile it, and have the numerical part running in native code. You should not have to worry if internally the numbers are Python "integers" or "long" - so much that Python 3 removes the type difference.
There are several approaches for it, from using numpy, cython, a C extension, running your program using pypy instead of the standard cpython, and even take a look at corepy - what you should not do is to have a numeric intensive task running in pure python if performance is an issue there. Event he most complicated of these - creating a C extension in the form of a single function that just perform the calculations is simple enough to be well worth the performance gains in this case. | 0 | 904 | false | 0 | 1 | Performance impact of using long vs. int in Python | 8,829,078 |
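Before rewriting anything, it is worth measuring the int/long cost on your own interpreter; a quick sketch with timeit:

```python
# Time the same multiplication on a machine-word value and a 64-bit value.
import timeit

small = timeit.timeit('x * x', setup='x = 2 ** 20', number=10**6)
large = timeit.timeit('x * x', setup='x = 2 ** 60', number=10**6)
print('small int: %.3fs, 64-bit value: %.3fs' % (small, large))
```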
2 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | I'm building various python-based projects that use pip/buildout to install dependencies. But I don't like the idea of someone deleting a github project and crippling my apps, or a network outage meaning I can't perform a deployment.
How do other people solve this?
I've got various ideas, but I think perhaps the one that sounds most promising would be some kind of caching proxy server. I'd point pip to use this internal proxy server which would cache a copy of the downloaded project, and periodically check for updates (if there's a net connection) before serving cached versions.
Does anything like this already exist?
Use case:
I have a project which I deploy to web server 1. I add new features with a remote dependency, and when I come to update to the production web server, PyPI is down so I can't deploy. Or perhaps when I come to set up a new web server, a dependency has disappeared from GitHub or wherever.
How can I make it so my deployments/dev environments can always be brought up regardless of what happens in the wider world?
Also, when I deploy, I won't deploy over the top of existing code. Rather I'll build a new virtualenv and switch over to it so I can rollback if anything goes wrong. So each time I deploy I'll need to rebuild my environment and will need dependencies to exist.
So I'm looking for a solution that will insulate me against short-term network outages to servers hosting dependencies, as well as guarding against projects being deleted. | 0 | python,pip,pypi | 2012-01-12T16:44:00.000 | 0 | 8,838,782 | You should keep a "reference copy" of the projects on which you depend.
If someone removes the project from GitHub (and PyPI and all the mirrors, and every other site on the net) then you have the source and can now distribute it.
2 | 3 | 0 | 0 | 1 | 1 | 0 | 0 | I'm building various python-based projects that use pip/buildout to install dependencies. But I don't like the idea of someone deleting a github project and crippling my apps, or a network outage meaning I can't perform a deployment.
How do other people solve this?
I've got various ideas, but I think perhaps the one that sounds most promising would be some kind of caching proxy server. I'd point pip to use this internal proxy server which would cache a copy of the downloaded project, and periodically check for updates (if there's a net connection) before serving cached versions.
Does anything like this already exist?
Use case:
I have a project which I deploy to web server 1. I add new features with a remote dependency, and when I come to update to the production web server, PyPi is down so I can't deploy. Or perhaps when I come to set up a new web server, a dependency has disappeared from github or wherever.
How can I make it so my deployments/dev environments can always be brought up regardless of what happens in the wider world?
Also, when I deploy, I won't deploy over the top of existing code. Rather I'll build a new virtualenv and switch over to it so I can rollback if anything goes wrong. So each time I deploy I'll need to rebuild my environment and will need dependencies to exist.
So I'm looking for a solution that will insulate me against short-term network outages to servers hosting dependencies, as well as guarding against projects being deleted. | 0 | python,pip,pypi | 2012-01-12T16:44:00.000 | 0 | 8,838,782 | I have exactly the same requirements, and also use buildout to manage my deployments. I try not to install ANY of my package dependencies system-wide; I let buildout install eggs for all of them into my buildout. That way if I depend on a newer version of some package in rev N+1 of my project, and at "go-live" time N+1 falls on its face, I can roll back to N and automatically get the packge dependencies that N worked with.
We run a private eggbasket server, and configure buildout to fetch packages only from that. Server contents were initialized by allowing buildout to grab eggs from the network one time, then copying the downloaded eggs.
This way, upgrades to each package are totally under control and I can ensure that 2 successive buildouts of the same snapshot of my code will build out the same thing. When I want to upgrade all, I will let buildout fetch most-recent-versions again, test test test, then copy my eggs to the eggbasket server to go into production mode. | 0 | 495 | false | 0 | 1 | Caching Python requirements for production deployments | 8,839,967 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I have a simple need with Quality Center 10
If you noticed in Quality Center Client on IE 8 -> Test Plan. If you create a new Testcase with Test Type = 'VAPI-XP-TEST', it will ask you for Script Language and Script Name. I have selected Script Language to be Python. Once you have gone through this process of creating a new testcase, the testcase is pre-populated with a Default Python Script.
I wish to know how I can modify that Base Default Python Script so that the future new testcases will have my default Python Script? Is there any way to do the same through OTA API?
Thanks,
Amit | 0 | python,hp-quality-center | 2012-01-13T22:47:00.000 | 0 | 8,858,306 | Can you post the default script here ? Iam not sure why you cant edit it ? You can edit the script as you want.
There are 2 ways you can run the QC script one using VAPI-XP-TEST as you are doing , in that case you can use tdhelper object for updating the results.
Second is OTA-API which you can use python script externally to connect to QC and execute your tests and also update run results. there is lot of documentation available on QC help for OTA-API | 0 | 2,376 | false | 0 | 1 | Quality Center VAPI-XP-TEST - Modify Default Python Script | 13,123,269 |
The documentation only talks about how to do it from Ruby. | 0 | python,mongodb,heroku,mlab | 2012-01-14T01:57:00.000 | 0 | 8,859,532 | Get the connection string settings by running heroku config on the command line after installing the add-on to your Heroku app.
There will be an entry with the key MONGOLAB_URI in this form:
MONGOLAB_URI => mongodb://user:[email protected]:27707/db
Simply use the URI in Python by creating a connection from the URI string. | 0 | 6,640 | false | 1 | 1 | How can I use the mongolab add-on to Heroku from python? | 8,859,701
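A minimal sketch of that in Python, assuming pymongo is installed and the add-on has set MONGOLAB_URI (API names as in pymongo 2.6+):

```python
import os
import pymongo

uri = os.environ['MONGOLAB_URI']    # mongodb://user:pass@host:port/db
client = pymongo.MongoClient(uri)
db = client.get_default_database()  # the database named in the URI
print(db.collection_names())        # older pymongo; newer releases use
                                    # list_collection_names()
```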
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I've been happily using Aptana for a PHP project. As of yesterday evening, it's been crashing repeatedly and causing no end of grief!
I can pinpoint two events which may have caused this:
Yesterday evening I seem to have hit a combination of keyboard keys which has resulted in 'Python not configured' appearing at the top of the App explorer window. I can't see anyway to turn Python off.
I have also been trying to get Git to behave, and (away from Aptana) have been making changes to TortoiseGit and installing SmartGit.
Any ideas? (Specifically, can I turn off Python somehow to see if this helps?) | 0 | python,git,aptana | 2012-01-16T11:04:00.000 | 0 | 8,879,102 | i suggest you to click on:
HELP -> About Aptana
then: Installation Details (button on the bottom left)
in the new window you can remove software, or revert some installation/configuration with with the "installation history" section.
Hope this help you | 0 | 308 | false | 0 | 1 | Aptana has started crashing - possibly due to Python or Git | 8,897,929 |
Does doxygen not work properly on a Python script with a shebang?
I added one Python script with a shebang to my company's tool directory and ran doxygen.
It was not able to display the namespace (functions) at all.
Please share if you have faced a similar experience. | 0 | python,doxygen,shebang | 2012-01-17T05:40:00.000 | 0 | 8,890,199 | That sounds to me like a scripting problem and not like a conflict between the shebang and doxygen. | 0 | 221 | false | 0 | 1 | Shebang conflict with doxygen | 8,892,855
Does doxygen not work properly on a Python script with a shebang?
I added one Python script with a shebang to my company's tool directory and ran doxygen.
It was not able to display the namespace (functions) at all.
Please share if you have faced a similar experience. | 0 | python,doxygen,shebang | 2012-01-17T05:40:00.000 | 0 | 8,890,199 | Enable EXTRACT_ALL = YES in the config file and doxygen will generate the desired results.
But the question still remains: why does doxygen fail to generate the documentation file for Python with EXTRACT_ALL = NO?
Anyone with a better answer can help me here. | 0 | 221 | true | 0 | 1 | Shebang conflict with doxygen | 9,170,986
I would like to access the array with something like array[5000][440], meaning 5000 ms from the start and 440 Hz, and it would give me the value of the frequency's amplitude at that position.
I could not find something like that here, if there is, please point me to it. | 0 | python,audio,fft,amplitude | 2012-01-17T17:55:00.000 | 0 | 8,899,336 | You're operating under a couple of misconceptions.
You can't get the frequency of a wave at a particular point in time. You need to select a window of time, including many points before and after the point of interest. The more points you include, the more resolution you'll have in your frequency breakdown. You'll need to run some sort of windowing function on those points, then subject them to an FFT.
Once you have the results of the FFT, the numbers will correspond to frequencies, but it won't be a simple relationship. You don't have any control over the frequency corresponding to each output; that was already determined by the sampling frequency of your signal combined with the number of samples. I'm afraid I don't have the conversion formula at hand. Each frequency will have two components, a real and an imaginary, and the amplitude will be sqrt(r**2+i**2). | 0 | 521 | false | 0 | 1 | How do I make python load a big(2hours) wave-file and convert it's contents into a time-frequency array? | 8,903,087
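A hedged numpy sketch of the windowed-FFT step described above; the bin-to-frequency mapping (bin k corresponds to k * sample_rate / N) is exactly what np.fft.rfftfreq computes:

```python
import numpy as np

def spectrum_at(samples, sample_rate, t_ms, window_size=4096):
    # Window around the point of interest, then FFT the windowed chunk.
    start = int(t_ms / 1000.0 * sample_rate)
    chunk = samples[start:start + window_size] * np.hanning(window_size)
    bins = np.fft.rfft(chunk)
    freqs = np.fft.rfftfreq(window_size, d=1.0 / sample_rate)
    return freqs, np.abs(bins)  # abs() is sqrt(r**2 + i**2) per bin

# amplitude near 440 Hz at t = 5000 ms (assuming 44.1 kHz mono samples):
# freqs, amps = spectrum_at(samples, 44100, 5000)
# print(amps[np.argmin(np.abs(freqs - 440))])
```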
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I have a standard Apache2 installation on an Ubuntu server. The default settings use ScriptAlias to refer /cgi-bin/ to /usr/lib/cgi-bin/. If I place my Python CGI script in /usr/lib/cgi-bin/ it executes.
I created /var/www/cgi-bin/ with appropriate permissions, removed the ScriptAlias line, changed the Directory entry in the default file for the site, moved the CGI file to /var/www/cgi-bin/ and restarted Apache2, but I was not able to get the script to run. It was appearing as a text file in the browser instead of being executed. The HTML file calling the script refers to /cgi-bin/SCRIPT, so I left that unchanged. I tried variations on /cgi-bin and /var/www/cgi-bin in the config files without success. How can I get a Python CGI file to run from /var/www/cgi-bin? | 0 | python,apache2,cgi | 2012-01-18T13:17:00.000 | 1 | 8,910,770 | please make sure:
the file you are calling itself has enough permission
the file you are calling has an extension that is in the .htaccess file (see newtover's answer)
specify how you want to run the script in the first line of the file. If you are calling foo.pl, make sure #!/usr/bin/perl is in the first line. | 0 | 3,691 | false | 0 | 1 | Execute Python CGI from /cgi-bin/ folder | 8,911,061 |
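A minimal script for sanity-checking the setup (note the shebang on line one, the executable bit, and the blank line after the headers):

```python
#!/usr/bin/env python
# Save as /var/www/cgi-bin/test.py and chmod 755 it.
print("Content-Type: text/plain")
print("")
print("hello from /var/www/cgi-bin")
```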
1 | 1 | 0 | 1 | 1 | 0 | 1.2 | 0 | I've built python 2.7 debug with MSVC 2008 to debug a script that imports M2Crypto. When I run my script, python_d correctly creates a Python_Eggs cache, and copies both the __m2crypto.pyd and __m2crypto_d.pyd into it. But then it attempts to load the non-debug python module from the cache, and terminates because it contains no debug information. I've rebuilt both openssl and M2Crypto and made certain that no other copies exist on the build machine (a VM.) I've traced through python itself and cannot discover why it will not load the _d.pyd.
Any ideas why this is happening? | 0 | python,m2crypto | 2012-01-18T21:34:00.000 | 0 | 8,917,800 | First, the problem was that python attempted to load the non-debug version of __m2crypto.pyd, which failed, because it lacked dependent components. This caused python to terminate - not because the module was not found, but because one if its children was not found. This is a critical error for python, and whether this is a bug in python is for other minds to contemplate.
Using DEPENDENCY, I discovered that the openssl libraries were not installed into the python home folder. This was because the script that makes the M2Crypto distribution package has a "feature" which does not include these files. So the following resolved the issue:
Build openssl with debug
Modify the setup() call in M2Crypto\setup.py to include data_files=['ssleay32.dll','libeay32.dll']
Build M2Crypto with debug, using the openssl debug
Install M2Crypto.
Profit!
Afterwards, I was able to import M2Crypto into both python and python_d. | 0 | 394 | true | 0 | 1 | python_d 2.7 will not load __m2crypto_d.pyd | 8,945,972 |
1 | 1 | 1 | 5 | 0 | 0 | 1.2 | 0 | EDIT: Solved, thanks everyone!
What I would like to be able to do in simple terms is take user input from one programming language, convert it into another programming language and have it compiled automatically.
For example (simplified and not precisely what I want to do but along similar lines):
1) Write a python script, userData = raw_input("blah blah blah, example, example")
2) if userData == "blah blah blah, example, example", serialize to a text file called "example.cpp" and put in some predetermined data which is based on the user's input (written in C++ form, though represented as a string in python script). For simplification this predetermined data will be called predeterminedData.
3) The extent of predeterminedData will be essentially a cout << "this is a different message to before" << endl;
4) The compiler (g++/gcc) compiles this automatically and the overall program structure calls the newly created executable file.
If someone could help point me toward the topic/topics I should read up on to be able to achieve this - if it's possible - that'd be fantastic.
Edit: I've made a classic mistake I think. In an attempt to not be accused of asking other people to do my "homework" for me I've been too vague and consequently misleading. Thank you for the responses so far but perhaps now I should be more specific. It isn't particularly python nor c++ specific but I will explain beneath. I apologize for not being more explicit before.
What I actually want to achieve is quite simple. I want to use user input from one programming language (python, c++, java) and have it create a Lilypond script which will automatically compile and create a midi file.
So for example:
1) User is asked to enter alphabetically a series of notes: e.g. "C" then "E" then "F", so on and so on.
2) These "notes" are checked by a control loop statement and a string is created in the Lilypond script and serialized to a file which is compatible with its compiler (example.ly)
3) This file is automatically compiled by the Lilypond compiler and creates a midi file (example.midi)
4) Later in the program this example.midi can be called on and played back because of this creation process. It would not have existed prior to this creation. | 0 | c++,python | 2012-01-18T22:06:00.000 | 0 | 8,918,183 | To me it sounds like you just want to write a user interface for interactive creation of lilypond files.
I don't see what all this has to do with compilation. Your python script will need to write a file in lilypond notation and afterwards your script needs to call lilypond on that file (e.g. with os.system). You could even skip writing to a file and just pipe the output to stdin which lilypond can also read. | 0 | 161 | true | 0 | 1 | Automatic compiling between languages | 8,918,813 |
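A sketch of that write-then-compile flow (it assumes the lilypond binary is on PATH; a \midi block inside \score makes LilyPond emit the .midi file):

```python
import subprocess

notes = ['c', 'e', 'f']  # as collected from the user
source = '\\score {\n  { %s }\n  \\midi { }\n}\n' % ' '.join(n + "'" for n in notes)

with open('example.ly', 'w') as f:
    f.write(source)

subprocess.check_call(['lilypond', 'example.ly'])  # produces example.midi
```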
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am trying to build a set of git utilities with python. I am using subprocess.Popen to run the git binary. Right now I am just trying to find the best way to determine that there was an error in running a git command.
My question is whether or not git will always return a returncode of 0 on a successful git command and always return a returncode of non-zero on a unsuccessful call? I just want to make sure that checking the returncode is a safe way to detect an error so that I can exit the script if a git command was unsuccessful. | 0 | python,git,bash | 2012-01-21T15:58:00.000 | 1 | 8,954,383 | Yes, git (and any well-behaved *nix program) will always return 0 for success and non-zero for failure. This is the paradigm on GNU/Linux systems, and since Git was made by the same person who made Linux, you can bet it follows the convention. | 0 | 220 | true | 0 | 1 | Calling Git Binary From Python And Error Codes | 8,954,535 |
1 | 1 | 0 | 4 | 2 | 1 | 1.2 | 0 | I want to ask whether I should include the function that I am testing inside the unittest file (such that I will have one file, unittest.py), or whether I should just import it in the unittest file (I will have two files, unittest.py and function.py). I am seeing both methods when I read on the web; however, I find the first approach that I described redundant. | 0 | python,unit-testing,function | 2012-01-21T18:58:00.000 | 0 | 8,955,752 | Two separate files of course. The idea is the unit test should be non-intrusive and should sit in its own file, usually clearly put under a test directory and/or named test_*. I have never seen people put it in the same file unless it is the most trivial demo. | 0 | 42 | true | 0 | 1 | Should I include the function that I am testing inside the unittest file, or should I just import it in the unittest file? | 8,955,802
2 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I'm trying to improve the speed of an algorithm and, after looking at which operations are being called, I'm having difficulty pinning down exactly what's slowing things down. I'm wondering if Python's deepcopy() could possibly be the culprit or if I should look a little further into my own code. | 0 | python,complexity-theory,deep-copy | 2012-01-21T22:48:00.000 | 0 | 8,957,400 | The complexity of deepcopy() is dependent upon the size (number of elements/children) of the object being copied.
If your algorithm's inputs do not affect the size of the object(s) being copied, then you should consider the call to deepcopy() to be O(1) for the purposes of determining complexity, since each invocation's execution time is relatively static.
(If your algorithm's inputs do have an effect on the size of the object(s) being copied, you'll have to elaborate how. Then the complexity of the algorithm can be evaluated.) | 0 | 8,858 | false | 0 | 1 | What is the runtime complexity of Python's deepcopy()? | 8,957,592 |
2 | 3 | 0 | 1 | 4 | 1 | 0.066568 | 0 | I'm trying to improve the speed of an algorithm and, after looking at which operations are being called, I'm having difficulty pinning down exactly what's slowing things down. I'm wondering if Python's deepcopy() could possibly be the culprit or if I should look a little further into my own code. | 0 | python,complexity-theory,deep-copy | 2012-01-21T22:48:00.000 | 0 | 8,957,400 | What are you using deepcopy for? As the name suggests, deepcopy copies the object and all subobjects recursively, so it is going to take an amount of time proportional to the size of the object you are copying (with a bit of overhead to deal with circular references).
There isn't really any way to speed it up: if you are going to copy everything, you need to copy everything.
One question to ask is: do you need to copy everything, or can you just copy part of the structure? | 0 | 8,858 | false | 0 | 1 | What is the runtime complexity of Python's deepcopy()? | 8,957,598
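You can confirm the proportional-to-size behavior empirically; a quick sketch:

```python
# Time deepcopy on structures of growing size; it scales roughly linearly.
import copy
import timeit

for n in (1000, 10000, 100000):
    data = [[i] for i in range(n)]
    print(n, round(timeit.timeit(lambda: copy.deepcopy(data), number=10), 3))
```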
2 | 4 | 0 | 1 | 4 | 1 | 0.049958 | 0 | On a lab machine where I can't just go clobbering things, there appears to be more than one version of python installed.
If I python --version I see 2.7.1.
I've installed numpy via "apt-get install numpy" and it says it is installed, but when I try to import it it isn't found.
When I do a find on the machine for numpy I see it in the /usr/lib/python2.5/site-packages/numpy folder. I assume this is the problem... that apt-get put it in the 2.5 version instead of the 2.7.
How do I resolve this? Is there a way to tell apt-get which python I'm talking about when I do an install? Or do I abandon aptitude and use pip or something? | 0 | python,numpy,aptitude | 2012-01-22T21:12:00.000 | 0 | 8,964,736 | Apt/dpkg have a Debian way of managing multiple installed versions of Python (I believe it is called python-support). Any extra package, like numpy, that you install will automatically be built and available for all the versions of Python supported by that package AND installed by dpkg. Since numpy supports every Python, your info tells me that the only Debian python package on your system is 2.5, and the 2.7 in your PATH is probably in /usr/local. When you install the numpy package it doesn't know about the locally built 2.7. You can always easy_install.
The suggestion to use virtualenv is a good one. I have a production system I support using python 2.5, which has been dropped from debian unstable; virtualenv makes it possible to work with whatever version you need. Since Python is needed by so many tools, it's better to leave the system python at whatever Debian wants it to be. | 0 | 10,268 | false | 0 | 1 | Multiple versions of python when installing a package with aptitude | 8,966,563
2 | 4 | 0 | 0 | 4 | 1 | 0 | 0 | On a lab machine where I can't just go clobbering things, there appears to be more than one version of python installed.
If I python --version I see 2.7.1.
I've installed numpy via "apt-get install numpy" and it says it is installed, but when I try to import it it isn't found.
When I do a find on the machine for numpy I see it in the /usr/lib/python2.5/site-packages/numpy folder. I assume this is the problem... that apt-get put it in the 2.5 version instead of the 2.7.
How do I resolve this? Is there a way to tell apt-get which python I'm talking about when I do an install? Or do I abandon aptitude and use pip or something? | 0 | python,numpy,aptitude | 2012-01-22T21:12:00.000 | 0 | 8,964,736 | Debian allows for multiple Pythons to be installed (the python2.5 and python2.6 packages). A Python library like numpy in the package python-numpy can support multiple of these, but particular libraries installed through the package manager are not necessarily supported on all of these. You can use apt-cache show python-numpy | grep Python-Version to see which versions are supported. If 2.7 is not supported, you'll have to install from source or (e.g.) pip, easy_install, etc.
However, you may have a local installation of Python 2.7 (compiled and installed from sources outside of the repos). Your distro sounds a little out of date (on Linux Mint 12, only 2.6 and 2.7 are supported for numpy), so it's possible there aren't official packages for Python 2.7. If you do which python and it's in /usr/local or anywhere other than /usr/bin, then you've got a local installation and you will need to install the package using source or easy_install and friends.
That said, my opinion is that if you just need these libs for development, you should keep them in a sandbox (like virtualenv) in your home directory. That way you have better control over the exact version you have. | 0 | 10,268 | false | 0 | 1 | Multiple versions of python when installing a package with aptitude | 8,976,574 |
I'm setting up a pyramid application where access to resources can be shared across registered users.
I would also like to give access to non-members, using non-trivial links to files or directories.
While I see how to do this for registered members, I'm not sure how to do this with anonymous users. Do I need to create an unprotected view and perform security checks myself?
Maybe a better way would be to append access rights to users sessions using cookies?
Can route factories help me for this purpose? Any other way? | 0 | python,session,authorization,pyramid | 2012-01-23T11:47:00.000 | 0 | 8,971,115 | If you've figured out how to do this for authenticated users, it should be obvious how to do it for anonymous users as well. They will have the pyramid.security.Everyone principal, which you can use in your ACLs to assign various permissions.
Route factories will allow you to assign custom ACLs to individual routes. They simply override the default root factory on the Configurator. | 0 | 88 | true | 1 | 1 | url-enabled access to ressources with pyramid | 8,973,728 |
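For illustration, a sketch of a root factory whose ACL grants a permission to Everyone (the permission names here are hypothetical):

```python
from pyramid.security import Allow, Everyone, Authenticated

class Root(object):
    __acl__ = [
        (Allow, Everyone, 'view'),       # includes anonymous users
        (Allow, Authenticated, 'edit'),
    ]
    def __init__(self, request):
        self.request = request

# config = Configurator(root_factory=Root)
# views registered with permission='view' are then reachable anonymously
```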
1 | 1 | 0 | 1 | 4 | 0 | 1.2 | 0 | I am looking for a library to parse command-line parameters that would work identically in Java, C/C++, Python and (preferably) shell. By "identical" I mean (1) have exactly the same rules for parsing of the parameters in all three languages, (2) use the same configuration files or have similar API to specify the parameters, (3) have similar APIs to access the values of the parameters.
I've always used getopt in C and Apache CLI in Java but it would be nice to use the same specification for the parameters across multiple languages. | 0 | java,python,c,parsing,command-line | 2012-01-24T18:31:00.000 | 1 | 8,992,077 | getopt is also usable in Python and shell. Python has the argparse module, which is much easier to use (particularly for more complex argument parsing), but if you want consistency across all those languages, I don't know of any better option than getopt. If Java doesn't have a getopt implementation, you could possibly write one yourself without too much effort. | 0 | 393 | true | 1 | 1 | Parser for command line parameters in Java/C/C++/Python/shell | 8,993,917 |
I want that, if I am editing a PHP file, I should be able to press a key combination or click a menu item that'll launch the php-cli and run my current file. How do I do this in Notepad++?
Also I need this for Python. | 0 | php,python,editor,notepad++ | 2012-01-25T10:23:00.000 | 0 | 9,001,046 | You could check out the "Run" menu option. It allows you to bind key combinations to applications. | 0 | 3,513 | false | 0 | 1 | How can I integrate PHP/Python Interpreter to Notepad++ | 9,001,202 |
1 | 3 | 0 | 4 | 15 | 0 | 0.26052 | 0 | I'm using Pyramid framework and I want to access the IP address from which the request originated. I assume it's in the request object (passed to every view function) somewhere, but I can't find documentation which tells me where it is. | 0 | python,pyramid | 2012-01-25T18:12:00.000 | 0 | 9,007,887 | Or you can use request.environ['REMOTE_ADDR'] | 0 | 5,593 | false | 0 | 1 | Getting the request IP address with Pyramid | 9,010,633 |
1 | 4 | 0 | 2 | 3 | 1 | 0.099668 | 0 | I have (3) md5sums that I need to combine into a single hash. The new hash should be 32-characters, but is case-sensitive and can be any letter or number. What's the best way to do this in Python? | 0 | python,md5,hashlib | 2012-01-25T20:30:00.000 | 0 | 9,009,807 | The easiest way would be to combine the 3 sums into a single 96-character string and run an MD5 hash on that. | 0 | 1,664 | false | 0 | 1 | Combine (3) 32-char hex hashes into a single unique 32-char hash? | 9,009,930 |
In C/C++-like languages, closing a ZeroMQ socket explicitly is a must, which I understand. But in some higher-level languages, such as PHP and Python, which have a garbage-collection mechanism, do I need to close the sockets explicitly?
In PHP, there is no ZMQSocket::close(), and in Python, pyzmq's docs say socket.close() can be omitted since it will be closed automatically during garbage collection.
So my question is, do I need to manually close it or not?... | 0 | python,zeromq | 2012-01-26T14:47:00.000 | 0 | 9,019,873 | You don't. You might close or delete things explicitly in Python when:
Ordering becomes important, such as requiring the connection to be closed before you can proceed.
Your references to the objects will persist for a long time, and the resource will no longer be required after some time. This might happen if you're storing them in lists or as member variables. You should explicitly close the resource, or remove references to it when you are done.
Generally speaking it's pedantic and premature to even think about such things in Python. | 0 | 13,070 | false | 0 | 1 | Should I close zeromq socket explicitly in python? | 9,019,940 |
In C/C++-like languages, closing a ZeroMQ socket explicitly is a must, which I understand. But in some higher-level languages, such as PHP and Python, which have a garbage-collection mechanism, do I need to close the sockets explicitly?
In PHP, there is no ZMQSocket::close(), and in Python, pyzmq's docs say socket.close() can be omitted since it will be closed automatically during garbage collection.
So my question is, do I need to manually close it or not?... | 0 | python,zeromq | 2012-01-26T14:47:00.000 | 0 | 9,019,873 | It is always correct to close any I/O resources when you are done with them. The garbage collector will close them off eventually. It may close it immediately once the last reference goes out of scope. It may close it as your program is exiting. While you wait for it to do so, the resource remains open taking up memory, consuming file pointers, and eating up your system resources in general. For a small, short lived program this may not be a big issue, but if your software is long living or establishes a lot of connections, this will come back to hurt you.
The answer is: it depends. If your system is reliant on the socket getting closed, then you are safer closing them explicitly. If you are fine with the socket getting closed at some indeterminate future time, you can save yourself a little bit of coding time and simplify your program a bit by just letting the garbage collector handle it. | 0 | 13,070 | true | 0 | 1 | Should I close zeromq socket explicitly in python? | 9,399,409 |
2 | 3 | 0 | 1 | 0 | 0 | 1.2 | 1 | I am looking through the Tweepy API and not quite sure how to find the event to register for when a user either send or receives a new tweet. I looked into the Streaming API but it seems like that is only sampling the Twitter fire house and not really meant for looking at one indvidual user. What I am trying to do is have my program update whenever something happens to the user. Essentially what a user would see if they were in their account on the twitter homepage. So my question is: What is the method or event I should be looking for in the Tweepy API to make this happen? | 0 | python,twitter,twitter-oauth,tweepy | 2012-01-27T01:25:00.000 | 0 | 9,027,884 | I used the .filter function then filtered for the user I was looking for. | 0 | 183 | true | 0 | 1 | How to register an event for when a user has a new tweet? | 9,056,152 |
2 | 3 | 0 | 1 | 0 | 0 | 0.066568 | 1 | I am looking through the Tweepy API and not quite sure how to find the event to register for when a user either sends or receives a new tweet. I looked into the Streaming API but it seems like that is only sampling the Twitter firehose and not really meant for looking at one individual user. What I am trying to do is have my program update whenever something happens to the user. Essentially what a user would see if they were in their account on the Twitter homepage. So my question is: What is the method or event I should be looking for in the Tweepy API to make this happen? | 0 | python,twitter,twitter-oauth,tweepy | 2012-01-27T01:25:00.000 | 0 | 9,027,884 | I don't think there is any event-based pub-sub exposed by Twitter. You just have to do long polling. | 0 | 183 | false | 0 | 1 | How to register an event for when a user has a new tweet? | 9,028,060
4 | 8 | 0 | 1 | 6 | 1 | 0.024995 | 0 | I am trying very hard to develop a much deeper understanding of programming as a whole. I understand the textbook definition of "binary", but what I don't understand is exactly how it applies to my day to day programming?
The concept of "binary numbers" vs .. well... "regular" numbers, is completely lost on me despite my best attempts to research and understand the concept.
I am someone who originally taught myself to program by building stupid little adventure games in early DOS Basic and C, and now currently does most (er, all) of my work in PHP, JavaScript, Rails, and other "web" languages. I find that so much of this logic is abstracted out in these higher level languages that I ultimately feel I am missing many of the tools I need to continue progressing and writing better code.
If anyone could point me in the direction of a good, solid practical learning resource, or explain it here, it would be massively appreciated.
I'm not so much looking for the 'definition' (I've read the wikipedia page a few times now), but more some direction on how I can incorporate this new-found knowledge of exactly what binary numbers are into my day to day programming, if at all. I'm primarily writing in PHP these days, so references to that language specifically would be very helpful.
Edit: As pointed out.. binary is a representation of a number, not a different system altogether.. So to revise my question, what are the benefits (if any) of using binary representation of numbers rather than just... numbers. | 0 | php,python,binary,binary-data | 2012-01-27T22:57:00.000 | 0 | 9,041,185 | rather more of an experience rather than a solid answer:
actually, you don't actually need binary because it's pretty much abstracted in programming nowadays (depending on what you program). binary has more use in the systems design and networking.
some things my colleagues at school do in their majors:
processor instruction sets and operations (op codes)
networking and data transmission
hacking (especially memory "tampering". more of hex but still related)
memory allocation (in assembly, we use hex but sometimes binary)
You need to know how these "regular numbers" are represented and understood by the machine - hence all those "conversion lessons" like hex to binary, binary to octal, etc. Machines only read binary. | 0 | 1,423 | false | 0 | 1 | How do "binary" numbers relate to my everyday programming? | 9,041,341
4 | 8 | 0 | 0 | 6 | 1 | 0 | 0 | I am trying very hard to develop a much deeper understanding of programming as a whole. I understand the textbook definition of "binary", but what I don't understand is exactly how it applies to my day to day programming?
The concept of "binary numbers" vs .. well... "regular" numbers, is completely lost on me despite my best attempts to research and understand the concept.
I am someone who originally taught myself to program by building stupid little adventure games in early DOS Basic and C, and now currently does most (er, all) of my work in PHP, JavaScript, Rails, and other "web" languages. I find that so much of this logic is abstracted out in these higher level languages that I ultimately feel I am missing many of the tools I need to continue progressing and writing better code.
If anyone could point me in the direction of a good, solid practical learning resource, or explain it here, it would be massively appreciated.
I'm not so much looking for the 'definition' (I've read the wikipedia page a few times now), but more some direction on how I can incorporate this new-found knowledge of exactly what binary numbers are into my day to day programming, if at all. I'm primarily writing in PHP these days, so references to that language specifically would be very helpful.
Edit: As pointed out.. binary is a representation of a number, not a different system altogether.. So to revise my question, what are the benefits (if any) of using binary representation of numbers rather than just... numbers. | 0 | php,python,binary,binary-data | 2012-01-27T22:57:00.000 | 0 | 9,041,185 | As a web guy, you no doubt understand the importance of unicode. Unicode is represented in hexidecimal format when viewing character sets not supported by your system. Hexidecimal also appears in RGB values, and memory addresses. Hexideciaml is, among many things, a shorthand for writing out long binary characters.
Finally, binary numbers work as the basis of truthiness: 1 is true, while 0 is always false.
Go check out a book on digital fundementals, and try your hand at boolean logic. You'll never look at if a and not b or c the same way again! | 0 | 1,423 | false | 0 | 1 | How do "binary" numbers relate to my everyday programming? | 9,041,392 |
4 | 8 | 0 | 4 | 6 | 1 | 0.099668 | 0 | I am trying very hard to develop a much deeper understanding of programming as a whole. I understand the textbook definition of "binary", but what I don't understand is exactly how it applies to my day to day programming?
The concept of "binary numbers" vs .. well... "regular" numbers, is completely lost on me despite my best attempts to research and understand the concept.
I am someone who originally taught myself to program by building stupid little adventure games in early DOS Basic and C, and now currently does most (er, all) of my work in PHP, JavaScript, Rails, and other "web" languages. I find that so much of this logic is abstracted out in these higher level languages that I ultimately feel I am missing many of the tools I need to continue progressing and writing better code.
If anyone could point me in the direction of a good, solid practical learning resource, or explain it here, it would be massively appreciated.
I'm not so much looking for the 'definition' (I've read the wikipedia page a few times now), but more some direction on how I can incorporate this new-found knowledge of exactly what binary numbers are into my day to day programming, if at all. I'm primarily writing in PHP these days, so references to that language specifically would be very helpful.
Edit: As pointed out.. binary is a representation of a number, not a different system altogether.. So to revise my question, what are the benefits (if any) of using binary representation of numbers rather than just... numbers. | 0 | php,python,binary,binary-data | 2012-01-27T22:57:00.000 | 0 | 9,041,185 | Here is a brief history to help your understanding and I will get to your question at the end.
Binary is a little weird because we are so used to using a base 10 number system. This is because humans have 10 fingers; when they ran out, they had to use a stick, toe or something else to represent 10 fingers. This is not true for all cultures though; some hunter-gatherer populations (such as the Australian Aboriginal) used a base 5 number system (one hand) as producing large numbers was not necessary.
Anyway, the reason base 2 is important in computing is because a circuit can have two states, low voltage and high voltage; think of this like a switch (on and off). Place 8 of these switches together and you have 1 byte (8 bits). The best way to think of a bit is 1=on and 0=off which is exactly how it is represented in binary. You might then have something like this 10011100 where 1's are high volts and 0 are low volts. In early computers, physical switches were used which the the operator could turn on and off to create a program.
Nowadays, you will rarely need to use binary numbers in modern programming. The only exceptions I can think of are bitwise arithmetic, which is a very fast and efficient way of solving certain problems, or maybe some form of computer hacking. All I can suggest is to learn the basics of it but don't worry about actually using it in everyday programming. | 0 | 1,423 | false | 0 | 1 | How do "binary" numbers relate to my everyday programming? | 9,041,557
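The bitwise case mentioned above is the one place the binary representation really shows through; for example, flag handling:

```python
# Bit flags: set, test and clear individual binary digits.
READ, WRITE, EXEC = 0b100, 0b010, 0b001

perms = READ | WRITE        # set two flags
print(bin(perms))           # 0b110
print(bool(perms & EXEC))   # False: flag not set
perms &= ~WRITE             # clear a flag
print(bin(perms))           # 0b100
```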
4 | 8 | 0 | 2 | 6 | 1 | 0.049958 | 0 | I am trying very hard to develop a much deeper understanding of programming as a whole. I understand the textbook definition of "binary", but what I don't understand is exactly how it applies to my day to day programming?
The concept of "binary numbers" vs .. well... "regular" numbers, is completely lost on me despite my best attempts to research and understand the concept.
I am someone who originally taught myself to program by building stupid little adventure games in early DOS Basic and C, and now currently does most (er, all) of my work in PHP, JavaScript, Rails, and other "web" languages. I find that so much of this logic is abstracted out in these higher level languages that I ultimately feel I am missing many of the tools I need to continue progressing and writing better code.
If anyone could point me in the direction of a good, solid practical learning resource, or explain it here, it would be massively appreciated.
I'm not so much looking for the 'definition' (I've read the wikipedia page a few times now), but more some direction on how I can incorporate this new-found knowledge of exactly what binary numbers are into my day to day programming, if at all. I'm primarily writing in PHP these days, so references to that language specifically would be very helpful.
Edit: As pointed out.. binary is a representation of a number, not a different system altogether.. So to revise my question, what are the benefits (if any) of using binary representation of numbers rather than just... numbers. | 0 | php,python,binary,binary-data | 2012-01-27T22:57:00.000 | 0 | 9,041,185 | To me, one of the biggest impacts of a binary representation of numbers is the difference between floating point values and our "ordinary" (base-10 or decimal) notion of fractions, decimals, and real numbers.
The vast majority of fractions cannot be exactly represented in binary. Something like 0.4 seems like it's not a hard number to represent; it's only got one place after the decimal, it's the same as two fifths or 40%, what's so tough? But most programming environments use binary floating point, and cannot represent this number exactly! Even if the computer displays 0.4, the actual value used by the computer is not exactly 0.4. So you get all kinds of unintuitive behavior when it comes to rounding and arithmetic.
Note that this "problem" is not unique to binary. For example, using our own base-10 decimal notation, how do we represent one third? Well, we can't do it exactly. 0.333 is not exactly the same as one third. 0.333333333333 is not exactly one third either. We can get pretty close, and the more digits you let us use, the closer we can get. But we can never, ever be exactly right, because it would require an infinite number of digits. This is fundamentally what's happening when binary floating point does something we don't expect: The computer doesn't have an infinite number of binary digits (bits) to represent our number, and so it can't get it exactly right, but gives us the closest thing it can. | 0 | 1,423 | false | 0 | 1 | How do "binary" numbers relate to my everyday programming? | 9,041,603 |
3 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I have the following requirements (from the client) for zipping a number of files.
If the zip file created is less than 2**31-1 ~2GB use compression to create it (use zipfile.ZIP_DEFLATED), otherwise do not compress it (use zipfile.ZIP_STORED).
The current solution is to compress the file without zip64 and catching the zipfile.LargeZipFile exception to then create the non-compressed version.
My question is whether or not it would be worthwhile to attempt to calculate (approximately) whether or not the zip file will exceed the zip64 size without actually processing all the files, and how best to go about it? The process for zipping such large amounts of data is slow, and minimizing the duplicate compression processing might speed it up a bit.
Edit: I would upvote both solutions, as I think I can generate a useful heuristic from a combination of max and min file sizes and compression ratios. Unfortunately at this time, StackOverflow prevents me from upvoting anything (until I have a reputation higher than noob). Thanks for the good suggestions. | 0 | python,zip | 2012-01-28T01:09:00.000 | 0 | 9,042,086 | A heuristic approach will always involve some false positives and some false negatives.
The eventual size of the zipped file will depend on a number of factors, some of which are not knowable without running the compression process itself.
Zip64 allows you to use many different compression formats, such as bzip2, LZMA, etc.
Even the compression format may do the compression differently depending on the data to be compressed. For example, bzip2 can use Burrows-Wheeler, run length encoding and Huffman among others. The eventual size of the file will then depend on the statistical properties of the data being compressed.
Take Huffman, for instance; the size of the symbol table depends on how randomly-distributed the content of the file is.
One can go on and try to profile different types of data, serialized binary, text, images etc. and each will have a different normal distribution of final zipped size.
If you really need to save time by doing the process only once, apart from building a very large database and using a rule-based expert system or one based on Bayes' Theorem, there is no real 100% approach to this problem.
You could also try sampling blocks of the file at random intervals and compressing this sample, then linearly interpolating based on the size of the file. | 0 | 1,620 | false | 0 | 1 | Calculate (approximately) if zip64 extensions are required without relying on exceptions? | 9,042,877 |
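A sketch of that sampling idea, using zlib (the same DEFLATE algorithm zipfile uses) on a few random blocks and extrapolating linearly:

```python
import os
import random
import zlib

def estimated_compressed_size(path, samples=16, block=1 << 20):
    size = os.path.getsize(path)
    if size <= block:
        with open(path, 'rb') as f:
            return len(zlib.compress(f.read()))
    ratios = []
    with open(path, 'rb') as f:
        for _ in range(samples):
            f.seek(random.randrange(size - block))
            data = f.read(block)
            ratios.append(len(zlib.compress(data)) / float(len(data)))
    return int(size * sum(ratios) / len(ratios))  # rough linear estimate
```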
3 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I have the following requirements (from the client) for zipping a number of files.
If the zip file created is less than 2**31-1 ~2GB use compression to create it (use zipfile.ZIP_DEFLATED), otherwise do not compress it (use zipfile.ZIP_STORED).
The current solution is to compress the file without zip64 and catching the zipfile.LargeZipFile exception to then create the non-compressed version.
My question is whether or not it would be worthwhile to attempt to calculate (approximately) whether or not the zip file will exceed the zip64 size without actually processing all the files, and how best to go about it? The process for zipping such large amounts of data is slow, and minimizing the duplicate compression processing might speed it up a bit.
Edit: I would upvote both solutions, as I think I can generate a useful heuristic from a combination of max and min file sizes and compression ratios. Unfortunately at this time, StackOverflow prevents me from upvoting anything (until I have a reputation higher than noob). Thanks for the good suggestions. | 0 | python,zip | 2012-01-28T01:09:00.000 | 0 | 9,042,086 | I can only think of two ways, one simple but requires manual tuning, and the other may not provide enough benefit to justify the complexity.
Define a file size at which you just skip the zip attempt, and tune it to your satisfaction by hand.
Keep a record of the last N file sizes between the smallest failure to zip ever observed and the largest successful zip ever observed. Decide on an acceptable probability of an incorrect choice resulting in a file that should be zipped not being zipped (say 5%). Set your "don't bother trying to zip" threshold such that it would have resulted in that percentage of files being erroneously left unzipped.
If you absolutely can never miss an opportunity to zip a file that should have been zipped, then you've already got the solution. | 0 | 1,620 | false | 0 | 1 | Calculate (approximately) if zip64 extensions are required without relying on exceptions? | 9,042,227
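For reference, a sketch of the try-then-fall-back approach the question already uses; a real version should write to a temporary file first, since the exception can fire mid-write:

    import zipfile

    def build_archive(zip_path, paths):
        try:
            with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED, allowZip64=False) as zf:
                for path in paths:
                    zf.write(path)
        except zipfile.LargeZipFile:
            # Exceeded the plain-zip limits: rebuild the archive uncompressed.
            with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_STORED, allowZip64=True) as zf:
                for path in paths:
                    zf.write(path)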
3 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I have the following requirements (from the client) for zipping a number of files.
If the zip file created is less than 2**31-1 ~2GB use compression to create it (use zipfile.ZIP_DEFLATED), otherwise do not compress it (use zipfile.ZIP_STORED).
The current solution is to compress the file without zip64 and catching the zipfile.LargeZipFile exception to then create the non-compressed version.
My question is whether or not it would be worthwhile to attempt to calculate (approximately) whether or not the zip file will exceed the zip64 size without actually processing all the files, and how best to go about it? The process for zipping such large amounts of data is slow, and minimizing the duplicate compression processing might speed it up a bit.
Edit: I would upvote both solutions, as I think I can generate a useful heuristic from a combination of max and min file sizes and compression ratios. Unfortunately at this time, StackOverflow prevents me from upvoting anything (until I have a reputation higher than noob). Thanks for the good suggestions. | 0 | python,zip | 2012-01-28T01:09:00.000 | 0 | 9,042,086 | The only way I know of to estimate the zip file size is to look at the compression ratios for previously compressed files of a similar nature. | 0 | 1,620 | false | 0 | 1 | Calculate (approximately) if zip64 extensions are required without relying on exceptions? | 9,042,092 |
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 1 | I am looking for an existing library or code samples, to extract the relevant parts from a mime message structure in order to perform analysis on the textual content of those parts.
I will explain:
I am writing a library (in Python) that is part of a project that needs to iterate over a very large number of email messages through IMAP. For each message, it needs to determine which MIME parts it will need in order to analyze the textual content of the message with the least amount of parsing (e.g. prefer text/plain over text/html or rich text) and without duplicates (i.e. if text/plain exists, ignore the matching text/html). It also needs to address nested parts (text attachments, forwarded messages, etc.) and all this without downloading the entire message body (which takes too much time and bandwidth). The end goal is later to retrieve only those parts in order to perform some statistical and pattern analysis on the text content of those messages (excluding any markup, metadata, binary data, etc.).
The libraries and examples I've seen, require the full message body in order to assemble the message structure and understand the content of the message. I am trying to achieve this using the response from the IMAP FETCH command with the BODYSTRUCTURE data item.
BODYSTRUCTURE should contain enough information to achieve my goal, but although the structure and returned data are officially documented in the relevant RFCs (3501, 2822, 2045), the amount of nesting, combinations and various quirks all add up to make the task very tedious and error-prone.
Does anyone know any libraries that can help to achieve this or any code samples (preferably in Python but any language will do)? | 0 | python,email,imap,mime | 2012-01-28T13:32:00.000 | 0 | 9,045,626 | Answering my own question for the sake of completeness and to close this question.
I couldn't find any existing library that answers the requirements. I ended up writing my own code to fetch BODYSTRUCTURE tree, parse it and store it in an internal structure. This gives me the control I need to decide which exact parts of the message I need to actually download and take into account various cases like attachments, forwards, redundant parts (plain text vs html) etc. | 0 | 1,090 | true | 0 | 1 | MIME message structure parsing and analysis | 13,953,238 |
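For readers landing here, a minimal imaplib sketch of the starting point (server and credentials are placeholders; parsing the parenthesised BODYSTRUCTURE response into a tree is the tedious part described above):

    import imaplib

    conn = imaplib.IMAP4_SSL('imap.example.com')
    conn.login('user', 'password')
    conn.select('INBOX', readonly=True)

    typ, data = conn.fetch('1', '(BODYSTRUCTURE)')
    print(data)  # raw nested-parentheses structure per RFC 3501

    # Once a part is chosen, it can be pulled without the full body:
    # typ, part = conn.fetch('1', '(BODY.PEEK[1])')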
1 | 4 | 0 | 1 | 4 | 0 | 1.2 | 0 | I am retrieving emails from my email server using IMAPClient (Python), by checking for emails flagged with "\Recent". After the email has been read the email server automatically sets the email flag to "\Seen".
What I want to do is reset the email flag to "\Recent" so when I check the email directly on the server is still appears as unread.
What I'm finding is that IMAPClient is throwing an exception when I try to add the "\Recent" flag to an email using IMAPClient's "set_flag" definition. Adding any other flag works fine.
The IMAPClient documentation says the Recent flag is read-only, but I was wondering if there is still a way to mark an email as un-read.
From my understanding email software like Thunderbird allows you to set emails as un-read so I assume there must be a way to do it.
Thanks. | 0 | python,imaplib | 2012-01-30T03:00:00.000 | 0 | 9,058,865 | Disclaimer: I'm familiar with IMAP but not Python-IMAPClient specifically.
Normally the 'seen' flag determines whether an email summary will be shown in normal or bold type.
You should be able to reset the seen flag. However, the recent flag may not be under your direct control. The IMAP server will set it when it notices new messages arriving. | 0 | 5,938 | true | 0 | 1 | How to change email flag to Recent using IMAPClient | 9,059,000
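What mail clients actually toggle for read/unread is the \Seen flag; a hedged IMAPClient sketch (connection details are placeholders):

    from imapclient import IMAPClient, SEEN

    server = IMAPClient('imap.example.com', ssl=True)
    server.login('user', 'password')
    server.select_folder('INBOX')

    uids = server.search(['UNSEEN'])
    # Fetching with BODY.PEEK[] avoids setting \Seen in the first place.
    msgs = server.fetch(uids, ['BODY.PEEK[]'])

    # To mark messages unread afterwards, clear \Seen (\Recent stays read-only):
    server.remove_flags(uids, [SEEN])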
2 | 2 | 0 | 2 | 0 | 1 | 1.2 | 0 | I have a GPS module connected through a serial port (USB -> virtual COM port). Measurement software is using this port, so I can't access the data with other software. I would like to create two virtual COM ports and share this data through them. Is it possible using Python? Is there any open-source example written in Python? | 0 | python,serial-port | 2012-01-30T14:56:00.000 | 0 | 9,065,831 | I don't think you can do that if you cannot modify the sources of the measurement software.
Serial port protocols are written as "point to point" protocols, so there's no general way to multiplex them. You can write a program that shares the access to the GPS module (handling it exclusively and exposing an API to multiple programs), but every program that wanted to use the GPS module should be written to talk to your API and not directly to the serial port - and in this case it can be done only if you can change the measurement software.
Notice that it's probably not impossible to implement your "virtual port" solution, but it would be an ad-hoc hack (it would work just with that specific protocol) and it may be quite complicated: you would need to emulate two GPS modules and multiplex the requests to the real GPS module; depending on how it works (e.g. if it has a "complicated" persistent state) it may be simple or very complicated. But surely Python wouldn't be enough; to emulate serial ports you have to go into kernel mode. | 0 | 732 | true | 0 | 1 | Share serial port on Windows using python | 9,065,933
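If all the consumers are programs you control (the answer's caveat stands: the unmodifiable measurement software still needs a real COM port), a sketch of the sharing-program idea using the third-party pyserial package, with port, baud rate and TCP port as placeholders:

    import serial
    import socket

    gps = serial.Serial('COM5', 4800, timeout=1)  # sole owner of the port

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', 2947))
    srv.listen(5)
    srv.setblocking(False)
    clients = []

    while True:
        try:
            conn, _ = srv.accept()           # pick up new local consumers
            clients.append(conn)
        except socket.error:
            pass                             # no new client this round
        line = gps.readline()                # one NMEA sentence (or b'' on timeout)
        for c in clients[:]:
            try:
                c.sendall(line)
            except socket.error:
                clients.remove(c)            # consumer went away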
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I have a GPS module connected through serial port(USB->Virtual COM port). A measurement software is using this port, so with other software I can't access to the data. I would like to create two virtual COM port and share this data through that. Is it possible using Python? Is there any opensource example written in Python? | 0 | python,serial-port | 2012-01-30T14:56:00.000 | 0 | 9,065,831 | Do you need two-way communication, or just reading? You could build or buy hardware to physically split the Rx data line so you could use two COM ports, each of which would read the same data. You could do this with Tx data as well, but you would have to be careful about trashing the data if both ports tried to write at the same time. | 0 | 732 | false | 0 | 1 | Share serial port on Windows using python | 9,068,214 |
1 | 1 | 0 | 2 | 1 | 1 | 0.379949 | 0 | I need to add logging to a milter that I wrote a few months back. It is occasionally rejecting some messages, but I'm not sure why. I know how to add logging to a Python script from the HowTo, but is it necessary for me to add log output commands at every point in my script, or is there a way Python automatically handles that?
Basically, I don't know where in the script it fails and don't want to add the overhead of 60 logging lines. I'm looking for the simplest method of doing this. | 0 | python,logging | 2012-01-30T15:04:00.000 | 0 | 9,065,936 | If you have no idea where it fails you could run a debugging session with input that you know causes the error, and step through the code if that is an option.
Another pretty obvious option is to log all exceptions at the entrance of your script and then drill down from there, but I honestly don't think that there is a way that will find the right places to log for you - if that were the case, the program could just as well track the bug down by itself. | 0 | 540 | false | 0 | 1 | How to automate python logging | 9,065,992
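A sketch of that entry-point logging (the log path is illustrative); logging.exception records the full traceback, which usually reveals where the milter rejects:

    import logging

    logging.basicConfig(filename='/var/log/mymilter.log', level=logging.DEBUG,
                        format='%(asctime)s %(levelname)s %(funcName)s: %(message)s')
    log = logging.getLogger(__name__)

    def main():
        pass  # existing milter logic goes here

    if __name__ == '__main__':
        try:
            main()
        except Exception:
            log.exception('unhandled error')  # writes the traceback to the log
            raise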
1 | 1 | 0 | 1 | 1 | 1 | 0.197375 | 0 | I just created an EGG file on a python project and it created a zip file which contained the source as well as the python compiled file(s). Does it make sense to ship the source as part of the EGG file, if so how can I avoid it during egg file creation? | 0 | python,python-3.x | 2012-01-31T11:46:00.000 | 0 | 9,078,985 | (a) It makes sense
(b) If you really want to avoid it, then just delete the .py files from the egg.
(c) I bet one can reconstruct the full source (less comments) from .pyc files. | 0 | 121 | false | 0 | 1 | Does it make sense to ship python source files as part of the egg file | 9,080,000 |
1 | 3 | 0 | 0 | 9 | 1 | 0 | 0 | I am currently working on a project using python to implement p2p communication between two (or more) computers. Although I am pretty proficient with python, I am by no means an expert; programming and encryption are by no means my profession, simply a hobby. However, in working on this project I have been attempting to learn more about encryption as well as network programming.
As of right now I have written a pretty powerful class that communicates well over a network and I am trying to improve it by implementing RSA to encrypt the connections between peers on the network; this is where I've run into some difficulty.
I have previously used pycrypto to do some basic encryption/decryption in python and am thus-far quite comfortable with all of the tools involved -- including the necessary public-key ciphers. Moreover, I am also aware that pycrypto has some shortcomings, in the fact that it only implements the bare-bones, low level encryption/decryption algorithms needed to implement RSA and does not implement a full protocol for public-key encryption. I also know that pycrypto contains some other useful tools such as an AllOrNothing transform which can be used for padding the communication, etc. However, my question is: can anyone recommend any online articles, books, blog posts, projects, etc. which can help me in my quest to implement an effective RSA protocol?
Lastly, I understand that this is a touchy subject with cryptologists in that amateur-implemented protocols usually mean less security in the program. As I noted above, this project is a mere learning experience; if I was completing this project professionally I would surely use M2Crypto or some other professionally-implemented, secure protocol -- i.e. SSL/TLS. Alas, I am merely trying to learn more about encryption by implementing my own model of a proven protocol to create a secure connection between two peers.
Thanks,
Kevin | 0 | python,encryption,rsa,pycrypto | 2012-02-01T08:50:00.000 | 0 | 9,093,046 | pycrypto has some shortcomings, in the fact that it only implements the bare-bones, low level encryption/decryption algorithms needed to implement RSA and does not implement a full protocol for public-key encryption.
The current version of PyCrypto (2.6) does support all major RSA protocols for signature and encryption, namely those specified in PKCS#1 (v1.5, PSS, OAEP). | 0 | 6,921 | false | 0 | 1 | Implementing full RSA in Python | 13,454,184
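A minimal PyCrypto 2.6 sketch of the OAEP protocol mentioned above; for anything larger than one RSA block, the usual pattern is hybrid encryption (RSA-wrapping a symmetric key), not raw RSA over the payload:

    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_OAEP

    key = RSA.generate(2048)                 # in practice, load persisted keys
    cipher = PKCS1_OAEP.new(key.publickey())
    ct = cipher.encrypt(b'short secret')     # OAEP caps the plaintext size

    plain = PKCS1_OAEP.new(key).decrypt(ct)
    assert plain == b'short secret'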
1 | 4 | 0 | 56 | 50 | 1 | 1.2 | 0 | I recently came across the dataType called bytearray in python. Could someone provide scenarios where bytearrays are required? | 0 | python,types | 2012-02-01T16:09:00.000 | 0 | 9,099,145 | A bytearray is very similar to a regular python string (str in python2.x, bytes in python3) but with an important difference, whereas strings are immutable, bytearrays are mutable, a bit like a list of single character strings.
This is useful because some applications use byte sequences in ways that perform poorly with immutable strings. When you are making lots of little changes in the middle of large chunks of memory, as in a database engine, or image library, strings perform quite poorly; since you have to make a copy of the whole (possibly large) string. bytearrays have the advantage of making it possible to make that kind of change without making a copy of the memory first.
But this particular case is actually more the exception, rather than the rule. Most uses involve comparing strings, or string formatting. For the latter, there's usually a copy anyway, so a mutable type would offer no advantage, and for the former, since immutable strings cannot change, you can calculate a hash of the string and compare that as a shortcut to comparing each byte in order, which is almost always a big win; and so it's the immutable type (str or bytes) that is the default, and bytearray is the exception for when you need its special features. | 0 | 36,917 | true | 0 | 1 | Where are python bytearrays used? | 9,099,337
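A tiny illustration of the difference:

    buf = bytearray(b'hello world')
    buf[0:5] = b'HELLO'          # edited in place, no copy of the buffer
    buf.extend(b'!')
    print(buf)                   # bytearray(b'HELLO world!')

    s = b'hello world'
    s = b'HELLO' + s[5:]         # immutable bytes: every edit builds a new object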
2 | 2 | 0 | 13 | 22 | 1 | 1.2 | 0 | We have numerous python classes that do not seem to need __init__, initialising them empty is either perfectly acceptable or even preferable. PyLint seems to think this is a bad thing. Am I missing some insight into why having no __init__ is a Bad Smell? Or should I just suppress those warnings and get over it? | 0 | python,pylint | 2012-02-01T17:46:00.000 | 0 | 9,100,616 | What are you using these classes for?
If they are just a grouping of functions that do not need to maintain any state, there is no need for an __init__() but it would make more sense to just move all of those functions into their own module.
If they do maintain a state (they have instance variables) then you should probably have an __init__() so that those variables can be initialized. Even if you never provide values for them when the class is created, it is generally a good idea to have them defined so that your method calls are not referencing instance variables that may or may not exist.
That being said, if you don't need an __init__(), feel free to ignore that warning.
edit: Based on your comment, it seems like you are fine with the AttributeError you will get on referencing variables before initialization. That is a perfectly fine way to program your classes so in that case ignoring the warning from PyLint is reasonable. | 0 | 9,818 | true | 0 | 1 | Why does PyLint warn about no __init__? | 9,100,718 |
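If suppressing is the choice, the message can be silenced per class; the message id is assumed to be W0232/no-init here, which may differ between pylint versions:

    class Config(object):
        # pylint: disable=W0232  -- the "no-init" message; deliberately no __init__
        host = None
        port = None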
2 | 2 | 0 | 2 | 22 | 1 | 0.197375 | 0 | We have numerous python classes that do not seem to need __init__, initialising them empty is either perfectly acceptable or even preferable. PyLint seems to think this is a bad thing. Am I missing some insight into why having no __init__ is a Bad Smell? Or should I just suppress those warnings and get over it? | 0 | python,pylint | 2012-02-01T17:46:00.000 | 0 | 9,100,616 | Usually you will at least use the __init__() method to initialize instance variables. If you are not doing this, then by all means turn off that warning. | 0 | 9,818 | false | 0 | 1 | Why does PyLint warn about no __init__? | 9,100,640 |
2 | 4 | 0 | 3 | 2 | 1 | 0.148885 | 1 | For imported module, is it possible to get the importing module (name)? I'm wondering if inspect can achieve it or not~ | 0 | python | 2012-02-02T02:04:00.000 | 0 | 9,106,166 | Even if you got it to work, this is probably less useful than you think since subsequent imports only copy the existing reference instead of executing the module again. | 0 | 106 | false | 0 | 1 | Is it possible to get "importing module" in "imported module" in Python? | 9,106,241 |
2 | 4 | 0 | 3 | 2 | 1 | 0.148885 | 1 | For imported module, is it possible to get the importing module (name)? I'm wondering if inspect can achieve it or not~ | 0 | python | 2012-02-02T02:04:00.000 | 0 | 9,106,166 | It sounds like you solved your own problem: use the inspect module. I'd traverse up the stack until I found a frame where the current function was not __import__. But I bet if you told people why you want to do this, they'd tell you not to. | 0 | 106 | false | 0 | 1 | Is it possible to get "importing module" in "imported module" in Python? | 9,106,211 |
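A best-effort sketch of that stack walk; as both answers warn, it only sees the first import, and it is fragile by nature:

    import inspect

    def importing_module_name():
        # Walk up the call stack and report the first module that is not us.
        for frame_info in inspect.stack()[1:]:
            mod = inspect.getmodule(frame_info[0])
            if mod is not None and mod.__name__ != __name__:
                return mod.__name__
        return None

    # Call importing_module_name() at the top level of the imported module.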
1 | 2 | 0 | 8 | 4 | 0 | 1 | 0 | In many different code environments' official documentation I see UTF-8 expressed either as upper- or lower- case, and also with and without the dash. Are there any places where one or the other is important to use?
Some places where these strings are found include:
The PHP manual in reference to header() arguments (HTTP headers)
The PHP manual in reference to PHP function arguments
The PHP manual in reference to internal configuration
The MySQL manual in reference to configuration
Python 2 code encoding declaration
Bash locale configuration
HTML meta tags
XML doctypes | 0 | php,python,mysql,html,utf-8 | 2012-02-02T17:56:00.000 | 0 | 9,117,378 | This is indeed wildly different. One place will accept only one form; the other place will only accept the other.
Listing here which is correct in which situation is not a good idea - it would be a huge and pointless open-ended list. Simply always look up in the respective documentation which form(s) is/are accepted for the specific situation. | 0 | 699 | false | 0 | 1 | Are there any places where utf8 vs. utf-8 vs. UTF8 vs. UTF-8 makes a difference? | 9,117,402 |
3 | 7 | 0 | 1 | 3 | 0 | 0.028564 | 1 | I have a bunch of mp3 files that are pretty old and don't carry any copyright. Yet, the place I got them from has filled the copyright tags with its own website URL.
I was wondering if there's an easy way to remove these tags programmatically? There's a Winamp add-on that allows me to do this for each song, but that's not very feasible.
Edit: Is copyright part of the ID3 tags?
Thanks,
-Roozbeh | 0 | php,python,id3 | 2012-02-03T08:31:00.000 | 0 | 9,125,733 | You can just use VLC player. Click on Tools->Media Information | 0 | 45,647 | false | 0 | 1 | How can I remove the copyright tag from ID3 of mp3s in python or php? | 19,575,869 |
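To answer the edit: yes, copyright lives in ID3v2 frames (TCOP for the text, WCOP for a URL). The VLC/Winamp routes are manual; for bulk processing in Python, a sketch using the third-party mutagen library (assumed installed, files assumed in the current directory):

    import glob
    from mutagen.id3 import ID3, ID3NoHeaderError

    for path in glob.glob('*.mp3'):
        try:
            tags = ID3(path)
        except ID3NoHeaderError:
            continue             # file has no ID3 tag at all
        tags.delall('TCOP')      # copyright message frame
        tags.delall('WCOP')      # copyright/legal URL frame
        tags.save()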
3 | 7 | 0 | -1 | 3 | 0 | -0.028564 | 1 | I have a bunch of mp3 files that are pretty old and don't carry any copyright. Yet, the place I got them from has filled the copyright tags with its own website URL.
I was wondering if there's an easy way to remove these tags programmatically? There's a Winamp add-on that allows me to do this for each song, but that's not very feasible.
Edit: Is copyright part of the ID3 tags?
Thanks,
-Roozbeh | 0 | php,python,id3 | 2012-02-03T08:31:00.000 | 0 | 9,125,733 | No need for any PHP code.
Just reproduce the mp3 file, i.e. either burn & rip it or cut the size/time, making a new file where you can specify your own multitude of options. | 0 | 45,647 | false | 0 | 1 | How can I remove the copyright tag from ID3 of mp3s in python or php? | 39,645,756
3 | 7 | 0 | 0 | 3 | 0 | 0 | 1 | I have a bunch of mp3 files that are pretty old and don't carry any copyright. Yet, the place I got them from has filled the copyright tags with its own website URL.
I was wondering if there's an easy way to remove these tags programmatically? There's a Winamp add-on that allows me to do this for each song, but that's not very feasible.
Edit: Is copyright part of the ID3 tags?
Thanks,
-Roozbeh | 0 | php,python,id3 | 2012-02-03T08:31:00.000 | 0 | 9,125,733 | Yes, this works!
Just download the latest version of VLC media player. Open the mp3 file in it.
Right click on file > choose 'information' > edit publisher & copyright information there > click 'Save Metadata' below.
And you're done. :) | 0 | 45,647 | false | 0 | 1 | How can I remove the copyright tag from ID3 of mp3s in python or php? | 26,053,995
1 | 4 | 0 | -1 | 12 | 0 | -0.049958 | 1 | I am using Google's Oauth 2.0 to get the user's access_token, but I don't know how to use it with imaplib to access the inbox. | 0 | python,gmail,oauth-2.0,gmail-imap,imaplib | 2012-02-03T19:38:00.000 | 0 | 9,134,491 | IMAP does not support accessing the inbox without a password -> so imaplib doesn't | 0 | 6,657 | false | 0 | 1 | Access Gmail Imap with OAuth 2.0 Access token | 11,414,012
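For what it's worth, Gmail does expose a SASL XOAUTH2 mechanism that imaplib can drive with an access token; a hedged sketch (token acquisition not shown, and on Python 3 the callback should return bytes):

    import imaplib

    user = 'someone@gmail.com'        # illustrative
    access_token = '...'              # the OAuth 2.0 access token you obtained

    auth_string = 'user=%s\1auth=Bearer %s\1\1' % (user, access_token)

    conn = imaplib.IMAP4_SSL('imap.gmail.com')
    conn.authenticate('XOAUTH2', lambda challenge: auth_string)
    conn.select('INBOX')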
1 | 2 | 0 | 5 | 6 | 1 | 0.462117 | 0 | I have one testing module that I want to use for Android testing. I have the files but there is no installation file for it, so I added the module to the PATH variable, but even then it doesn't work when I try to import it.
Is there any way to make it work? Do I have to paste the files into the Python folder only (and what is that location)?
On Windows, I used to paste all the files into the Python folder and everything worked perfectly fine. Here on Ubuntu I'm not able to find the location, so I added it to PATH.
Any way out!
Any help is appreciated.
Cheers
Some details: Python version: 2.7.2, Ubuntu 11.10 OS, Python module is in file/folder format with no "setup.py" file to install, Location of module already in PATH variable, Everything else in Python is working beside that module, same worked in Windows XP with Python 2.7.2 after copy pasting. | 0 | python | 2012-02-04T19:00:00.000 | 0 | 9,143,570 | You can add an __init__.py file without any content to the directory which you want to import.
The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later (deeper) on the module search path. | 0 | 14,142 | false | 0 | 1 | Python module not working (not able to import) even after including it in PATH variable | 42,944,983
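A sketch of the layout and the import path fix; note that PATH is for executables, while Python imports are governed by sys.path/PYTHONPATH (directory names are illustrative):

    # /home/me/libs/mytestmod/__init__.py   <- empty file marks the package
    # /home/me/libs/mytestmod/core.py       <- the module's actual code

    import sys
    sys.path.append('/home/me/libs')        # the directory *containing* mytestmod

    import mytestmod.core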
1 | 2 | 0 | 2 | 2 | 0 | 1.2 | 1 | Given a protobuf serialization is it possible to get a list of all tag numbers that are in the message? Generally is it possible to view the structure of the message without the defining .proto files? | 0 | java,python,google-api,protocol-buffers | 2012-02-06T09:54:00.000 | 0 | 9,158,329 | Most APIs will indeed have some form of reader-based API that allows you to enumerate a raw protobuf stream. However, that by itself is not enough to fully understand the data, since without the schema the interpretation is ambiguous:
a varint could be zig-zag encoded (sint32/sint64), or not (int32/int64/uint32/uint64) - radically changing the meaning, or a boolean, or an enum
a fixed-32/fixed-64 could be a signed or unsigned integer, or could be an IEEE754 float/double
a length-prefixed chunk could be a UTF-8 string, a BLOB, a sub-message, or a "packed" repeated set of primitives; if it is a sub-message, you'll have to repeat recursively
So... yes and no. Certainly you can get the field numbers of the outermost message.
Another approach would be to use the regular API against a type with no members (message Naked {}), and then query the unexpected data (i.e. all of it) via the "extension" API that many implementations provide. | 0 | 428 | true | 0 | 1 | Can all tag numbers be extracted from a given protobuf serialization? | 9,158,407 |
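A self-contained sketch of scanning the outermost field numbers, using the documented wire format (Python 3 byte indexing assumed):

    def read_varint(data, pos):
        # Decode one base-128 varint starting at pos; return (value, new_pos).
        result, shift = 0, 0
        while True:
            b = data[pos]
            pos += 1
            result |= (b & 0x7F) << shift
            if not b & 0x80:
                return result, pos
            shift += 7

    def top_level_tags(data):
        # Yield (field_number, wire_type) pairs for the outermost message only.
        pos = 0
        while pos < len(data):
            key, pos = read_varint(data, pos)
            wire = key & 7
            yield key >> 3, wire
            if wire == 0:                         # varint payload
                _, pos = read_varint(data, pos)
            elif wire == 1:                       # 64-bit payload
                pos += 8
            elif wire == 2:                       # length-delimited payload
                length, pos = read_varint(data, pos)
                pos += length
            elif wire == 5:                       # 32-bit payload
                pos += 4
            else:
                raise ValueError('unsupported wire type %d' % wire)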
1 | 1 | 0 | 5 | 1 | 0 | 0.761594 | 0 | I have made an extensive script that runs fine when started from the command line or IDLE. But when I try to run it with cron it keeps giving errors:
IOError: [Errno 32] Broken pipe | 0 | python | 2012-02-07T06:43:00.000 | 1 | 9,172,046 | If your script runs too long, cron will close its stdout/stderr that are normally redirected to a log file (through cron). Attempting to print after the timeout will give you broken pipe.
A solution is to use logging or print only to your own log files and never to stdout.
Also, cron has a different environment, specified at the top of the crontab or the cron.(daily|hourly|...) files. Make sure it is correct, especially if you rely on PATH or HOME values that are set at login. | 0 | 1,106 | false | 0 | 1 | Broken pipe" when running python with cron | 9,173,292
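A minimal cron-safe setup along those lines (log path illustrative); alternatively, redirect in the crontab line itself with >> /var/log/myscript.log 2>&1:

    import logging

    logging.basicConfig(filename='/var/log/myscript.log', level=logging.INFO,
                        format='%(asctime)s %(levelname)s %(message)s')

    logging.info('job started')   # never print() from a job whose pipe may close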
1 | 1 | 0 | 3 | 0 | 1 | 1.2 | 0 | I'm working with a binary file that references another file using absolute paths.
The path contains both japanese and ascii characters.
The length of the string is given, so I can just read that many bytes and convert it into a string.
However the problem is trying to convert the string. If I specify the encoding as ascii, it'll fail on the japanese characters. If I specify it as japanese encoding (shift-jis or something), it won't read the english characters properly.
One byte is used for each ascii character, while two bytes are used for each japanese character.
What is the fastest and cleanest way to convert these bytes into a string? The encodings are known. Will the same technique work in older versions of Python? | 0 | unicode,python-3.x,string-parsing | 2012-02-08T03:36:00.000 | 0 | 9,187,540 | This sounds like you have fallen victim to a misunderstanding of the basics of Unicode and encodings. It may be that you have not, but misunderstandings are common and understandable, while the situation you describe is not.
A string of bytes that contains mixed encodings is, by definition, invalid in any of these encodings. If this really were the case, you would have to split the byte string into its parts and decode every part separately. In this case it would probably mean splitting on the path separators, so it would be reasonably easy, but in other cases it would not. However, I seriously doubt that this is the case, as it would mean that your source is insane. That happens, but it is unlikely. :-)
If the source gives you one path as a bytes string, it is most likely that this string uses only one encoding. It may contain both Japanese and ASCII-characters and still be using one encoding. The most common encodings that can handle both Japanese and ASCII are UTF-8 and UTF-16. My guess is that your source uses one of those. In fact, since you write "One byte is used for each ascii character, while two bytes are used for each japanese character" it is probably UTF-8. It could also be Shift JIS, but it seems you already tried that.
If not, please explain what your source is, and give examples of the byte strings (in ASCII/HEX) that you are given. | 0 | 993 | true | 0 | 1 | Working with strings with mixed encodings in python 3.x | 9,191,732 |
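Along the lines of the answer, a small decode helper that tries the likely encodings in order (the candidate list is an assumption to adjust). Trying UTF-8 first is deliberate: random non-UTF-8 data rarely decodes as valid UTF-8, so a success there is a strong signal:

    def decode_path(raw, encodings=('utf-8', 'shift_jis', 'cp932')):
        for enc in encodings:
            try:
                return raw.decode(enc)
            except UnicodeDecodeError:
                continue
        raise ValueError('not valid in any candidate encoding: %r' % (raw,))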
1 | 2 | 0 | 3 | 6 | 1 | 0.291313 | 0 | I have a solid understanding of OOP and its idioms in Java.
Now I am coding in Python, and I am in a situation where having multiple inheritance may be useful; however (and this may be due to years of Java code), I am reluctant to do it and I am considering using composition instead of inheritance in order to avoid potential conflicts from identically named methods.
The question is: am I being too strict or too Java-focused regarding this? Or is using multiple inheritance in Python not only possible but also encouraged?
Thanks for your time :) | 0 | python | 2012-02-08T04:40:00.000 | 0 | 9,187,921 | I would still prefer composition to inheritance, whether multiple or single. Really getting into duck typing is a bit like having loads of implicit interfaces everywhere, so you don't even need inheritance (or abstract classes) very much at all in Python. But that's prefer composition, not never use inheritance. If inheritance (even multiple) is a good fit and composition isn't, then use inheritance. | 0 | 2,803 | false | 0 | 1 | Multiple inheritance in python vs composition | 9,188,059 |
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | I have some assemblies written in C#, which I want to use with IronPython interpreter. These assemblies use NLog for logging, and if I use them from C# code, I can provide NLog settings with the NLog.config. But how can I configure logging if I use ipy.exe interpreter? | 0 | c#,ironpython,nlog | 2012-02-08T08:17:00.000 | 0 | 9,189,734 | Put nlog.config next to the ipy.exe. | 0 | 169 | false | 0 | 1 | How do I use NLog with IronPython interpreter? | 9,235,739 |
3 | 3 | 0 | 1 | 11 | 0 | 0.066568 | 0 | I'm looking at using a crypto lib such as pycrypto for encrypting/decrypting fields in my python webapp db. But encryption algorithms require a key. If I have an unencrypted key in my source it seems silly to attempt encryption of db fields as on my server if someone has access to the db files they will also have access to my python sourcecode.
Is there a best-practice method of securing the key used? Or an alternative method of encrypting the db fields (at application not db level)?
UPDATE: the fields I am trying to secure are oauth tokens.
UPDATE: I guess there is no common way to avoid this. I think I'll need to encrypt the fields anyway as it's likely the db files will get backed up and moved around so at least I'll reduce the issue to a single vulnerable location - viewing my source code.
UPDATE: The oauth tokens need to be used for api calls while the user is offline, therefore using their password as a key is not suitable in this case. | 0 | python,database,web-applications,cryptography,pycrypto | 2012-02-08T17:34:00.000 | 0 | 9,198,494 | Symmetric encryption is indeed useless, as you have noticed; however for certain fields, using asymmetric encryption or a trapdoor function may be usable:
if the web application does not need to read back the data, then use asymmetric encryption. This is useful e.g. for credit card data: your application would encrypt the data with the public key of the order processing system, which is on a separate machine that is not publicly accessible.
if all you need is equality comparison, use a trapdoor function, such as a message digest, ideally with a salt value. This is good for passwords that should be unrecoverable on the server. | 0 | 6,258 | false | 0 | 1 | How to store a crypto key securely? | 9,198,661 |
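A minimal sketch of the salted-digest idea (for real password storage a deliberately slow KDF such as bcrypt or PBKDF2 is preferable to one SHA-256 round):

    import hashlib
    import os

    def hash_secret(secret, salt=None):
        if salt is None:
            salt = os.urandom(16)          # per-record random salt
        digest = hashlib.sha256(salt + secret.encode('utf-8')).hexdigest()
        return salt, digest                # store both; the secret is unrecoverable

    def check_secret(secret, salt, expected):
        return hashlib.sha256(salt + secret.encode('utf-8')).hexdigest() == expected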
3 | 3 | 0 | 0 | 11 | 0 | 0 | 0 | I'm looking at using a crypto lib such as pycrypto for encrypting/decrypting fields in my python webapp db. But encryption algorithms require a key. If I have an unencrypted key in my source it seems silly to attempt encryption of db fields as on my server if someone has access to the db files they will also have access to my python sourcecode.
Is there a best-practice method of securing the key used? Or an alternative method of encrypting the db fields (at application not db level)?
UPDATE: the fields I am trying to secure are oauth tokens.
UPDATE: I guess there is no common way to avoid this. I think I'll need to encrypt the fields anyway as it's likely the db files will get backed up and moved around so at least I'll reduce the issue to a single vulnerable location - viewing my source code.
UPDATE: The oauth tokens need to be used for api calls while the user is offline, therefore using their password as a key is not suitable in this case. | 0 | python,database,web-applications,cryptography,pycrypto | 2012-02-08T17:34:00.000 | 0 | 9,198,494 | Before you can determine what crypto approach is the best, you have to think about what you are trying to protect and how much effort an attacker will be ready to put into getting the key/information from your system.
What is the attack scenario that you are trying to remedy by using crypto? A stolen database file? | 0 | 6,258 | false | 0 | 1 | How to store a crypto key securely? | 9,198,676 |
3 | 3 | 0 | 5 | 11 | 0 | 1.2 | 0 | I'm looking at using a crypto lib such as pycrypto for encrypting/decrypting fields in my python webapp db. But encryption algorithms require a key. If I have an unencrypted key in my source it seems silly to attempt encryption of db fields as on my server if someone has access to the db files they will also have access to my python sourcecode.
Is there a best-practice method of securing the key used? Or an alternative method of encrypting the db fields (at application not db level)?
UPDATE: the fields I am trying to secure are oauth tokens.
UPDATE: I guess there is no common way to avoid this. I think I'll need to encrypt the fields anyway as it's likely the db files will get backed up and moved around so at least I'll reduce the issue to a single vulnerable location - viewing my source code.
UPDATE: The oauth tokens need to be used for api calls while the user is offline, therefore using their password as a key is not suitable in this case. | 0 | python,database,web-applications,cryptography,pycrypto | 2012-02-08T17:34:00.000 | 0 | 9,198,494 | If you are encrypting fields that you only need to verify (not recall), then simply hash with SHA, or one-way encrypt with DES or IDEA, using a salt to prevent a rainbow table from actually revealing them. This is useful for passwords or other access secrets.
Python and webapps make me think of GAE, so you may want something that does not do an encrypt/decrypt on every DB transaction, since these are already far from cheap on GAE.
Best practice for an encrypted database is to encrypt the fields with the user's own secret, but to include an asymmetric backdoor that encrypts the user's secret key so you (and not anyone who has access to the DB source files, or the tables) can decrypt the user's key with your secret key, should recovery or something else necessitate it.
In that case, the user (or you, or a trusted delegate) can retrieve and decrypt only their own information. You may want to be more stringent in validating user secrets if you are thinking you need to secure their fields by encryption.
In this regard, a passphrase (as opposed to a password) of some secret words such as "in the jungle the mighty Jungle" is a good practice to encourage.
EDIT: Just saw your update. The best way to store OAuth tokens is to give them a short lifespan, only request the resources you need, and re-request them rather than getting long-lived tokens. It's better to design around getting authenticated, getting your access and getting out, than leaving the key under the backdoor for 10 years.
If you need to recall the OAuth token when the user comes online, you can do as above and encrypt with a user-specific secret. You could also derive a key from an encrypted counter (encrypted with the user secret) so the actual encryption key changes at each transaction, while the counter is stored in plaintext. But check discussion of this mode for your specific crypto algorithm before using it; some algorithms may not play nice with this. | 0 | 6,258 | true | 0 | 1 | How to store a crypto key securely? | 9,198,785
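A PyCrypto sketch of the per-user-secret idea (the question's update rules out the user's password for offline API calls, so the passphrase here would have to be an application-held secret, which only narrows the exposure rather than removing it):

    from Crypto import Random
    from Crypto.Cipher import AES
    from Crypto.Protocol.KDF import PBKDF2

    def encrypt_token(token, passphrase, salt):
        key = PBKDF2(passphrase, salt, dkLen=32)   # derive a per-user AES key
        iv = Random.new().read(AES.block_size)
        return iv + AES.new(key, AES.MODE_CFB, iv).encrypt(token)

    def decrypt_token(blob, passphrase, salt):
        key = PBKDF2(passphrase, salt, dkLen=32)
        iv, ct = blob[:AES.block_size], blob[AES.block_size:]
        return AES.new(key, AES.MODE_CFB, iv).decrypt(ct)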
2 | 2 | 0 | 1 | 4 | 1 | 0.099668 | 0 | I've never written a proper test until now, only small programs that I would dispose of after the test succeeded. I was looking through Python's unittest module and tutorials around the web, but something's not clear to me.
How much should one TestCase cover? I've seen examples on the web that have TestCase classes with only one method, as well as classes that test almost the entire available functionality.
In my case, I'm trying to write a test for a simple bloom filter. How do you think I should organize my test cases? | 0 | python,unit-testing | 2012-02-08T18:48:00.000 | 0 | 9,199,551 | I would create one TestCase with several test methods. A bloom filter has simple semantics, so only one TestCase. I usually add a TestCase per feature. | 0 | 204 | false | 0 | 1 | How much should one TestCase cover? | 9,199,631 |
2 | 2 | 0 | 5 | 4 | 1 | 1.2 | 0 | I've never written a proper test until now, only small programs that I would dispose of after the test succeeded. I was looking through Python's unittest module and tutorials around the web, but something's not clear to me.
How much should one TestCase cover? I've seen examples on the web that have TestCase classes with only one method, as well as classes that test almost the entire available functionality.
In my case, I'm trying to write a test for a simple bloom filter. How do you think I should organize my test cases? | 0 | python,unit-testing | 2012-02-08T18:48:00.000 | 0 | 9,199,551 | To put it simply: one unit test should cover a single feature of your program. That's all there is to say. That's why they're called unit tests.
Of course, what we understand by feature may vary. Think about smallest parts of your program that might break or not work as expected. Think about business requirements of your code. Those are parts that you want each to be covered by dedicated unit test.
Usually, unit tests are small, isolated and atomic. They should be easy to understand, they should fail/pass independently from one another, and should execute fast. A fairly good indication of proper unit tests is a single assertion - if you find yourself writing more, you probably test too much, and it's a sign you need more than one test for a given feature. However, this is not a strict rule - the more complex the code involved, the more complex unit tests tend to be.
When writing tests, it's easy to split your code functionality and test those separated parts (this should give you the idea of atomicity of your tests). For example, if you have a method that verifies input then calls a service and finally returns result, you usually want to have all three (verify, call, return) steps covered. | 0 | 204 | true | 0 | 1 | How much should one TestCase cover? | 9,199,764 |
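Applied to the bloom filter, that gives roughly one small test method per guaranteed behaviour; BloomFilter and its constructor arguments are hypothetical stand-ins for the class under test:

    import unittest

    from bloom import BloomFilter   # hypothetical module under test

    class TestBloomFilter(unittest.TestCase):
        def setUp(self):
            self.bf = BloomFilter(capacity=1000, error_rate=0.01)

        def test_added_item_is_found(self):
            self.bf.add('spam')
            self.assertIn('spam', self.bf)

        def test_fresh_filter_reports_nothing(self):
            self.assertNotIn('spam', self.bf)

        def test_no_false_negatives(self):
            items = ['item%d' % i for i in range(100)]
            for item in items:
                self.bf.add(item)
            for item in items:
                self.assertIn(item, self.bf)

    if __name__ == '__main__':
        unittest.main()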
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I'm using Eclipse + PyDev to work on python web projects.
Sometimes I need to run a debug session on the production server rather than locally, due to its specific environment.
I was wondering if there is a way to run an isolated remote debugging session, so that other users don't experience any issues and code execution doesn't suspend for them?
Thanks. | 0 | python,eclipse,debugging,pydev | 2012-02-09T23:20:00.000 | 1 | 9,220,493 | I don't think this is possible out of the box... you'd need to architect your production server so that this would be possible (i.e.: when you send a given request it should spawn a different interpreter just to handle your request for debugging purposes and shut down that interpreter after the debug session ends), but you have to make sure that the debugger will actually run in a separate interpreter, otherwise it could end up tracing more things from other people (and in the best situation it'd only make things slower, and in the worst it could end up having unexpected consequences because of some interaction of the debugger with your code). | 0 | 79 | false | 0 | 1 | Isolated debugging session with PyDev | 9,226,643
1 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | I'm currently using the C/Python API to read data from a large binary file.
The resulting Python code is not as efficient as the pure C version (about 2x the time) because, I think, of the time taken to wrap things up into a PyObject. Typically, I store 42-element tuples in a PyArrayObject. To do this, I use:
PyObject *r = Py_BuildValue("(f, I, i, K, f, K, K, etc..)", a, b, c, etc...) ;
My question is the following: Is there a more efficient way to do it (quicker execution time)?
For example: will PyTuple_Pack(n, args) do it more quickly ? | 0 | python,c | 2012-02-10T13:51:00.000 | 0 | 9,228,771 | For time critical code, I create a tuple of the desired length and then create the components individually and stuff them into the tuple. | 0 | 796 | false | 0 | 1 | C/Python API : efficiency of Py_BuildValue use | 9,231,111 |
1 | 1 | 0 | 2 | 3 | 0 | 1.2 | 0 | I am running OSX Lion and have installed python2.7 from python.org (this distribution can run in both 64bit and 32bit mode). I have also installed the wxPython package. I can run python scripts that import wxPython from the Terminal by explicitly using the 32-bit version. I would like to run the same scripts in Eclipse, but cannot. I configure PyDev to use python.org's interpreter, but it defaults to 64-bit (I check this by printing sys.maxint). I cannot figure out how to make PyDev use the 32-bit interpreter.
I have tried configuring the PyDev python interpreter to point to:
/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-32
but it ends up using:
/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python
How can I configure PyDev to use the 32-bit python interpreter in Eclipse on OSX Lion?
I appreciate any input regarding this matter. Thank you. | 0 | python,eclipse,osx-lion,32bit-64bit,pydev | 2012-02-11T03:11:00.000 | 1 | 9,237,508 | The interpreter used in PyDev is computed from sys.executable...
Now, a doubt: if you start a shell with /Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-32 and do 'print sys.executable', which executable appears?
Now, onto a workaround... you can try replacing the places where sys.executable appears in plugins/org.python.pydev/PySrc/interpreterInfo.py to point to '/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7-32'
That's the script where it decides which interpreter to actually use... (still, it's strange that sys.executable would point to a different location...) | 0 | 1,922 | true | 0 | 1 | How to configure PyDev to use 32-bit Python Interpreter In Eclipse, on OSX Lion | 9,282,173 |
1 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I want to calculate all the three dihedral angles in a residue.
calc_dihedral(atom1, atom2, atom3, atom4) of Biopython requires vector coordinates of four atoms as arguments and returns an output of a single value. I'm not sure which of the three angles output represents.
Please suggest which atoms in the residue are required to calculate which angle and in what order the atom coordinates should be given in the function as arguments. | 0 | python,biopython | 2012-02-11T12:12:00.000 | 0 | 9,240,115 | We need the backbone atoms only: N, CA, C. So for the protein chain we get N, CA, C, N, CA, C, N, CA, C, N, CA, C.
A dihedral is the angle between two planes, and each plane is defined by three consecutive atoms (plane 1: C, N, CA; plane 2: N, CA, C). So you submit four consecutive backbone atoms in chain order, three atoms from one residue and the fourth from the neighbouring residue. The N and CA of the first residue are skipped, since it has no preceding C and therefore no phi. I don't know about omega. | 0 | 1,649 | false | 0 | 1 | Which atoms are required by Biopython's calc_dihedral() to calculate all 3 dihedral angles? | 10,908,585
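A sketch of the standard atom choices with Biopython (the calc_dihedral import location varies between versions, the angles come back in radians, and a clean continuous polypeptide chain is assumed; file, model and chain are placeholders):

    from Bio.PDB import PDBParser, calc_dihedral

    structure = PDBParser(QUIET=True).get_structure('x', 'protein.pdb')
    residues = list(structure[0]['A'])    # model 0, chain A

    for prev, res, nxt in zip(residues, residues[1:], residues[2:]):
        n, ca, c = (res[name].get_vector() for name in ('N', 'CA', 'C'))
        phi = calc_dihedral(prev['C'].get_vector(), n, ca, c)
        psi = calc_dihedral(n, ca, c, nxt['N'].get_vector())
        omega = calc_dihedral(prev['CA'].get_vector(), prev['C'].get_vector(), n, ca)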
1 | 2 | 0 | 4 | 2 | 0 | 1.2 | 0 | I have learned in my University time Pascal and C and RedHat Linux/Unix .
To quickly get a job, I started learning Microsoft Visual Basic 6.0 for speed of development etc. At that time C felt much more time-consuming, and I was not confident using it for job purposes, where most companies demand fast/rapid development.
After that I had problems with my employers because they wanted web applications, so I started using PHP, which is also great: customers demand web projects and expect Google-like applications in a short time frame, which is doable because PHP gives that speed and has a huge community.
To explain my interest in Go, it is the following:
PHP's syntax is friendly compared to C/Pascal.
I was very happy to learn Python, but its syntax is very different from C,
which is just not going to work for me to accept and really learn deeply.
I have tried to learn Ruby, at least so that I can get to know a Python-like syntax, but I really skipped Ruby because it is about 2x slower than PHP.
Therefore,
Is Go-lang the perfect choice for SPEED vs PHP vs Ruby, for web development + GTK? | 0 | php,python,ruby-on-rails,ruby,go | 2012-02-11T14:48:00.000 | 0 | 9,241,091 | Alas, I'd love to have 1 asset that I could use for all conditions but it's just not available in the world of computing. You're going to have to learn 2 or more.
PHP is very widely used, so you might as well stick with it. If you can create decent webapps using it, go for it. I would suggest learning C/C++ too so you can write any high-performance modules using that and call them from your PHP code. That's probably the best of all worlds for your webapps.
If you wanted to write for desktops, I think you'll be best off learning C++ with Qt (and look at Wt) (as it appears you're a Linux dev), or C#/VB.NET for Windows.
For mobiles, learn C/C++ as you can write apps in that no matter which platform even if you have to put up with some platform-dependant extensions - you either have to learn Java for Android, Objective-C for iOS, or (well we're not quite sure what MS has planned for Windows Phone 8, but I hear they like native code again, that means C++/CX). You can see where I'm going with this!
so anyway, if you're happy with PHP then keep with it. There is a ton of code out there that runs PHP so it's not like you're working with some bleeding-edge or hardly-used obscure language. | 0 | 8,545 | true | 0 | 1 | Stick with PHP or learn Go-lang? | 9,241,134 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | I have to run some student-made code that they made as homework, I'm running this on my own computer (Mac OS), and I'm not entirely certain they won't accidentally "rm -rf /" my machine. Is there an easy way to run their python scripts with reduced permissions, so that e.g. they can access only a certain directory? Is PyPy the way to go? | 0 | python,unix,sandbox | 2012-02-11T18:13:00.000 | 1 | 9,242,713 | Create a new user account "student". Run your students' scripts as "student". Worst case, the script will destroy this special user account. If this happens, just delete user "student" and start over. | 0 | 224 | false | 0 | 1 | Run python script with reduced permissions | 9,242,880 |
1 | 1 | 0 | 0 | 3 | 0 | 0 | 0 | (Note: I’ve Linux in mind, but the problem may apply on other platforms.)
Problem: Linux doesn’t do suid on #! scripts nor does it activate “Linux capabilities” on them.
Why do we have this problem? Because during the kernel interpreter setup to run the script, an attacker may have replaced that file. How? The formerly trusted suid/capability-enabled script file may be in a directory he has control over (e.g. he can delete the not-owned trusted file, or the file is actually a symbolic link he owns).
Proper solution: make the kernel allow suid/cap scripts if: a) it is clear that the caller has no power over the script file -or- like a couple of other operating systems do b) pass the script as /dev/fd/x, referring to the originally kernel-opened trusted file.
Answer I’m looking for: for kernels which can’t do this (all Linux), I need a safe “now” solution.
What do I have in mind? A binary wrapper, which does what the kernel does not, in a safe way.
I would like to
hear from established wrappers for (Python) scripts that pass Linux capabilities and possibly suid from the script file to the interpreter to make them effective.
get comments on my wrapper proposed below
Problems with sudo: sudo is not a good wrapper, because it doesn’t help the kernel to not fall for that just explained “script got replaced” trap (“man sudo” under caveats says so).
Proposed wrapper
actually, I want a little program, which generates the wrapper
command line, e.g.: sudo suid_capability_wrapper ./script.py
script.py has already the suid bit and capabilites set (no function, just information)
the generator suid_capability_wrapper does
generate C(?) source and compile
compile output into: default: basename script.py .py, or argument -o
set the wrapper owner, group, suid like script.py
set the permitted capabilities like script.py, ignore inheritable and effective caps
warn if the interpreter (e.g. /usr/bin/python) does not have the corresponding caps in its inheritable set (this is a system limitation: there is no way to pass on capabilites without suid-root otherwise)
the generated code does:
check if file descriptors 0, 1 and 2 are open, abort otherwise (possibly add more checks for too crazy environment conditions)
if compiled-in target script is compiled-in with relative path, determine self’s location via /proc/self/exe
combine own path with relative path to the script to find it
check if target scripts owner, group, permissions, caps, suid are still like the original (compiled-in) [this is the only non-necessary safety-check I want to include: otherwise I trust that script]
set the set of inherited capabilities equal to the set of permitted capabilities
execve() the interpreter similar to how the kernel does, but use the script-path we know, and the environment we got (the script should take care of the environment)
A bunch of notes and warnings may be printed by suid_capability_wrapper to educate the user about:
make sure nobody can manipulate the script (e.g. world writable)
be aware that suid/capabilities come from the wrapper, nothing cares about suid/xattr mounts for the script file
the interpreter (python) is execve()ed, it will get a dirty environment from here
it will also get the rest of the standard process environment passed through it, which is ... ... ... (read man-pages for exec to begin with)
use #!/usr/bin/python -E to immunize the python interpreter from environment variables
clean the environment yourself in the script or be aware that there is a lot of code you run as side-effect which does care about some of these variables | 0 | python,c,sudo,suid | 2012-02-11T18:46:00.000 | 1 | 9,242,989 | You don't want to use a shebang at all, on any file - you want to use a binary which invokes the Python interpreter, then tells it to start the script file for which you asked.
It needs to do three things:
Start a Python interpreter (from a trusted path, breaking chroot jails and so on). I suggest statically linking libpython and using the CPython API for this, but it's up to you.
Open the script file FD and atomically check that it is both suid and owned by root. Don't allow the file to be altered between the check and the execution - be careful.
Tell CPython to execute the script from the FD you opened earlier.
This will give you a binary which will execute all owned-by-root-and-suid scripts under Python only. You only need one such program, not one per script. It's your "suidpythonrunner".
As you surmised, you must clear the environment before running Python. LD_LIBRARY_PATH is taken care of by the kernel, but PYTHONPATH could be deadly. | 0 | 2,470 | false | 0 | 1 | Is this a safe suid/capability wrapper for (Python) scripts? | 9,243,141 |
2 | 2 | 0 | 1 | 2 | 1 | 1.2 | 0 | Right now, I'm learning Python and Javascript, and someone recently suggested to me that I learn tcl. Being a relative noob to programming, I have no idea what tcl is, and if it is similar to Python. As i love python, I'm wondering how similar the two are so I can see if I want to start it. | 0 | python,tcl | 2012-02-13T21:53:00.000 | 1 | 9,268,611 | Tcl is not really very similar to Python. It has some surface similarities I guess, as it is a mostly procedural language, but its philosophy is rather different. Whereas Python takes the approach that everything is an object, Tcl's approach is sometimes described as "everything is (or can be) a string." There are some interesting things to learn from Tcl deriving from this approach, but it's one of the lesser-used languages, so maybe hold off until you have a tangible reason to use it. In any case, you have two very different languages on your plate already; no need (IMHO) to add a third just yet. | 0 | 3,587 | true | 0 | 1 | Similarities between tcl and Python | 9,268,696 |
2 | 2 | 0 | 5 | 2 | 1 | 0.462117 | 0 | Right now, I'm learning Python and Javascript, and someone recently suggested to me that I learn tcl. Being a relative noob to programming, I have no idea what tcl is, and if it is similar to Python. As i love python, I'm wondering how similar the two are so I can see if I want to start it. | 0 | python,tcl | 2012-02-13T21:53:00.000 | 1 | 9,268,611 | While this question will obviously be closed as inconstructive in a short time, I'll leave my answer here anyway.
Joe, you appear to be greatly confused about what should drive a person who counts himself a programmer to learn another programming language: in fact, one should have a natural desire to learn different languages because only this can widen one's idea about how problems can be solved by programming (programming is about solving problems). Knowing N similar programming languages basically gives you nothing besides an immediate ability to use those programming languages. This doesn't add anything to your mental toolbox.
I suggest you to at least look at functional languages (everyone's excited about them these days anyway), say, Haskell. Also maybe look at LISP or a similar thing.
Tcl is also quite interesting in its concepts (almost no syntax, everything is a string, uniformity of commands etc). Python is pretty boring in this respect--it certainly enables a programmer to do certain things quickly and efficiently, but it does not contain anything to satisfy a prying mind.
So my opinion is that your premises are wrong. Hope I was able to explain why. | 0 | 3,587 | false | 0 | 1 | Similarities between tcl and Python | 9,268,859 |
1 | 3 | 0 | 1 | 11 | 1 | 0.066568 | 0 | I'm creating a game in which I have a somewhat complex method for creating entities.
When a level is loaded, the loading code reads a bunch of YAML files that contain attributes of all the different possible units. Using the YAML file, it creates a so-called EntityResource object. This EntityResource object serves as the authoritative source of information when spawning new units. The goal is twofold:
Deter cheating by implementing a hash check on the output of the YAML file
Aid in debugging by having all unit information come from a single, authoritative source.
These EntityResource objects are then fed into an EntityFactory object to produce units of a specific type.
My question is as follows. Is there a way to create subclasses of EntityResource dynamically, based on the contents of the YAML file being read in?
Also, I would like each of these YAML-file-derived subclasses to be assigned a singleton metaclass. Any caveats? | 0 | python,class,singleton,subclass,subclassing | 2012-02-13T23:59:00.000 | 0 | 9,269,902 | When I hear "creating subclasses on the fly" I understand "create objects that behave differently on the fly", which is really a question of configuration.
Is there anything you need that you can't get by just reading in some data and creating an object that decides how it is going to behave based on what it reads?
Here's the metaphor: I'm a handy guy -- I can put together any IKEA item you throw at me. But I'm not a different person each time, I'm just the same handy guy reading a different set of diagrams and looking for different kinds of screws and pieces of wood. That's my reasoning for subclassing not being the natural solution here. | 0 | 9,225 | false | 0 | 1 | Is there a way to create subclasses on-the-fly? | 9,269,944 |
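The configuration advice above is sound; for completeness, if runtime subclasses are still wanted, Python can build them by calling a metaclass directly, here with an illustrative singleton metaclass matching the question:

    class Singleton(type):
        _instances = {}
        def __call__(cls, *args, **kwargs):
            if cls not in cls._instances:
                cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
            return cls._instances[cls]

    class EntityResource(object):
        pass

    def make_resource_class(name, yaml_attrs):
        # Equivalent to type(name, bases, dict), but with the singleton metaclass.
        return Singleton(name, (EntityResource,), dict(yaml_attrs))

    OrcResource = make_resource_class('OrcResource', {'hp': 30, 'speed': 2})
    assert OrcResource() is OrcResource()   # one shared instance per subclass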
2 | 2 | 0 | 1 | 3 | 1 | 0.099668 | 0 | If I place my project in /usr/bin/
will my Python interpreter generate bytecode? If so, where does it put the .pyc files, given that I do not have write permission in that folder? Does it cache them in a temp file?
If not, is there a performance loss for me putting the project there?
I have packaged this up as a .deb file that is installed from my Ubuntu ppa, so the obvious place to install the project is in /usr/bin/
but if I don't generate byte code by putting it there, what should I do? Can I give the project write permission if it installs on another person's machine? That would seem to be a security risk.
There are surely lots of python projects installed in Ubuntu ( and obviously other distros ) how do they deal with this?
Thanks | 0 | python,linux | 2012-02-15T08:27:00.000 | 1 | 9,290,018 | Regarding the script in /usr/bin, if you execute your script as a user that doesn't have permissions to write in /usr/bin, then the .pyc files won't be created and, as far as I know, there isn't any other caching mechanism.
This means that your file will be byte compiled by the interpreter every time so, yes, there will be a performance loss. However, probably that loss it's not noticeable. Note that when a source file is updated, the compiled file is updated automatically without the user noticing it (at least most of the times).
What I've seen is the common practice in Ubuntu is to use small scripts in /usr/bin without even the .py extension. Those scripts are byte compiled very fast, so you don't need to worry about that. They just import a library and call some kind of library.main.Application().run() method and that's all.
Note that the library is installed in a different path and that all library files are byte-compiled for the different Python versions. If that's not the case in your package, then you have to review your setup.py and your Debian files, since that's not the way it should be. | 0 | 738 | false | 0 | 1 | Out of home folder .pyc files? | 9,290,219
2 | 2 | 0 | 1 | 3 | 1 | 0.099668 | 0 | If I place my project in /usr/bin/
will my Python interpreter generate bytecode? If so, where does it put the .pyc files, given that it cannot write to that folder? Does it cache them in a temp file?
If not, is there a performance loss from putting the project there?
I have packaged this up as a .deb file that is installed from my Ubuntu ppa, so the obvious place to install the project is in /usr/bin/
but if no bytecode is generated by putting it there, what should I do? Can I give the project directory write permission when it is installed on another person's machine? That would seem to be a security risk.
There are surely lots of Python projects installed in Ubuntu (and obviously other distros); how do they deal with this?
Thanks | 0 | python,linux | 2012-02-15T08:27:00.000 | 1 | 9,290,018 | .pyc/.pyo files are not generated for scripts that are run directly. Python modules placed where Python modules are normally expected and packaged up have the .pyc/.pyo files generated at either build time or install time, and so aren't the end user's problem. | 0 | 738 | false | 0 | 1 | Out of home folder .pyc files? | 9,290,322 |
1 | 2 | 0 | 4 | 1 | 0 | 0.379949 | 0 | I am making a Python script that, on an EXT filesystem, will create symbolic links to some files; otherwise it will move the files.
How can I know the type of the filesystem of a directory? | python,windows,linux | 2012-02-16T21:10:00.000 | 1 | 9,319,122 | What you should probably do is just try to make the link and, if it fails, copy.
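A minimal sketch of that try-then-fall-back idea (treat the exact exception set as an assumption; it varies by platform and Python version):

    import os
    import shutil

    def link_or_move(src, dst):
        # Try a symbolic link first; fall back to moving the file.
        try:
            os.symlink(src, dst)
        except (OSError, AttributeError):
            # No symlink support (e.g. a FAT filesystem, or a platform
            # where os.symlink is unavailable), so move the file instead.
            shutil.move(src, dst)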
It'll give you the advantage that you'll automatically support all file systems with soft links, without having to do advanced detection or keep an updated list of supported file systems. | 0 | 168 | false | 0 | 1 | Finding out if the current filesystem supports symbolic links | 9,319,169
1 | 2 | 0 | 2 | 0 | 0 | 0.197375 | 0 | I have a Python web application that I have configured in Apache as:
WSGIScriptAlias /firetalk /scripts/firetalkServer2
When I access this from JavaScript using XMLHttpRequest, WSGI/Apache end up launching multiple instances, which breaks what I am trying to accomplish.
So, is there any way to limit WSGI/Apache to a single instance of the specified Python script?
Thank you. | 0 | python,apache,wsgi | 2012-02-17T00:38:00.000 | 0 | 9,321,335 | Put the WSGI app in daemon mode and tell it to use a single process. Note that this could have a detrimental effect on performance. | 0 | 676 | false | 1 | 1 | How to keep WSGI from launching multiple instances | 9,321,430 |
1 | 2 | 0 | 1 | 0 | 1 | 0.099668 | 0 | My friend told me that I should use assembly to get my code to run faster, but it's really hard to program in and I don't know where to begin.
Are there any programs that can generate assembly from an easier language like Python? | python,assembly | 2012-02-17T01:54:00.000 | 0 | 9,321,875 | Your friend is wrong. Most programs don't get demonstrably faster when written in assembly. What makes assembly code fast is that assembly programmers generally worry a lot about speed and size, and so that's the focus of their efforts. Most compilers can do a much better job of creating fast programs than an only-average programmer can in assembly. | 0 | 945 | false | 0 | 1 | need an assembly code generator | 9,338,218
1 | 5 | 0 | 3 | 2 | 0 | 0.119427 | 0 | I want my users to write code and run it inside a controlled environment, for example Lua or Perl. My site runs on Perl CGIs.
Is there a way to run an isolated Perl/Lua/Python/etc. script that has no access to the filesystem and returns data via stdout to be saved in a database?
What I need is a secure environment; how do I apply the restrictions? Thanks in advance.
FYI: I want to achieve something like ideone.com or codepad.org
I've been reading about sandboxes in Lua or inline code, but they don't allow me to limit resources and time, just operations. I think I'll set up a virtual machine and run the code in there; any tips? | python,perl,scripting,lua,cgi | 2012-02-17T02:19:00.000 | 1 | 9,322,042 | One idea that comes to mind is to create a chroot'ed env for each of your users and run each user's script in that chroot'ed env. | 0 | 303 | false | 0 | 1 | How to run a script that can only write to STDOUT and read from STDIN? | 9,323,660
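Following up on the chroot suggestion above: on Unix you can additionally cap CPU time and memory with the standard resource module when launching the untrusted script in a subprocess (a sketch, not a complete sandbox; user_script.py is a placeholder):

    import resource
    import subprocess

    def set_limits():
        # Runs in the child just before exec: cap CPU time and address space.
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                  # seconds
        resource.setrlimit(resource.RLIMIT_AS, (64 * 1024 * 1024,) * 2)  # bytes

    proc = subprocess.Popen(['python', 'user_script.py'],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            preexec_fn=set_limits)
    out, _ = proc.communicate(b'data for the script via stdin')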
3 | 7 | 0 | 1 | 23 | 0 | 0.028564 | 0 | When writing a Python 3.1 CGI script, I run into horrible UnicodeDecodeErrors. However, when running the script on the command line, everything works.
It seems that open() and print() use the return value of locale.getpreferredencoding() to know what encoding to use by default. When running on the command line, that value is 'UTF-8', as it should be. But when running the script through a browser, the encoding mysteriously gets redefined to 'ANSI_X3.4-1968', which appears to be just a fancy name for plain ASCII.
I now need to know how to make the CGI script run with 'utf-8' as the default encoding in all cases. My setup is Python 3.1.3 and Apache2 on Debian Linux. The system-wide locale is en_GB.utf-8. | python,unicode,python-3.x,cgi | 2012-02-17T03:18:00.000 | 1 | 9,322,410 | Your best bet is to explicitly encode your Unicode strings into bytes using the encoding you want to use. Relying on the implicit conversion will lead to trouble like this.
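For example, in Python 3, convert explicitly instead of relying on the locale:

    payload = 'café'.encode('utf-8')   # str -> bytes, with an encoding you chose
    text = payload.decode('utf-8')     # bytes -> str, same explicit choice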
BTW: If the error is really UnicodeDecodeError, then it isn't happening on output, it's trying to decode a byte stream into Unicode, which would happen somewhere else. | 0 | 7,021 | false | 0 | 1 | Set encoding in Python 3 CGI scripts | 9,322,497 |
3 | 7 | 0 | 4 | 23 | 0 | 0.113791 | 0 | When writing a Python 3.1 CGI script, I run into horrible UnicodeDecodeErrors. However, when running the script on the command line, everything works.
It seems that open() and print() use the return value of locale.getpreferredencoding() to know what encoding to use by default. When running on the command line, that value is 'UTF-8', as it should be. But when running the script through a browser, the encoding mysteriously gets redefined to 'ANSI_X3.4-1968', which appears to be just a fancy name for plain ASCII.
I now need to know how to make the CGI script run with 'utf-8' as the default encoding in all cases. My setup is Python 3.1.3 and Apache2 on Debian Linux. The system-wide locale is en_GB.utf-8. | python,unicode,python-3.x,cgi | 2012-02-17T03:18:00.000 | 1 | 9,322,410 | You shouldn't read your IO streams as strings for CGI/WSGI; they aren't Unicode strings, they're explicitly byte sequences.
(Consider that Content-Length is measured in bytes and not characters; imagine trying to read a multipart/form-data binary file upload submission crunched into UTF-8-decoded strings, or return a binary file download...)
So instead use sys.stdin.buffer and sys.stdout.buffer to get the raw byte streams for stdio, and read/write binary with them. It is up to the form-reading layer to convert those bytes into Unicode string parameters where appropriate using whichever encoding your web page has.
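Concretely, a CGI script along these lines avoids the locale-dependent text wrappers entirely (a sketch, not a complete form handler):

    import sys

    body = sys.stdin.buffer.read()  # raw request bytes (Content-Length of them)
    sys.stdout.buffer.write(b'Content-Type: text/html; charset=utf-8\r\n\r\n')
    sys.stdout.buffer.write('<p>¡Hola!</p>'.encode('utf-8'))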
Unfortunately, the standard library CGI and WSGI interfaces don't get this right in Python 3.1: the relevant modules were crudely converted from the Python 2 originals using 2to3, and consequently there are a number of bugs that end up raising UnicodeError.
The first version of Python 3 that is usable for web applications is 3.2. Using 3.0/3.1 is pretty much a waste of time. It took a lamentably long time to get this sorted out and PEP3333 passed. | 0 | 7,021 | false | 0 | 1 | Set encoding in Python 3 CGI scripts | 9,337,200 |
When writing a Python 3.1 CGI script, I run into horrible UnicodeDecodeErrors. However, when running the script on the command line, everything works.
It seems that open() and print() use the return value of locale.getpreferredencoding() to know what encoding to use by default. When running on the command line, that value is 'UTF-8', as it should be. But when running the script through a browser, the encoding mysteriously gets redefined to 'ANSI_X3.4-1968', which appears to be just a fancy name for plain ASCII.
I now need to know how to make the CGI script run with 'utf-8' as the default encoding in all cases. My setup is Python 3.1.3 and Apache2 on Debian Linux. The system-wide locale is en_GB.utf-8. | python,unicode,python-3.x,cgi | 2012-02-17T03:18:00.000 | 1 | 9,322,410 | Summarizing @cercatrova's answer:
Add a PassEnv LANG line to the end of your /etc/apache2/apache2.conf or .htaccess.
Uncomment the . /etc/default/locale line in /etc/apache2/envvars.
Make sure a line similar to LANG="en_US.UTF-8" is present in /etc/default/locale.
sudo service apache2 restart | 0 | 7,021 | false | 0 | 1 | Set encoding in Python 3 CGI scripts | 44,271,683 |
1 | 6 | 0 | 4 | 8 | 0 | 0.132549 | 0 | I am a fairly proficient vim user, but friends of mine told me so much good stuff about emacs that I decided to give it a try -- especially after finding out about the aptly-named evil mode...
Anyway, I am currently working on a Python script that requires user input (a subclass of cmd.Cmd). In vim, if I want to try it, I can simply do :!python % and then interact with my script until it quits. In emacs, I tried M-! python script.py, which does run the script in a separate buffer, but RETURNs seem not to be sent back to the script; they are caught by the emacs buffer instead. I also had a look at python-mode's C-c C-c, but this runs the script in some temporary directory, whereas I just want to run it in (pwd).
So, is there any canonical way of doing that? | python,emacs | 2012-02-17T08:04:00.000 | 1 | 9,324,802 | I don't know about canonical, but if I needed to interact with a script I'd do M-x shell RET and run the script from there.
There's also M-x terminal-emulator for more serious terminal emulation, not just shell stuff. | 0 | 10,228 | false | 0 | 1 | Running interactive python script from emacs | 9,325,028
1 | 1 | 1 | 3 | 1 | 1 | 1.2 | 0 | (first question on StackOverflow, glad to be there :))
I am using IronPython 2.7.1 and C# .NET 4.0.
I use C# to launch my Python script.
I have about 20 personal modules that are imported many times.
E.g.:
If I have module1.py, module2.py, module3.py, module4.py
and main_script.py.
main_script.py imports module1 and module2
Both module1 and module2 import module3.
module1 and module3 import module4
etc.
Modules can contain a large number of lines of code.
What I see is that when I execute main_script.py, it takes about 4-5 seconds just to import the modules.
I tried to use pyc.py to compile all my modules into a DLL, and then used ngen on it, but I saw no difference when adding this DLL using myEngine.Runtime.LoadAssembly().
Then I wanted to use py_compile.py to get the .pyc files, but it seems not to work, as the IronPython.Runtime.FunctionCode type is not supported by the IronPython.Modules.MarshalWriter class (function WriteObject(object o)); I got an "unmarshallable object" exception when trying to compile.
I am not very familiar with Python or IronPython, and maybe I did not understand all the subtleties of the language (I think so, actually). I searched the net for a solution, but it seems I am stuck right now.
Any ideas to improve the import performance? | import,module,ironpython | 2012-02-17T11:41:00.000 | 0 | 9,327,606 | Taking 4-5 seconds to do imports, especially for large modules, is not unexpected for IronPython 2.7.1. I would try pyc.py to improve it, but I also think that it isn't as useful as it once was - IronPython's imports are a lot faster than they used to be, so pyc.py is less useful.
The thing is, IronPython does a lot more than Python does when it imports a module[1]. Python has to parse it and produce bytecode which it then executes. IronPython has to produce DLR trees which are then converted to interpreter instructions - and possibly also IL if they trip the compilation limit, which means running the .NET JIT to produce machine code.
All of that work is wasted if the script only takes a few seconds to run; IronPython is better for long-running processes. However, the short Python script is extremely common, and IronPython is extremely poor for those sorts of scripts.
There are two ways we're working at solving this, one of which you alluded to. Work is being done to support standard .pyc files with an interpreter optimized for startup time but not throughput - short scripts will benefit, but long-running code will suffer. Second, porting IronPython to mobile platforms requires disabling dynamic code generation, so making the DLR interpreter fast will be very important; this work will make uncompiled code faster to start as well.
The one thing we cannot overcome is the fact that .NET processes generally take longer to start than plain C ones. That overhead can be reduced, but it requires some fairly deep optimization that probably won't be done for a while.
[1] Python's import process is so fast that the stat calls to find the file cost much more than the time to parse & compile it. | 0 | 1,258 | true | 0 | 1 | Import module in IronPython 2.7.1 very slow | 9,332,166
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | When I log into a page in my browser, I get 3 cookies: tips, ipb_member_id and ip_pass_hash. I need those last two to access some pages I can only see when logged in. When I log in via the browser it works fine, but under mechanize I only get the tips cookie.
Are there any flags I have to set up for this to work, or is there any module I might need? I can't link to the page here. Though I do know Python's Mechanize + cookielib stores the cookies correctly, since I already have a working version for it. | 0 | python,ruby,cookies,mechanize | 2012-02-18T05:46:00.000 | 0 | 9,338,948 | I am working on the same issue (I want to get all cookies loaded on a page).
I think it's impossible with mechanize. One reason is that it doesn't support JavaScript, so anything a little bit complex (such as an img loaded on a JS event, which sets a new cookie) will not work.
I am considering other options, such as WebKit: http://stackoverflow.com/questions/4730906/automating-chrome
If you find a good way to gather all the cookies, let me know :) | 0 | 362 | false | 0 | 1 | Ruby/Mechanize: Not getting all the cookies after logging into a page | 11,950,195
1 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 | I'd like to know if there is any implemented python library for GPS trajectory pre-processing such as compression, smoothing, filtering, etc. | 0 | python,gps | 2012-02-18T06:22:00.000 | 0 | 9,339,169 | Expanding on my comment, a Kalman filter is the usual choice for estimating position and velocity from noisy sensor readings.
Here's what Wikipedia has to say on the topic (emphasis mine):
The Kalman filter is an algorithm, commonly used since the 1960s for improving vehicle navigation (among other applications, although aerospace is typical), that yields an optimized estimate of the system's state (e.g. position and velocity). The algorithm works recursively in real time on streams of noisy input observation data (typically, sensor measurements) and filters out errors using a least-squares curve-fit optimized with a mathematical prediction of the future state generated through a modeling of the system's physical characteristics.
The Kalman filter is the basic version; there's also the extended Kalman filter and unscented Kalman filter (though my control systems lecturer never got around to telling us what those were actually used for.)
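To make the predict/update loop concrete, here is a toy 1-D Kalman filter in Python (just an illustration; a GPS-ready version would track position and velocity as a state vector, and q/r would be tuned to your sensor):

    def kalman_1d(measurements, q=1e-4, r=0.5):
        # q: process noise variance, r: measurement noise variance.
        x, p = measurements[0], 1.0        # initial estimate and its variance
        smoothed = []
        for z in measurements:
            p += q                         # predict: uncertainty grows over time
            k = p / (p + r)                # Kalman gain
            x += k * (z - x)               # update toward the new measurement
            p *= (1.0 - k)
            smoothed.append(x)
        return smoothed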
@stark has provided a link to an implementation of the Kalman filter in Python (not sure of the quality.) You may be able to find others, or roll your own with scipy. | 0 | 2,133 | false | 0 | 1 | Python library for GPS trajectory pre-processing? | 9,343,253 |
2 | 2 | 0 | 5 | 14 | 1 | 0.462117 | 0 | Python extension modules written in C are faster than the equivalent programs written in pure Python. How do these extension modules compare (speed wise) to programs written in pure C? Are programs written in pure C even faster than the equivalent Python extension module? | 0 | python,c | 2012-02-18T23:35:00.000 | 0 | 9,345,201 | Being a Python extension doesn't affect the execution speed of a piece of code, except insofar as the Python invoking it is slower than the equivalent C would be, and the compiler is less able to aggressively unroll and inline code which crosses the C/Python boundary.
That is to say, if you just have Python code call a C function, and then you do all your work in that function, the only performance difference is going to be the amount of time you spent before getting into the C side of things. From that point on, it is native C. | 0 | 2,535 | false | 0 | 1 | Speed of Python Extensions in C vs. C | 9,345,227 |
2 | 2 | 0 | 15 | 14 | 1 | 1.2 | 0 | Python extension modules written in C are faster than the equivalent programs written in pure Python. How do these extension modules compare (speed wise) to programs written in pure C? Are programs written in pure C even faster than the equivalent Python extension module? | 0 | python,c | 2012-02-18T23:35:00.000 | 0 | 9,345,201 | How do these extension modules compare (speed wise) to programs written in pure C?
They are slightly slower due to the translation from Python data structures to C types. Disregarding this translation, the actual C code runs at exactly the same speed as a regular C function would.
Are programs written in pure C even faster than the equivalent Python extension module?
C programs (written entirely in C) can be faster than Python programs using the C extension modules. If the C program and the extension module are written with the same level of complexity, coder skill, algorithmic complexity, etc., the C program will win every time. However, if you're not a C guru and you're competing with a highly optimized Python C extension, Python could be faster. | 0 | 2,535 | true | 0 | 1 | Speed of Python Extensions in C vs. C | 9,345,231
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I want to ask you guys how to make my PHP (or Python) socket server start when a client makes a request to a specific file and stop when the client disconnects. Also, is there a way to make a PHP or Python socket server not open any ports (maybe use port 80, which I think is possible thanks to the request above)? I'm using public hosting that doesn't allow me to open ports or use terminal commands. | php,python,sockets | 2012-02-20T19:06:00.000 | 0 | 9,366,899 | Erm, sorry, you can't do WebSockets (at least not properly, to my knowledge) without opening ports. You might be able to fake it with PHP, but the timeout would defeat it.
I would recommend Comet AJAX/long-polling instead. | 0 | 87 | false | 0 | 1 | html5 websockets OR flash sockets activated on load? | 9,367,348 |
1 | 1 | 0 | 1 | 3 | 0 | 0.197375 | 0 | We send email using appengine's python send_mail api.
Is there any way to tell why an email that is sent to only one recipient would be marked as SPAM? This seems to happen only when App Engine's Python send_mail API sends to Gmail.
In our case we are sending email as one of the administrators of our appengine application.
And the email is a confirmation letter for an order that the user just purchased, so it is definitely NOT SPAM.
Can anyone help with this?
It seems odd because only Gmail users seem to be reporting this issue, and we are sending from App Engine (all Google servers). I love Google, but sometimes Google is stricter with itself than with others :)
I've added the spf TXT record to DNS such as "v=spf1 include:_spf.google.com ~all"
(I'm hoping that will help)
I've tried to add a List-Unsubscribe header to the email, but it seems the App Engine Python send_mail API does not support this header.
Thanks,
Ralph | 0 | python,google-app-engine,email,gmail,spam-prevention | 2012-02-20T19:16:00.000 | 1 | 9,367,049 | My guess would be that the content of the mail looks "spammy" for Google, but you can do some things that might help you.
Since this is a confirmation mail, I would suggest adding another admin to your app with an address like [email protected] and using that one for the confirmation emails. Add more text to the body and include unsubscribe links as well, so your users have the option of not receiving more email from your app. Maybe you won't like that last part, but you have to give your users that option so the email won't be marked as SPAM. | 0 | 691 | false | 1 | 1 | AppEngine python send email api is marked as SPAM by Gmail email reader | 9,374,887
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | Does anybody know of a Python function (proven to work and documented on the internet) that is able to search for the minimum of a user-provided function whose argument is an array of integers?
Something like
scipy.optimize.fmin_l_bfgs_b
scipy.optimize.leastsq
but for integers | python,numpy,scipy | 2012-02-20T20:01:00.000 | 0 | 9,367,630 | There is no general solution for this problem. If you know the properties of the function, it should be possible to deduce some bounds for the variables and then test all combinations. But that is not very efficient.
You could approximate a solution with scipy.optimize.leastsq and then round the results to integers. The quality of the result of course depends on the structure of the function. | 1 | 569 | false | 0 | 1 | Optimizer/minimizer for integer argument | 9,367,777 |
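A sketch of that round-the-continuous-solution idea from the answer above (the objective f here is a stand-in, not from the question):

    import numpy as np
    from scipy import optimize

    def f(x):
        # Example objective; replace with your own function of an integer array.
        return (x[0] - 2.3) ** 2 + (x[1] + 1.7) ** 2

    x_real = optimize.fmin(f, [0.0, 0.0], disp=False)  # continuous minimum
    x_int = np.round(x_real).astype(int)               # rounded candidate
    # Rounding each coordinate independently is not guaranteed to be optimal,
    # so it can pay to evaluate f on the integer neighbours of x_int as well.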
1 | 2 | 0 | 14 | 13 | 1 | 1.2 | 0 | I can't seem to find what the default encoding for io.StringIO is in Python 3. Is it the locale, as with stdio?
How can I change it?
With stdio, it seems that just reopening with the correct encoding works, but there's no such thing as reopening a StringIO. | encoding,utf-8,python-3.x,stringio | 2012-02-20T21:43:00.000 | 0 | 9,368,865 | The class io.StringIO works with str objects in Python 3. That is, you can only read and write strings with a StringIO instance. There is no encoding -- you have to choose one if you want to encode the strings you got from StringIO into a bytes object, but strings themselves don't have an encoding.
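For example:

    import io

    buf = io.StringIO()
    buf.write('naïve text')                 # str in, no encoding involved
    data = buf.getvalue().encode('utf-8')   # encode only at the boundary
    # For byte-oriented buffers, use io.BytesIO instead.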
(Of course strings need to be internally represented in some encoding. Depending on your interpreter, that encoding is either UCS-2 or UCS-4, but you don't see this implementation detail when working with Python.) | 0 | 19,442 | true | 0 | 1 | io.StringIO encoding in python3 | 9,368,909 |
2 | 2 | 0 | 4 | 3 | 0 | 0.379949 | 1 | I am working on some programs in Spanish, so I need to use accent marks. This is why I use
# -*- coding: iso-8859-1 -*- and <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> in all my programs (Python). I tested in Chrome, Firefox and Safari, and they all work, putting in the accent marks. The only one that does not work is IE8. It does not apply the accent mark, and adds some other character instead.
Does anyone know if there is a problem with IE8?
Is it better to use UTF-8 instead? | 0 | python,html,utf-8,iso-8859-1 | 2012-02-21T00:08:00.000 | 0 | 9,370,343 | It is better to use UTF-8.
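A minimal sketch of the switch for a Python CGI script (make sure the file itself is actually saved as UTF-8):

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    # Declare UTF-8 in all three places: source file, HTTP header, HTML meta tag.
    print('Content-Type: text/html; charset=utf-8')
    print('')
    print('<meta http-equiv="Content-Type" content="text/html; charset=utf-8">')
    print('<p>canción</p>')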
Note that "iso-8859-1" is a common mislabeling of "windows-1252", also known as "cp1252". Try being more explicit and see if this resolves your issues. | 0 | 1,203 | false | 1 | 1 | ISO-8859-1 Not working on IE | 9,370,450 |
2 | 2 | 0 | 2 | 3 | 0 | 1.2 | 1 | I am working on some programs in Spanish, so I need to use accent marks. This is why I use
# -*- coding: iso-8859-1 -*- and <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> in all my programs (Python). I tested in Chrome, Firefox and Safari, and they all work, putting in the accent marks. The only one that does not work is IE8. It does not apply the accent mark, and adds some other character instead.
Does anyone know if there is a problem with IE8?
Is it better to use UTF-8 instead? | 0 | python,html,utf-8,iso-8859-1 | 2012-02-21T00:08:00.000 | 0 | 9,370,343 | Yes, it is better to use UTF-8 instead.
Your question really cannot be answered unless you also provide the bytes that you are sending. | 0 | 1,203 | true | 1 | 1 | ISO-8859-1 Not working on IE | 9,370,369 |
2 | 3 | 0 | 1 | 3 | 0 | 0.066568 | 0 | I am using emacs 23 -nw and xterm installed on Debian Squeeze. I need highlighting with python but I don't have it. How can I enable it?
Edit:
Thanks for all the answers; the situation is this:
I have googled a lot, really.
I have the code in a file with the extension .py
The script starts with #!/usr/bin/python; as one of the answers points out, I have changed it to #!/usr/bin/env python
I used M-x and tried to find something related to Python; well, there are many options, but none of them solves my problem.
Sorry, my question was not very precise and I would even accept -10, but I don't have highlighting that would give me, for example, red highlighting for lines starting with #. To be more precise, I have very dull highlighting: lines with # are white, lines between """ """ are green, and some of the variable names are yellow, but I don't know why not all of them. [import, as, from] are light blue, [open, max, and other function names] are dark blue, etc. And besides, my 200 lines of code are working. | python,emacs,highlight | 2012-02-21T03:02:00.000 | 0 | 9,371,542 | I'm not sure if this is right, but try the following.
1) M-x
2) type in "python-mode". Tab completion works here, so type in "pyth", hit Tab, and you can see what your options are.
mj | 0 | 4,284 | false | 0 | 1 | Python highlighting in emacs | 9,371,607 |
2 | 3 | 0 | 0 | 3 | 0 | 0 | 0 | I am using emacs 23 -nw and xterm installed on Debian Squeeze. I need syntax highlighting for Python but I don't have it. How can I enable it?
Edit:
Thanks for all the answers; the situation is this:
I have googled a lot, really.
I have the code in a file with the extension .py
The script starts with #!/usr/bin/python; as one of the answers points out, I have changed it to #!/usr/bin/env python
I used M-x and tried to find something related to Python; well, there are many options, but none of them solves my problem.
Sorry, my question was not very precise and I would even accept -10, but I don't have highlighting that would give me, for example, red highlighting for lines starting with #. To be more precise, I have very dull highlighting: lines with # are white, lines between """ """ are green, and some of the variable names are yellow, but I don't know why not all of them. [import, as, from] are light blue, [open, max, and other function names] are dark blue, etc. And besides, my 200 lines of code are working. | python,emacs,highlight | 2012-02-21T03:02:00.000 | 0 | 9,371,542 | Emacs 23 should know about Python out of the box. Does the name of your Python file end with .py, or does the file have #!/usr/bin/env python as the first line? If you're creating a new file, make sure the filename ends with .py. You can also use M-x python-mode as mentioned in another answer. If none of that works, check that your terminal actually supports color. | 0 | 4,284 | false | 0 | 1 | Python highlighting in emacs | 9,371,634