Dataset schema (column · dtype · range):

Available Count · int64 · 1 to 31
AnswerCount · int64 · 1 to 35
GUI and Desktop Applications · int64 · 0 to 1
Users Score · int64 · -17 to 588
Q_Score · int64 · 0 to 6.79k
Python Basics and Environment · int64 · 0 to 1
Score · float64 · -1 to 1.2
Networking and APIs · int64 · 0 to 1
Question · string · lengths 15 to 7.24k
Database and SQL · int64 · 0 to 1
Tags · string · lengths 6 to 76
CreationDate · string · lengths 23 to 23
System Administration and DevOps · int64 · 0 to 1
Q_Id · int64 · 469 to 38.2M
Answer · string · lengths 15 to 7k
Data Science and Machine Learning · int64 · 0 to 1
ViewCount · int64 · 13 to 1.88M
is_accepted · bool · 2 classes
Web Development · int64 · 0 to 1
Other · int64 · 1 to 1 (constant)
Title · string · lengths 15 to 142
A_Id · int64 · 518 to 72.2M
Q_Id 21,641,696 · "Python DNS module import error"
Tags: python, python-2.7, module, resolver · Asked: 2014-02-08 03:57 · Topic: System Administration and DevOps
Question score: 32 · Answers: 15 (10 available) · Views: 147,472

Question:
I have been using the Python dns module. I was trying to use it on a new Linux installation, but the module is not getting loaded. I have tried to clean up and reinstall, but the installation does not seem to be working.

$ python --version
Python 2.7.3
$ sudo pip install dnspython
Downloading/unpacking dnspython
  Downloading dnspython-1.11.1.zip (220Kb): 220Kb downloaded
  Running setup.py egg_info for package dnspython
Installing collected packages: dnspython
  Running setup.py install for dnspython
Successfully installed dnspython
Cleaning up...
$ python
Python 2.7.3 (default, Sep 26 2013, 20:03:06)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dns
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named dns

Updated output of the python and pip version commands:

$ which python
/usr/bin/python
$ python --version
Python 2.7.3
$ pip --version
pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)

Thanks a lot for your help. Note: I have a firewall installed on the new machine. I am not sure whether it should affect the import, but I have tried disabling it and it still does not seem to work.

Answer A_Id 66,007,768 (score 0, normalized 0, not accepted):
I faced a similar issue when importing on a Mac; I have Python 3.7.3 installed. The following steps helped me resolve it: pip3 uninstall dnspython, then sudo -H pip3 install dnspython, then import dns and import dns.resolver.
Answer A_Id 61,213,715 (score 0, normalized 0, not accepted) to Q_Id 21,641,696, "Python DNS module import error":
To resolve this, first install dnspython with pip install dnspython. (If you use conda, type activate first so that you are in the base environment, then run the same command.) This installs the package into Anaconda's site-packages. Copy the location of that site-packages folder from the command prompt and open it. Now copy all the dns folders and paste them into the Python site-packages folder; that will resolve it. The underlying issue is that the code cannot find the package in Python's site-packages because it is in Anaconda's site-packages, so you have to COPY it (not cut).
Answer A_Id 59,751,991 (score 1, normalized 0.013333, not accepted) to Q_Id 21,641,696, "Python DNS module import error":
I faced the same problem and solved it as described below. Since you have downloaded and installed dnspython successfully, enter the dnspython folder; you will find a dns directory. Copy it, then paste it into the site-packages directory. That's all; your problem should go away. If dnspython isn't installed, you can install it this way: go to your Python installation's site-packages directory, open a command prompt there, and run pip install dnspython.
Answer A_Id 57,668,403 (score 0, normalized 0, not accepted) to Q_Id 21,641,696, "Python DNS module import error":
In my case, I had written my code in a file named "dns.py", which conflicts with the package; I had to rename the script.
Answer A_Id 57,207,302 (score 1, normalized 0.013333, not accepted) to Q_Id 21,641,696, "Python DNS module import error":
I was getting an error while using import dns.resolver. I tried dnspython and py3dns, but they failed; dns would not install. After much trial and error I installed the pubdns module, and it solved my problem.
Answer A_Id 53,703,267 (score 0, normalized 0, not accepted) to Q_Id 21,641,696, "Python DNS module import error":
I installed dnspython 2.0.0 from the GitHub source, but running pip list showed the old dnspython 1.2.0. It only worked after pip uninstall dnspython removed the old version, leaving just 2.0.0; then import dns ran smoothly.
Answer A_Id 40,167,343 (score 0, normalized 0, not accepted) to Q_Id 21,641,696, "Python DNS module import error":
This issue can be caused by Symantec Endpoint Protection (SEP), and I suspect most endpoint-protection products could potentially affect your scripts. If SEP is disabled, the script runs instantly. Therefore you may need to update the SEP policy so it does not block what your Python scripts access.
Answer A_Id 21,643,858 (score 4, normalized 0.053283, not accepted) to Q_Id 21,641,696, "Python DNS module import error":
I installed dnspython 1.11.1 on my Ubuntu box using pip install dnspython and was able to import the dns module without any problems. I am using Python 2.7.4 on an Ubuntu-based server.
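Several of the answers above come down to the same root cause: pip installed dnspython into a different site-packages than the one searched by the interpreter that runs import dns. A quick way to check for that mismatch (a minimal diagnostic sketch, not from the original thread) is to print the interpreter path and scan sys.path for the package:

```python
import os
import sys

def find_module_dirs(name, paths):
    """Return every directory on `paths` that contains a package or module `name`."""
    hits = []
    for d in paths:
        candidate = os.path.join(d, name)
        # a package is a directory, a plain module is a .py file
        if os.path.isdir(candidate) or os.path.isfile(candidate + ".py"):
            hits.append(candidate)
    return hits

if __name__ == "__main__":
    print("interpreter:", sys.executable)
    print("dns found in:", find_module_dirs("dns", sys.path) or "nowhere on sys.path")
```

If "dns" is found nowhere, pip almost certainly installed into a different Python's site-packages than the one `which python` points at.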
Q_Id 21,670,272 · "How to make bash script use a particular python version for executing a python script?"
Tags: python, bash · Asked: 2014-02-10 06:16 · Topic: System Administration and DevOps
Question score: 0 · Answers: 4 (2 available) · Views: 921

Question:
I have Python 2.6 and Python installed on my FreeBSD box. I want my bash script to execute a particular Python script using the python2.6 interpreter. It is showing an import error: Undefined symbol "PyUnicodeUCS2_DecodeUTF8".

Answer A_Id 21,670,472 (score 0, normalized 1.2, accepted):
It is probably caused by the following: your script imports some third-party library that was compiled against an older Python version. To fix this, reinstall an up-to-date build of the library.
Answer A_Id 21,670,553 (score 0, normalized 0, not accepted) to Q_Id 21,670,272, "How to make bash script use a particular python version for executing a python script?":
Use the absolute path to the Python version you want.
Q_Id 21,684,420 · "Private Python API key when using Angular for frontend"
Tags: python, security, angularjs, authentication · Asked: 2014-02-10 17:50 · Topic: Web Development
Question score: 0 · Answers: 1 (1 available) · Views: 82

Question:
I have a server implementing a Python API. I am calling functions from a frontend that uses Angular.js. Is there any way to add an authentication key to my calls so that random people cannot see the key through the exposed Angular code? Maybe file structure? I am not really sure.

Answer A_Id 21,684,701 (score 0, normalized 0, not accepted):
As long as you send the sensitive data outside, you are at risk. You can obfuscate your code so that first-grade malicious users have a hard time finding the key, but breaking your security is basically just a matter of time, since an attacker has all the elements needed to analyse your protocol and exchanged data and to design malicious software that mimics your original client. One possible (although not unbreakable) solution is to authenticate the users themselves, so that you keep a little control over who is accessing the data and can revoke compromised accounts.
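One way to read the "authenticate the users themselves" advice above: instead of shipping one shared API key in the Angular code, the server can derive a per-user token from a secret that never leaves the server. This is an illustrative sketch only; the secret value and function names are hypothetical, and a real deployment would also expire tokens:

```python
import hashlib
import hmac

SERVER_SECRET = b"change-me-in-production"  # hypothetical; lives only on the server

def issue_token(user_id):
    """Derive a per-user token; each browser only ever sees its own token."""
    return hmac.new(SERVER_SECRET, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_token(user_id, token):
    """Constant-time comparison, so timing differences leak nothing."""
    return hmac.compare_digest(issue_token(user_id), token)
```

Revoking a compromised account is then a server-side decision; nothing embedded in the frontend needs to change.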
Q_Id 21,687,659 · "Two questions about Wakari.io"
Tags: ipython-notebook · Asked: 2014-02-10 20:53 · Topic: Other
Question score: 0 · Answers: 1 (1 available) · Views: 235

Question:
(1) How do you "un-share" a bundle? (I know, this must be in the documentation, but I really can't find it. Sorry!) (2) Is there any kind of user mailing list for Wakari where questions like the above would be better targeted?

Answer A_Id 21,767,885 (score 0, normalized 1.2, accepted):
OK, well, I eventually found it, so here it is in case anybody else looks here. You can pull down a menu under your user name; one of the options is "Settings". Within that there is an item called "Sharing". When you click on that, you get a list of your currently shared bundles with an option to delete them.
Q_Id 21,689,654 · "How to get all IP addresses fetched"
Tags: python, c++, ip, web-crawler, fetch · Asked: 2014-02-10 22:56 · Topic: Networking and APIs
Question score: 1 · Answers: 1 (1 available) · Views: 80

Question:
I have a bunch of websites, and I want to find all the IP addresses that they communicate with while we browse them. For example, once we browse Yahoo.com, it contacts several destinations until it gets loaded. Is there any library in C++ or Python that can help me? One way I'm thinking about is to get the HTML file of the website and look for the format "src = http://...", but that is not quite correct.

Answer A_Id 21,689,802 (score 0, normalized 0, not accepted):
You can do that with httplib, urllib, or urllib2. You have to look for "src=" (images, Flash, etc.) and for "href=" (hyperlinks). Why do you think it's not correct?
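The "look for src= and href=" approach in the answer above can be sketched with the standard library's HTML parser (shown here in Python 3 form; the question's era would have used HTMLParser/urllib2). The hostnames collected are what you would then resolve to IP addresses:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class RefCollector(HTMLParser):
    """Collect the hosts referenced by src= and href= attributes."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                if host:  # relative links like /about have no host
                    self.hosts.add(host)

def referenced_hosts(html):
    collector = RefCollector()
    collector.feed(html)
    return collector.hosts
```

Note this only catches statically referenced resources; destinations contacted by JavaScript at runtime (a large share on a site like Yahoo) would need a real browser or a packet capture to observe.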
Q_Id 21,715,132 · "Python - how to send file from filesystem with a unicode filename?"
Tags: python, unicode, flask · Asked: 2014-02-11 23:07 · Topic: Web Development
Question score: 1 · Answers: 2 (1 available) · Views: 887

Question:
So I am using Flask to serve some files. I recently downgraded the project from Python 3 to Python 2.7 so it would work with more extensions, and ran into a problem I did not have before. I am trying to serve a file from the filesystem with a Japanese filename, and when I try return send_from_directory(new_folder_path, filename, as_attachment=True) I get UnicodeEncodeError: 'ascii' codec can't encode characters in position 15-20: ordinal not in range(128) in quote_header_value = str(value) (that is a werkzeug thing). I have a template set to display the filename on the page by just having {(unknown)} in the HTML, and it is displaying just fine, so I'm assuming it is somehow reading the name from the filesystem. Only when I try send_from_directory so the user can download it does it throw this error. I tried a bunch of combinations of .encode('utf-8') and .decode('utf-8'), none of which worked at all, and I'm getting very frustrated with this. In Python 3 everything just worked seamlessly because everything was treated as unicode, and searching for a way to solve this brought up results that it seems I would need a degree in compsci to wrap my head around. Does anyone have a fix for this? Thanks.

Answer A_Id 21,716,642 (score 0, normalized 1.2, accepted):
OK, after wrestling with it under the hood for a while I fixed it, but not in a very elegant way: I had to modify the source of some werkzeug internals. In http.py I replaced str(value) with unicode(value), and I replaced every instance of "latin-1" with "utf-8" in both http.py and datastructures.py. It fixed the problem; the file gets downloaded fine in both the latest Firefox and Chrome. As I said before, I would rather not modify the source of the libraries I am using, because this is a pain when deploying/testing on different systems, so if anyone has a better fix, please share. I've seen some people recommend just making the filename part of the URL, but I cannot do this as I need to keep my URLs simple and clean.
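Rather than patching werkzeug as the answer above did, one common workaround (a sketch, not what the answerer actually used) is to build the Content-Disposition header yourself using the RFC 5987 filename* form, which carries the UTF-8 name in ASCII-safe percent-encoding alongside a plain-ASCII fallback:

```python
from urllib.parse import quote

def content_disposition(filename):
    """Build a Content-Disposition value that survives non-ASCII filenames."""
    # plain-ASCII fallback for very old clients; '?' replaces unencodable chars
    fallback = filename.encode("ascii", "replace").decode("ascii")
    # RFC 5987: percent-encoded UTF-8 in the filename* parameter
    return "attachment; filename=\"%s\"; filename*=UTF-8''%s" % (
        fallback, quote(filename, safe=""))
```

With Flask you would attach this to the response headers yourself instead of relying on as_attachment=True (an assumption about the app's structure, but it keeps library sources untouched).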
Q_Id 21,761,216 · "Cronjob at given interval"
Tags: python, cron, raspberry-pi · Asked: 2014-02-13 17:25 · Topic: Web Development
Question score: 0 · Answers: 2 (1 available) · Views: 52

Question:
I have a Python script that loads a webpage on a Raspberry Pi. This script MUST run at startup, and then every 15 minutes. In future there will be many of these, maybe 1000 or even more. Currently I am doing this with a cronjob, but the problem is that all 1000 Raspberries will connect to the webpage at the very same time (plus or minus a few seconds, given that they take the precise clock from the web). It would be good to execute the command 15 minutes after the last run, regardless of the time. I like the cronjob solution because I have nothing running in the background: it simply executes, does its job, and then it's over. On the other hand, cron takes care only of minutes, not seconds, so even if I scatter the 1000 Pis over these 15 minutes I will still end up having about 80 simultaneous requests to the webpage every single minute. Is there a nice solution to this?

Answer A_Id 21,785,587 (score 0, normalized 0, not accepted):
You can always add the Unix command sleep to the cronjob before executing your command. Example:

*/15 * * * * (sleep 20; /root/crontabjob.sh)

Now the job will run every 15 minutes and 20 seconds (00:15:20, 00:30:20, 00:45:20, ...).
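To scatter 1000 devices more evenly than hand-editing a different sleep value into each crontab, each Pi could derive a deterministic offset from its own ID and sleep that long before fetching. This is an illustrative sketch, not part of the answer above; the device-ID scheme is assumed:

```python
import hashlib

def jitter_seconds(device_id, window=900):
    """Map a device ID to a stable offset in [0, window) seconds.

    window=900 spreads devices uniformly across a 15-minute cron cycle,
    and hashing gives second-level granularity that cron itself lacks.
    """
    digest = hashlib.sha256(device_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % window

# the cron-launched script would do: time.sleep(jitter_seconds(my_id))
# before issuing its HTTP request
```

Because the offset is derived from the ID rather than the clock, every run of a given Pi lands at the same point in the cycle, while the fleet as a whole spreads across all 900 seconds.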
Q_Id 21,773,821 · "Making a Python unit test that never runs in parallel"
Tags: python, unit-testing, python-unittest · Asked: 2014-02-14 08:03 · Topic: Python Basics and Environment
Question score: 7 · Answers: 5 (2 available) · Views: 2,425

Question:
tl;dr: I want to write a Python unittest function that deletes a file, runs a test, and then restores the file. This causes race conditions, because unittest runs multiple tests in parallel, and deleting and creating the file for one test messes up other tests that run at the same time.

Long, specific example: I have a Python module named converter.py with associated tests in test_converter.py. If there is a file named config_custom.csv in the same directory as converter.py, the custom configuration is used; if there is no custom CSV config file, a default configuration built into converter.py is used. I wrote a unit test using unittest from the Python 2.7 standard library to validate this behavior. In setUp() it would rename config_custom.csv to wrong_name.csv, then run the tests (hopefully using the default config), and in tearDown() it would rename the file back the way it should be.

Problem: Python unit tests run in parallel, and I got terrible race conditions. The file config_custom.csv would get renamed in the middle of other unit tests in a non-deterministic way, causing at least one error or failure about 90% of the time I ran the entire test suite. The ideal solution would be to tell unittest: do NOT run this test in parallel with other tests; this test is special and needs complete isolation. My workaround is to add an optional argument to the function that searches for config files; the argument is only passed by the test suite, and it ignores the config file without deleting it. Actually deleting the test file would be more graceful, and that is what I really want to test.

Answer A_Id 21,775,708 (score -1, normalized -0.039979, not accepted):
The problem is that the name of config_custom.csv should itself be a configurable parameter. Then each test can simply look for config_custom_<nonce>.csv, and any number of tests may be run in parallel. Cleanup of the overall suite can just clear out config_custom_*.csv, since we won't be needing any of them at that point.
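The nonce idea from the answer above might look like the following sketch. It assumes the converter module can be told which config path to read, which is exactly the parameterization the answer argues for:

```python
import os
import tempfile
import uuid

def make_custom_config(directory, rows):
    """Write config_custom_<nonce>.csv so parallel tests never share a file."""
    name = "config_custom_%s.csv" % uuid.uuid4().hex  # unique per test
    path = os.path.join(directory, name)
    with open(path, "w") as fh:
        for row in rows:
            fh.write(",".join(row) + "\n")
    return path

# each test passes its own path to the converter, and suite-level cleanup
# removes config_custom_*.csv in one sweep
```

Since no two tests ever touch the same file, deleting or renaming one config can no longer race with another test's setUp.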
Answer A_Id 34,140,669 (score 0, normalized 0, not accepted) to Q_Id 21,773,821, "Making a Python unit test that never runs in parallel":
The best testing strategy is to make sure your tests run on disjoint data sets. This bypasses any race conditions and makes the code simpler. I would also mock out open, or __enter__/__exit__ if you're using the context manager. This allows you to fake the event that a file doesn't exist.
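Mocking the filesystem check instead of touching the real file, as the answer above suggests, can be sketched with unittest.mock (the separate mock package on Python 2.7). The load_config function here is a hypothetical stand-in for converter.py's lookup logic:

```python
import os
from unittest import mock

def load_config(path="config_custom.csv"):
    """Hypothetical stand-in for the converter's config lookup."""
    if os.path.exists(path):
        return "custom"
    return "default"

# simulate the missing file without deleting anything on disk;
# no other test running in parallel can be disturbed by this
with mock.patch("os.path.exists", return_value=False):
    assert load_config() == "default"
```

Because nothing on disk changes, the test is safe under any degree of parallelism; the trade-off is that the real delete/restore path is no longer exercised.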
Q_Id 21,814,585 · "What is the difference between mod_wsgi and uwsgi?"
Tags: python, apache, nginx, wsgi, uwsgi · Asked: 2014-02-16 17:14 · Topics: System Administration and DevOps, Web Development
Question score: 6 · Answers: 1 (1 available) · Views: 5,775

Question:
There seems to be a mod_wsgi module in Apache and a uwsgi module in Nginx, and there also seem to be a wsgi protocol and a uwsgi protocol. I have the following questions. 1. Are mod_wsgi and uwsgi just different implementations that provide WSGI capabilities to the Python web developer? 2. Is there a mod_wsgi for Nginx? 3. Does uwsgi also offer the application(environ, start_response) entry point to developers? 4. Is uwsgi also a separate protocol apart from wsgi? In that case, how is the uwsgi protocol different from the wsgi protocol?

Answer A_Id 21,814,847 (score 3, normalized 1.2, accepted):
1. They are just two different ways of running WSGI applications. 2. Have you tried googling for "mod_wsgi nginx"? 3. Any WSGI-compliant server has that entry point; that's what the WSGI specification requires. 4. Yes, but that's only how uWSGI communicates with Nginx. With mod_wsgi the Python part runs inside Apache; with uWSGI you run a separate app.
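For reference, the application(environ, start_response) entry point that the WSGI specification (PEP 3333) requires of every compliant server looks like this minimal app, which either mod_wsgi or uWSGI could host unchanged:

```python
def application(environ, start_response):
    """Smallest useful WSGI app: every request gets a plain-text reply."""
    body = b"Hello from WSGI\n"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response("200 OK", headers)
    return [body]  # an iterable of byte strings, per the spec
```

The portability is the point of the question: the same callable runs inside Apache via mod_wsgi or in a standalone uWSGI worker; only the server-to-app plumbing differs.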
Q_Id 21,866,951 · "Get upload/download kbps speed"
Tags: python · Asked: 2014-02-18 22:34 · Topic: Other
Question score: 5 · Answers: 7 (1 available) · Views: 12,062

Question:
I'm using the library called psutil to get system/network stats, but I can only get the total uploaded/downloaded bytes in my script. What would be the way to natively get the network speed using Python?

Answer A_Id 21,867,082 (score 1, normalized 0.028564, not accepted):
The (effective) network speed is simply bytes transferred in a given time interval, divided by the length of the interval. Obviously there are different ways to aggregate/average the times, and they give you different "measures", but it all basically boils down to division.
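The division the answer above describes, applied to psutil's cumulative byte counters, might look like this sketch (the sampling interval is an arbitrary choice, and the psutil calls in the comment assume the library is installed):

```python
def kbps(bytes_before, bytes_after, interval_seconds):
    """Average transfer rate over the interval, in kilobytes per second."""
    if interval_seconds <= 0:
        raise ValueError("interval must be positive")
    return (bytes_after - bytes_before) / 1024.0 / interval_seconds

# with psutil you would take two samples of the cumulative counter:
#   before = psutil.net_io_counters().bytes_recv
#   time.sleep(1.0)
#   after = psutil.net_io_counters().bytes_recv
#   print("download:", kbps(before, after, 1.0), "kB/s")
```

A longer interval smooths out bursts; a shorter one tracks instantaneous speed. That choice is the "different measures" the answer alludes to.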
Q_Id 21,866,960 · "In Python-Eve, what is the most efficient way to update the NumOfView field?"
Tags: python, flask, eve · Asked: 2014-02-18 22:35 · Topic: Networking and APIs
Question score: 0 · Answers: 1 (1 available) · Views: 317

Question:
I am looking for a way to increment the numOfViews field when an item is retrieved via GET. My current approach is to hook into the app.on_post_GET_items event and update the field accordingly. Is this something we typically do? My concern is that this will slow down the GET (i.e. read) operation, as we always 'write' afterward. Is there a better solution in general?

Answer A_Id 21,874,409 (score 0, normalized 1.2, accepted):
If performance is a concern, you should consider using Redis, or something like that, to store this sort of frequently updating data. You could then reconcile with the database when appropriate (idle moments, etc.). That said, since you are writing to the database after the response has been sent, you aren't actively delaying the response (which you would be doing if you hooked into the on_fetch event instead). I guess it all depends on 1) the kind of traffic your API is going to handle and 2) where you are storing these stats. If you are going to get a lot of traffic (or want to be ready for it), then consider using an alternate storage (possibly in memory) other than your main database.
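The "aggregate in memory, reconcile later" idea from the answer above can be sketched without Redis as a small buffer that batches increments and flushes them periodically. All names here are illustrative; a real deployment would flush to Redis or issue one bulk database update per interval:

```python
import threading
from collections import Counter

class ViewCounter(object):
    """Buffer numOfViews increments so GET handlers never wait on a write."""

    def __init__(self, flush_callback):
        self._counts = Counter()
        self._lock = threading.Lock()
        self._flush_callback = flush_callback

    def hit(self, item_id):
        """Called from the post-GET hook; just an in-memory increment."""
        with self._lock:
            self._counts[item_id] += 1

    def flush(self):
        """Swap out the buffer and hand the batch to storage."""
        with self._lock:
            pending, self._counts = self._counts, Counter()
        self._flush_callback(pending)  # e.g. one bulk UPDATE per interval
```

The swap under the lock keeps hit() cheap even while a flush is handing the previous batch to the database.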
Q_Id 21,878,696 · "Changing unit tests based on added functionality"
Tags: python, unit-testing · Asked: 2014-02-19 11:07 · Topic: Networking and APIs
Question score: 1 · Answers: 2 (1 available) · Views: 29

Question:
I'm not sure if the language I'm using makes a difference or not, but for the record it's Python (2.7.3). I recently tried to add functionality to a project I forked on GitHub. Specifically, I changed the underlying HTTP request library from httplib2 to requests, so that I could easily add proxies to requests. The resulting function calls changed slightly (more variables passed, and in a slightly different order), and the mock unit test calls failed as a result. What's the best approach to resolving this? Is it OK to just jump in and rewrite the unit tests so that they pass with the new function calls? Intuitively, that would seem to undermine the purpose of unit tests somewhat.

Answer A_Id 21,879,161 (score 3, normalized 1.2, accepted):
The purpose of a unit test is to verify the implementation of a requirement. As with any other piece of software, you have to distinguish what the unit test does, how it tests the requirement (roughly speaking, its design), and how it is implemented. Unless the requirement itself has changed, the design of the unit test should not be changed. However, it may happen that a change from another requirement impacts its implementation (because of side effects, interface changes, etc.). Then, according to your process, you may have the new implementation reviewed to make sure that the change doesn't affect the nature of the test and that the original requirement is still fulfilled.
Q_Id 21,890,973 · "OpenShift, Python Application run script every 10 min"
Tags: python, openshift · Asked: 2014-02-19 19:54 · Topic: System Administration and DevOps
Question score: 1 · Answers: 4 (1 available) · Views: 1,892

Question:
How do I create a schedule on OpenShift hosting to run a Python script that parses RSS feeds and sends filtered information to my email? Is this feature available? Please help; I work with the free version of this hosting. I have a script that works fine, but I don't know how to run it every 10 minutes to catch freelance jobs. Alternatively, does anyone know a free host with Python that can schedule scripts?

Answer A_Id 21,893,287 (score 0, normalized 0, accepted):
You are looking for the add-on cartridge called cron. However, by default the cron cartridge only supports jobs that run every minute or every hour. You would have to write a job that runs minutely, determines whether it is on a 10-minute interval, and only then executes your script. Make sense?

rhc cartridge add cron -a yourAppName

Then you will have a cron directory in your application directory under .openshift for placing the cron job.
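The minutely-cron-acting-like-every-ten-minutes trick from the answer above can be sketched as a tiny gate the job checks first (illustrative only; it relies on nothing but minute arithmetic):

```python
from datetime import datetime

def should_run(now=None, every_minutes=10):
    """True on minutes 0, 10, 20, ... so a minutely cron behaves like */10."""
    now = now or datetime.utcnow()
    return now.minute % every_minutes == 0

if should_run():
    pass  # parse the RSS feeds and send the filtered email here
```

The minutely cron job invokes this script every minute; nine times out of ten it exits immediately, and on the tenth it does the real work.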
Q_Id 21,923,046 · "Reading values over ssh in python"
Tags: python · Asked: 2014-02-21 00:56 · Topics: Python Basics and Environment, System Administration and DevOps
Question score: 0 · Answers: 3 (1 available) · Views: 1,205

Question:
I would like to gather values such as the number of CPUs on a server, storage space, and so on, and assign them to local variables in a Python script. I have paramiko set up, so I can SSH to remote Linux nodes, run arbitrary commands on them, and have the output returned to the script. However, many commands are very verbose (such as df -h), when all I want to assign is a single integer or value. For the number of CPUs there is Python functionality, such as the psutil module: psutil.NUM_CPUS returns an integer. However, while I can run this locally, I can't execute it on the remote nodes, as they don't have the Python environment configured. I am wondering how common it is to manually parse the output of Linux commands (such as df -h) and then grab an integer from it (similar to bash's "cut"), or whether it is somehow better to set up an environment on each remote server (or some better way).

Answer A_Id 21,923,164 (score 0, normalized 0, not accepted):
If you can put your own programs or scripts on the remote machine, there are a couple of things you can do: 1. Write a script on the remote machine that outputs just what you want, and execute that over ssh. 2. Use ssh to tunnel a port on the other machine and communicate with a server on the remote machine which responds to requests for information with the data you want over a socket.
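Manually parsing verbose command output, as the question contemplates, is routine. Here is a sketch that pulls a single integer out of df -P style output; the sample output is inlined for illustration, where in practice the string would be what paramiko returns from the remote command:

```python
def used_percent(df_output, mount="/"):
    """Extract the Use% integer for one mount point from `df -P` output."""
    for line in df_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        # df -P guarantees six whitespace-separated columns per data row
        if len(fields) >= 6 and fields[5] == mount:
            return int(fields[4].rstrip("%"))
    raise ValueError("mount %r not found" % mount)

SAMPLE = """Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 41251136 10000000 29000000 26% /
tmpfs 1024000 0 1024000 0% /dev/shm
"""
```

Using df -P (POSIX output) rather than df -h is deliberate: it forbids line-wrapping of long device names, so the column positions are dependable.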
1
3
0
0
0
1
0
0
I am writing a python module and I am using many imports of other different modules. I am bit confused that whether I should import all the necessary dependent modules in the opening of the file or shall I do it when necessary. I also wanted to know the implications of both. I come from C++ back ground so I am really thrilled with this feature and does not see any reason of not using __import__(), importing the modules only when needed inside my function. Kindly throw some light on this.
0
python,import
2014-02-21T11:43:00.000
0
21,933,555
Firstly, it's a violation of PEP 8 to put imports inside functions. Calling import still has a cost even if the module is already loaded (a sys.modules lookup on every call), so if your function is going to be called many times this will not compensate for the performance gain. Also, when you write import test, Python essentially does this: test = __import__('test'). The only downside of imports at the top of the file is that the namespace can get polluted quickly depending on the complexity of the file, but if your file is that complex, it's a sign of bad design.
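A small sketch of what repeated imports actually do: the second import is just a cache lookup that rebinds the same module object, and `__import__` is the function the statement calls under the hood.

```python
import sys
import json

first = json
import json  # second import: no re-execution, just a sys.modules lookup

# Both names refer to the one cached module object.
assert json is first
assert sys.modules["json"] is json

# The import statement is roughly sugar for a __import__ call plus a binding:
mod = __import__("json")
assert mod is json
```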
0
2,473
false
0
1
import module_name Vs __import__('module_name')
21,934,129
1
3
0
0
3
0
0
0
Here is the deal, how do I put the simplest password protection on an entire site. I simply want to open the site to beta testing but don't really care about elegance - just a dirty way of giving test users a username and password without recourse to anything complex and ideally i'd like to not to have to install any code or third party solutions. I'm trying to keep this simple.
0
python,html,django
2014-02-21T14:25:00.000
0
21,937,072
Why not just show a simple login form on the index page when the user is not authenticated?
0
4,949
false
1
1
Whats the smartest way to password protect an entire Django site for testing purposes
21,937,650
1
1
0
3
1
1
1.2
0
In php, people often call a bootstrap file to set variables used throughout a program. I have a python program that calls methods from different modules. I want those methods from different modules to share some variables. Can I set these variables up in something like a boostrap.py? Or is this not very "pythonic" because a module should contain all of the variables it needs?
0
python
2014-02-22T07:22:00.000
0
21,951,190
The best way would be to create a settings.py file that houses all your shared variables. This approach is followed by the Django web framework, which uses a settings.py file to house all the data that needs to be shared, for example database credentials and static file roots.
0
113
true
0
1
Is there an equivalent of a bootstrap.php for python?
21,951,214
1
1
0
0
0
0
0
0
I have learned online that there are several ways of running a python program in the background: sudo python scriptfile.py& sudo python scriptfile.py, then Control+Z, then bg Using nohup Using screen However, I would like to know if when doing any of the first two options, after I close and reopen SSH again, I can recover what the python program is internally printing by the print commands. So I run python and I start to see my print commands output, but if I close the SSH, even though the program is still running, I need to restart it in order to again see my print statements.
0
python,printing,background,raspberry-pi
2014-02-22T09:56:00.000
1
21,952,650
1) You should never run a script with sudo. You could potentially destroy your system. 2) Once your SSH session is closed all processes go with it. That is unless you use nohup or screen as you have found.
0
655
false
0
1
Raspberry Pi (python) Run in background and reopen print output
21,953,931
2
8
0
0
13
0
0
0
I'm executing a .py file, which spits out a given string. This command works fine: execfile('file.py') But I want the output (in addition to being shown in the shell) written into a text file. I tried this, but it's not working :( execfile('file.py') > ('output.txt') All I get is this: tugsjs6555 False I guess "False" refers to the output file not being successfully written :( Thanks for your help
0
python,output
2014-02-23T02:09:00.000
1
21,963,270
You could also do this by opening cmd in the folder where your Python script is saved, then running: python name.py > filename.txt It worked for me on Windows 10.
0
92,210
false
0
1
How to execute a python script and write output to txt file?
49,316,597
2
8
0
2
13
0
0.049958
0
I'm executing a .py file, which spits out a given string. This command works fine: execfile('file.py') But I want the output (in addition to being shown in the shell) written into a text file. I tried this, but it's not working :( execfile('file.py') > ('output.txt') All I get is this: tugsjs6555 False I guess "False" refers to the output file not being successfully written :( Thanks for your help
0
python,output
2014-02-23T02:09:00.000
1
21,963,270
The simplest way to run a script and send the output to a text file is by typing the below in the terminal: PCname:~/Path/WorkFolderName$ python scriptname.py > output.txt The > redirection will create output.txt in the work folder if it does not already exist.
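If you are already inside the interpreter (as with execfile), you can also redirect print output from Python itself rather than from the shell. This sketch assumes Python 3's contextlib.redirect_stdout; the file path is illustrative.

```python
import contextlib
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "output.txt")

# Everything print()ed inside the block goes into the file.
with open(path, "w") as f:
    with contextlib.redirect_stdout(f):
        print("tugsjs6555")  # stands in for the script's output

with open(path) as f:
    captured = f.read()

print(captured.strip())  # tugsjs6555
```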
0
92,210
false
0
1
How to execute a python script and write output to txt file?
33,993,200
1
2
0
0
0
0
0
0
I want to access some functions from a large C project from Python. It seems to me that SWIG is the way to go. I'm not very used to programming in C, and my experience with "make" is mostly from building downloaded source tars. The functions I want to access reside in a large C project (Gnuplot) and I have no idea how to use SWIG on such a large number of source files. The functions I want to access are all in a single C file, but there are many recursive includes. I would like some suggestions on how to get started. What I want to access: term/emf.trm Reason: missing support for symbols and LaTeX in the EMF backend to matplotlib (this backend has even been removed from matplotlib). I'm stuck with an old version of Word at work and there is no way to get plots into this program that are suitable for my purpose without EMF. I could use Gnuplot instead of matplotlib, but many of the plots are specialized for a certain purpose and matplotlib is much easier to use than Gnuplot. Any suggestions would be much appreciated.
0
python,matplotlib,gnuplot,swig
2014-02-23T19:19:00.000
0
21,973,249
You should start by reading the first few chapters of the SWIG manual and building some of its example projects for Python; the distribution has many that illustrate different capabilities of SWIG, and the makefiles are already built, so that's one less thing to learn.
0
728
false
0
1
Using SWIG to interface large C-project with Python
21,977,146
1
1
0
0
2
1
1.2
0
I have a strong background in Java, which obviously is statically-typed, and type-safe language. I find it that I am able to read through large amounts of code very quickly and easily assuming that the programmer who had written it followed basic conventions and best practices. I am also able to write code pretty quickly, given a pretty good IDE like Eclipse and IntelliJ because of the benefits of compilation and auto completion. I'd like to become more proficient, effective and efficient at reading/writing code in more dynamic languages like Python and JavaScript. The problem is that I can't find myself understanding code nearly as fast as I would in Java mainly because I comprehend code very quickly based on their types. Also when writing, there really is no auto complete available to quickly see what methods are available. Edit -- I ask this in the context of larger-scale projects where the code continues to grow and evolve. What are general strategies or caveats when reading and writing in languages like these when the project sizes are much larger and are non-trivial? Or does it come with time? Much thanks!
0
javascript,python,language-agnostic,dynamic-typing,static-typing
2014-02-25T04:13:00.000
0
22,004,427
I'm a C++/C# dev by training, and I found that I got better with JS after I started writing it. Try going all in on JS and write something in it. Maybe Node.js. Maybe learn to use it with a frontend framework like Angular or Knockout. Maybe both together. If you want to improve from there, check out Douglas Crockford's "JavaScript: The Good Parts". He writes some good suggestions on how to write better JS. It's not ironclad, community-proven best practices, but he offers some solid stuff.
0
93
true
0
1
Strategies to be more effective at programming in dynamic languages
22,004,578
2
4
0
-1
4
1
-0.049958
0
C++: cout << -1/2 evaluates to 0 Python: -1/2 evaluates to -1. Why is this the case?
0
python,c++,python-2.x,integer-division
2014-02-26T02:14:00.000
0
22,030,342
I am not sure about Python, but in C++ integer/integer = integer, so -1/2 is mathematically -0.5, which C++ truncates toward zero, giving 0. In the case of Python, the language applies the floor function to the result of integer division, which is why you get -1.
0
1,668
false
0
1
Why is -1/2 evaluated to 0 in C++, but -1 in Python?
22,030,432
2
4
0
1
4
1
0.049958
0
C++: cout << -1/2 evaluates to 0 Python: -1/2 evaluates to -1. Why is this the case?
0
python,c++,python-2.x,integer-division
2014-02-26T02:14:00.000
0
22,030,342
From the Python docs (emphasis mine): The / (division) and // (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Plain or long integer division yields an integer of the same type; the result is that of mathematical division with the ‘floor’ function applied to the result. The floor function rounds to the number closest to negative infinity, hence -1.
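The difference is easy to check from the Python side (Python 3 syntax here; // is floor division, and int() truncates toward zero the way C++ does):

```python
import math

assert -1 // 2 == -1             # Python floor-divides: floor(-0.5) == -1
assert math.floor(-1 / 2) == -1  # the same result spelled out
assert int(-1 / 2) == 0          # truncation toward zero, as in C++
assert math.trunc(-0.5) == 0     # trunc() makes the C++ behaviour explicit
```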
0
1,668
false
0
1
Why is -1/2 evaluated to 0 in C++, but -1 in Python?
22,030,413
1
1
0
5
0
0
1.2
0
I am planning to develop an e-commerce website. I was thinking of using the WordPress CMS so that there would be plugins available for implementing the e-commerce features, but questions were raised about the security of WordPress. I have got a few suggestions from my friends about developing the site in Python. Can anyone please help me with the advantages of Python over WordPress? Is it a good idea to build the website in Python rather than WordPress?
0
php,python,wordpress
2014-02-26T10:06:00.000
0
22,037,983
Your question doesn't really have a clear answer, because you're not comparing apples to apples here. Wordpress is a Content Management System (CMS), a piece of software built using the php language. Python is simply a language. Vulnerabilities have certainly been found in Wordpress before, it's true. Similarly, software developed in Python can have vulnerabilities. If your real question is "Would it be better securitywise for me to develop an entirely new CMS in Python, or use Wordpress?" then my answer is that you should almost certainly use Wordpress. If you're asking the question, you probably wouldn't be able to do better than the community of Wordpress developers at security - I know I couldn't.
0
186
true
0
1
Is python secure than wordpress?
22,038,195
1
1
0
0
0
1
1.2
0
Somehow VS2008 knows about the character encoding of a source file. I need this information for a tool I wrote in python that does some processing of legacy code (some sophisticated include path remappings etc.), where each file might have a different character encoding. And for processing each file I need to know the character encoding of the file. Where does Visual Studio 2008 store the information about a file's character encoding? Or does it infer this information automatically from the content?
0
python,visual-studio-2008,encoding
2014-02-26T11:32:00.000
0
22,040,054
In Options - Text Editor - General, there's a setting "Auto-detect UTF-8 encoding without signature". Maybe that's all there is to it.
0
32
true
0
1
Where does Visual Studio 2008 store the information about a file's character encoding?
22,040,482
1
2
0
3
2
1
0.291313
0
Is there a way to force import x to always reload x in Python (i.e., as if I had called reload(x), or imp.reload(x) for Python 3)? Or in general, is there some way to force some code to be run every time I run import x? I'm OK with monkey patching or hackery. I've tried moving the code into a separate module and deleting x from sys.modules in that separate file. I dabbled a bit with import hooks, but I didn't try too hard because according to the documentation, they are only called after the sys.modules cache is checked. I also tried monkeypatching sys.modules with a custom dict subclass, but whenever I do that, from module import submodule raises KeyError (I'm guessing sys.modules is not a real dictionary). Basically, I'm trying to write a debugging tool (which is why some hackery is OK here). My goal is simply that import x is shorter to type than import x;x.y.
0
python,metaprogramming,python-import,monkeypatching
2014-02-27T00:08:00.000
0
22,056,351
If you really want to change the semantics of the import statement, you will have to patch the interpreter. import checks whether the named module already is loaded and if so it does nothing more. You would have to change exactly that, and that is hard-wired in the interpreter. Maybe you can live with patching the Python sources to use myImport('modulename') instead of import modulename? That would make it possible within Python itself.
0
603
false
0
1
Python: force every import to reload
22,056,768
1
2
0
0
0
0
0
0
I have big hex data files from which I need to compare some hex values. When I read them with Python, the content is automatically read as ASCII text, so I have to decode it again. How can I directly read the file as hex? So far I have tried the IntelHex python package, but it is throwing an error: intelhex.HexRecordError: Hex files contain invalid record. So is there an issue with my files only? How much performance difference will it make if I successfully read the hex data without decoding?
0
python,performance,algorithm,file,hex
2014-02-27T11:15:00.000
0
22,066,818
Split the file into hex words consisting purely of [0-9a-fA-F] characters; then int(word, 16) will convert a word to a normal Python integer, and you can compare integers directly. Alternatively, you can keep the hex words and convert an integer to a hex string using '{0:x}'.format(someinteger) prior to comparing the hex strings.
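Both directions of that conversion can be sketched as follows; the hex_words helper and its sample input are illustrative, not from the original answer.

```python
import re

def hex_words(text):
    """Keep only whitespace-separated tokens made purely of hex characters."""
    return [w for w in text.split() if re.fullmatch(r"[0-9a-fA-F]+", w)]

words = hex_words("DEADBEEF 00ff not-hex 1A2b")
values = [int(w, 16) for w in words]          # hex string -> int
assert values == [0xDEADBEEF, 0xFF, 0x1A2B]

# And back again, for comparing as hex strings:
assert "{0:x}".format(255) == "ff"
```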
0
1,698
false
0
1
how to compare hex files in python
22,069,185
1
1
0
0
1
0
1.2
0
I need to monitor NAS file system disk space, whenever file-system disk space goes above from a threshold value, I am I deleting oldest files from file system to bring back file system disk space below to threshold value. I read several article which suggested me two alternatives: by creating a daemon process which will run in background by creating a script and run through crontab which would be a better way to run a file system monitoring service? I need to run the monitoring script every 60 sec.For both options I will use python. it will run on *nix(unix/linux) environment.
0
python,linux,filesystemwatcher,inotify
2014-02-28T18:32:00.000
1
22,103,096
Create a script (you wouldn't need python for this task, just df and find). This is pretty lightweight, needs less code than a daemon (much less maintenance in the long run), and running scripts once a minute by cron is not unheard of. :-)
0
154
true
0
1
which is better way to run a file system monitoring service?
22,106,444
1
2
0
1
3
1
0.099668
0
I'm running a python program that's a fairly intensive test of many possible scenarios using a big-O of n algorithm. It's just brute-forcing it by testing over a billion different possibilities using at least five nested loops. Anyway, I'm not concerned with how much time the program takes. It's fine to run in the background for long periods of time, it's just that I can't have it clogging up the CPU. Is there any way in Python (3.3) to devote less CPU to a program in exchange for giving it more time? Thanks in advance.
0
python,performance,cpu,cpu-usage
2014-03-01T01:57:00.000
0
22,109,120
First recommendation, and the simplest: lower the process priority to the absolute minimum (e.g. with nice). If the system is still not responsive, you could sprinkle in sleep() calls from the time module to surrender the CPU. Or buy a new computer with 4 cores and just let it run; I do this all the time and it works great. ADDED: Adding time.sleep() calls will leave a single-CPU system running "bursty". sleep(0) may be effective in an inner loop, as it will yield the CPU but get rescheduled quickly if nothing else wants it. OOPS, forgot to check: you are using Linux, where sleep(0) does nothing. You could call the native sched_yield() API; I don't think it is built into Python anywhere.
0
245
false
0
1
Is it possible to force your computer to devote less CPU in exchange for more time when running a python program?
22,109,186
1
1
0
1
0
0
1.2
1
I am trying to set the ethernet port pins directly to send High / Low signals to light up four LEDs. Is there any way I can simply connect LEDs to the ethernet cable? What would be the best approach, if using ethernet is not a good option, to light up four LED lights and switch them on/off using a PC. I am planning to create a command-line tool in Python. So far I have gone through a lot of articles about PySerial, LibUSB etc. and have been suggested to use USB to UART converter modules, or USB to RS-232 converters. Actually I don't want any interfaces except the PC port and the LEDs as far as possible. Please suggest!
0
python,usb,ethernet,libusb
2014-03-01T07:18:00.000
0
22,111,408
No, it is not possible. There is no sane way to affect the PHY in PC software.
0
250
true
0
1
Toggle pins of ethernet port to high / low state?
22,111,428
1
1
0
6
6
0
1.2
0
I have a stream of requests in my RabbitMQ cluster, and multiple consumers handling them. The thing is - each consumer must handle requests in batches for performance reasons. Specifically there is a network IO operation that I can amortize by batching requests. So, each consumer would like to maximize the number of requests that it can batch, but not add too much latency. I could potentially start a timer when a consumer receives the first request and keep collecting requests until one of the two things happen - timer expires, or 500 requests have been received. Is there a better way to achieve this - without blocking each consumer?
0
python,rabbitmq
2014-03-02T22:02:00.000
1
22,134,173
In general, the network aspect of "batching messages" is handled at the level of the basic.qos(prefetch-size, prefetch-count) parameters. In this scheme, the broker will send some number of bytes/messages (respectively) beyond the unacknowledged messages for a consumer, but the client library doles out messages, in process, one at a time to the application. To maximize the benefit, the application can withhold basic.ack() for each message and periodically issue basic.ack(delivery-tag=n, multiple=True) to acknowledge all messages with a delivery tag <= n.
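Independently of the AMQP details, the timer-or-N-messages policy from the question can be sketched in plain Python; collect_batch and its injectable clock are illustrative assumptions, not pika API.

```python
import time

def collect_batch(get_message, max_size=500, max_wait=1.0, clock=time.monotonic):
    """Gather messages until max_size is reached or max_wait seconds have
    passed since the first message arrived. get_message() returns a message,
    or None when nothing is ready right now."""
    batch = []
    deadline = None
    while True:
        msg = get_message()
        if msg is not None:
            batch.append(msg)
            if deadline is None:
                deadline = clock() + max_wait  # timer starts on first message
            if len(batch) >= max_size:
                return batch                   # size limit hit first
        if deadline is not None and clock() >= deadline:
            return batch                       # timer expired
```

A consumer loop would call this repeatedly and acknowledge each returned batch in one go (e.g. with multiple=True as described above).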
0
4,628
true
0
1
Batch message from a rabbitMQ queue
22,134,400
1
3
0
0
0
0
0
1
I have read other Stack Overflow threads on this, but those are older posts and I would like to get the latest update. Is it possible to send multiple commands over a single channel in Paramiko, or is it still not possible? If so, is there any other library which can do the same? Example scenario, automating Cisco router configuration: the user needs to first enter "config t" before entering the other commands. That is currently not possible in Paramiko. Thanks.
0
python,paramiko
2014-03-03T08:11:00.000
0
22,141,637
If you are planning to use the exec_command() method provided by the Paramiko API, you are limited to sending only a single command at a time; as soon as the command has been executed, the channel is closed. The excerpt below is from the Paramiko API docs for exec_command(self, command): Execute a command on the server. If the server allows it, the channel will then be directly connected to the stdin, stdout, and stderr of the command being executed. When the command finishes executing, the channel will be closed and can't be reused. You must open a new channel if you wish to execute another command. But since a transport is also a form of socket, you can send commands without using the exec_command() method, using barebones socket programming. In case you have a defined set of commands, both pexpect and Exscript can be used, where you read a set of commands from a file and send them across the channel.
0
4,222
false
0
1
Paramiko - python SSH - multiple command under a single channel
22,160,534
1
1
0
17
9
0
1
0
We are using Behave BDD tool for automating API's. Is there any tool which give code coverage using our behave cases? We tried using coverage module, it didn't work with Behave.
0
python,automated-tests,coverage.py,python-behave
2014-03-03T10:39:00.000
0
22,144,504
You can run any module with coverage to see the code usage. In your case it should be close to: coverage run --source='.' -m behave Tracking code coverage for acceptance/integration/behaviour tests will give a high coverage number easily, but can lead to the mistaken idea that the code is properly tested. Those tests are for seeing things working together, not for tracking how well the code is covered. Tying unit tests and coverage together makes more sense to me.
0
4,754
false
0
1
Test coverage tool for Behave test framework
23,836,778
1
1
0
5
4
1
1.2
0
I'm writing a .py file which will be regularly imported at the start of some of my IPython sessions in the first cells but will also be imported from other non-interactive sessions, since it contains functions that can be run in batch in non-interactive mode. It is basically a module containing many classes and functions that are very common. Since I'm using IPython with the --pylab=inline option, numpy as well as matplotlib functions are already imported, but when run in batch with a simple python mymodule.py the numpy functions have to be imported specifically. At the end I'd come up with double imports during the IPython session, a thing I don't like very much. What is the best practice in this case? Isn't importing modules twice a bad practice?
0
python,numpy,module,matplotlib,ipython
2014-03-03T10:51:00.000
0
22,144,748
Repeated imports aren't a problem. No matter how many times a module is imported in a program, Python will only run its code once and only make one copy of the module. All imports after the first will merely refer to the already-loaded module object. If you're coming from a C++ background, you can imagine the modules all having implicit include guards.
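The run-once behaviour can be demonstrated by generating a throwaway module whose top-level code bumps a counter; the module name once_demo and the counter attribute are made up for this demo.

```python
import importlib
import os
import sys
import tempfile

# Write a module whose body increments a counter every time it executes.
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "once_demo.py"), "w") as f:
    f.write("import builtins\n"
            "builtins._load_count = getattr(builtins, '_load_count', 0) + 1\n")
sys.path.insert(0, tmpdir)

import once_demo
import once_demo                      # cache hit: body does not run again
importlib.import_module("once_demo")  # also a cache hit

import builtins
assert builtins._load_count == 1      # top-level code executed exactly once
assert sys.modules["once_demo"] is once_demo
```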
0
619
true
0
1
Best practices when importing in IPython
22,144,980
1
1
0
1
1
0
1.2
1
I'm looking into the new paypal REST api. I want the ability to be able to pay another paypal account, transfer money from my acount to their acount. All the documentation I have seen so far is about charging users. Is paying someone with the REST api possible? Similar to the function of the mass pay api or adaptive payments api.
0
python,paypal
2014-03-03T23:32:00.000
0
22,159,824
At this moment, paying another user via API is not possible via REST APIs, so mass pay/Adaptive payments would be the current existing solution. It is likely that this ability will be part of REST in a future release.
0
105
true
0
1
Paypal REST Api for Paying another paypal account
22,611,605
1
2
0
0
1
0
0
0
I'm working for a university and they have their own libraries and paths for python libraries. Every time I start ipython, I need to run a shell script (e.g. /etc/university/env.sh) The problem is that emacs doesn't recognize the env.sh file. When I do py-shell, emacs always envokes Python WITHOUT any pre-set environment variables. Is there a way to make emacs run /etc/corporate/env.sh before starting python?
0
python,bash,shell,emacs,ipython
2014-03-04T14:56:00.000
1
22,175,349
After running your /etc/university/env.sh, start Emacs from this shell. Then the variables set before are known.
0
392
false
0
1
how to load shell environment variables when Emacs starts py-shell?
22,208,324
1
1
0
0
0
0
0
0
One of my python scripts has to be started as root but after some initialization changes its process ownership to something else by calling setuid/setgid. Works like a champ except for one thing: most of the files under /proc/pid are still owned by root and most importantly /proc/pid/io is owned by root, so I can't monitor that process's I/O stats. Might there be some additional calls I can make to change /proc ownership?
0
python-2.7
2014-03-04T15:28:00.000
1
22,176,118
That is by design. The process may still hold a capability (i.e. a handle to something external to the process) that would not be available to the process owner otherwise, so most debugging facilities are available to the root user only.
0
35
false
0
1
Is it possible to make ownership of /proc/pid/io follow changed process ownership?
22,176,917
2
2
0
0
0
0
0
0
I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button and picking whoever's next from the database and sending an email. This is open to a lot of abuse as you can just press the button 40 times and send 40 emails. My plan was to add the time the e-mail was sent to my postgres database and any time the button is pressed after, it checks to see if the last time the button was pressed was greater than a day. Is this the most efficient way to do this? (I realise an obvious answer would be to password protect the site so no outside users can access it and mess with the gas rota but unfortunately one of my housemates the type of gas-hole who'd do that)
1
python
2014-03-04T17:13:00.000
0
22,178,513
I asked about a soft button earlier. If your computer program is password/access protected, you could just store it all in a pickle/config file somewhere; I am unsure what the value of the SQL file is. Use last_push = time.time() and check the difference against the current push: if the difference in seconds is less than x, do not proceed; if it is bigger than x, reset last_push and proceed... or am I missing something?
0
76
false
1
1
Check time since last request
22,181,923
2
2
0
0
0
0
0
0
I've a bit of code that involves sending an e-mail out to my housemates when it's time to top up the gas meter. This is done by pressing a button and picking whoever's next from the database and sending an email. This is open to a lot of abuse as you can just press the button 40 times and send 40 emails. My plan was to add the time the e-mail was sent to my postgres database and any time the button is pressed after, it checks to see if the last time the button was pressed was greater than a day. Is this the most efficient way to do this? (I realise an obvious answer would be to password protect the site so no outside users can access it and mess with the gas rota but unfortunately one of my housemates the type of gas-hole who'd do that)
1
python
2014-03-04T17:13:00.000
0
22,178,513
If this is the easiest solution for you to implement, go right ahead. Worst case scenario, it's too slow to be practical and you'll need to find a better way. Any other scenario, it's good enough and you can forget about it. Honestly, it'll almost certainly be efficient enough to serve your purposes. The number of users at any one time will very rarely exceed one. An SQL query to determine if the timestamp is over a day before the current time will be quick, enough so that even the most determined gas-hole(!) wouldn't be able to cause any damage by spam-clicking the button. I would be very surprised if you ran into any problems.
0
76
false
1
1
Check time since last request
22,179,026
1
2
0
1
1
0
0.099668
1
I am writing an application that would asynchronously trigger some events. The test looks like this: set everything up, sleep for sometime, check that event has triggered. However because of that waiting the test takes quite a time to run - I'm waiting for about 10 seconds on every test. I feel that my tests are slow - there are other places where I can speed them up, but this sleeping seems the most obvious place to speed it up. What would be the correct way to eliminate that sleep? Is there some way to cheat datetime or something like that? The application is a tornado-based web app and async events are triggered with IOLoop, so I don't have a way to directly trigger it myself. Edit: more details. The test is a kind of integration test, where I am willing to mock the 3rd party code, but don't want to directly trigger my own code. The test is to verify that a certain message is sent using websocket and is processed correctly in the browser. Message is sent after a certain timeout which is started at the moment the client connects to the websocket handler. The timeout value is taken as a difference between datetime.now() at the moment of connection and a value in database. The value is artificially set to be datetime.now() - 5 seconds before using selenium to request the page. Since loading the page requires some time and could be a bit random on different machines I don't think reducing the 5 seconds time gap would be wise. Loading the page after timeout will produce a different result (no websocket message should be sent). So the problem is to somehow force tornado's IOLoop to send the message at any moment after the websocket is connected - if that happened in 0.5 seconds after setting the database value, 4.5 seconds left to wait and I want to try and eliminate that delay. Two obvious places to mock are IOLoop itself and datetime.now(). the question is now which one I should monkey-patch and how.
0
python,tdd,tornado
2014-03-04T18:57:00.000
0
22,180,528
If you want to mock sleep then you must not use it directly in your application's code. I would create a class method like System.sleep() and use this in the application. System.sleep() can then be mocked.
0
481
false
0
1
Mocking "sleep"
22,180,586
1
2
0
2
1
0
0.197375
0
I have a project that is already written in PHP, and now I am using Python on Google App Engine. Actually I want to use the APIs that Google supports for Python, for example datastore, blobstore... and also to save the time of rewriting the code again in Python! So, is it possible to run a PHP script from Python code?
0
php,google-app-engine,python-2.7
2014-03-04T20:05:00.000
1
22,181,860
Those runtimes (Py, PHP, Java. etc.) are isolated from each other and are tightly sandboxed. So when you deploy a Python app, for example, it doesn't have access to the PHP or Java runtime. So, it's not possible to run PHP inside a python sandbox, at least not in the appengine platform.
0
361
false
1
1
Using php inside python code ,google app engine
22,183,262
1
1
0
2
1
0
1.2
0
I have a raspberry pi that is setup to run different videos depending on the key press on a keyboard. If someone accidentally hits two keys at once, it causes the unit to temporarily freeze up. What is the best way and code to limit one key press of keys x,y,z for two seconds?
0
python,keyboard-shortcuts,raspberry-pi
2014-03-04T20:47:00.000
0
22,182,710
Just record the time when each keypress comes in, and store the last couple. If the time of the next keypress is shorter than your required threshold, just ignore it.
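That record-and-compare idea might look like the sketch below; the Debouncer class and the injectable clock are illustrative, not part of any keyboard library.

```python
import time

class Debouncer:
    """Accept an event only if at least `interval` seconds have passed
    since the last accepted event."""
    def __init__(self, interval=2.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self._last = None

    def accept(self):
        now = self.clock()
        if self._last is not None and now - self._last < self.interval:
            return False   # too soon after the last keypress: ignore it
        self._last = now
        return True
```

The keypress handler would simply return early whenever accept() is False, so near-simultaneous presses never start a second video.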
0
277
true
0
1
Python limit key input per second
22,183,462
1
3
0
25
19
1
1.2
0
Pylint generates this error for subclasses of an abstract class, even when those subclasses are not themselves instantiated and the methods are overridden in the concrete subclasses. Why does Pylint think my abstract subclasses are intended to be concrete? How can I shut up this warning without getting out the hammer and disabling it altogether in the rc file?
0
python,pylint
2014-03-05T01:43:00.000
0
22,186,843
For some reason pylint thinks the class isn't abstract (currently detection is done by checking for methods which raise NotImplementedError). Adding a comment like # pylint: disable=W0223 at the top of the module (for disabling only in this module) or class (only in this class) should do the trick.
0
10,479
true
0
1
Pylint W0223: Method ... is abstract in class ... but is not overridden
22,224,042
1
2
0
4
4
1
0.379949
0
I've recently declared emacs bankruptcy and in rebuilding my config switched from the old python-mode.el to the built-in python.el. One thing that's I'm missing is the old behaviour of auto-indenting to the correct level when hitting RET. Is there any way to re-enable this?
0
emacs,elisp,python-mode
2014-03-05T08:35:00.000
0
22,192,424
In upcoming Emacs 24.4 auto-indendation is enabled by default thanks to electric-indent-mode. Since Emacs 24.4 has been in feature-freeze for quite some time now, there should be no major breaking bugs left, so you could already make a switch.
0
1,623
false
0
1
Enable auto-indent in python-mode (python.el) in Emacs 24?
22,218,973
1
2
0
1
0
1
0.099668
0
I have a large integer array that I need to store in a file; what is the most efficient way so I can have quick retrieval speed? I'm not concerned with the efficiency of writing to disk, only reading. I am wondering if there is a good solution other than JSON and pickle?
0
python
2014-03-05T18:05:00.000
0
22,205,644
JSON/pickle are very low-efficiency solutions, as they require at best several memory copies to get your data in or out. Keep your data binary if you want the best efficiency. The pure Python approach would involve using struct.unpack; however, this is a little kludgy as you still need a memory copy. Even better is something like numpy.memmap, which directly maps your file to a numpy array. Very fast, very memory efficient. Problem solved. You can also write your file using the same approach.
0
315
false
0
1
Python - storing integer array to disk for efficient retrieval
22,205,974
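If numpy is not available, the standard-library array module gives the same "keep it binary" round-trip in pure Python. This is a sketch of the struct-style approach the answer mentions, not of numpy.memmap itself:

```python
import array

def save_ints(values, path):
    """Write integers to disk as raw signed 64-bit binary (no text parsing needed on read)."""
    a = array.array('q', values)      # 'q' = signed 64-bit int
    with open(path, 'wb') as f:
        a.tofile(f)

def load_ints(path):
    """Read the raw binary back into an array in one bulk operation."""
    a = array.array('q')
    with open(path, 'rb') as f:
        f.seek(0, 2)                  # seek to end to find the file size
        n = f.tell() // a.itemsize
        f.seek(0)
        a.fromfile(f, n)
    return a
```

Reading is a single bulk fromfile call, so there is no per-element parsing overhead as there would be with JSON.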
1
2
0
0
1
0
0
0
Inside a web application ( Pyramid ) I create certain objects on POST which need some work done on them ( mainly fetching something from the web ). These objects are persisted to a PostgreSQL database with the help of SQLAlchemy. Since these tasks can take a while, the work is not done inside the request handler but rather offloaded to a daemon process on a different host. When the object is created I take its ID ( which is a client-side generated UUID ) and send it via ZeroMQ to the daemon process. The daemon receives the ID, fetches the object from the database, does its work and writes the result to the database. Problem: The daemon can receive the ID before the transaction that created the object is committed. Since we are using pyramid_tm, all database transactions are committed when the request handler returns without an error, and I would rather like to leave it this way. On my dev system everything runs on the same box, so ZeroMQ is lightning fast. On the production system this is most likely not an issue since the web application and daemon run on different hosts, but I don't want to count on this. This problem only recently manifested itself since we previously used MongoDB with a write_concern of 2. Having only two database servers, the write on the entity always blocked the web request until the entity was persisted ( which obviously is not the greatest idea ). Has anyone run into a similar problem? How did you solve it? I see multiple possible solutions, but most of them don't satisfy me: Flushing the transaction manually before triggering the ZMQ message. However, I currently use SQLAlchemy's after_created event to trigger it, and this is really nice since it decouples this process completely and thus eliminates the risk of "forgetting" to tell the daemon to work. I also think that I would still need a READ UNCOMMITTED isolation level on the daemon side; is this correct?
Adding a timestamp to the ZMQ message, causing the worker thread that received the message to wait before processing the object. This obviously limits the throughput. Ditch ZMQ completely and simply poll the database. Noooo!
1
python,postgresql,sqlalchemy,zeromq
2014-03-07T08:48:00.000
0
22,245,407
This comes close to your second solution: Create a buffer, drop the ids from your ZeroMQ messages in there, and let your worker poll this id-pool regularly. If it fails to retrieve an object for an id from the database, let the id sit in the pool until the next poll; otherwise remove the id from the pool. You have to deal somehow with the asynchronous behaviour of your system. When the ids constantly arrive before the objects are persisted in the database, it doesn't matter whether pooling the ids (and re-polling the same id) reduces throughput, because the bottleneck is earlier. An upside is, you could run multiple frontends in front of this.
0
2,062
false
1
1
ZeroMQ is too fast for database transaction
22,247,025
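A rough sketch of such an id-pool (fetch_object is a hypothetical stand-in for the real SQLAlchemy lookup; it should return None while the creating transaction is still uncommitted):

```python
class IdPool:
    """Buffer of object ids whose database rows may not be visible yet."""

    def __init__(self, fetch_object):
        self.fetch_object = fetch_object  # callable: id -> object or None
        self.pending = set()

    def add(self, obj_id):
        """Called when an id arrives over ZeroMQ."""
        self.pending.add(obj_id)

    def poll(self):
        """Try every pending id; return the objects whose rows are now visible."""
        ready = []
        for obj_id in list(self.pending):
            obj = self.fetch_object(obj_id)
            if obj is None:
                continue               # transaction not committed yet; retry next poll
            ready.append(obj)
            self.pending.discard(obj_id)
        return ready
```

The worker would call poll() on a timer and process whatever it returns; an id that arrived "too fast" simply stays pending for one more cycle.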
1
1
0
4
2
0
0.664037
0
In an existing application we are using Mako templates (unfortunately..). That works ok for HTML output since newlines do not matter. However, we now need to generate a text/plain email using a template - so any newlines introduced by control statements are not acceptable. Does Mako provide any options to make statement lines (i.e. those starting with %) not cause a newline in the output? I checked the docs but couldn't find anything so far...
0
python,template-engine,mako,plaintext
2014-03-07T15:49:00.000
0
22,254,612
If you add a backslash at the end of the line like this: "<% %>\", you can suppress the newline.
0
855
false
1
1
Is there a way to use Mako templates for plain-text files where newlines matter?
27,262,018
1
5
0
0
2
0
0
0
I've created a simple Python script and therefore have a .py file. I can run it from the terminal, but if I double-click it, it only opens up in gedit. I've read this question in other places and tried the solutions; however, none have worked. I'm running Ubuntu 13.04, and I've selected the box to make the file executable. I've even installed a fresh instance of Ubuntu 13.10 on another computer and it does the same thing. What might I be missing here?
0
python,ubuntu
2014-03-07T22:39:00.000
1
22,262,073
In my case it works after including as the first line: #!/home/yourusername/anaconda3/bin/python You can check the appropriate path by running which python in your console. It is also necessary to change the file manager settings and configure it to run your scripts.
0
15,466
false
0
1
execute python script from linux desktop
65,688,297
1
1
0
1
1
0
0.197375
0
I have a Python application that I launch from a form using mod_wsgi. I would like to display in real time the output of the script, while it is running, to a web page. Does anybody know how I can do this?
0
python,web,mod-wsgi
2014-03-08T11:53:00.000
0
22,268,968
You'll need JavaScript to do this.

Possibility 1, data generated by the server:
- Make a static HTML page with an empty div.
- Place a piece of JavaScript code onto it that is run after the page is loaded.
- The JavaScript will contain a timer that downloads the output of your script, say every 5 seconds, using AJAX, and sets your div's HTML to the result.
- The easiest way to get this working is probably to use the AJAX facilities in jQuery.

Possibility 2, data generated by the client:
- If it is possible to have your dynamic output generated on the client by a piece of JavaScript code, this will scale better (since it takes the burden off the server).
- You may still load the input data needed to compute the dynamic output, formatted as JSON, by means of AJAX.
0
406
false
1
1
Real time output of script on web page using mod_wsgi
22,933,666
1
1
0
4
0
1
1.2
0
How to suppress __init__.py from showing in the project tree [Eclipse PyDev]?
0
python,eclipse-plugin,pydev
2014-03-09T17:13:00.000
0
22,285,294
You can open the dropdown menu on the toolbar (Ctrl + F10) and choose Setup Custom Filters. Here you should be able to add a custom filter for __init__.py files.
0
417
true
0
1
Suppress __init__.py in Eclipse PyDev
22,285,769
1
2
0
0
1
0
0
1
This is the second time today this has happened.. I tried to import requests earlier and I got an Import Error: no module named requests Same thing for serial I googled the crap out of this and nothing I've found works. Any ideas as to what's going on? I'm trying to use pyserial to take input from an arduino
0
python,serial-port,pyserial
2014-03-10T09:05:00.000
0
22,295,895
Are you looking for urllib.request? If you are using Python 2.7, when you ask for requests you import urllib and you don't actually use request, but its methods are available on the urllib handle. So, for instance, urllib.urlopen("http://google.com") will work in Python 2.7.x, whereas urllib.request.urlopen("http://google.com") will work in Python 3.x.
0
2,633
false
0
1
Python no module named serial / no module named requests
24,922,412
1
2
0
0
0
0
0
0
I recently created a python app on openshift. I found a file called app.py.disabled when I git cloned the repo. Can anyone explain what it does?
0
python-2.7,openshift,paas
2014-03-11T07:43:00.000
1
22,319,312
I created a new gear with each cartridge type [python-2.6, python-2.7, python-3.3] and when the code was cloned to my workstation, none of them contained an app.py.disabled file. Can you give more information about how you created the application? Did you use a specific quickstart or url?
0
196
false
0
1
The use of app.py.disabled on openshift
22,352,836
1
1
0
1
2
0
0.197375
0
I've been trying to integrate C code into Python under Linux and I came up with the following problem: is it possible to share an already opened file between C and Python? I mean a C FILE and a Python file object. The C function which I'm struggling with is called exhaustively, so I'd like to avoid opening/closing the file each time this happens and instead pass the opened file from Python to C. I'm open to any efficient solution.
0
python,c,file
2014-03-12T14:39:00.000
1
22,354,902
It should be possible. In C, you can get the file descriptor with fileno(fh) and open it in Python with os.fdopen(fd). Make sure you remember to close it -- I doubt that the Python file object going out of scope would accomplish this.
0
258
false
0
1
Share file stream between Python and C
22,355,036
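A small sketch of the Python half of that bridge. Here tempfile.mkstemp stands in for the raw descriptor that would really come from the C side via fileno(fh); the os.fdopen call is the same either way.

```python
import os
import tempfile

# mkstemp returns a raw OS-level file descriptor, standing in for the
# integer a C extension would hand over via fileno(fh).
fd, path = tempfile.mkstemp()

# Wrap the existing descriptor in a Python file object without reopening the file.
f = os.fdopen(fd, 'r+')
f.write('shared stream\n')
f.flush()
f.seek(0)
line = f.readline()

# Closing the Python file object closes the underlying descriptor as well,
# which is the "remember to close it" caveat from the answer.
f.close()
os.remove(path)
```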
2
2
0
2
0
0
1.2
1
I'd like to make my own crypto currency. I don't want to just recompile the Bitcoin source code and then rename it. I'd like to do it from scratch just to learn more about it. I'm thinking of using Python as the language for the implementation but I heard that in terms of performance Python isn't the best. My question is, would a network written in Python be able to perform well under the possibility of millions of peers? (I know it's not going to happen, but I'd like to make my network scalable.)
0
python,networking,p2p,bitcoin,peer
2014-03-12T18:02:00.000
0
22,360,093
Depends which part is in Python. The network is, by definition, I/O bound. It's unlikely that using Python rather than C/C++/etc. will cause a noticeable performance drop for the client itself. Your choice of cryptographic algorithm will also have a large impact on performance (how quick it is to verify transactions, etc.). Now, as for 'mining' the currency, it would be silly to do that with Python since that's very much a CPU-bound task. In fact, using a GPU which allows for massive parallelism on trivially parallel problems is a much better idea (CUDA or OpenCL work great here).
0
744
true
0
1
Python Peer to Peer Network
22,360,849
2
2
0
2
0
0
0.197375
1
I'd like to make my own crypto currency. I don't want to just recompile the Bitcoin source code and then rename it. I'd like to do it from scratch just to learn more about it. I'm thinking of using Python as the language for the implementation but I heard that in terms of performance Python isn't the best. My question is, would a network written in Python be able to perform well under the possibility of millions of peers? (I know it's not going to happen, but I'd like to make my network scalable.)
0
python,networking,p2p,bitcoin,peer
2014-03-12T18:02:00.000
0
22,360,093
Nothing beats good ol' C for performance. However, if you plan on parallelising everything for multi-CPU support I would give Haskell a try. It is inherently parallel, so you won't have to put in extra effort for optimizations. You can also do something similar in C with OpenMP and Cilk using pragmas. Good Luck!
0
744
false
0
1
Python Peer to Peer Network
22,360,906
2
3
0
0
4
1
0
0
I have an improperly packaged Python module. It only has __init__.py file in the directory. This file defines the class that I wish to use. What would be the best way to use it as a module in my script? ** EDIT ** There was a period (.) in the the name of the folder. So the methods suggested here were not working. It works fine if I rename the folder to have a valid name.
0
python
2014-03-12T19:56:00.000
0
22,362,449
Put __init__.py into directory my_module, make sure my_module is in sys.path, and then from my_module import WHAT_EVER_YOU_WANT
0
87
false
0
1
Import a module in Python
22,362,518
2
3
0
0
4
1
0
0
I have an improperly packaged Python module. It only has __init__.py file in the directory. This file defines the class that I wish to use. What would be the best way to use it as a module in my script? ** EDIT ** There was a period (.) in the the name of the folder. So the methods suggested here were not working. It works fine if I rename the folder to have a valid name.
0
python
2014-03-12T19:56:00.000
0
22,362,449
It's not necessarily improperly packaged. You should be able to do from package import X just like you would normally. __init__.py files are modules just like any other .py file, they just have some special semantics as to how they are evaluated in addition to the normal usage, and are basically aliased to the package name.
0
87
false
0
1
Import a module in Python
22,362,523
1
1
0
0
0
0
0
0
Can I serve PHP and python on a single project in app engine? for example /php/* will run php code, but the root / will run python code.
0
php,python,google-app-engine
2014-03-12T22:12:00.000
1
22,365,018
You can (and many do) use a front-end like nginx or Apache that handles and forwards different paths differently. I do not see why you would want your application engine to be "bilingual" though.
0
63
false
1
1
can i serve PHP and python on a single project in app engine?
22,365,130
1
3
0
2
6
1
0.132549
0
I am new to dynamic languages in general, and I have discovered that languages like Python prefer simple data structures, like dictionaries, for sending data between parts of a system (across functions, modules, etc). In the C# world, when two parts of a system communicate, the developer defines a class (possibly one that implements an interface) that contains properties (like a Person class with a Name, Birth date, etc) where the sender in the system instantiates the class and assigns values to the properties. The receiver then accesses these properties. The class is called a DTO and it is "well-defined" and explicit. If I remove a property from the DTO's class, the compiler will instantly warn me of all parts of the code that use that DTO and are attempting to access what is now a non-existent property. I know exactly what has broken in my codebase. In Python, functions that produce data (senders) create implicit DTOs by building up dictionaries and returning them. Coming from a compiled world, this scares me. I immediately think of the scenario of a large code base where a function producing a dictionary has the name of a key changed (or a key is removed altogether) and boom: tons of potential KeyErrors begin to crop up as pieces of the code base that work with that dictionary and expect a key are no longer able to access the data they were expecting. Without unit testing, the developer would have no reliable way of knowing where these errors will appear. Maybe I misunderstand altogether. Are dictionaries a best-practice tool for passing data around? If so, how do developers solve this kind of problem? How are implicit data structures and the functions that use them maintained? How do I become less afraid of what seems like a huge amount of uncertainty?
0
python
2014-03-13T06:13:00.000
0
22,370,488
When using Python for a large project, automated testing is a must, because otherwise you would never dare to do any serious refactoring and the code base will rot in no time, as all your changes will always try to touch nothing, leading to bad solutions (simply because you'd be too scared to implement the correct solution instead). Indeed the above is true even with C++ or, as often happens with large projects, with mixed-language solutions. No more than a few HOURS ago, for example, I had to make a branch for a four-line bugfix (one of the lines is a single brace) for a specific customer, because the trunk evolved too much from the version he has in production, and the guy in charge of the release process told me his use cases have not yet been covered with manual testing in the current version and therefore I cannot upgrade his installation. The compiler can tell you something, but it cannot provide any confidence in stability after a refactoring if the software is complex. The idea that if some piece of code compiles then it's correct is bogus (possibly with the exception of hello_world.cpp). That said, you normally don't use dictionaries in Python for everything, unless you really care about the dynamic aspect (but in this case the code doesn't access the dictionary with a literal key). If your Python code has a lot of d["foo"] instead of d[k] when using dicts, then I'd say there is a smell of a design problem.
0
1,529
false
0
1
How does Python provide a maintainable way to pass data structures around in a system?
22,371,140
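One standard-library middle ground, which is my addition rather than part of the answer above, is to replace ad-hoc literal-key dicts with a named structure, so that a removed or renamed field fails loudly at every access site instead of producing scattered KeyErrors:

```python
from collections import namedtuple

# A lightweight "DTO": the field names are declared exactly once, and any
# attempt to read a removed/renamed field raises AttributeError immediately.
Person = namedtuple('Person', ['name', 'birth_date'])

def make_person():
    # The sender constructs an explicit structure instead of a bare dict.
    return Person(name='Mister X', birth_date='1970-01-01')

p = make_person()
```

Renaming a field in the namedtuple definition then makes every stale p.old_name access fail with a clear AttributeError, which a test suite will catch quickly.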
2
3
1
0
0
0
0
0
I have a third party DLL (no header file) written in C++ and I am able to get the function prototype information from the developer, but it is proprietary and he will not provide the source. I've gone through the SWIG tutorial but I could not find anywhere specifying how to use SWIG to access any functions with only the DLL file. Everything on the tutorial shows that I need to have the header so SWIG knows what the function prototypes look like. Is SWIG the right route to use in this case? I am trying to load this DLL in Python so I can utilize a function. From all of my research, it looks like Python's ctypes does not work with C++ DLL files and I am trying to find the best route to follow to do this. Boost.python seems to require changing the underlying C++ code to make it work with Python. To sum up, is there a way to use SWIG when I know the function prototype but do not have the header file or source code?
0
python,c++,dll,swig
2014-03-13T12:25:00.000
0
22,378,590
To use a library (static or dynamic), you need the headers and the library file (.a, .lib, ...). That's true for C++, and I think it's the same for Python.
0
1,508
false
0
1
SWIG C++ Precompiled DLL
22,378,831
2
3
1
0
0
0
0
0
I have a third party DLL (no header file) written in C++ and I am able to get the function prototype information from the developer, but it is proprietary and he will not provide the source. I've gone through the SWIG tutorial but I could not find anywhere specifying how to use SWIG to access any functions with only the DLL file. Everything on the tutorial shows that I need to have the header so SWIG knows what the function prototypes look like. Is SWIG the right route to use in this case? I am trying to load this DLL in Python so I can utilize a function. From all of my research, it looks like Python's ctypes does not work with C++ DLL files and I am trying to find the best route to follow to do this. Boost.python seems to require changing the underlying C++ code to make it work with Python. To sum up, is there a way to use SWIG when I know the function prototype but do not have the header file or source code?
0
python,c++,dll,swig
2014-03-13T12:25:00.000
0
22,378,590
SWIG cannot be used without the header files. Your only option is a lib like ctypes. If you find ctypes doesn't do it for you and you can't find alternative then post a question with why ctypes not useable in your case.
0
1,508
false
0
1
SWIG C++ Precompiled DLL
22,389,172
1
1
0
2
5
1
0.379949
0
In the operator module we have a helper function, operator.abs. But in Python abs is already a built-in function, and the only way I know to invoke the __abs__ method on an object is by a function call anyway. Is there some other fancy way I don't know about to take the absolute value of a number? If not, why does operator.abs exist in the first place, and what is an example where you would have to use it instead of plain old abs?
0
python,operators,absolute-value
2014-03-13T18:03:00.000
0
22,387,166
abs(...)     abs(a) -- Same as abs(a). No, I don't think there's some other fancy way you don't know about, or really any reason for this to be here except for consistency (since operator has functions for the other __operator__s).
0
723
false
0
1
How/why do we use operator.abs
22,387,237
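A short illustration of the consistency point: operator.abs behaves exactly like the builtin and is usable anywhere a callable is expected, just like the other operator functions; any place it works, plain abs would work too.

```python
import operator

# operator.abs(x) is simply abs(x)...
assert operator.abs(-7) == 7

# ...and like the rest of the operator module it is a first-class callable,
# usable with map() or as a sort key (exactly as abs itself would be).
magnitudes = list(map(operator.abs, [-3, 1, -2]))
closest_to_zero = sorted([-3, 1, -2], key=operator.abs)
```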
1
1
0
2
2
0
1.2
0
These are the actions that I have taken, and I still can't seem to run the script through PHP on my Windows 8.1 platform. Please note that I have installed Python and have tested it; I am running Python 2.7.

(1) Step 1: Edited the Apache httpd.conf to run CGI .py scripts:
AddHandler cgi-script .cgi .pl .asp .py

(2) Step 2: Added a test script in the cgi-bin folder of XAMPP. It is called
#!/Python27/python
print "Content-type: text/plain\n\n"
print "Hello world"

(3) Step 3: Created a PHP file called Python.php:
$python = exec('python C:\xampp\cgi-bin\BridgePHP.py');
echo "Python is printing: " . $python;

(4) I get the following output in my browser:
Python is printing:

(5) I tested by running it from the standard command prompt, and the words "Hello World" were printed.

I am unsure why the PHP script is unable to run the Python script. Are there any other settings that I should change?
0
php,python-2.7,xampp,windows-8.1
2014-03-14T02:46:00.000
1
22,395,199
I have finally figured this out: I have to restart the server whenever I add a new CGI script to the server. This worked; in this case, for BridgePHP.py.
0
3,357
true
0
1
Running Python in PHP as a script in XAMPP (WINDOWS 8.1)
22,395,764
1
1
0
0
0
1
1.2
0
I have a project where I have to import a bunch of modules every time I want to run some code. In order to avoid writing these out every time I create a new file, I created a sort of setup script that imports all of these, and then I just import * from that startup script. Is this stylistically OK? I can see why this would be confusing, when I reference modules/classes that can't be seen on import... on the other hand, it saves time and space by not writing out the setup each time. Which method should I use?
0
python,module,packages,python-module
2014-03-14T16:40:00.000
0
22,410,923
For readability it would be much, much better to import each module in every file that uses it. As you said, otherwise those names seem to come from nowhere. I could share some examples if you would like. Cheers!
0
35
true
0
1
Should I create a file with all my module imports and then import * from that, or should I import the modules to each file I use?
22,411,037
1
1
0
0
0
1
1.2
0
For some reason, every time I upload my package to PyPI, it includes a tests.py file that: Isn't tracked in git Isn't even in the project directory anymore Isn't added to the /dist tarball after running sdist Isn't in the tarball hosted on Github Isn't in MANIFEST Where is it picking this file up from? It was part of my package during development but I've done everything I can to remove it before publishing - and it still shows up.
0
python,git,pypi
2014-03-15T21:44:00.000
0
22,430,150
Could not figure out the issue with distutils. Switched to setuptools and everything is working correctly, no tests.py being added to the tarball.
0
56
true
0
1
PyPI tarball includes files that aren't in the dist package
22,455,499
1
1
0
1
0
0
1.2
0
I'm working on a Raspberry Pi project at the moment and I'm searching for a way to play a note for as long as a button (connected via GPIO) is pressed. I use pyFluidsynth and got it working, but it's not holding a note as long as I press the button; it repeats the note really fast, but not fast enough that you can't hear the gaps. Is there any control I don't know? I'm just using noteon and noteoff; is there maybe something like "notehold"? thanks!
0
python,midi,synthesizer,fluidsynth
2014-03-16T10:34:00.000
0
22,435,753
In MIDI if you send a note on message it stays on until you send a note off. Maybe you are sending a note on every time you check the state of the button? If so, you shouldn't, send the note on/note off only when the button state changes.
0
317
true
1
1
How to play a note as long as I hit the key (Fluidsynth)?
22,449,199
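A sketch of the edge-triggered logic the answer describes: send noteon/noteoff only when the button state changes, never on every poll. Here synth stands in for the pyFluidsynth object, and noteon/noteoff are the calls the question already uses.

```python
def run_loop(button_states, synth, key=60, channel=0, velocity=100):
    """button_states: iterable of pressed/released booleans, one per poll cycle."""
    pressed = False
    for is_down in button_states:
        if is_down and not pressed:
            synth.noteon(channel, key, velocity)   # press edge: start the note once
        elif not is_down and pressed:
            synth.noteoff(channel, key)            # release edge: stop the note
        pressed = is_down                          # remember state for the next poll
```

Holding the button for many poll cycles therefore produces exactly one noteon and one noteoff, so the synth sustains the note instead of retriggering it.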
1
1
0
1
1
1
0.197375
0
I have a Python CGI script that takes several query strings as arguments. The query strings are generated by another script, so there is little possibility of getting illegal arguments, unless some "naughty" user changes them intentionally. An illegal argument may throw an exception (e.g. the int() function on non-numerical input), but does it make sense to write code to catch such rare errors? Is there any security risk or performance penalty if they are not caught? I know the page may go ugly if exceptions are not nicely handled, but a naughty user deserves it, right?
0
python,exception,cgi
2014-03-16T15:27:00.000
0
22,438,807
Any unhandled exception causes the program to terminate. That means that if your program is doing something and an exception occurs, it will shut down in an unclean fashion without releasing resources. Anyway, CGI is obsolete; use Django, Flask, Web2py or something similar.
0
54
false
0
1
What is the risk of not catching exceptions in a CGI script
22,439,025
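Even if you stay on CGI, a single top-level handler is a cheap way to turn any unexpected exception into a clean response instead of an unclean termination. The handler and parameter names here are made up for illustration:

```python
import sys
import traceback

def handle_request(params):
    """Hypothetical request handler; int() raises on 'naughty' input."""
    count = int(params.get('count', '0'))
    return "Content-type: text/plain\n\nYou asked for %d items" % count

def safe_handle(params):
    """Wrap the real handler so one bad argument can't crash the script."""
    try:
        return handle_request(params)
    except Exception:
        # Log the full traceback server-side; show the user a plain message.
        traceback.print_exc(file=sys.stderr)
        return "Content-type: text/plain\n\nInvalid request"
```

Cleanup of resources (open files, DB connections) belongs in the except/finally path of this wrapper, which addresses the "unclean shutdown" concern without cluttering the handler itself.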
3
4
0
11
6
0
1.2
0
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'. What are the pros and cons of taking this approach? Is this a standard practice? I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
0
python,django,cloud,wsgi
2014-03-19T03:54:00.000
0
22,495,894
Deploying .pyc files will not always work. If using Apache/mod_wsgi, for example, at least the WSGI script file still needs to be straight Python code. Some web frameworks may also require the original source code files to be available. Using .pyc files also does little to obscure any sensitive information that may be in templates used by a web framework. In general, using .pyc files is a very weak defence, and tools are available to reverse engineer them to extract information from them. So technically your application may run, but it would not be regarded as a very secure way of protecting your source code. You are better off using a hosting service you trust. This generally means paying for reputable hosting rather than just the cheapest one you can find.
0
6,259
true
1
1
Should I deploy only the .pyc files on server if I worry about code security?
22,497,827
3
4
0
1
6
0
0.049958
0
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'. What are the pros and cons of taking this approach? Is this a standard practice? I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
0
python,django,cloud,wsgi
2014-03-19T03:54:00.000
0
22,495,894
Generally, deploying PYC files will work fine. The pros, as you said: it helps a bit with protecting the source code. As for the cons, here are the points I found:
1) PYC files only work with the same Python version. E.g., if "a.pyc" was compiled by Python 2.6 and "b.pyc" by 2.7, and b.pyc does "import a", it won't work. Similarly, "python2.6 b.pyc" won't work either. So do remember to generate all the PYC files with the same Python version as the one on your cloud server.
2) If you want to SSH to the cloud server for some live debugging, PYC files cannot help you.
3) The deployment work requires extra steps.
0
6,259
false
1
1
Should I deploy only the .pyc files on server if I worry about code security?
22,496,250
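Point 1) can be demonstrated directly on Python 3: every .pyc starts with a magic number tied to the interpreter version, which is exactly why files compiled by different versions don't mix. A quick way to see it:

```python
import importlib.util
import os
import py_compile
import tempfile

# Write a tiny module and byte-compile it explicitly.
src = os.path.join(tempfile.mkdtemp(), 'demo.py')
with open(src, 'w') as f:
    f.write('ANSWER = 42\n')
pyc = src + 'c'
py_compile.compile(src, cfile=pyc)

# The first four bytes of any .pyc are the version-specific magic number.
with open(pyc, 'rb') as f:
    magic = f.read(4)

matches = (magic == importlib.util.MAGIC_NUMBER)
```

A .pyc produced by a different interpreter version carries a different magic number, and the importer refuses to use it, falling back to recompiling from source (which is unavailable if you only shipped .pyc files).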
3
4
0
1
6
0
0.049958
0
I want to deploy a Django application to a cloud computing environment, but I am worried about source code security. Can I deploy only the compiled .pyc files there? According to official python doc, pyc files are 'moderately hard to reverse engineer'. What are the pros and cons of taking this approach? Is this a standard practice? I am not using AWS, let me just say that I am in a country where cloud computing can not be trusted at all...
0
python,django,cloud,wsgi
2014-03-19T03:54:00.000
0
22,495,894
Yes, just deploying the compiled files is fine. Another point to consider is the other aspects of your application. One aspect could be whether current bugs let malicious users know what technology stack you are using, or what type of error messages are displayed when (if) your application crashes. To me, these seem like some of the other aspects; I'm sure there are more.
0
6,259
false
1
1
Should I deploy only the .pyc files on server if I worry about code security?
22,495,975
1
5
0
0
4
1
0
0
I'm learning Python and came into a situation where I need to change the behaviour of a function. I'm originally a Java programmer, so in the Java world a change in a function would let Eclipse show that a lot of Java source files have errors. That way I can know which files need to get modified. But how would one do such a thing in Python, considering there are no types?! I'm using TextMate2 for Python coding. Currently I'm doing it the brute-force way: opening every Python script file, checking where I'm using that function, and then modifying it. But I'm sure this is not the way to deal with large projects!!! Edit: as an example, I define a class called Graph in a Python script file. Graph has two object variables. I created many objects (each with a different name!!!) of this class in many script files and then decided that I want to change the name of the object variables! Now I'm going through each file and reading my code again in order to change the names again :(. PLEASE help! Example: File A has objects x, y, z of class C. File B has objects xx, yy, zz of class C. Class C has two instance variables whose names should be changed, Foo to Poo and Foo1 to Poo1. Also consider many files like A and B. What would you do to solve this? Are you seriously going to open each file and search for x, y, z, xx, yy, zz and then change the names individually?!!!
0
python,migration,dynamic-typing
2014-03-19T07:18:00.000
0
22,498,877
One of the tradeoffs between statically and dynamically typed languages is that the latter require less scaffolding in the form of type declarations, but also provide less help with refactoring tools and compile-time error detection. Some Python IDEs do offer a certain level of type inference and help with refactoring, but even the best of them will not be able to match the tools developed for statically typed languages. Dynamic language programmers typically ensure correctness while refactoring in one or more of the following ways:
- Use grep to look for function invocation sites, and fix them. (You would have to do that in languages like Java as well if you wanted to handle reflection.)
- Start the application and see what goes wrong.
- Write unit tests, if you don't already have them, use a coverage tool to make sure that they cover your whole program, and run the test suite after each change to check that everything still works.
0
696
false
0
1
Tracking changes in python source files?
22,498,955
1
1
0
1
0
1
0.197375
0
This is not a language-comparison question like those asked on various forums. I am interested in the more specific topic of how core libraries/modules are called/executed in Python. I checked a Python module installation directory, /usr/lib/python2.7 (on Ubuntu), and found .py (source code) and .pyc (byte code) files. I am assuming the Python interpreter/compiler loads the .pyc file when we use an import statement, or more specifically call a class/function from that module. PHP, meanwhile, uses .so (shared object) files for libraries, as seen in /usr/lib/php5/20090626. Yes, Python also has a directory, /usr/lib/pyshared/python2.7, for .so files. But still, a lot of important libraries are stored as .pyc files. Would it not be a good idea, for performance benefits, to use only the .so extension for core libraries, as PHP does?
0
php,python,python-2.7,shared-libraries
2014-03-19T09:21:00.000
0
22,501,155
.py files are compiled on the fly to .pyc files; the .pyc is used if it is more recent than the .py file. Some modules can be written in C/C++, and those are delivered as .so files.
0
92
false
0
1
PHP compared to Python from the perspective of core libraries
22,503,228
1
5
0
0
5
0
0
0
I'm building a Joomla 3 web site but I have the need to customize quite a few pages. I know I can use PHP with Joomla, but is it also possible to use Python with it? Specifically, I'm looking to use CherryPy to write some custom pieces of code but I want them to be displayed in native Joomla pages (not just iFrames). Is this possible?
0
php,python,joomla,cherrypy,joomla3.0
2014-03-19T16:32:00.000
0
22,512,321
Before considering Python: (1) What do you want to customize? (Perhaps some clever JavaScript or a Joomla extension already exists.) (2) Is the Joomla way not a better solution for your problem, given the fact that you're using Joomla? (Change the template, or the view templates of the modules and components in particular.) In other words: do you understand Joomla well enough to know that you need something else? If Python is still the way to go: (1) Does your hosting support Python? (2) Should you reconsider your choice of CMS? I like your choice of CherryPy.
0
5,932
false
1
1
Can you display python web code in Joomla?
22,783,076
2
2
0
1
0
1
1.2
0
I am trying to get the address of the sender in mbox-formatted mails in Python. When I get the line that contains the sender, it looks like From: Mister X <misterx@domain>. I am able to retrieve the mail address with, for example, re.findall('<[a-zA-Z0-9\.]+@[a-zA-Z0-9\.]+>', str). I think that should be fine, since email addresses, as far as I know, cannot contain any other characters. What I do not understand is why the expression <*@*>, which I expected to match any characters in the email address, does not work at all. In fact, re.findall('<*@*>', 'From: Mister X <misterx@domain>') returns ['>'].
0
python,regex,email
2014-03-20T10:37:00.000
0
22,530,287
<* means: "the character < zero or more times". You are looking for <.*@.*>
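The difference between the two patterns can be seen directly in the interpreter, a minimal sketch using the asker's example line:

```python
# Demonstration of why quantifier placement matters (re is stdlib).
import re

line = 'From: Mister X <misterx@domain>'

# '<*' means "zero or more '<' characters", so '<*@*>' can match a bare '>'.
print(re.findall('<*@*>', line))        # ['>']

# '.' matches (almost) any character; '.*' is what "anything" needs to be.
print(re.findall('<.*@.*>', line))      # ['<misterx@domain>']

# With several addresses on one line, exclude '>' so the greedy '.*'
# cannot swallow everything between the first '<' and the last '>'.
print(re.findall('<[^>]*@[^>]*>', '<a@b> and <c@d>'))  # ['<a@b>', '<c@d>']
```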
0
134
true
0
1
Matching address in mbox format in python
22,530,391
2
2
0
0
0
1
0
0
I am trying to get the address of the sender in mbox-formatted mails in Python. When I get the line that contains the sender, it looks like From: Mister X <misterx@domain>. I am able to retrieve the mail address with, for example, re.findall('<[a-zA-Z0-9\.]+@[a-zA-Z0-9\.]+>', str). I think that should be fine, since email addresses, as far as I know, cannot contain any other characters. What I do not understand is why the expression <*@*>, which I expected to match any characters in the email address, does not work at all. In fact, re.findall('<*@*>', 'From: Mister X <misterx@domain>') returns ['>'].
0
python,regex,email
2014-03-20T10:37:00.000
0
22,530,287
Here is my answer. Why does the expression <*@*>, which you expected to match any characters in the email address, not work at all? Because you are using the re module, which evaluates your expression <*@*> as a regular expression. If you want the expression evaluated as a wildcard, use the fnmatch module. But fnmatch only offers functions that check whether a string matches or not, so you can't extract matches with it. Since you want to retrieve the mail address, you shouldn't use fnmatch; just use the re module to get the matches. I think you are simply confusing regular expressions with wildcards.
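The wildcard-vs-regex distinction can be sketched side by side; note that fnmatch only answers yes/no, while re can extract the matched text:

```python
# fnmatch treats '*' as a shell-style wildcard, but only answers yes/no;
# it cannot extract the matched text the way re.findall can.
import fnmatch
import re

line = 'From: Mister X <misterx@domain>'

# Wildcard semantics: '*' means "any run of characters".
print(fnmatch.fnmatch(line, '*<*@*>*'))   # True

# Regex semantics: '*' repeats the PRECEDING token, so '.' is needed first.
print(re.findall('<.*@.*>', line))        # ['<misterx@domain>']
```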
0
134
false
0
1
Matching address in mbox format in python
22,535,800
1
1
0
1
4
0
0.197375
0
Is it possible to exchange data between a PHP page and a Python application? How can I implement a PHP page that reacts to a Python application? EDIT: My application is divided into 2 parts: the web backend and a Python daemon. Via the web backend I upload MP3s to my server; these MP3s are processed by my Python daemon, which fetches metadata from MusicBrainz. Now I need to show the user the results of the Python fetch so they can choose the right metadata. Is this possible?
0
php,python
2014-03-20T18:56:00.000
0
22,542,566
Write a Python script that takes a path in sys.argv or the audio data via sys.stdin and writes metadata to sys.stdout. Call it from PHP using exec.
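A minimal sketch of such a helper script follows; the actual MusicBrainz lookup is stubbed out (the script name, the JSON output format, and the placeholder fields are all invented for illustration; only the argv-in/stdout-out plumbing is the point):

```python
# metadata_helper.py -- minimal sketch of a script PHP could call via exec().
# The real MusicBrainz lookup is stubbed out; this only demonstrates the
# plumbing: path in sys.argv, machine-readable JSON on stdout.
import json
import os
import sys


def fetch_metadata(path):
    # Placeholder: a real implementation would query MusicBrainz here.
    return {
        "file": os.path.basename(path),
        "size_bytes": os.path.getsize(path),
        "candidates": [],  # would hold the fetched metadata matches
    }


if __name__ == "__main__" and len(sys.argv) == 2:
    print(json.dumps(fetch_metadata(sys.argv[1])))
```

On the PHP side the call might look something like `exec("python metadata_helper.py " . escapeshellarg($path), $output)` (again, a sketch, not a tested PHP snippet), after which PHP parses the JSON from `$output`.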
0
419
false
1
1
Exchange data between Python and PHP
22,542,914
1
1
0
0
0
1
0
0
I'm building a little Python script that is supposed to update itself every time it starts. Currently I'm thinking about putting MD5 hashes on a "website" and having the script itself download the files into a temp folder. Then, if the MD5 hashes line up, the temp files are moved over the old ones. But now I'm wondering if git will just do something like this anyway. What if the internet connection breaks away, or the power goes down, while doing a git pull? Will I still have the "old" version, or some intermediate mess? Since my approach works with an atomic rename from the OS, I can at least be sure that every file is either old or new, but not messed up. Is that true for git as well?
0
python,git
2014-03-23T12:11:00.000
0
22,590,718
A pull is a complex command which will do a few different things depending on the configuration. It is not something you should use in a script, as it will try to merge (or rebase if so configured) which means that files with conflict markers may be left on the filesystem, which will make anything that tries to compile/interpret those files fail to do so. If you want to switch to a particular version of files, you should use something like checkout -f <remote>/<branch> after fetching from <remote>. Keep in mind that git cannot know what particular needs you have, so if you're writing a script, it should be able to perform some sanity checks (e.g. make sure there are no extra files lying around)
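Since the asker's updater is a Python script, the fetch-then-checkout approach could be driven from Python roughly as follows. This is a sketch, not a tested updater: the remote name `origin` and branch `master` are assumed defaults, and the `dry_run` flag exists only so the planned commands can be inspected without a real repository:

```python
# Sketch of the fetch-then-checkout update strategy from Python.
import subprocess


def plan_update(remote="origin", branch="master"):
    """Return the git commands for the fetch + forced-checkout update."""
    return [
        ["git", "fetch", remote],                         # download objects only
        ["git", "checkout", "-f", f"{remote}/{branch}"],  # overwrite working tree
    ]


def run_update(repo_dir, remote="origin", branch="master", dry_run=False):
    cmds = plan_update(remote, branch)
    if dry_run:
        return cmds
    for cmd in cmds:
        subprocess.run(cmd, cwd=repo_dir, check=True)
    return cmds


print(run_update("/tmp/example-repo", dry_run=True))
```

The point of the split is exactly the answer's: `fetch` only downloads, so an interrupted fetch leaves the working tree untouched, and `checkout -f` then switches files without attempting a merge.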
0
41
false
0
1
Does git always produce stable results when doing a pull?
22,590,792
1
1
0
0
1
0
0
0
I have an account on a computing cluster that uses Scientific Linux. Of course I only have user access. I'm working with Python and I need to run Python scripts, so I need to import some Python modules. Since I don't have root access, I installed a local Python copy in my $HOME with all the required modules. When I run the scripts on my account (hosting node), they run correctly. But in order to submit jobs to the computing queues (to process on much faster machines), I need to submit a bash script that has a line that executes the scripts. The computing cluster uses Sun Grid Engine. However, when I submit the bash script, I get an error that the modules I installed can't be found! So my understanding of the problem is that the modules are somehow not available on the machine that executes the script. My question is: is it possible to include all the modules in the script somehow? EDIT: I just created a bash script that runs which python and I noticed that the output was NOT my Python copy. But when I run which python on my SSH account, I get my Python copy correctly.
0
python,linux,bash
2014-03-25T15:53:00.000
1
22,639,768
The submitted script is most likely using the system Python installation and not your own. Try submitting a shell script with only one command, which python, to confirm. The fix is to prepend the path to your Python interpreter to your system path. On my machine, the right Python is installed at /Users/mbatchkarov/anaconda/bin/python. I added export PATH="/Users/mbatchkarov/anaconda/bin:$PATH" to ~/.bash_profile EDIT Add the same line to ~/.bashrc.
0
109
false
0
1
Loading python modules through a computing cluster
22,640,013
1
3
0
1
5
0
0.066568
0
I have an account on a computing cluster that uses Scientific Linux. Of course I only have user access. I'm working with Python and I need to run Python scripts, so I need to import some Python modules. Since I don't have root access, I installed a local Python copy in my $HOME with all the required modules. When I run the scripts on my account (hosting node), they run correctly. But in order to submit jobs to the computing queues (to process on much faster machines), I need to submit a bash script that has a line that executes the scripts. The computing cluster uses Sun Grid Engine. However, when I submit the bash script, I get an error that the modules I installed can't be found! I can't figure out what is wrong; I hope you can help.
0
linux,python,distributed-computing
2014-03-25T16:01:00.000
1
22,652,367
You could simply call your python program from the bash script with something like: PYTHONPATH=$HOME/lib/python /path/to/my/python my_python_script I don't know how SunGrid works, but if it uses a different user than yours, you'll need global read access to your $HOME. Or at least to the python libraries.
0
2,305
false
0
1
Loading python modules through a computing cluster
22,656,636
1
3
0
3
15
1
0.197375
0
Why does Python not have access modifiers like C# and Java (public, private, etc.)? What are the alternative ways of achieving encapsulation and information hiding in Python?
0
python,access-modifiers
2014-03-26T09:07:00.000
0
22,656,024
What difference do access modifiers in C# and Java actually make? If I have the source code, I can simply change a member's access from private to public if I want to reach it. It is only in a compiled library that access modifiers can't be changed, and there they arguably provide useful functionality by restricting the API. However, Python code can't be distributed as a compiled artifact in the same way, so sharing libraries means sharing the source code. Thus, until someone creates such a Python compiler, access modifiers would not really achieve anything.
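As for the alternatives the question asks about: Python relies on naming conventions and name mangling rather than enforced access. A small illustrative sketch (the `Account` class and its members are invented for the example):

```python
# Python's alternative to access modifiers: naming conventions and
# name mangling. Neither actually prevents access, which is the point
# made above about source availability.
class Account:
    def __init__(self, balance):
        self._balance = balance      # single underscore: "internal, please"
        self.__audit_log = []        # double underscore: name-mangled

    def deposit(self, amount):
        self._balance += amount
        self.__audit_log.append(amount)


acct = Account(100)
acct.deposit(50)

print(acct._balance)                 # 150: convention only, still reachable
# print(acct.__audit_log)            # would raise AttributeError
print(acct._Account__audit_log)      # [50]: mangling renames, it does not hide
```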
0
7,428
false
0
1
Why does Python not have access modifiers? And what are the alternatives in Python?
56,082,181
1
1
0
0
0
1
0
0
OK, for my work we handle a lot of calculation documents. All of these have cover sheets and rev logs that must be generated. We can easily create an Excel file that has most of the information needed to fill in the forms, but automating the actual process of filling in these pre-made Word forms has proven tricky. I used macros with some success, but if something about a specific form differed too greatly the whole thing would mess up, and I still had to open each Word file individually and then run the macro. Some of this process isn't going to be automatable, as it requires pulling information from PDFs that isn't always in a standard format, but any fast automation would be better than none. I have a good bit of C++ experience (by a good bit I just mean several courses on data structures etc., nothing too high level). I have also used Python some and stumbled my way through Visual Basic a tad. Any idea how to go about automating the generation of sometimes 100+ of these forms?
0
python,c++,excel,automation,ms-word
2014-03-27T13:47:00.000
0
22,689,597
MS Word has extensive programming capabilities built in ("Visual Basic for Applications", VBA). These exact same capabilities are available to applications you write in any language, including C++, that can access Word via COM. Depending on your needs, it may be possible to fill in an entire Word document from one click by running such a program.
0
602
false
0
1
What would be the best way to automate filling in a pre-made form in Word?
22,691,619
1
1
0
0
1
0
0
1
We have a problem with sending about 1000 (or even more) 2 MB chunks over the network in the most efficient way. We want to avoid raw sockets (if it's not possible we will use them). So far we've tested: (1) RabbitMQ client -> server: about 39 s/GB on localhost (very slow); (2) requests client -> Flask server: still about 40 s/GB on localhost; (3) Flask on Tornado, creating threads for each IO write operation: still 40 s/GB to an SSD flash drive; (4) raw Tornado: still 40 s/GB. We are running out of ideas. The best solution for us would be something lightweight, maybe HTTP.
0
python,upload,flask,rabbitmq,tornado
2014-03-27T19:21:00.000
0
22,697,210
If all the files are available at the start, I would zip them into a single file first. It is not about the compression but about the number of files. There are certain IO operations (open/close and network start/end) that would otherwise happen once for each of the 1000 files, and which you can easily avoid this way. Compression may help too. As for sockets vs. HTTP: it won't matter much once you have a single file (or, technically, a stream).
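The bundling step can be sketched with the stdlib `zipfile` module. A sketch with small stand-in files rather than real 2 MB chunks; `ZIP_STORED` is chosen deliberately, since the win described above is fewer round trips, not compression:

```python
# Sketch: bundle many small files into one archive before sending,
# so per-file open/close and transfer-start costs are paid once.
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    # Stand-ins for the real 2 MB chunks.
    paths = []
    for i in range(5):
        p = os.path.join(tmp, f"chunk_{i}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(1024))
        paths.append(p)

    archive = os.path.join(tmp, "chunks.zip")
    # ZIP_STORED skips compression: already-dense data would not shrink
    # much, and the goal here is fewer IO operations, not smaller bytes.
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_STORED) as zf:
        for p in paths:
            zf.write(p, arcname=os.path.basename(p))

    with zipfile.ZipFile(archive) as zf:
        names = zf.namelist()
    print(names)   # ['chunk_0.bin', ..., 'chunk_4.bin']
```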
0
965
false
0
1
What's the fastest way to send 1000 2 MB files using Python?
43,458,666
1
1
0
1
0
1
0.197375
0
I'm using Python to create a simple program to trick my brother. The idea is to read any key he presses and output another one. For example, I press the letter 's' and it outputs 'o'. I do have the character converter working; however, I now need to catch the pressed key and instantly print the new key to the screen. How can I achieve this? Thank you very much for your time.
0
python,module,keyboard,operating-system,key
2014-03-28T13:45:00.000
0
22,714,606
Python probably isn't the best language for this. In fact I'm pretty sure it's not possible under most circumstances. You'd need the script running all the time, I assume, which is a problem in and of itself. But a further problem is that AFAIK python can't arbitrarily modify keyboard input across the whole computer. So you'll probably need something that can work on a lower level, such as C or C++.
0
313
false
0
1
Python: get pressed keyboard keys and return
22,715,757
1
1
0
1
0
0
1.2
0
I want to be able to see the output text of the "deploy" script that OpenShift calls automatically. Suppose I have the following line in that deploy script: python "$OPENSHIFT_REPO_DIR"wsgi/openshift/manage.py syncdb --noinput. Then I want to see the output text of that call... I already looked at openshifthome/python/logs; there are access* and error* logs, but not the output that I want. I want this because sometimes, after I push to the master git branch, it fails, and I want to know where it fails... Thanks
0
python,django,logging,openshift
2014-03-28T23:40:00.000
1
22,725,031
The deploy output is logged to stdout if I recall correctly. If you git push from the command line you should see where things are failing.
0
127
true
0
1
Where can i find the deploy log in python app deployed in Openshift
22,736,609
1
1
0
0
1
0
0
0
--First month in programming; be gentle with me-- I'm looking to build a short application in Python to run on an RPi; the idea is to ping our company-owned servers individually and eventually have them show up as LED status lights. For now, though, I would like it to broadcast a desktop notification to specific Macs on the same network. I have 0 experience with Python or programming in general. Where should I start?
0
python,raspberry-pi
2014-03-31T10:05:00.000
0
22,758,813
For learning Python there are many very good web resources easily found on the net, so I won't list them here. For your nice little project on the Raspberry Pi you should get familiar with the Python module called RPi.GPIO. With it you can easily turn your LEDs on and off depending on the ping response of your company servers.
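The server-status half of the project can be sketched portably; the LED half only runs on a Pi, so the RPi.GPIO calls are left as comments. The host name and pin below are placeholders, and the TCP-connect check is just one rough way to decide "up":

```python
# Sketch of the status-checking half; the GPIO half is Pi-only, so it is
# shown as comments. Host names and pin numbers are placeholders.
import socket


def host_is_up(host, port=22, timeout=2.0):
    """Rough reachability check: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# On the Pi, the loop would look something like:
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(led_pin, GPIO.OUT)
#   GPIO.output(led_pin, host_is_up("server1.example.com"))
```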
0
90
false
0
1
Python, Rpi server status notifier
22,759,029
1
1
0
4
2
1
1.2
0
Say I have a script script.py located in a specific folder in my system. This folder is not available on PATH. Assuming that I will always run script.py using python script.py, is there any way to run my script from anywhere on the system without having to modify PATH? I thought modifying PYTHONPATH would do it, but it doesn't. PYTHONPATH seems to only affect the module search path, and not the script search path. Is my understanding correct?
0
python,shell
2014-03-31T21:29:00.000
1
22,772,554
Yes: add its directory to your PYTHONPATH, as you are doing, but you cannot invoke it with python foo.py; instead, use python -m foo.
0
47
true
0
1
Python-specific PATH environment variable?
22,772,608
1
1
0
0
0
0
1.2
1
I'm creating a unittest- and Selenium-based test suite for a web application. It is reachable by several hostnames, e.g. implying different languages; but of course I want to be able to test e.g. my development instances as well without changing the code (and without fiddling with the hosts file which doesn't work for me anymore, because of network security considerations, I suppose). Thus, I'd like to be able to specify the hostname by commandline arguments. The test runner does argument parsing itself, e.g. for chosing the tests to execute. What is the recommended method to handle this situation?
0
selenium,python-unittest
2014-04-02T08:30:00.000
0
22,805,650
The solution I finally came up with is: have a module for the tests that holds the global data, including the hostname, and provides my TestCase class (I added an assertLoadsOk method to simply check the HTTP status code). This module does the command-line processing as well: it checks for its own options and removes them from the argument vector (sys.argv). When it finds an "unknown" option, it stops processing options and leaves the rest to the test runner. The command-line processing happens on import, before my TestCase class is initialized. It works well for me.
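The argv-splitting step described above can be sketched like this. The `--hostname` option name and the default value are invented for illustration; the key behaviour is stopping at the first unknown option and handing the remainder to unittest:

```python
# Sketch of the pattern above: pull out our own options first, then let
# unittest's runner parse whatever is left. Option names are invented.
import sys

HOSTNAME = "www.example.com"   # default; overridden by --hostname


def extract_own_args(argv):
    """Remove '--hostname NAME' options; return (hostname, remaining argv)."""
    hostname = HOSTNAME
    rest = [argv[0]] if argv else []
    i = 1
    while i < len(argv):
        if argv[i] == "--hostname" and i + 1 < len(argv):
            hostname = argv[i + 1]
            i += 2
        else:
            # Unknown option: leave it (and everything after) to unittest.
            rest.extend(argv[i:])
            break
    return hostname, rest


hostname, remaining = extract_own_args(
    ["prog", "--hostname", "dev.local", "-v", "MyTest"]
)
print(hostname, remaining)   # dev.local ['prog', '-v', 'MyTest']

# At import time, the real module would then do:
#   HOSTNAME, sys.argv = extract_own_args(sys.argv)
#   ... and later: unittest.main()
```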
0
94
true
0
1
How to pass an argument (e.g. the hostname) to the testrunner
22,939,972
1
1
0
1
0
0
1.2
0
I am developing an application in which I have to establish an Ethernet connection between a Raspberry Pi and a Windows PC. On my PC I want to develop a Python program (GUI) that can not only import files from the Raspberry Pi, but also read and modify those files. I don't want to use any existing software or program. So what is the best solution: sockets, or SSH? Or is there another choice?
0
python,sockets,ssh,raspberry-pi,ethernet
2014-04-02T09:12:00.000
0
22,806,639
Samba, FTP/SFTP, or also (if doable on Windows) SSHFS. If you want your own implementation, then you could for example use a REST API (web app) running on the Pi that allows file operations in some folders (create, modify, delete, get, list...). You could also think about using Git and pulling/pushing between the two machines :)
0
412
true
0
1
How to access the Raspberry Pi's files using a Python GUI running on Windows?
22,872,068
1
6
0
9
2
1
1
0
First of all, I did not study math in English, so I may use the wrong words in my text. Float numbers can be finite (42.36) or infinite (42.363636...). In C/C++, numbers are stored in base 2, while our minds operate on floats in base 10. The problem is that a lot of float numbers that are finite in base 10 have no exact finite representation in base 2, and vice versa. This doesn't mean anything most of the time; the last digit of a double may be off by 1 bit, which is not a problem. A problem arises when we compute two floats that are actually integers. 99.0/3.0 in C++ can result in 33.0 as well as 32.9999...99, and if you then convert it to an integer, you are in for a surprise. I always add a special value (2x the smallest value for the given type and architecture) before rounding up in C for this reason. Should I do it in Python or not? I have run some tests in Python and float division always seems to give the expected result, but a few tests are not enough, because the problem is architecture-dependent. Does somebody know for sure whether it is taken care of, and on what level: in the float type itself, or only in the rounding and truncating functions? P.S. If somebody can clarify the same thing for Haskell, which I am only starting with, that would be great. UPDATE: Folks pointed to an official document stating that there is uncertainty in floating-point arithmetic. The remaining question is: do math functions like ceil take care of it, or should I do it on my own? This must be pointed out to beginner users every time we speak of these functions, because otherwise they will all stumble on this problem.
0
python,c++,haskell,floating-point
2014-04-02T12:11:00.000
0
22,811,050
The format C and C++ use for representing float and double is standardized (IEEE 754), and the problems you describe are inherent in that representation. Since CPython is implemented in C, its floating-point types are prone to the same rounding behaviour. Haskell's Float and Double are a somewhat higher-level abstraction, but since most (all?) modern CPUs use IEEE 754 for floating-point calculations, you will most probably see that kind of rounding error there as well. In other words: only languages/libraries which choose not to base their floating-point types on the underlying architecture might be able to circumvent the IEEE 754 rounding issues to a certain degree, but since the underlying hardware does not support other representations directly, there has to be a performance penalty. Therefore, most languages stick to the standard, not least because its limitations are well known.
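Python makes the base-2 artifacts easy to see; note in particular that the asker's 99.0/3.0 example is actually exact, because 99, 3 and 33 are all exactly representable as doubles and IEEE 754 division is correctly rounded:

```python
# The classic base-2 artifacts, visible from Python (whose float wraps a
# C double, i.e. IEEE 754 binary64).
print(0.1 + 0.2 == 0.3)       # False: none of these are exact in base 2
print(0.1 + 0.2)              # 0.30000000000000004

# Values exactly representable in base 2 divide exactly:
print(99.0 / 3.0 == 33.0)     # True: 99, 3 and 33 are all exact doubles

# Comparisons of inexact results should therefore use a tolerance:
import math
print(math.isclose(0.1 + 0.2, 0.3))   # True
```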
0
466
false
0
1
Do Python and Haskell have the float uncertainty issue of C/C++?
22,811,510
1
2
0
1
2
0
0.099668
0
I'm working on a project to control my PC with a remote and an infrared receiver on an Arduino. I need to simulate keyboard input with a process on Linux that will listen to the Arduino output and simulate the keystrokes. I can write it in Python or C++, but I think Python is easier. After much searching, I found many results for... Windows u_u. Does anyone have a library for this? Thanks. EDIT: I found that /dev/input/event3 is my keyboard. I think writing to it should simulate the keyboard; I'm searching for how to do that.
0
python,c++,linux,input,keyboard
2014-04-02T12:41:00.000
1
22,811,844
The most generic solution is to use pseudo-terminals: you connect the slave side (ttypN) to the standard input and standard output of the program you want to monitor, and use the master side (ptypN) to read and write to it. Alternatively, you can create two pipes, which you connect to the standard in and standard out of the program to be monitored before doing the exec. This is much simpler, but the pipes look more like a file than a terminal to the program being monitored.
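The pseudo-terminal idea can be sketched with Python's stdlib pty module (POSIX only). Bytes written to the master side arrive as if typed on the keyboard of the terminal that the slave side represents:

```python
# Minimal sketch of the pseudo-terminal idea using the stdlib pty module
# (POSIX only). Text written to the master side shows up as "keyboard
# input" on the slave side, which is where the monitored program would
# have its stdin attached.
import os
import pty

master_fd, slave_fd = pty.openpty()

# Pretend to type on the terminal:
os.write(master_fd, b"hi\n")

# A program reading its stdin from slave_fd would receive exactly that:
data = os.read(slave_fd, 3)
print(data)   # b'hi\n'

os.close(master_fd)
os.close(slave_fd)
```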
0
2,879
false
0
1
Simulate keyboard input on Linux
22,812,228