Available Count (int64, 1 to 31) | AnswerCount (int64, 1 to 35) | GUI and Desktop Applications (int64, 0 to 1) | Users Score (int64, -17 to 588) | Q_Score (int64, 0 to 6.79k) | Python Basics and Environment (int64, 0 to 1) | Score (float64, -1 to 1.2) | Networking and APIs (int64, 0 to 1) | Question (string, length 15 to 7.24k) | Database and SQL (int64, 0 to 1) | Tags (string, length 6 to 76) | CreationDate (string, length 23) | System Administration and DevOps (int64, 0 to 1) | Q_Id (int64, 469 to 38.2M) | Answer (string, length 15 to 7k) | Data Science and Machine Learning (int64, 0 to 1) | ViewCount (int64, 13 to 1.88M) | is_accepted (bool, 2 classes) | Web Development (int64, 0 to 1) | Other (int64, 1 to 1) | Title (string, length 15 to 142) | A_Id (int64, 518 to 72.2M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am starting a Python script once my system (a Debian-based Raspberry Pi) boots by adding the statement sudo python /path/code.py in crontab -e. On boot-up it does start. But I would like to know how I can stop it from running, using the command line, once it has started. | 0 | python,raspberry-pi,raspbian | 2014-10-31T04:53:00.000 | 1 | 26,667,750 | You must not add it to the crontab, which will start it on a time-scheduled basis. Instead, write an init script or (more simply) add it to /etc/rc.local! | 0 | 525 | false | 0 | 1 | Stopping an autostarted python program started using crontab | 26,667,793 |
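The rc.local answer covers autostart, but the asker also wants to stop the script from the command line; a common complementary pattern is to have the script record its own PID at startup. A minimal, hypothetical sketch (the PID-file location is an assumption, not from the original answer):

```python
import os
import tempfile

# Hypothetical PID-file location; a real deployment might use /var/run instead.
PID_FILE = os.path.join(tempfile.gettempdir(), "code.pid")

def write_pid_file(path=PID_FILE):
    """Record this process's PID so the script can later be stopped from a shell."""
    with open(path, "w") as f:
        f.write(str(os.getpid()))

def read_pid_file(path=PID_FILE):
    """Return the recorded PID as an integer."""
    with open(path) as f:
        return int(f.read().strip())

write_pid_file()
print(read_pid_file() == os.getpid())  # → True
```

With this in place, the running script can be stopped from the command line with kill followed by the PID read from that file.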
1 | 3 | 0 | 0 | 4 | 1 | 0 | 0 | How do I make IDLE use UTF-8 as the default encoding for my Python files ?
There is no "Encoding" option in IDLE settings. | 0 | python,unicode,python-idle | 2014-10-31T11:13:00.000 | 0 | 26,673,342 | 1) Navigate to the folder with your current version of python. Mine is:
/System/Library/Frameworks/Python.framework/Versions/2.7/Resources
2) View the Python application package, as this is IDLE.
3) You should see a file called Info.plist containing the line
<?xml version="1.0" encoding="UTF-8"?>
4) Here you can modify the encoding; keep in mind Python does not support all source encoding options. | 0 | 7,034 | false | 0 | 1 | Configure IDLE to use Unicode | 26,674,682
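Separately from IDLE's Info.plist, a Python source file can declare its own encoding on its first or second line, which is often the more portable fix; a small sketch (in Python 3 the declaration is optional, since UTF-8 is already the default source encoding):

```python
# -*- coding: utf-8 -*-
# In Python 2 this comment tells the parser the source encoding;
# Python 3 assumes UTF-8 source files by default, so there it is optional.
greeting = "héllo wörld"
print(len(greeting))  # → 11
```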
3 | 5 | 0 | 0 | 3 | 0 | 0 | 0 | I'm trying to set up easy_install on my Mac.
But I'm getting the following error.
Installing Setuptools running install Checking .pth file support in
/Library/Python/2.7/site-packages/ error: can't create or remove files
in install directory
The following error occurred while trying to add or remove files in
the installation directory:
[Errno 13] Permission denied:
'/Library/Python/2.7/site-packages/test-easy-install-789.pth'
The installation directory you specified (via --install-dir, --prefix,
or the distutils default setting) was:
/Library/Python/2.7/site-packages/ | 0 | python,macos,setuptools,easy-install | 2014-11-03T10:27:00.000 | 1 | 26,712,229 | You can add "sudo" before "python setup.py ..." in the install.sh. | 0 | 5,730 | false | 0 | 1 | setuptools easy_install mac error | 40,178,814 |
3 | 5 | 0 | 8 | 3 | 0 | 1.2 | 0 | I'm trying to set up easy_install on my Mac.
But I'm getting the following error.
Installing Setuptools running install Checking .pth file support in
/Library/Python/2.7/site-packages/ error: can't create or remove files
in install directory
The following error occurred while trying to add or remove files in
the installation directory:
[Errno 13] Permission denied:
'/Library/Python/2.7/site-packages/test-easy-install-789.pth'
The installation directory you specified (via --install-dir, --prefix,
or the distutils default setting) was:
/Library/Python/2.7/site-packages/ | 0 | python,macos,setuptools,easy-install | 2014-11-03T10:27:00.000 | 1 | 26,712,229 | Try again using sudo python ... to be able to write to '/Library/Python/2.7/site-packages/ | 0 | 5,730 | true | 0 | 1 | setuptools easy_install mac error | 26,712,371 |
3 | 5 | 0 | 0 | 3 | 0 | 0 | 0 | I'm trying to set up easy_install on my Mac.
But I'm getting the following error.
Installing Setuptools running install Checking .pth file support in
/Library/Python/2.7/site-packages/ error: can't create or remove files
in install directory
The following error occurred while trying to add or remove files in
the installation directory:
[Errno 13] Permission denied:
'/Library/Python/2.7/site-packages/test-easy-install-789.pth'
The installation directory you specified (via --install-dir, --prefix,
or the distutils default setting) was:
/Library/Python/2.7/site-packages/ | 0 | python,macos,setuptools,easy-install | 2014-11-03T10:27:00.000 | 1 | 26,712,229 | Try curl bootstrap.pypa.io/ez_setup.py -o - | sudo python for access related issues. | 0 | 5,730 | false | 0 | 1 | setuptools easy_install mac error | 39,312,073 |
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | If I do Run Unittest .... test_foo in PyCharm it takes quite long to run the test, since all tests get collected first. PyCharm uses py.test -k to run the test.
Since we have more than 1000 tests, collecting them takes some time (about 1.2 seconds). Often the test itself needs less time to execute!
Since I use this very often, I want to speed this up.
Any idea how to get this done? | 0 | python,pycharm,pytest | 2014-11-04T13:00:00.000 | 0 | 26,735,790 | Answer to own question:
I installed pyCharm again (for other reasons) and now it uses utrunner.py.
It is much faster now if I run Run 'Unittest test_foo', since this does not collect all tests before running the test.
Problem solved. | 0 | 1,778 | true | 0 | 1 | py.test -k: collecting tests takes too much time | 27,827,622 |
3 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 0 | Is importing a specific function from a module a faster process than importing the whole module?
That is, does from module import x run faster than import module? | 0 | python | 2014-11-05T07:30:00.000 | 0 | 26,751,800 | No, it shouldn't be faster, and that shouldn't matter anyway: importing things is not usually considered a performance-critical operation, so you can expect it to be fairly slow compared to other things you can do in Python. If you require importing to be very fast, probably something is wrong with your design. | 0 | 34 | false | 0 | 1 | Debugging issues in importing modules in Python | 26,751,874
3 | 3 | 0 | 1 | 0 | 1 | 1.2 | 0 | Is importing a specific function from a module a faster process than importing the whole module?
That is, does from module import x run faster than import module? | 0 | python | 2014-11-05T07:30:00.000 | 0 | 26,751,800 | I would say there is little or no performance difference, as importing a module for the first time will execute the entire module: all classes, variables and functions are built, regardless of the actual symbol you need.
The second time you import the module in the same program it will be much quicker, as the module is not reloaded, and all existing definitions are used. | 0 | 34 | true | 0 | 1 | Debugging issues in importing modules in Python | 26,751,880
3 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 0 | Is importing a specific function from a module a faster process than importing the whole module?
That is, does from module import x run faster than import module? | 0 | python | 2014-11-05T07:30:00.000 | 0 | 26,751,800 | The whole module has to compile before you can import the specific function.
Instead, it's just a difference in namespace (i.e., you call module_x.function_y vs. just calling function_y). | 0 | 34 | false | 0 | 1 | Debugging issues in importing modules in Python | 26,751,885
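The namespace point made in these answers can be checked directly; a small sketch:

```python
import math                  # binds the module object to the name `math`
from math import sqrt        # binds only `sqrt` into the current namespace

# Both names refer to the very same function object; only the lookup path differs.
print(math.sqrt is sqrt)     # → True
print(sqrt(16.0))            # → 4.0

# Re-importing is cheap: after the first load the module is cached in sys.modules.
import sys
print('math' in sys.modules)  # → True
```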
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I am porting the GCS Python client library and running into some dependency problems.
Because I want to use GCS on a NAS without glibc, I get an error at this line:
from oauth2client.client import SignedJwtAssertionCredentials
The error indicates the cause is the lack of gcc.
Tracing the code, it seems to generate cryptography-related files (like _Cryptography_cffi_36a40ff0x2bad1bae.so) at run-time from crypto.verify.
Since my build machine does have gcc, is there any way to replace the cryptography library, or can I pre-compile and generate the files on the build machine?
Thanks! | 0 | google-cloud-storage,google-api-python-client | 2014-11-05T15:05:00.000 | 1 | 26,760,398 | Because the compile errors could not be solved easily, I finally used the older pyOpenSSL to solve this problem. | 0 | 35 | false | 0 | 1 | python cryptography run-time bindings files in GCS | 27,596,094
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I'm running Python3 on Android and I can import android, but I can't import any android submodules.
My goal is to have my scripts react to events, such as plugging/unplugging the headset, but I'm struggling to follow the examples found online.
One group seems to think that you should import jnius and use its autoclass() helper, while the other thinks you should directly import android.broadcast.
I'm struggling because python cannot find either jnius or android.broadcast installed, yet android.Android() works fine.
How do you properly import the android.broadcast.BroadcastListener object in python? | 0 | android,python,python-import | 2014-11-05T15:22:00.000 | 0 | 26,760,746 | You can't import android.broadcast because the module doesn't exits. I've unzipped /data/data/com.hipipal.qpy3/files/lib/python3.2/python32.zip and there is no trace of broadcasts. If you can find the module, you can put it under /data/data/com.hipipal.qpy3/files/lib/python3.2/site-packages. | 0 | 835 | true | 0 | 1 | How do you import android modules on android? | 26,783,905 |
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I'm having a problem with running an ipython notebook server. I've written a series of custom ROI (Region Of Interest) widgets for the notebook that allow a user to draw shapes like rectangles and ellipses on an image displayed in the notebook, then send information about the shapes back to python running on the server. All information is passed via widget traitlets; the shape info is in a handles traitlet of type object. When I run this locally on port 8888 (the default) and access it with firefox running on the same computer, everything works. (The system in this case is a Mac running OSX Yosemite).
Now I tried to access it remotely by making an ssh connection from another computer (ubuntu linux, in this case) and forwarding local port 8888 to 8888 on the host. This almost works: firefox running on the client is able to access the ipython notebook server, execute code in notebooks, etc. The ROI widgets also display and seem to work properly, except for one thing: no information about the shapes drawn makes it back to the server.
This is not just an issue of remote access (although that's the most important for my intended use). I have exactly the same problem if I run locally, but use a port other than 8888. For instance, if I set the port to 9999 in ipython_notebook_config.py, run the notebook server and access it with a local firefox, I get exactly the same problem. Similarly, if I run ipython notebook twice with all default settings, the second instance binds port 8889, because 8888 was bound by the first. When I access the server running at 8888 with a local firefox, everything works; when I access the simultaneously running server running at 8889, my widgets once more fail to send info back to the server. If I use --debug, I can see all the comm_msgs passed. The server running on 8888 receives messages that contain shape info, as expected. These messages simply don't show up in the log of the server running at 8889.
Any thoughts? | 0 | ipython,ipython-notebook | 2014-11-06T15:56:00.000 | 1 | 26,783,752 | I never did figure out the answer to my question -- why the port matters. However, I found that my ROI widgets had a rookie mistake on the JavaScript side (I'm fairly new to JS programming) that, when fixed, made all the problems go away. Ironically, the puzzle now is why it worked when I was using the default port! | 0 | 327 | true | 0 | 1 | port usage in the ipython notebook | 27,171,998 |
1 | 1 | 0 | 12 | 4 | 0 | 1 | 0 | I'm trying to use PRAW to organize all comments by users active in /r/nba based on their flair. Is there a good way to get a user's flair if I have the username?
My end goal is to perform some text analysis on user comments grouped by NBA fandom. thanks! | 0 | python,reddit,praw | 2014-11-10T17:32:00.000 | 0 | 26,849,501 | Found it - for anyone who had trouble with this, just use author_flair_text | 0 | 1,053 | false | 0 | 1 | PRAW: Get User's Flair | 26,850,897 |
1 | 2 | 0 | 0 | 1 | 0 | 0 | 0 | Is there a way that I can call CBMC from Python or is there any wrapper or API for it available?
My Problem is the following. I want to create a C function automatically in Python (this works quite well) and sent them to CBMC from Python for checking and get feedback if the function is OK or not. | 0 | python,cbmc | 2014-11-10T19:22:00.000 | 0 | 26,851,328 | CBMC can also produce JSON output using --json-ui since version 5.5, which is more compact than the XML output. Also note that you can suppress certain messages by adjusting the verbosity level using --verbosity <some number between 0 and 10>. | 0 | 169 | false | 0 | 1 | CBMC call from Python? | 53,566,093 |
I need to run my tests (written with Python and Behave) without using the console. I would prefer to create a simple Python script and use some unit test runner. I'm thinking about unittest, but pytest and nose solutions are also welcome :) I couldn't find any hint on behave's homepage. | 0 | python,unit-testing,testing,python-behave | 2014-11-11T08:30:00.000 | 0 | 26,860,604 | behave is the "test runner". Use the "-o " option to store your results somewhere for whatever formatter you want to use.
NOTE:
It is basically the same as py.test. | 0 | 882 | false | 0 | 1 | How to run Python & Behave tests with some unit test framework? | 28,639,457
2 | 2 | 0 | 5 | 4 | 1 | 0.462117 | 0 | I'm trying to determine if Python's mock.patch (unittest.mock.patch in Py3) context manager mutates global state, i.e., if it is thread-safe.
For instance: let's imagine one thread patches function bar within function foo with a context manager, and then inside the context manager the interpreter pauses that thread (because of the GIL etc.) and resumes another thread, which runs foo outside of said context manager. If patch is thread-safe I would expect that the global state of the functions foo and bar are unmodified, and so the second thread will get the normal behavior of foo. But if patch modifies global state, the second thread will get the modified behavior of foo even though it's not inside the context manager.
I referred to the source code but wasn't able to clearly tell just by looking at it. | 0 | python,multithreading,python-mock,global-state | 2014-11-11T23:18:00.000 | 0 | 26,876,438 | I went ahead and ran a crude experiment using multiprocessing.dummy.Pool on Python 3.4. The experiment mapped a function against range(100) input using the thread pool, and if the input of the function was exactly 10, it patched an inner function to call time.sleep(). If the patch was threadsafe, the results would all show up immediately except for the result for 10, which would show up late; if it was not threadsafe, a few results would show up immediately and many others would show up late.
The results demonstrated that unittest.mock.patch does mutate global state. Good to know! | 0 | 2,517 | false | 0 | 1 | Does python's `unittest.mock.patch` mutate global state? | 26,877,363 |
2 | 2 | 0 | 10 | 4 | 1 | 1.2 | 0 | I'm trying to determine if Python's mock.patch (unittest.mock.patch in Py3) context manager mutates global state, i.e., if it is thread-safe.
For instance: let's imagine one thread patches function bar within function foo with a context manager, and then inside the context manager the interpreter pauses that thread (because of the GIL etc.) and resumes another thread, which runs foo outside of said context manager. If patch is thread-safe I would expect that the global state of the functions foo and bar are unmodified, and so the second thread will get the normal behavior of foo. But if patch modifies global state, the second thread will get the modified behavior of foo even though it's not inside the context manager.
I referred to the source code but wasn't able to clearly tell just by looking at it. | 0 | python,multithreading,python-mock,global-state | 2014-11-11T23:18:00.000 | 0 | 26,876,438 | mock.patch isn't inherently thread-safe or not thread-safe. It modifies an object. It's really nothing more than an assignment statement at the beginning, and then an undo-ing assignment statement at the end.
If the object being patched is accessed by multiple threads, then all the threads will see the change. Typically, it's used to modify attributes of modules, which are global state. When used this way, it is not thread safe. | 0 | 2,517 | true | 0 | 1 | Does python's `unittest.mock.patch` mutate global state? | 26,877,522 |
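The accepted answer's claim, that patching is just an assignment on a shared module object visible to every thread, can be demonstrated in a few lines; a sketch using only the standard library:

```python
import time
from unittest.mock import patch

original_sleep = time.sleep

with patch('time.sleep') as fake_sleep:
    # Inside the context the module attribute itself has been replaced, so ANY
    # code (in any thread) that looks up time.sleep now sees the mock object.
    print(time.sleep is fake_sleep)       # → True
    print(time.sleep is original_sleep)   # → False

# On exiting the context, the original attribute is restored.
print(time.sleep is original_sleep)       # → True
```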
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 0 | I'm involved in a raspberry pi project and I use python language. I installed scipy, numpy, matplotlib and other libraries correctly. But when I type
from scipy.io import wavfile
it gives the error "ImportError: No module named scipy.io"
I tried to re-install them, but when I type the sudo command, it says the new version of scipy is already installed. I'm stuck at this point; please help me... Thank you | 0 | python,scipy | 2014-11-12T09:49:00.000 | 0 | 26,883,835 | I would take a guess and say your Python doesn't know where you installed scipy.io. Add the scipy path to PYTHONPATH. | 1 | 1,955 | false | 0 | 1 | Import error when using scipy.io module | 26,883,907
2 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | Some background:
I have an i2c device (MCP23017), which has 6 switches connected to its GPIO ports. The MCP23017 is connected to a Raspberry Pi via i2c.
I'm able to read the state of each of the switches as required.
My issue is in regards to interrupts. I'm using the WiringPi2 library for Python, which allows me to interface with the MCP23017 under Python. From the research I've done, the WiringPiISR library allows for i2c interrupt protocols to be run, although it only seems to work (properly) under C.
My question is this: is there a simple solution for implementing i2c interrupts under Python?
I'm considering dropping Python for C for this particular project, but the GUI interface has already been written (in Python), so I'd like to keep that as a last option.
Any guidance/input/comments would be greatly appreciated!
Thanks! | 0 | python,c,interrupt,i2c | 2014-11-13T00:16:00.000 | 0 | 26,899,161 | There are devices that have an interrupt line in addition to I2C lines. For example I have a rotary encoder I2C interface board from duppa.net which has 5 pins - SDA, SCL, +, - and interrupt.
The python library that comes with it configures the Raspberry Pi GPIO 4 line as an input edge triggered interrupt and sets up callbacks. On a change (the rotary encoder is rotated for instance) the interrupt line wiggles, the interrupt handler reads the status register over I2C and calls the appropriate callback.
So this is completely conventional interrupt usage in the context of I2C doing to usual task of freeing the master from having to poll the bus. | 0 | 2,183 | false | 0 | 1 | Interrupts using i2c and Python | 67,497,343 |
2 | 2 | 0 | 0 | 2 | 0 | 0 | 0 | Some background:
I have an i2c device (MCP23017), which has 6 switches connected to its GPIO ports. The MCP23017 is connected to a Raspberry Pi via i2c.
I'm able to read the state of each of the switches as required.
My issue is in regards to interrupts. I'm using the WiringPi2 library for Python, which allows me to interface with the MCP23017 under Python. From the research I've done, the WiringPiISR library allows for i2c interrupt protocols to be run, although it only seems to work (properly) under C.
My question is this: is there a simple solution for implementing i2c interrupts under Python?
I'm considering dropping Python for C for this particular project, but the GUI interface has already been written (in Python), so I'd like to keep that as a last option.
Any guidance/input/comments would be greatly appreciated!
Thanks! | 0 | python,c,interrupt,i2c | 2014-11-13T00:16:00.000 | 0 | 26,899,161 | As far as I understand, WiringPiISR library only allows you to configure a Pin to be an interrupt and define its type (ie, edge based or level based). Since you're talking about I2c interrupt, there is no way you can have I2C interrupt as in this case, Your Rpi works as a master device and other connected device(s) as slave(s). Since in I2C, communication is always initiated by the master, slaves can not interrupt you. (at least not via I2C channel)
Hope it helps. | 0 | 2,183 | false | 0 | 1 | Interrupts using i2c and Python | 26,908,484 |
2 | 5 | 0 | -1 | 23 | 0 | -0.039979 | 1 | I have a web crawling python script that takes hours to complete, and is infeasible to run in its entirety on my local machine. Is there a convenient way to deploy this to a simple web server? The script basically downloads webpages into text files. How would this be best accomplished?
Thanks! | 0 | python,cloud,web-crawler,virtual,server | 2014-11-13T05:23:00.000 | 0 | 26,901,882 | If you have a Google e-mail account you have access to Google Drive and its utilities. Choose Colaboratory (or find it in the "more..." options first). This "Colab" is essentially your Python notebook on Google Drive with full access to the files on your drive, and also with access to your GitHub. So, in addition to your local stuff, you can edit your GitHub scripts as well. | 0 | 26,999 | false | 0 | 1 | What is the easiest way to run python scripts in a cloud server? | 65,263,186
2 | 5 | 0 | 1 | 23 | 0 | 0.039979 | 1 | I have a web crawling python script that takes hours to complete, and is infeasible to run in its entirety on my local machine. Is there a convenient way to deploy this to a simple web server? The script basically downloads webpages into text files. How would this be best accomplished?
Thanks! | 0 | python,cloud,web-crawler,virtual,server | 2014-11-13T05:23:00.000 | 0 | 26,901,882 | In 2021, Replit.com makes it very easy to write and run Python in the cloud. | 0 | 26,999 | false | 0 | 1 | What is the easiest way to run python scripts in a cloud server? | 67,290,539 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | In a project me and students are doing, we want to gather temperature and air humidity information in nodes that sends the data to a raspberry pi. That raspberry pi will then send the data to a mysql database, running on windows platform.
Some background information about the project:
As a project, we are going to design a sellable system which gathers temperature and air humidity information and saves it on a server, which the owner could overwatch on a website/mobile application. He will receive an username and a password when purchasing the system to log in to the server. This means that, as a seller, we bound the station to an account, which is given to the customer.
1 station can add unlimited amount of nodes and will have a static ID, so the server knows which node sends the information. We have very limited knowledge about sending information, python in general and servers/databases.
My problem is: How could I send the data from the raspberry pi? How could I receive the data on the server?
The idea is that the raspberry pi should send data continuously and it's up to the server to accept or ignore data if they are correct or not. I want to send this information:
Station ID "In case the node does not exist in the database, it will then add it to the corresponding station"
Node ID "To know which node in the database to store the data at"
Date/Time "To know when the data was messured"
Air humidity
Temperature
I am not sure if I need to send account/password information, since it should not matter as long as the account "owns" the station.
I hope I provided enough information. | 0 | python,mysql,raspberry-pi,windows-server | 2014-11-13T11:13:00.000 | 1 | 26,907,588 | As long as you know the IP address of the windows machine, you can easily run a server on windows (Apache/MySQL -> PHP). You provide this IP address to the RaspberryPI and it can login and authenticate the same way as on any other server. Basically the WAMP stack will act as an abstraction layer for communication. | 0 | 2,420 | false | 0 | 1 | How to send data from raspberry pi to windows server? | 26,941,321 |
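One possible shape for the Pi-side sender is to package each measurement as JSON and POST it to the server; everything here (field names, endpoint path, host) is a made-up assumption for illustration, not something from the question:

```python
import json

def build_reading(station_id, node_id, timestamp, humidity, temperature):
    """Package one measurement as the JSON document the server would receive."""
    return json.dumps({
        "station_id": station_id,     # hypothetical field names
        "node_id": node_id,
        "measured_at": timestamp,
        "humidity": humidity,
        "temperature": temperature,
    })

payload = build_reading("ST-001", 3, "2014-11-13T11:13:00", 48.2, 21.5)
print(json.loads(payload)["node_id"])  # → 3

# Sending it (not executed here) could use only the standard library:
# import urllib.request
# req = urllib.request.Request("http://server.example/api/readings",
#                              data=payload.encode(),
#                              headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```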
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I'm developing a project in my university where I need to receive some data from a set of an Arduino, a sensor and a CuHead v1.0 Wifi-shield on the other side of the campus. I need to communicate thru sockets to transfer the data from sensor to the server but I need to ensure that no one will also send data thru this open socket. On the server side, I need to open the socket using Python 3 or 2.7. On the sensor side, the library of CuHead WiFi Shield lacks important functions to create secure connections. It is possible to ensure this level of security only on the server side? What suggestions do you have for me? | 0 | python,security,sockets,arduino | 2014-11-13T13:15:00.000 | 0 | 26,909,777 | use stunnel at both ends, so all traffic at both ends goes to localhost that stunnel encrypts and sends to the other end | 0 | 34 | false | 0 | 1 | How to ensure that no one will access my socket? | 26,910,064 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Why is Python able to store long integers of any length, whereas Java and C only provide 64 bits?
If there is a way in Java to do it, please show it. | 0 | java,python,long-integer,primitive | 2014-11-14T21:50:00.000 | 0 | 26,939,410 | Python stores long integers with unlimited precision, allowing you to store large numbers as long as you have available address space.
As for Java, use the BigInteger class | 0 | 1,680 | false | 1 | 1 | long-type in Python vs Java & C | 26,939,554
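A quick demonstration of Python's arbitrary-precision integers:

```python
# Arbitrary-precision integers are built into Python: no special type is needed.
big = 2 ** 200            # far beyond a 64-bit long's range
print(big.bit_length())   # → 201
print(big + 1 > big)      # → True: no overflow or wraparound

# Java's fixed 64-bit long tops out at 2**63 - 1; Python has no such ceiling.
java_long_max = 2 ** 63 - 1
print(big > java_long_max)  # → True
```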
1 | 1 | 0 | 1 | 1 | 0 | 0.197375 | 1 | I am running a python script on my raspberry pi, via ssh. I use the google oauth library to fetch events from google calendar, but I have problems with the authentication.
When I run the program on my main computer (which has a GUI and a web browser), it works as expected, but not on my Pi. I am running the program with the flag --noauth_local_webserver because there is no web browser on the Pi. Instead I get a link to click on, but when I do that, Google answers with the redirect_uri_mismatch error. I am running this locally at home, and it works on my main computer, so I cannot figure out what is wrong. Any suggestions? | 0 | python | 2014-11-15T23:52:00.000 | 0 | 26,952,168 | Ok, so I found the answer!
The problem is that if the registered application is set to web application in the Google developer console settings, this is the error message you will get. To solve this I just changed the type to desktop application instead. | 0 | 1,538 | false | 0 | 1 | Redirect error when using google oauth and the flag --noauth_local_webserver | 26,955,746
1 | 2 | 0 | 1 | 6 | 1 | 0.099668 | 0 | I could add #! /usr/bin/python at the beginning of a Python script and add this script to PATH in order to run it as a command.
But could anyone explain to me what '#' and '!' mean separately in Python, and what other languages have this mechanism?
Thanks | 0 | python | 2014-11-16T06:03:00.000 | 0 | 26,954,341 | # followed by anything is a comment; so far as Python itself is concerned, that's it. Unix, on the other hand, will parse out the /usr/bin/python so that it knows how to run your code. | 0 | 3,978 | false | 0 | 1 | the meaning of #! in python separately | 26,954,400 |
2 | 2 | 0 | 0 | 0 | 0 | 1.2 | 1 | Is is possible to actually fake your IP details via mechanize? But if it's not, for what is br.set_proxies() used? | 0 | python,mechanize-python | 2014-11-18T06:13:00.000 | 0 | 26,987,797 | You cant' fake you IP address because IP is third layer and HTTP works on 7th layer.
So, it's imposible to send ip with non-your ip.
(you can set up second interface and IP address using iproute2 and set routing throught that interface, but it's not python/mechanize level. it's system level) | 0 | 711 | true | 0 | 1 | Change IP with Python mechanize | 26,988,092 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | Is is possible to actually fake your IP details via mechanize? But if it's not, for what is br.set_proxies() used? | 0 | python,mechanize-python | 2014-11-18T06:13:00.000 | 0 | 26,987,797 | You don't fake your IP details, set_proxy is to configure a HTTP proxy. You still need legitimate access to the IP. | 0 | 711 | false | 0 | 1 | Change IP with Python mechanize | 26,987,818 |
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I have built an application in python that is hosted on heroku which basically uses a script written in Python to store some results into a database (it runs as a scheduled task on daily basis). I would have done this with ruby/rails to avoid this confusion, but the application partner did not support Ruby.
I would like to know if it will be possible to build the front-end with Ruby on Rails and use the same database.
My rails application will need to make use MVC and have its own tables on the database, but it will also use the database that python sends data to just to retrieve some data from there.
Can I create the Rails app and reference the details of the database that my python application uses?
How could I test this on my local machine?
What would be the best approach to this? | 0 | python,ruby-on-rails,ruby,ruby-on-rails-3,heroku | 2014-11-18T09:32:00.000 | 0 | 26,990,840 | I don't see any problem in doing this, as long as Rails manages the database structure and the Python script populates it with data.
My advice, just to make it simpler, is to define the database schema through migrations in your Rails app and build it as if the Python script didn't exist.
Once you have completed it, simply start the Python script so it can begin populating the database (it could be necessary to rename some tables in the Python script, but no more than that).
If you want to test on your local machine you can do one of these:
run the python script on your local machine
configure the database.yml in your Rails app to point to the remote DB (this can be difficult if you don't have administrative access to the host server, because of port forwarding, etc.)
The only thing you should keep in mind is concurrent access.
Because you have two applications that both read and write in your DB, it would be better if the Python script does its job in a single atomic transaction, to avoid your Rails app finding the DB in a half-updated state.
You can see the database like a shared box, it doesn't matter how many applications use it. | 0 | 58 | true | 1 | 1 | Rails app to work with a remote heroku database | 26,994,319 |
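The atomic-transaction advice can be sketched with sqlite3 from the standard library (the real project would use a MySQL adapter, but the transaction pattern is the same; the table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (node_id INTEGER, temperature REAL)")

rows = [(1, 21.5), (2, 19.8), (3, 22.1)]
try:
    with conn:  # one transaction: commits on success, rolls back on any exception
        conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)
except sqlite3.Error:
    pass  # nothing half-written: either all rows land or none do

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # → 3
```

The other application then only ever observes the database before or after the whole batch, never mid-update.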
1 | 2 | 1 | 1 | 0 | 1 | 1.2 | 0 | I have some code that is running in the python console reading out text lines.
Is it possible to feed the output of the python console to my Unity3D game script so that it can trigger some actions in my game?
in other words:
the python console running in the background outputs commands that need to be fed to my Unity game. | 0 | python,unity3d,unityscript | 2014-11-18T23:29:00.000 | 0 | 27,006,173 | An efficient solution would be to use TCP sockets in both the Python script and the Unity-side script. Since the Python side is serving the data, designate it as the server socket and make the socket on the Unity side the client. This solution will be much faster than writing to and reading from a shared file. | 0 | 1,040 | true | 0 | 1 | Feed Python output to Unity3D | 27,021,007
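The Python side of the suggested TCP bridge can be sketched with only the standard library (the one-line command protocol is an arbitrary assumption; in the real setup the client below would be the C# script inside Unity):

```python
import socket
import threading

def serve_one_line(server_sock):
    """Accept one client and send it a single command line."""
    client, _ = server_sock.accept()
    client.sendall(b"jump\n")
    client.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_one_line, args=(server,), daemon=True).start()

# Stand-in for the Unity-side client, just to exercise the server locally.
client = socket.create_connection(("127.0.0.1", port))
command = client.makefile().readline().strip()
client.close()
server.close()
print(command)  # → jump
```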
1 | 2 | 0 | 0 | 0 | 0 | 0 | 1 | I want to use javascript to retrieve a json object from a python script
Ive tried using various methods of ajax and post but cant get anything working.
For now I have tried to set it up like this
My Javascript portion:
I have tried
$.post('./cgi-bin/serverscript.py', { type: 'add'}, function(data) {
console.log('POSTed: ' + data);
});
and
$.ajax({
type:"post",
url:"./cgi-bin/serverscript.py",
data: {type: "add"},
success: function(o){ console.log(o); alert(o);}
});
My Python
import json
import cgi
import cgitb
cgitb.enable()
data = cgi.FieldStorage()
req = data.getfirst("type")
print "Content-type: application/json"
print
print (json.JSONEncoder().encode({"status":"ok"}))
I am getting a 500 (internal server error) | 0 | javascript,python,ajax,json,cgi | 2014-11-19T14:38:00.000 | 0 | 27,019,558 | Have you checked your host's server logs to see if it's giving you any output?
Before asking here, a good idea would be to ssh to your host, if you can, and run the program directly, which will most likely print the error in the terminal.
This is far too general at the moment; there are many reasons why a CGI request can fail (misconfigured environment, libraries not installed, permissions errors).
Go back and read your server's logs and see if that shines any more light on the issue. | 0 | 847 | false | 1 | 1 | Running CGI Python Javascript to retrieve JSON object | 29,558,961
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I'm trying to send emails using python smtp library but get the following an error message when trying to send to external email addresses (internal email works):
smtplib.SMTPRecipientsRefused: {'[email protected]': (550, ' Relaying denied')}
This is because we have rules setup on our exchange that prevent relaying from client machines.
What I don't understand is how come I can send emails over SMTP with an SSIS package without getting the relay error.
Is there a setting I need to enable in my python to bypass this or is SSIS sending the email to SQL Server to send on its behalf. | 0 | python,sql-server,ssis,smtp,exchange-server | 2014-11-19T19:42:00.000 | 0 | 27,025,569 | I believe you are getting this due to authentication. SSIS is probably passing your windows credentials through but when you are trying to send with python your credentials are being denied.
Not 100% sure that is your issue. But a thought. | 0 | 354 | false | 0 | 1 | How does (SSIS) Integrated Services send email? | 27,025,619 |
1 | 2 | 0 | 21 | 15 | 1 | 1.2 | 0 | This error appeared when trying to use the 'tmpdir' in a pytest test.
TypeError: object of type 'LocalPath' has no len() | 0 | python,pytest,os.path | 2014-11-20T07:25:00.000 | 0 | 27,034,001 | 'tmpdir' is of type <class 'py._path.local.LocalPath'>, just wrap 'tmpdir' in a string when passing to os.path.join
example:
os.path.join(str(tmpdir), 'my_test_file.txt') | 0 | 8,854 | true | 0 | 1 | os.path.join fails with "TypeError: object of type 'LocalPath' has no len()" | 27,034,002 |
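The same str() trick works for any path-like object; the FakeLocalPath class below is a hypothetical stand-in for py._path.local.LocalPath, just to illustrate the fix:

```python
import os

class FakeLocalPath:
    """Hypothetical stand-in for py._path.local.LocalPath: not a string, but str()-able."""
    def __init__(self, path):
        self._path = path
    def __str__(self):
        return self._path

tmpdir = FakeLocalPath("/tmp/pytest-0/test_demo0")
# Passing tmpdir directly can raise TypeError on interpreters where os.path.join
# expects real strings; wrapping it in str() always works:
full = os.path.join(str(tmpdir), "my_test_file.txt")
print(full)
```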
1 | 2 | 0 | 2 | 2 | 0 | 1.2 | 1 | I am quite new to curl development. I am working on CentOS and I want to install pycurl 7.19.5, but I am unable to as I need libcurl 7.21.2 or above. I tried installing an updated curl but it is still pointing to the old libcurl.
curl-config --version
libcurl 7.24.0
curl --version
curl 7.24.0 (x86_64-unknown-linux-gnu) libcurl/7.21.1 OpenSSL/1.0.1e zlib/1.2.3 c-ares/1.7.3 libidn/1.29 Protocols: dict file ftp ftps http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp Features: AsynchDNS Debug TrackMemory IDN IPv6 Largefile NTLM SSL libz.
Can anyone please help me how i can update the libcurl version in curl | 0 | python,linux,curl,libcurl,centos6 | 2014-11-20T14:21:00.000 | 0 | 27,041,862 | You seem to have two versions installed, curl-config --version shows the newer version (7.24.0) and curl (the tool) is the newer version but when it runs the run-time linker ld.so finds and uses the older version (7.21.1).
Check /etc/ld.so.conf for which dirs are checked in which order and see if you can remove one of the versions or change the order of the search. | 0 | 4,612 | true | 0 | 1 | libcurl mismatch in curl and curl-config | 27,042,314
1 | 1 | 0 | 2 | 3 | 0 | 1.2 | 0 | I use boost::python to integrate Python into a C++ program. Now I would like the Python program that is executed via boost::python::exec_file() to be able to obtain the command line arguments of my C++ program via sys.argv. Is this possible? | 0 | python,command-line-arguments,boost-python | 2014-11-20T19:05:00.000 | 1 | 27,047,518 | Prior to your call to exec_file() (but after Py_Initialize()), you should invoke PySys_SetArgv(argc, argv); giving it the int argc and const char *argv[] from your program's main(). | 0 | 795 | true | 0 | 1 | How to set sys.argv in a boost::python plugin | 27,047,900
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 1 | I am planning to make a web crawler which can crawl 200+ domains; which language would be suitable for it? I am quite familiar with PHP but an amateur at Python. | 0 | php,python,web-crawler | 2014-11-21T04:37:00.000 | 0 | 27,054,206 | I have built crawlers in both languages. While I personally find it easy to make a crawler in python because of the huge number of freely available libraries for html parsing, I would recommend that you go with the language you are most comfortable with. Build a well designed and efficient crawler in a language you know well and you will get even better at that language. There is no feature which can not be implemented in either of the two languages so just make a decision and start working.
Good luck. | 0 | 1,196 | true | 0 | 1 | PHP vs Python For Web Crawler | 27,055,726 |
2 | 2 | 0 | 1 | 0 | 0 | 0.099668 | 0 | I am planning to make a web crawler which can crawl 200+ domains; which language would be suitable for it? I am quite familiar with PHP but an amateur at Python. | 0 | php,python,web-crawler | 2014-11-21T04:37:00.000 | 0 | 27,054,206 | You could just try both. Make one in php and one in python. It'll help you learn the language even if you're experienced. Never say no to opportunities to practice. | 0 | 1,196 | false | 0 | 1 | PHP vs Python For Web Crawler | 27,055,630
1 | 3 | 0 | 1 | 7 | 0 | 0.066568 | 0 | I must install python-dev on my embedded linux machine, which runs python-2.7.2. The linux flavor is custom-built by TimeSys; uname -a gives:
Linux hotspot-smc 2.6.32-ts-armv7l-LRI-6.0.0 #1 Mon Jun 25 18:12:45 UTC 2012 armv7l GNU/Linux
The platform does not have package management such as 'yum' or 'apt-get', and for various reasons I prefer not to install one. It does have gcc.
Does python-dev source contain C/C++ code? Can I download python-dev source code as a .tar.gz file, for direct compilation on this machine? I have looked for the source but haven't been able to find it.
Thanks,
Tom | 0 | python-2.7 | 2014-11-22T00:40:00.000 | 1 | 27,072,734 | Does python-dev source contain C/C++ code?
Yes. It includes lots of header files and a static library for Python.
Can I download python-dev source code as a .tar.gz file, for direct compilation on this machine?
python-dev is a package. Depending on your operating system you can download a copy of the appropriate files by running, e.g.
sudo apt-get install python-dev or sudo yum install python-devel depending on your operating system. | 0 | 21,320 | false | 0 | 1 | python-dev installation without package management? | 36,137,101
1 | 2 | 1 | 1 | 1 | 0 | 0.099668 | 0 | I am trying to use the PyDEV console in eclipse for a demonstration of some Python code. For this demonstration I need to resize the default font size used in the PyDev console window.
Some googling led me to change the 'General/Appearance/Colors and Fonts/Debug/Console Font', but that didn't work. I tried changing all candidates I could identify in the Colors and Font settings, but none of them influences the font size in the PyDev console window.
Is there any way to achieve this?
This is in eclipse 4.3.2 (kepler) with Pydev 3.8 | 0 | python,eclipse,fonts | 2014-11-25T09:18:00.000 | 1 | 27,122,732 | Solution:
Help > Preferences > General > Appearance > Colors And Fonts > Structured Text Editors > Edit | 0 | 2,839 | false | 0 | 1 | How to change the console font size in Eclipse-PyDev | 55,985,898
1 | 2 | 0 | 1 | 1 | 0 | 1.2 | 0 | I am using python and Amazon EC2
I am trying to programmatically SSH into the instance created by the Elastic Beanstalk Worker. While using 'eb init' there is no option to specify the KeyPair to use for the instance and hence I am not able to SSH into it.
Reason for me to do this is that I want to check if the dependencies in requirements.txt are installed correctly in the instance. Is there any other way to check this other than SSHing into the instance and checking? | 0 | python,amazon-web-services,ssh,amazon-ec2,amazon-elastic-beanstalk | 2014-11-26T00:21:00.000 | 1 | 27,139,271 | Hi you have to declare the keypair to use on the web console.
Go to
elasticbeanstalk > your application > edit configuration > Instances > select keypair
Alternatively, this sounds like a hack but you can write a python script file that imports the modules that you installed and throws an error if a module is not found. The error is captured and you can view it in the web logs. | 0 | 723 | true | 0 | 1 | SSH into EC2 Instance created by EBS | 27,139,451
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I have very basic question here...
I have a situation where a friend of mine claims that he has parent-child domains created. As they are parent-child domains there is a two-way transitive trust created by default...
But when I try to run a Python application which internally calls win32security.LookupAccountName("", LocalSystem), it takes approx. 2 minutes to complete and fails with the exception "The trust relationship between the primary domain and the trusted domain failed" - Error Code -1788
Any idea what could have happened ? And how to resolve this ?
Is there any way to verify the trust relationship between these domains? | 0 | python,active-directory,windows-server-2008-r2,domainservices | 2014-11-26T06:19:00.000 | 0 | 27,142,550 | Whenever I have seen that error before it has normally just required the machine in question to be removed from the domain and joined back to it again. So if it is just one computer then I would suggest trying that first of all. | 0 | 902 | true | 0 | 1 | win32security.LookupAccountName fails with error The trust relationship between the primary domain and the trusted domain failed | 27,160,376
2 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | With QPython on Kindle fire .. I use QEdit to write & save a .py file .. say bob.py ..
But when I switch to Console, I can't IMPORT from bob ..
Can someone tell me how to do this?
John (new to QPython) | 0 | qpython | 2014-11-28T17:56:00.000 | 0 | 27,193,835 | The comment from Yulia V got me thinking, maybe I just needed to append the location of "Scripts" to the sys.path, and Yep, that worked fine!!
Thanks Yulia! | 0 | 1,136 | false | 0 | 1 | How to import from saved QPython file? | 27,197,253 |
2 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | With QPython on Kindle fire .. I use QEdit to write & save a .py file .. say bob.py ..
But when I switch to Console, I can't IMPORT from bob ..
Can someone tell me how to do this?
John (new to QPython) | 0 | qpython | 2014-11-28T17:56:00.000 | 0 | 27,193,835 | I think you can save the module file into the same directory where your script is located, or /sdcard/com.hipipal.qpyplus/lib/python2.7/site-packages/ | 0 | 1,136 | false | 0 | 1 | How to import from saved QPython file? | 27,209,867
1 | 2 | 0 | 2 | 2 | 1 | 0.197375 | 0 | I just started with learning Python 3.4.x.
I really want to keep learning and developing on all devices. That's why I'm using Codeanywhere.
But the problem is I don't know how to execute a .py file in Codeanywhere.
Is there a method to do it?
Thanks | 0 | python | 2014-11-28T19:32:00.000 | 0 | 27,194,932 | You run your .py files just like you would if you were running the python commandline in windows.
ex.
python myfile.py
Open an SSH terminal from your python devbox and type it in the cmd line and you're all set. | 0 | 9,516 | false | 0 | 1 | How do I run python in Codeanywhere? | 30,282,342
1 | 3 | 0 | 0 | 1 | 0 | 0 | 0 | I'm writing code where the pi gets serial input from a usb-serial board (from the sparkfun RFID starter kit); how can I make this work?
error
Traceback (most recent call last):
File "main", line 22, in
ser = s.Serial('ttyUSB0', 9600, timeout=10000)
File "/usr/lib/python2.7/dist-packages/serial/serialutil.py", line 260, in init
self.open()
File "/usr/lib/python2.7/dist-packages/serial/serialposix.py", line 276, in open
raise SerialException("could not open port %s: %s" % (self._port, msg))
serial.serialutil.SerialException: could not open port ttyUSB0: [Errno 2] No such file or directory: 'ttyUSB0'
The RFID port is the ttyUSB0 | 0 | python,serialization,raspberry-pi | 2014-11-29T15:58:00.000 | 0 | 27,204,134 | It's the cable. Check the USB cable. All of that yanking | 0 | 9,230 | false | 0 | 1 | ttyUSB0 not found on Raspberry Pi | 64,522,252 |
1 | 1 | 0 | 1 | 0 | 0 | 0.197375 | 0 | I am trying to automate a scenario in which, I have a terminal window open with multiple tabs open in it. I am able to migrate between the tabs, but my problem is how do i pass control to another terminal tab while i run my perl script in a different tab.
Example: I have a terminal open with Tab1,Tab2,Tab3,Tab4 open in the same terminal, i run the perl script in Tab3 and i would want to pass some commands onto Tab1. Could you please tell me how can i do this ??
I use a GUI tool, X11::GUITest, and keyboard shortcuts to switch between tabs; any alternative suggestion is welcome. My ultimate aim is to pass control on to a different tab. | 0 | python,perl,ubuntu,automation,perl-module | 2014-12-01T10:39:00.000 | 1 | 27,226,551 | The main thing to understand is that each tab has a different instance of terminal running, more importantly a different instance of shell (just thought I would mention as it didn't seem like you were clear about that from your choice of words). So "passing control" in such a scenario could most probably entail inter-process communication (IPC).
Now that opens up a range of possibilities. You could, for example, have a python/perl script running in the target shell (tab) to listen on a unix socket for commands in the form of text, which the script can then execute. In Python, you have modules subprocess (call, Popen) and os (exec*) for this. If you have to transfer control back to the calling process, then I would suggest using subprocess as you would be able to send back return codes too.
Switching between tabs is a different action and has no consequences on the calling/called processes. And you have already mentioned how you intend on doing that. | 0 | 134 | false | 0 | 1 | How do i pass on control on to different terminal tab using perl? | 27,242,918 |
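The subprocess-based hand-off described in the answer can be sketched like this; the commands run are placeholders, but they show how return codes travel back to the calling script:

```python
import subprocess
import sys

# subprocess.call runs a command and hands back its return code,
# so the calling script can tell whether the delegated work succeeded.
ok = subprocess.call([sys.executable, "-c", "print('doing the delegated work')"])
bad = subprocess.call([sys.executable, "-c", "import sys; sys.exit(3)"])
print(ok, bad)  # 0 3
```

For richer control (captured output, not just codes), subprocess.Popen with pipes would be the next step.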
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | So I'm writing an IRC bot in Python. Right now, the first thing I'm trying to get it to do is log all the important things in every channel it's on (as well as private messages to the bot itself).
So far I've gotten it to log the JOIN, PRIVMSG (including CTCP commands) and PART. However I'm having some trouble with the QUIT command. Now I know the QUIT command doesn't include the <channel> parameter because it doesn't need it. However my bot is connected to multiple channels and I need to be able to differentiate which channels the user was connected to when he/she issued the QUIT command to appropriate log it. Many users won't be connected to every channel the bot is.
What would be the ideal way to go about this? Thanks for your help. | 0 | python,python-3.x,bots,irc | 2014-12-03T19:21:00.000 | 0 | 27,279,922 | It sounds like you want to write the same QUIT log message to multiple per-channel log files, but only specific ones the bot is in?
To accomplish something similar, I ended up getting the list of names in a channel when the bot joins, then keeping track of every nick change, join, part, kick, and quit, and adjusting the bot's internal list. That way, on a quit, I could just check the internal list and see what channels they were on. | 0 | 73 | true | 0 | 1 | Logging where the QUIT command comes from in IRC | 27,316,992 |
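The per-channel bookkeeping described in that answer can be kept in a simple dict of sets; on QUIT you scan it for the nick. A minimal sketch with made-up channel and nick names:

```python
# channel -> set of nicks the bot believes are currently present
channels = {
    "#python": {"alice", "bob"},
    "#irc":    {"bob", "carol"},
    "#quiet":  {"carol"},
}

def on_join(channel, nick):
    channels.setdefault(channel, set()).add(nick)

def on_part(channel, nick):
    channels.get(channel, set()).discard(nick)

def on_quit(nick):
    """Return the channels to log the QUIT to, and drop the nick from each."""
    affected = [chan for chan, nicks in channels.items() if nick in nicks]
    for chan in affected:
        channels[chan].discard(nick)
    return affected

quit_channels = sorted(on_quit("bob"))
print(quit_channels)  # ['#irc', '#python']
```

The initial sets would come from the NAMES reply on join, then be maintained by JOIN/PART/KICK/NICK handlers as the answer describes.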
1 | 2 | 0 | 2 | 6 | 1 | 0.197375 | 0 | I am writing a program in Python for elliptic curve cryptography (for school and out of interest). I am currently working on the digital signature algorithm. I am currently looking for a good and secure hashing function which is either standard in Python or can easily be downloaded and imported. I thought about SHA256, since that's the only one I know which hasn't been broken yet (as far as I know). However, I have also read that SHA shouldn't be used for cryptography. Is SHA256 appropriate for a digital signature algorithm? Or should a different hashing function be used? If so, which one would be a good choice? | 0 | python,python-3.x,hash,cryptography,digital-signature | 2014-12-04T17:36:00.000 | 0 | 27,300,409 | The best standardized algorithm currently available is still SHA-2. SHA-2 now consists of 6 hash functions: SHA-256, SHA-384 and SHA-512 were first defined. SHA-224 was later added to allow for a smaller output size. After that the less well available SHA-512/224 and SHA-512/256 were introduced.
SHA-2 mainly consists of the 32-bit oriented SHA-256 variants - SHA-256 and SHA-224 - and the 64-bit SHA-512 variants - the others. The performance of the SHA-512 variants may actually be higher on 64 bit machines, hence the introduction of SHA-512/224 and SHA-512/256. Basically the variants of SHA-256 / SHA-512 only differ in the constants they use internally and the amount of bits used as output size. Some newer Intel and AMD processors have SHA extensions that only accelerate SHA-256, not SHA-512, possibly shifting the favor again towards SHA-256 with regard to speed.
During the SHA-3 competition it came to light that SHA-2 is still pretty strong, even if SHA-1 is under attack. I would suggest only to look at other hashes if SHA-2 is under attack or if better hash algorithms get standardized and used.
From Wikipedia:
In 2005, security flaws were identified in SHA-1, namely that a mathematical weakness might exist, indicating that a stronger hash function would be desirable.[6] Although SHA-2 bears some similarity to the SHA-1 algorithm, these attacks have not been successfully extended to SHA-2.
Note that SHA-2 uses a considerably more complex round function compared to SHA-1. So although it has a similar structure (both are so called Merkle-Damgard hashes) SHA-2 may be much more resistant than SHA-1 against attack none-the-less. | 0 | 5,345 | false | 0 | 1 | Cryptographic hash functions in Python | 27,312,944 |
1 | 2 | 0 | 4 | 0 | 0 | 1.2 | 0 | Our startup is currently using RabbitMQ (with Python/Django) for messaging queues, now we are planning to move to Amazon SQS for its high availability & their delayed queue feature.
But I am reading everywhere on the Internet that SQS is slow-performing & also very cost-effective, so is it a wise decision to move to Amazon SQS or should we stick to RabbitMQ?
And if it's good to stick with RabbitMQ, what's the alternative solution for "delayed queues"? | 0 | python,django,amazon-web-services,rabbitmq,amazon-sqs | 2014-12-05T12:32:00.000 | 0 | 27,315,968 | I haven't had any problems with slow performance on SQS, but then again it may be that, by the nature of my apps, they don't count on sub-millisecond response times for items in my queue. For me the work done on the items in the queue contributes more to the lag than the time it takes to use the queue.
For me the distributed, highly available and 'hands-off' nature of SQS fits the bill. Only you can decide what is more important: a few more milliseconds of performance in a non-redundant system that you need to support yourself, or the 'queue as a service' offerings of AWS. Not knowing your application, I can't say if the perceived extra performance is a necessary trade-off for you. | 0 | 3,023 | true | 0 | 1 | Moving from RabbitMQ to Amazon SQS | 27,317,451
2 | 2 | 0 | 0 | 0 | 1 | 1.2 | 0 | I've run into a strange problem.
I built VTK with python wrappings on CentOS 6.5.
On importing vtk it gives me a PyUnicodeUCS2_* error. I checked the python used for the build for its unicode setting with sys.maxunicode. It is UCS4. I searched for this error and found that it occurs when VTK is built using UCS2 python. But this is not the case here. What could be the reason for the error?
The python that I'm using is picked from some other machine. If I run maxunicode on the original machine it shows UCS2. The same python (I copied the whole folder python2.6) on the other machine where I'm building VTK shows maxunicode as UCS4. I think this has something to do with the problem.
Please help. | 0 | python,unicode,centos6,vtk,python-unicode | 2014-12-06T02:42:00.000 | 0 | 27,327,731 | I tried to compile VTK with my python build several times. Checked the various paths in CMAKE to avoid conflict with system python. Still couldn't get rid of the error. Finally, I built the python with --enable-unicode=ucs2. That solved the problem. Thanks for the help though. | 0 | 99 | true | 0 | 1 | PyUnicodeUCS2_* error while importing VTK | 27,519,430
2 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I've run into a strange problem.
I built VTK with python wrappings on CentOS 6.5.
On importing vtk it gives me a PyUnicodeUCS2_* error. I checked the python used for the build for its unicode setting with sys.maxunicode. It is UCS4. I searched for this error and found that it occurs when VTK is built using UCS2 python. But this is not the case here. What could be the reason for the error?
The python that I'm using is picked from some other machine. If I run maxunicode on the original machine it shows UCS2. The same python (I copied the whole folder python2.6) on the other machine where I'm building VTK shows maxunicode as UCS4. I think this has something to do with the problem.
Please help. | 0 | python,unicode,centos6,vtk,python-unicode | 2014-12-06T02:42:00.000 | 0 | 27,327,731 | This error is caused by using an extension built by a UCS2-based Python interpreter with a UCS4-based interpreter (or vice versa).
If you built it using the same Python interpreter then something is confusing in your build environment. | 0 | 99 | false | 0 | 1 | PyUnicodeUCS2_* error while importing VTK | 27,327,870 |
1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | Can anyone tell me how to start a python script on boot, and then also load the GUI? I am on a Debian-based Raspbian OS.
The reason I want to run the python script on boot is because I need to read keyboard input from an RFID reader. I am currently using raw_input() to read data from the RFID reader. The 11 character hex value is then compared against a set of values in a txt file. This raw_input() did not work for me when autostarting the python script using crontab and also with LXDE autostart.
So, I am thinking of running the python script at boot, so that it reads keyboard input. If there are any other ways of reading keyboard input using crontab autostart and LXDE autostart, please let me know. | 0 | python,linux,boot,raspbian,autostart | 2014-12-06T23:03:00.000 | 1 | 27,337,587 | Try using the bootup option in crontab:
@reboot python /path/to/pythonfile.py | 0 | 1,590 | false | 0 | 1 | Starting a python script at boot and loading GUI after that | 27,344,131 |
Why doesn't uWSGI listen on the IPv6 interface, even if the system is 100% IPv6 ready? As far as I could see there are neither parameters nor documentation covering this issue. | 0 | http,python-3.x,ipv6,uwsgi | 2014-12-07T11:41:00.000 | 1 | 27,342,256 | In your INI config file specify something like this:
[uwsgi]
socket = [::]:your_port_number
Or from the CL,
./uwsgi -s [::]:your_port_number
The server shall now listen on all interfaces (including IPv4, if the underlying OS supports dual stack TCP sockets) | 0 | 1,462 | true | 0 | 1 | uWSGI --http :80 doesn't listen IPv6 interface | 27,342,634
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 1 | I've coded a small raw packet syn port scanner to scan a list of ips and find out if they're online. (btw. for Debian in python2.7)
The basic intention was simply to check if some websites are reachable and speed up that process by preceding it with a raw syn request (port 80), but I stumbled upon something.
Just for fun I started trying to find out how fast I could get with this (fastest as far as I know) check technique, and it turns out that despite my only sending raw syn packets on one port and listening for responses on that same port (with tcpdump), the connection reliability drops considerably starting at about 1500-2000 packets/sec, and shortly thereafter almost all networking starts blocking on the box.
I thought about it, and if I compare this value with e.g. torrent seeding/leeching packets/sec, the scan speed is quite slow.
I have a few ideas why this happens but I'm not a professional and I have no clue how to check if I'm right with my assumptions.
Firstly it could be that the Linux networking stack has some fancy internal port forwarding stuff running to keep the sending port open (maybe some sort of feature of iptables?) because the script seems to be able to receive syn-ack even with a closed source port.
If so, is it possible to prevent or bypass that in some fashion?
Another guess is that the python library is simply too dumb to do real proper raw packet management, but that's unlikely because it's using internal Linux functions to do that as far as I know.
Does anyone have a clue why that network blocking is happening?
Where's the difference to torrent connections or anything else like that?
Do I have to send the packets in another way or anything? | 0 | python,linux,performance | 2014-12-08T04:18:00.000 | 1 | 27,351,360 | Months ago I found out that this problem is well known as the c10k problem.
It has to do, amongst other things, with how the kernel allocates and processes tcp connections internally.
The only efficient way to address the issue is to bypass the kernel tcp stack and implement various other low-level things on your own.
All good approaches I know of work with low-level async implementations.
There are some good ways to deal with the problem depending on the scale.
For further information I would recommend searching for the c10k problem. | 0 | 126 | true | 0 | 1 | speed limit of syn scanning ports of multiple targets? | 29,195,455
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | I need to send some data from a Python program to a C++ program. The current design is such that the C++ program executes the python program in a separate thread. I wish to pass some result from the python program back to the C++ program.
What I have found so far includes:
Sending over sockets
Sending via a pipe
Using a temporary file
Embedding the python interpreter in the C++ program
Using boost.python
My data (to be passed back to the C++ program) is essentially a python dictionary, and a few files. (I am sending email details, and the attachments). What strategy should I use?
Is there anything I can do to make my work easier? Or can I improve my program design?
EDIT: Added boost.python to the list of options found. | 0 | python,c++,c,sockets | 2014-12-09T16:37:00.000 | 0 | 27,384,091 | I ended up altering the modules such that the C++ part and the Python part can work more or less independently of each other. But if passing data was necessary, I guess I would have gone via the socket route. | 0 | 622 | false | 0 | 1 | Strategies to send data from a Python program to a C++ program | 27,401,759
1 | 1 | 0 | 3 | 0 | 0 | 1.2 | 0 | I've installed the opencv python module on my raspberry and everything was working fine. Today I've compiled a C++ version of OpenCV and now when I want to run my python script I get this error:
Traceback (most recent call last):
File "wiz.py", line 2, in
import cv2.cv as cv
ImportError: No module named cv | 0 | python,opencv,raspberry-pi | 2014-12-09T16:53:00.000 | 0 | 27,384,395 | Check the API docs for 3.0. Some python functions return more parameters or in a different order.
example: cv2.cv.CV_HAAR_SCALE_IMAGE was replaced with cv2.CASCADE_SCALE_IMAGE
or
(cnts, _) = cv2.findContours(...) now returning the modified image as well
(modImage, cnts, _) = cv2.findContours(...) | 1 | 1,523 | true | 0 | 1 | OpenCV python on raspberry | 27,387,097 |
1 | 2 | 0 | 1 | 1 | 0 | 0.099668 | 0 | The Question
Where can I access the documentation for legacy versions of the nose testing framework?
Why
I have to support some python code that must run against python 2.6 on a Centos 6 system. It is clear from experimentation that nosetests --failed does not work on this system. I'd like to know if I'm just missing a module or not. More generally, I need to know what capabilities of nose that I have grown used to I will have to do without, without having to check for them individually. | 0 | python,nose | 2014-12-09T23:07:00.000 | 1 | 27,390,553 | You should be able to upgrade nosetests via pip, while still staying with python 2.6. At least, nose 1.3.4 (latest as of this writing) installs cleanly inside the py2.6 virtualenv I just threw together. I don't have any py2.6-compatible code to hand to show that it's working correctly, though. | 0 | 41 | false | 0 | 1 | online documentation for old versions of nose | 27,390,659 |
1 | 2 | 0 | 4 | 6 | 0 | 0.379949 | 0 | I'm developing a module1 which has some test cases. I've another module2 which can run these test cases and generate the amount of coverage. Currently the .coverage folder is generated on the current working directory from where the module2 is being called. Is there a way to specify the folder path for coverage to dump this .coverage in the path specified? | 0 | python-2.7,coverage.py | 2014-12-10T10:53:00.000 | 0 | 27,399,181 | Seems like from command line there is no option to do this.
But using configuration files this can be done.
In the configuration file, keep the lines below:
[run]
data_file = < path where .coverage should be stored >
Then run:
coverage run < script > (if the configuration file is not specified, it looks for .coveragerc in the same folder from where coverage is being run. If this is also not available then defaults are used)
or
coverage run --rcfile < configuration file name > < script name > | 0 | 1,975 | false | 0 | 1 | Provide path to Coverage to dump .coverage | 38,825,213 |
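The [run] / data_file shape of that configuration file can be sanity-checked with configparser; the output path here is just an example:

```python
import configparser
import io

# Shape of the .coveragerc described in the answer; the path is illustrative.
coveragerc = """\
[run]
data_file = /tmp/coverage-output/.coverage
"""

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(coveragerc))
print(cfg["run"]["data_file"])  # /tmp/coverage-output/.coverage
```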
1 | 5 | 0 | 4 | 3 | 0 | 0.158649 | 0 | my ultimate goal is to allow my raspberry pi detect when my iphone or pebble watch is nearby. I am presently focusing on the pebble as I believe iphone randomizes the MAC address. I have the static MAC address of the pebble watch.
My question is how to detect the presence of the MAC address through bluetooth?
I have tried hcitool rssi [mac address] or l2ping [mac address], however both need a confirmation of connection on the watch before any response. I want it to be automatic...
I also tried hcitool scan, but it takes a while, presumably it is going through all possibilities. I simply want to search for a particular Mac Address.
EDIT: I just tried "hcitool name [Mac Address]" which returns the name of the device and if not there it returns a "null", so this is the idea... is there a python equivalent of this?
I am new to python, so hopefully someone can point to how I can simply ping the mac address and see how strong the RSSI value is? | 0 | python,bluetooth,pebble-watch | 2014-12-10T13:12:00.000 | 0 | 27,401,918 | Apple iDevices do use private resolvable addresses with Bluetooth Low Energy (BLE). They cycle to a different address every ~15 minutes. Only paired devices that have a so called Identity Resolving Key can "decipher" these seemingly random addresses and associate them back to the paired device.
So to do something like this with your iPhone, you need to pair it with your raspberry pi.
Then what you can do is make a simple iOS app that advertises some data (what does not matter because when the app is backgrounded, only iOS itself gets to put data into the advertising packet). On the raspberry pi you can then use hcitool lescan to scan for the BLE advertisements. If the address of the advertisement can be resolved using the IRK, you know with high certainty that it's the iPhone. I'm not sure if hcitool does any IRK math out of the box, but the resolving algorithm is well specified by the Bluetooth spec.
Pebble currently does indeed use a fixed address. However, it is only advertising when it is disconnected from the phone it is supposed to be connected to. So, for your use case, using its BLE advertisements is not very useful. Currently, there is no API in the Pebble SDK to allow an app on the Pebble to advertise data.
FWIW, the commands you mentioned are useful only for Bluetooth 2.1 ("Classic") and probably only useful if the other device is discoverable (basically never, unless it's in the Settings / Bluetooth menu). | 0 | 15,931 | false | 0 | 1 | Detecting presence of particular bluetooth device with MAC address | 27,412,126 |
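The "hcitool name [MAC]" trick from the question can be wrapped from Python with subprocess. A minimal sketch (it assumes BlueZ's hcitool is installed; the MAC shown in the comment is a placeholder, and the cmd parameter exists only so the helper can be exercised with a stand-in command):

```python
import subprocess

def device_name(mac, cmd=("hcitool", "name")):
    """Return the remote device name for a MAC address, or None if not seen."""
    out = subprocess.run(list(cmd) + [mac], capture_output=True, text=True)
    name = out.stdout.strip()
    return name or None  # hcitool prints nothing when the device is absent

# device_name("00:11:22:33:44:55")  # e.g. "Pebble 1234", or None when out of range
```

Polling this in a loop every few seconds gives a simple presence check without any pairing confirmation on the watch.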
2 | 3 | 0 | 0 | 11 | 1 | 0 | 0 | In PHP you use the === notation to test for TRUE or FALSE distinct from 1 or 0.
For example, FALSE == 0 returns TRUE, while FALSE === 0 returns FALSE. So when doing zero-based string searches, if the position of the substring in question is right at the beginning you get 0, which PHP can distinguish from FALSE.
Is there a means of doing this in Python? | 0 | python,int,boolean | 2014-12-11T19:52:00.000 | 0 | 27,431,249 | The strict equivalent of x === y in Python is type(x) is type(y) and x == y. You don't really want to do this as Python is duck typed. If an object has the appropriate method or attribute then you shouldn't be too worried about its actual type.
If you are checking for a specific unique object such as (True, False, None, or a class) then you should use is and is not. For example: x is True. | 0 | 10,474 | false | 0 | 1 | Python: False vs 0 | 27,431,567 |
2 | 3 | 0 | 27 | 11 | 1 | 1.2 | 0 | In PHP you use the === notation to test for TRUE or FALSE distinct from 1 or 0.
For example, FALSE == 0 returns TRUE, while FALSE === 0 returns FALSE. So when doing zero-based string searches, if the position of the substring in question is right at the beginning you get 0, which PHP can distinguish from FALSE.
Is there a means of doing this in Python? | 0 | python,int,boolean | 2014-12-11T19:52:00.000 | 0 | 27,431,249 | In Python,
The is operator tests for identity (False is False, 0 is not False).
The == operator which tests for logical equality (and thus 0 == False).
Technically neither of these is exactly equivalent to PHP's ===, which compares logical equality and type - in Python, that'd be a == b and type(a) is type(b).
Some other differences between is and ==:
Mutable type literals
{} == {}, but {} is not {} (and the same holds true for lists and other mutable types)
However, if a = {}, then a is a (because in this case it's a reference to the same instance)
Strings
"a"*255 is not "a"*255, but "a"*20 is "a"*20 in most implementations, due to how Python handles string interning. This behavior isn't guaranteed, though, and you probably shouldn't be using is in this case. "a"*255 == "a"*255, and == is almost always the right comparison to use.
Numbers
12345 is 12345 but 12345 is not 12345 + 1 - 1 in most implementations, similarly. You pretty much always want to use equality for these cases. | 0 | 10,474 | true | 0 | 1 | Python: False vs 0 | 27,431,348 |
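To make the comparison with PHP's === concrete, here is a small runnable sketch of the == / is distinction discussed in the answer above:

```python
# 0 vs False: loose equality succeeds, identity does not.
pos = "foobar".find("f")          # 0 -- the match is at the very start
assert pos == False               # loose equality, like PHP's ==
assert pos is not False           # identity: 0 is not the False object

# Closest Python spelling of PHP's ===: same type AND equal value.
strict_equal = type(pos) is type(False) and pos == False
assert strict_equal is False      # int vs bool -> types differ

print("0 == False:", pos == False, "| 0 is False:", pos is False)
```

So a zero position from str.find() can be told apart from False with `is`, or with the combined type-and-value check.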
1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | So I'm working on a text game in Python, and am working on this at both school and home.
I recently started using Visual Studio and love the program, however I found that it cannot open or save properly as a .py file, which I need to do to be able to work on the file at school.
I have installed Python Tools for Visual Studio and it works great, but I can only work with files in a .pyproj format.
Does anyone with Visual Studio experience know a way to save and open .py files in Visual Studio?
Many thanks | 0 | python,visual-studio-2013 | 2014-12-11T22:16:00.000 | 0 | 27,433,443 | As for opening files: right-click the .py file, choose "Open with", and then select Visual Studio.
I know it's not really a solution, but you may want to try PyCharm; I prefer it to IDLE, and it has several customisation options to make it look like Visual Studio.
Hope it helps. | 0 | 780 | false | 0 | 1 | Visual Studio cannot open or save Python .py files | 42,518,497
1 | 1 | 0 | 0 | 4 | 1 | 1.2 | 0 | I would like to connect to my raspberry pi using a remote interpreter. I've managed to do it just fine in windows 7 using Pycharm, but having recently upgrading to windows 8.1 it no longer works. I've tried to connect to the raspberry pi (where it worked in win 7) and another one with a fresh install of Raspbian (released 09-09-2014).
I also tried through Ubuntu, but to no avail. Has anyone out there managed to get this right in windows 8 or any linux flavour?
Should I try a key pair (OpenSSH or PuTTY)?
After adding the RSA key to the repository, the process that hangs is
'Getting remote interpreter version' ~ 'Connecting to 10.0.0.98' | 0 | python,raspberry-pi,pycharm,remote-debugging,interpreter | 2014-12-13T16:07:00.000 | 1 | 27,460,843 | It works in PyCharm if you deploy a remote SFTP server.
Tools > Deployment > Add > Enter name and SFTP >
Enter host, port, root path (I said "/" without quotes) username and password.
Then, when creating a new project, change your interpreter to 'Deployment Configuration', and select your SFTP server.
Press OK, then create.
You should be all set to go. | 0 | 2,553 | true | 0 | 1 | Configuring Remote Python Interpreter in Pycharm | 29,901,664 |
1 | 2 | 0 | 1 | 0 | 0 | 1.2 | 1 | I have a python script on my raspberry-pi continuously (every 5 seconds) running a loop to control the temperature of a pot with some electronics through GPIO.
I monitor the temperature on a web page by having the python script write the temperature to a text file, which I request from JavaScript over HTTP on the web page.
I would like to pass a parameter to the python script to make changes to the controlling, like change the target temperature.
What would be the better way to do this?
I'm working on a solution, where the python script is looking for parameters in a text file and then have a second python script write changes to this file. This second python script would be run by a http request from the web page.
Is this a way to go? Or am I missing a more direct way to do this.
This must have been done many times before and described on the web, but I find nothing. Maybe I don't have the right terms to describe the problem.
Any hints are appreciated.
Best regards Kresten | 0 | python,http,web,raspberry-pi | 2014-12-14T21:52:00.000 | 0 | 27,474,557 | You have to store the configuration for your looping script somewhere. A file or a database are both possible choices, but I would say a structured file (INI, YAML, …) is the way to go if you only have a small number of parameters. | 0 | 229 | true | 1 | 1 | Interact with python script running infinitive loop from web | 27,474,586
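A minimal sketch of the INI-file approach (the file name, section, and key below are made up for illustration): the control loop calls read_target_temp() every cycle, while the web-facing script calls write_target_temp() on request.

```python
import configparser

def read_target_temp(path="control.ini", default=60.0):
    """Return the target temperature, or a default if the file is absent."""
    cp = configparser.ConfigParser()
    if not cp.read(path):          # missing/unreadable file -> fall back
        return default
    return cp.getfloat("pot", "target_temp", fallback=default)

def write_target_temp(value, path="control.ini"):
    """Rewrite the config file; this is what the web-facing script would call."""
    cp = configparser.ConfigParser()
    cp["pot"] = {"target_temp": str(value)}
    with open(path, "w") as f:
        cp.write(f)
```

Because the loop only runs every 5 seconds, re-reading a tiny INI file each iteration costs essentially nothing.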
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | Recently I installed Mac OS Yosemite. Until then I had no problem in installing libraries and have already installed beautiful soup, pydelicious, etc. After the installation of Yosemite, I tried to install Mechanize and Requests Libraries in mac. There was no problem in their installation. I could import them and use them from the Terminal. However, Xcode 6.1 doesn't load them/seeing them. I consistently get the
ImportError: No module named mechanize
ImportError: No module named requests
error messages.
I have already tried changing the file permissions with full access to the user but to no avail.
I also checked PYTHONPATH and .profile files, so far no luck.
I wonder if anyone has encountered this problem, or if anyone knows of some fix for it? | 0 | python,xcode6,mechanize,python-requests,osx-yosemite | 2014-12-15T12:43:00.000 | 0 | 27,484,406 | Apparently Xcode was referring to my default Python.
After the comment from Droppy, I checked my python version by using
which python
I copy pasted the result in the Xcode Program scheme. Now it works...!
Thanks for all the help. | 0 | 481 | true | 0 | 1 | ImportError: No module named mechanize - XCode 6.1 is not seeing newly installed python libraries | 27,577,039 |
2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | I am running Python on a Raspberry Pi and everything works great.
I have a small script running at system startup which prints several warning messages (which I actually cannot read, since it is running in the background)...
My question is: Is there a way via SSH to "open" this running script instance and see what is going on or a log file is the only way to work with that?
Thanks! | 0 | python,ssh,raspberry-pi,raspbian | 2014-12-15T18:59:00.000 | 1 | 27,491,173 | You should modify your python script to write its output to a file instead of to the screen (which you can't see). I.e., I think that a log file is your best (possibly only) bet. You can write to a file in /tmp on the raspberry pi if you just want a temporary log file that you can check once and a while.
Also, as Tim said, you could try out the python logging library, but I think just writing to a file is quicker and easier, although you might run into some issues with permissions... | 0 | 82 | false | 0 | 1 | Python on system start up | 27,494,641 |
2 | 2 | 0 | 1 | 0 | 0 | 1.2 | 0 | I am running Python on a Raspberry Pi and everything works great.
I have a small script running at system startup which prints several warning messages (which I actually cannot read, since it is running in the background)...
My question is: Is there a way via SSH to "open" this running script instance and see what is going on or a log file is the only way to work with that?
Thanks! | 0 | python,ssh,raspberry-pi,raspbian | 2014-12-15T18:59:00.000 | 1 | 27,491,173 | Try using the Python logging library. You can configure it to save the output to a file and then you can use tail -f mylogfile.log to watch as content is put in.
EDIT:
An alternative is to use screen. It allows you to run a command in a virtual console, detach from that console, and then disconnect from the machine. You can then reconnect to the machine and re-attach to that console and see all the output the process made. I'm not sure about using it on a script that starts when the machine is turned on, though (I simply haven't tried it). | 0 | 82 | true | 0 | 1 | Python on system start up | 27,492,567 |
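A minimal logging setup along these lines (the log path is just an example) lets you watch the script over SSH with `tail -f`:

```python
import logging

logging.basicConfig(
    filename="/tmp/startup_script.log",   # example path; pick somewhere persistent
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
    force=True,                           # Python 3.8+: replace any earlier handlers
)
logging.warning("sensor not ready")       # lands in the file, not on the screen
```

Then, from another machine: `ssh pi@host tail -f /tmp/startup_script.log`.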
1 | 1 | 1 | 1 | 2 | 0 | 0.197375 | 0 | Protobuf with the pure-Python implementation performs about 3x slower on PyPy than on CPython.
So I am trying to use the C++ implementation for PyPy.
There are two errors (PyFloatObject undefined, and a const char* to char* conversion) when I compile the protobuf (2.6.1 release) C++ implementation for PyPy.
It compiles successfully after I modify python/google/protobuf/pyext/message.c, but then I get a 'Segmentation fault' error when I use protobuf with the C++ implementation on PyPy.
I don't know how to fix it, help me please! | 0 | python,c++,protocol-buffers | 2014-12-17T03:40:00.000 | 1 | 27,517,939 | So this is a happy non-answer using my experience. The pure-python bindings for google protobuf are a terrible port of C/C++ stuff. However, I had quite a bit of success wrapping C google protobuf generated bindings using cffi. Someone should go ahead and create a more generic binding, but that would just be a short consulting stint. | 0 | 567 | false | 0 | 1 | Is there any way to use Google Protobuf on pypy? | 27,687,619
2 | 3 | 0 | -2 | 6 | 0 | -0.132549 | 0 | I have a remote interpreter set up in PyCharm.
Every time I close and reopen PyCharm, the connection seems to be broken, and the process to "reopen" the connection doesn't feel efficient to me.
Before doing the following, it is not possible to run any script.
Here is what I usually do:
File -> Settings
Project -> Project Interpreter
Click on the gear icon on the right.
Choose "More"
With my remote interpreter selected, click on "Edit"
Change "SSH Credentials" for "Deployment Configuration" (all the info is already filled)
Click "ok" multiple times to close everything up.
At that point, I can run any scripts on the remote machine.
What is the best/fastest way to do this? (any way to "save the settings"?) | 0 | python,pycharm,remote-access,interpreter | 2014-12-19T01:27:00.000 | 0 | 27,558,514 | I met the same error. Go to File -> Settings and check your interpreter settings:
either the 'host' and 'port' are not set, or they are set but the fields were left empty. Check again; that fixed it for me. | 0 | 10,796 | false | 0 | 1 | Connecting to remote interpreter when starting PyCharm | 53,589,468
2 | 3 | 0 | 2 | 6 | 0 | 0.132549 | 0 | I have a remote interpreter set up in PyCharm.
Every time I close and reopen PyCharm, the connection seems to be broken, and the process to "reopen" the connection doesn't feel efficient to me.
Before doing the following, it is not possible to run any script.
Here is what I usually do:
File -> Settings
Project -> Project Interpreter
Click on the gear icon on the right.
Choose "More"
With my remote interpreter selected, click on "Edit"
Change "SSH Credentials" for "Deployment Configuration" (all the info is already filled)
Click "ok" multiple times to close everything up.
At that point, I can run any scripts on the remote machine.
What is the best/fastest way to do this? (any way to "save the settings"?) | 0 | python,pycharm,remote-access,interpreter | 2014-12-19T01:27:00.000 | 0 | 27,558,514 | This was a bug in version 4.0.2 of PyCharm and was corrected in version 4.0.3.
Edit: No longer true. I tried with another computer and having the most recent version doesn't fix the problem. | 0 | 10,796 | false | 0 | 1 | Connecting to remote interpreter when starting PyCharm | 27,585,038 |
1 | 1 | 0 | 2 | 2 | 1 | 0.379949 | 0 | Is there an editor that I could install on the Raspbian OS to practice Regex with? If not, what about through Python? If so, is there a good python IDE out there for Raspbian that supports Regexs? | 0 | python,regex,raspberry-pi,raspbian | 2014-12-19T10:42:00.000 | 0 | 27,564,320 | Python itself supports regexes (via a built-in module). If you're just interested in playing around with them, import re in an interactive shell, and you have access to Python's regular expression engine. | 0 | 335 | false | 0 | 1 | Using Regular Expressions on my Raspberry Pi | 27,578,046 |
2 | 3 | 0 | 2 | 0 | 0 | 0.132549 | 0 | I am trying to generate a nucleotide motif that will code chosen amino acids. For example - histidine is coded by CAT, CAC. Arginine is CGT, CGC, CGA, CGG,AGA and AGG. The pattern is:
position in codon - C or A
position in codon - A or G
position - A, T, C or G
by that rule you can define chosen amino acids (H and R) but also the amino acids that I don't want (for example AAA is lysine, AAT is asparagine...). So I need to define the pattern that matches only my chosen AAs; in the case above it can be: [C][A or G][T], a pattern that defines only histidine and arginine, but not the other amino acids. I am trying to work out an algorithm which will do this for any amino acids which I choose (more than two), and if the pattern does not exist it should find the possibilities for fewer amino acids (for example, if a pattern for 5 amino acids does not exist, it will find the patterns for four amino acids from the query) - this final optimization problem is probably the hardest part. Any suggestions? Thanks a lot and sorry for my poor English. | 0 | python,design-patterns,sequence,bioinformatics | 2014-12-22T12:52:00.000 | 0 | 27,603,128 | I would do this in two steps. First, translate the nucleotide sequence into the amino acid sequence, using a mapping of codon to amino acid (CAT maps to H, CAC maps to H, CGT maps to R, CGC maps to R, etc.). Second, use the Boyer-Moore algorithm to search for specific amino acid sequences, or regular expressions if you need "wildcards" or groups of options. | 0 | 235 | false | 0 | 1 | Bioinformatics - common motif for amino acids | 27,603,419
2 | 3 | 0 | 0 | 0 | 0 | 1.2 | 0 | I am trying to generate a nucleotide motif that will code chosen amino acids. For example - histidine is coded by CAT, CAC. Arginine is CGT, CGC, CGA, CGG, AGA and AGG. The pattern is:
1st position in codon - C or A
2nd position in codon - A or G
3rd position - A, T, C or G
by that rule you can define chosen amino acids (H and R) but also the amino acids that I don't want (for example AAA is lysine, AAT is asparagine...). So I need to define the pattern that matches only my chosen AAs; in the case above it can be: [C][A or G][T], a pattern that defines only histidine and arginine, but not the other amino acids. I am trying to work out an algorithm which will do this for any amino acids which I choose (more than two), and if the pattern does not exist it should find the possibilities for fewer amino acids (for example, if a pattern for 5 amino acids does not exist, it will find the patterns for four amino acids from the query) - this final optimization problem is probably the hardest part. Any suggestions? Thanks a lot and sorry for my poor English. | 0 | python,design-patterns,sequence,bioinformatics | 2014-12-22T12:52:00.000 | 0 | 27,603,128 | The problem was solved:
1) I made a library of all codons for the amino acids of choice (ex. Met and Trp are AUG and UGG - so the library of all combinations consists of [A/U][U/G][G] - AUG, AGG, UUG, UGG)
2) Created two lists - first of "good" codons - AUG, UGG - and second of "bad" codons - AGG, UUG
3) Calculated the number of "good" codons left if I remove a specific nucleotide (say, if I remove U in the second position I lose my AUG for methionine - so for U in the second position the number is 1), for each nucleotide in each position
4) Next, the "bad" codons were analyzed for the occurrence of each nucleotide in each position (for codons AGG and UUG I have 2x G in third, 1x G in second, 1x U in first etc.)
5) After these steps I simply took the highest number from step 4), looked at the list in step 3), and if the G in the third position can be harmlessly removed without any losses in good codons (not possible in our example, but can be done in larger codon sets) - I remove all codons with G in the third position, edit the lists of "good" and "bad" codons, and proceed again to step 3) | 0 | 235 | true | 0 | 1 | Bioinformatics - common motif for amino acids | 27,894,330
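Steps 1) and 2) above can be sketched like this (the codon table is only a tiny illustrative excerpt in the DNA alphabet, not a complete genetic code):

```python
from itertools import product

CODON_TABLE = {  # small excerpt for illustration only
    "CAT": "H", "CAC": "H",
    "CGT": "R", "CGC": "R", "CGA": "R", "CGG": "R",
    "AGA": "R", "AGG": "R",
    "AAA": "K", "AAG": "K", "AAT": "N", "AAC": "N",
    "AGT": "S", "AGC": "S", "CAA": "Q", "CAG": "Q",
}

def expand(pattern):
    """pattern = allowed nucleotides per position, e.g. ['C', 'AG', 'T']."""
    return {"".join(p) for p in product(*pattern)}

wanted = {"H", "R"}
codons = expand(["C", "AG", "T"])   # [C][A or G][T] -> CAT, CGT
bad = {c for c in codons if CODON_TABLE.get(c) not in wanted}
print(sorted(codons), bad)          # only H/R codons here, so bad is empty
```

The per-position counting of steps 3)-5) then operates on these two codon sets, shrinking the allowed nucleotide sets until `bad` is empty.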
1 | 1 | 0 | 1 | 0 | 1 | 1.2 | 0 | I want to make a simple online IDE with a cloud storage feature; let us say it should be implemented in C++, Python, etc...
Is it SaaS (I mean the IDE :-) )?
I'm new to the SaaS and cloud world; where do I start to make something like what I said above?
Thanks. | 0 | python,c++,ide,cloud,saas | 2014-12-24T06:25:00.000 | 0 | 27,632,485 | Based on Wikipedia:
Software as a service (SaaS) is a software licensing and delivery
model in which software is licensed on a subscription basis and is
centrally hosted
There are many implementations of SaaS, such as ERP, CRM, CMS, etc. Find out for yourself what kind of service you will offer to your customers. Then choose the right SaaS implementation. | 0 | 140 | true | 0 | 1 | Integrated Development Environment with SaaS | 27,633,513
1 | 1 | 0 | 1 | 0 | 0 | 1.2 | 0 | I've accumulated a set of 500 or so files, each of which has an array and header that stores metadata. Something like:
2,.25,.9,26 #<-- header, which is actually cryptic metadata
1.7331,0
1.7163,0
1.7042,0
1.6951,0
1.6881,0
1.6825,0
1.678,0
1.6743,0
1.6713,0
I'd like to read these arrays into memory selectively. We've built a GUI that lets users select one or multiple files from disk, and then each is read into the program. If users want to read in all 500 files, the program is slow opening and closing each file. Therefore, my question is: will it speed up my program to store all of these in a single structure? Something like hdf5? Ideally, this would have faster access than the individual files. What is the best way to go about this? I haven't ever dealt with these types of considerations. What's the best way to speed up this bottleneck in Python? The total data is only a few MegaBytes, I'd even be amenable to storing it in the program somewhere, not just on disk (but don't know how to do this) | 0 | python,database,numpy,dataset,storage | 2014-12-24T19:51:00.000 | 0 | 27,641,616 | Reading 500 files in Python should not take much time, as the overall file size is around a few MB. Your data structure in the file chunks is plain and simple, so I guess it will not even take much time to parse.
If the actual slowness is because of opening and closing files, then there may be an OS-related issue (it may have very poor I/O).
Did you time how long it takes to read all the files?
You can also try using a small database structure like SQLite, where you can store your file data and access the required data on the fly. | 1 | 66 | true | 0 | 1 | Better way to store a set of files with arrays? | 27,641,772
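A quick way to measure the raw read time before reaching for SQLite or HDF5; a sketch (the "data/*.txt" glob pattern is a placeholder for wherever the 500 files live):

```python
import glob
import time

def timed_read(pattern):
    """Read every matching file (header line + body) and time the whole pass."""
    start = time.perf_counter()
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            header = f.readline().strip()    # the cryptic metadata line
            rows.append((header, f.read()))  # rest of the array as text
    return rows, time.perf_counter() - start

rows, secs = timed_read("data/*.txt")
print(f"read {len(rows)} files in {secs:.4f}s")
```

If that number is already small, the GUI's slowness is elsewhere; if not, a single consolidated store is worth trying.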
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I have an api with publishers + subscribers and I want to stop a publisher from uploading a lot of data if there are no subscribers. In an effort to avoid another RTT I want to parse the HTTP header, see if there are any subscribers and if not return an HTTP error before the publisher finishes sending all of the data.
Is this possible? If so, how do I achieve it? I do not have post-buffering enabled in uWSGI and the data is being uploaded with a transfer encoding of chunked. Therefore, since uWSGI is giving me a content-length header, it must have buffered the whole thing somewhere previously. How do I get it to stop?
P.S. uWSGI is being sent the data via nginx. Is there some configuration that I need to set there too, perhaps? | 0 | python,nginx,uwsgi | 2014-12-25T07:32:00.000 | 0 | 27,645,172 | The limit here is nginx. It cannot avoid buffering the input (unless in websockets mode). You may have more luck with Apache or the uWSGI http router (although I suppose they are not a viable choice) | 0 | 244 | false | 1 | 1 | Can uWSGI return a response before the POST data is uploaded? | 27,646,090
1 | 1 | 0 | 2 | 0 | 0 | 1.2 | 0 | I am new to python scripting and started to look into a script to allow me to SSH to a box and check it is still running.
I have installed ActiveState (Python 2.7) on my windows desktop.
Using PyPM I have installed paramiko and pycrypto
but when I try to execute my scripts I get the following error:
Script: python C:\Python27\Scripts\RunOnEnv\ssh-matic.py
Error: ImportError: No module named ssh
When passing:
'>>>help('modules')
I can not see ssh in the list.
I have tried uninstalling and installing the modules with no problems.
What else am I missing? | 0 | python,ssh,paramiko,activestate,pypm | 2014-12-26T19:25:00.000 | 1 | 27,661,082 | I feel really foolish.
After reading that paramiko has replaced the ssh module, it turns out there is still a separate ssh module available.
Ooops!! | 0 | 320 | true | 0 | 1 | Installing modules with PyPM on ActiveState | 27,667,123 |
1 | 1 | 0 | 0 | 0 | 1 | 1.2 | 0 | I get slightly nervous posting in here because these are not my waters yet. Please bear with, I am very new to the world of code. I do my best to find answers to questions before I ask them; time is very valuable so I appreciate yours.
When I run code under Python in Terminal do I run the risk of damaging my system if I run bad code? My guess is no but I'd rather ask than regret. To follow up that question is there an editor that includes a built in interpreter so that I can write code and see it interpreted in the same window? Or is the best practice to write in an editor and run it in Terminal? Since syntax highlighting isn't available in Terminal I'm assuming that writing in Terminal is less than efficient.
Thank you for your help. | 0 | python,terminal,ide | 2014-12-27T05:55:00.000 | 1 | 27,664,809 | As long as you don't mess with the system files, you won't do any damage. Be sure you know which files you might be disturbing. If your code is too, ummmm, core, give ideone.com a try. This might help you with the things you might be touching.
Terminal might not give you enough interaction, but as you said you are new to coding, the terminal is important. Get along with it. Find errors without the syntax highlighting feature; it will definitely help in the future. But yes, this is applicable if you are going for it seriously, gradually. | 0 | 629 | true | 0 | 1 | Where To Test Python Code | 27,664,945
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I'm trying to test and distribute my python application in script or executable form (client). I already have my openshift server setup and running. I'm confused on setting up port forwarding with other users to test it out with.
Do other clients (publicly) need to download rhc and run 'rhc port-forward appname' on their own machines, or are there alternatives that can be accomplished internally in Python code?
This is kind of confusing and any help would be much appreciated.
Thanks. | 0 | python,c++,linux,openshift,openshift-client-tools | 2014-12-27T06:12:00.000 | 1 | 27,664,896 | All the 'rhc port-forward appname' command does is set up SSH tunnels behind the scenes. If you want people to tunnel into your application you will need to get their public SSH key into your application as an approved key. Then you can set up an SSH tunnel whatever way you choose. | 0 | 211 | true | 0 | 1 | Public Client App Port Forwarding with OpenShift | 27,665,587
1 | 2 | 0 | 1 | 2 | 1 | 0.099668 | 0 | I am trying to use the timeit module to time the speed of an algorithm that analyzes data.
The problem is that I have to do run some setup code in order to run this algorithm. Specifically, I have to load some documents from a database, and turn it into matrix representation.
The timeit module does not seem to allow me to pass in the matrix object, and instead forces me to set this up all over again in the setup parameter. Unfortunately this means that the running time of my algorithm is fuzzed by the running time of pre-processing.
Is there some way to pass in objects that were created already, to timeit in the setup parameter? Otherwise, how can I deal with situations where the setup code takes a nontrivial amount of time, and I don't want that to fuzz the code block I actually am trying to test?
Am I approaching this the wrong way? | 0 | python,time,timeit | 2014-12-29T05:36:00.000 | 0 | 27,683,900 | The time it takes to run the setup code doesn't affect the timeit module's timing calculations.
You should be able to pass your matrix into the setup parameter using an import, e.g.
"from __main__ import mymatrix" | 0 | 601 | false | 0 | 1 | Timeit module - Passing objects to setup? | 27,684,364 |
1 | 2 | 0 | 3 | 2 | 1 | 1.2 | 0 | It's a common question not specifically about some language or platform. Who is responsible for a file created in systems $TEMP folder?
If it's my duty, why should I care where to put this file? I can place it anywhere with the same result.
If it's OS responsibility, can I forgot about this file right after use?
Thanks and sorry for my basic English. | 0 | python,bash,shell,operating-system,temp | 2014-12-29T13:51:00.000 | 1 | 27,690,197 | As a general rule, you should remove the temporary files that you create.
Recall that the $TEMP directory is a shared resource that other programs can use. Failure to remove the temporary files will have an impact on the other programs that use $TEMP.
What kind of impacts? That will depend upon the other programs. If those other programs create a lot of temporary files, then their execution will be slower as it will take longer to create a new temporary file as the directory will have to be scanned on each temporary file creation to ensure that the file name is unique.
Consider the following (based on real events) ...
In years past, my group at work had to use the Intel C Compiler. We found that over time, it appeared to be slowing down. That is, the time it took to run our sanity tests using it took longer and longer. This also applied to building/compiling a single C file. We tracked the problem down.
ICC was opening, stat'ing and reading every file under $TEMP. For what purpose, I know not. Although the argument can be made that the problem lay with the ICC, the existence of the files under $TEMP was slowing it and our development team down. Deleting those temporary files resulted in the sanity checks running in less than a half hour instead of over two--a significant time saver.
Hope this helps. | 0 | 1,430 | true | 0 | 1 | Should I delete temporary files created by my script? | 27,708,677 |
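If a temporary file is only needed for the lifetime of one operation, Python's tempfile module can do the cleanup for you; a small sketch:

```python
import os
import tempfile

# delete=True (the default) removes the file when the handle closes,
# so nothing is ever left behind in $TEMP.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt") as f:
    f.write("scratch data")
    f.flush()
    path = f.name
    assert os.path.exists(path)      # alive while the block runs

print("still there?", os.path.exists(path))   # removed on close
```

This sidesteps the whole question for short-lived files: the responsibility is yours, but the library discharges it automatically.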
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I am running test scripts in Robot framework using Google Chrome browser.
But when I run scripts consecutively two times, no message log gets generated in the message log section of the Run tab. This problem is being encountered only while using Chrome.
Can anyone help me understand why this is occurring? | 0 | python-2.7,selenium-webdriver,robotframework | 2014-12-31T06:21:00.000 | 0 | 27,717,059 | Have you tried running the test with 'pybot -L TRACE' ? | 0 | 580 | false | 1 | 1 | Using Robot framework with Google Chrome browser | 27,816,296
1 | 2 | 0 | 0 | 0 | 0 | 1.2 | 0 | I want to create a python program that can communicate with another python program running on another machine. They should communicate via network. For me, it's super simple using BasicHTTPServer. I just have to direct my message to http:// server2 : port /my/message and server2 can do whatever action needed based on that message "/my/message". It is also very time-efficient as I do not have to check a file every X seconds or something similar. (My other idea was to put text files via ssh to the remote server and then read that file..)
The downside is, that this is not password protected and not encrypted. I would like to have both, but still keep it that simple to transfer messages.
The machines that are communicating know each other and I can put key files on all those machines.
I also stumbled upon twisted, but it looks rather complicated. Also gevent looks way too complicated with gevent.ssl.SSLsocket, because I have to check for byte length of messages and stuff..
Is there a simple example of how to set something like this up? | 0 | python,encryption,https | 2014-12-31T10:20:00.000 | 1 | 27,719,695 | If you have no problem rolling out a key file to all nodes:
simply run your messages through AES, and move the ciphertext the same way you moved the unencrypted messages;
on the other side, decrypt and handle the plaintext like the messages you handled before. | 0 | 595 | true | 0 | 1 | Python network communication with encryption and password protection | 27,719,872
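A stdlib-only sketch of the shared-key framing. Note this swaps the suggested AES for HMAC, so it provides authenticity but not confidentiality; actual encryption needs a third-party library such as PyCryptodome. The key is a placeholder for the contents of the shared key file:

```python
import hashlib
import hmac

KEY = b"contents-of-the-shared-key-file"   # placeholder shared secret

def seal(message: bytes) -> bytes:
    """Prefix the message with an HMAC tag computed under the shared key."""
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return tag + message

def unseal(blob: bytes) -> bytes:
    """Verify the tag before trusting the message."""
    tag, message = blob[:32], blob[32:]     # SHA-256 tag is 32 bytes
    good = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):  # constant-time comparison
        raise ValueError("bad key or tampered message")
    return message

print(unseal(seal(b"/my/message")))         # b'/my/message'
```

The sealed blob can travel over the existing HTTP path unchanged; only holders of the key file can produce messages the server will accept.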
1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | I'm trying to design a system that can periodically "download" a large amount of data from an outside API.
This user could have around 600,000 records of data that I need once, then to check back every hour or so to reconcile both datasets.
I'm thinking about doing this in Python or Ruby in background tasks eventually, but I'm curious about how to store the data.
Would it be possible/a good idea to store everything in one record hashed as JSON vs copying each record individually?
It would be nice to be able to index or search the data without anything failing, so I was wondering what would be the best implementation memory-wise.
For example, if a user has 500,000 tweet records and I want to store all of them, which would be the better implementation?
one record as JSON => user_1 = {id:1 twt:"blah"},{id:2 twt:"blah"},.....{id:600,000 twt:"blah"}
vs
many records =>
id:1 outside_id=1 twt:"blah"
id:2 outside_id=1 twt:"blah"
id:3 outside_id=1 twt:"blah"
I'm curious how I would find out how memory intensive each method is or what is the best solution.
The records are a lot more complex, with maybe 40 attributes per record that I want to store.
Also would MySQL or MongoDB be a better solution for fastest copy/storage? | 0 | python,mysql,ruby,json,mongodb | 2015-01-04T02:11:00.000 | 0 | 27,761,684 | I think this all boils down to what the most important needs are for the project. These are some of the questions I would try to answer before selecting the technology:
Will I need to access records individually after inserting into the database?
Will I ever need to aggregate the data when reading it (for reporting, for instance)?
Is it more important to the project goals to have the data written quickly or read quickly?
How large do I anticipate the data will grow and will the database technology I select scale easily, cheaply and reliably to support the data volume?
Will the schema of the data change? Do I need a schemaless database solution like MongoDB?
Where are the trade offs between development time/cost, maintenance time/cost and time/cost for running the program?
Without knowing much about the particulars or your project or its goals I would say it's generally not a good idea to store a single JSON object for the entirety of the data. This would likely make it more difficult to read the data and append to it in the future. You should probably apply some more thought on how to model your data and represent it in the database in a way that will make sense when you actually need to use it later. | 0 | 149 | false | 1 | 1 | Best Way to Store Large Amount of Outside API Data... using Ruby or Python | 27,761,882 |
1 | 2 | 0 | 1 | 3 | 0 | 0.099668 | 0 | I'd like to track my pypi project using google analytics. I was wondering where exactly I should embed the google analytics' code? | 0 | python,google-analytics,pypi | 2015-01-05T22:55:00.000 | 0 | 27,789,333 | It's not possible to include Google Analytics code for a project on PyPI. However, you can include it on the project's website (if any) and other pages related to the project, such as documentation. | 0 | 313 | false | 0 | 1 | How to track a PYPI project using google analytics? | 27,789,541 |
1 | 4 | 0 | 0 | 0 | 1 | 0 | 0 | The problem statement is:
Given 2 python files 'A.py' and 'B.py' (modified A.py), is there a way we can find out the:
1. Added methods
2. Removed methods
3. Modified methods: (a) change in method prototype, (b) change in method content
Similarly for classes (added/removed/modified) as well.
My Solution:
I was thinking that if I could use a good diff tool and find out the added/removed/modified lines, I could parse them to find out the required details.
I tried git-diff, but it gives a line-wise diff. So if a method gets shifted because some other method was added before it, it shows the method as deleted from the original file and added in the later file.
I saw that 'meld' gives a good diff between files, which I could use easily, but I could not find a way to programmatically capture the output of meld.
Please provide any follow-up on my solution, or any other solution for the problem.
FYI: I want to automate this as there are many such files. Also, this has to be done on a Linux box. | 0 | python,scripting,diff | 2015-01-06T09:50:00.000 | 0 | 27,796,040 | Git can do that; check out GitHub, it's exactly what you're looking for. | 0 | 2,811 | false | 0 | 1 | Find difference between 2 python files | 27,796,129
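As an alternative to a purely textual diff, Python's ast module can pull function names straight out of the source, which sidesteps the line-shifting problem described in the question; a minimal sketch comparing two made-up source strings:

```python
import ast

def function_names(source):
    """Return the names of all functions defined anywhere in a source string."""
    return {node.name for node in ast.walk(ast.parse(source))
            if isinstance(node, ast.FunctionDef)}

old_src = "def f(): pass\ndef g(): pass\n"
new_src = "def g(): pass\ndef h(): pass\n"

added = function_names(new_src) - function_names(old_src)
removed = function_names(old_src) - function_names(new_src)
print(added, removed)  # {'h'} {'f'}
```

Detecting modified bodies would additionally require comparing the dumped AST (e.g. ast.dump) of same-named functions.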
1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | I have a Python wrapper (to a C lib) generated by Swig.
Unit tests run happily within PyDev.
The project structure follows the PyBuilder suggested setup:
|-src
|-main
|-python
|-A.py
|-_A.so
|-unittest
|-python
|-A_tests.py
When I try to run pyb, I get the following error:
Fatal Python error: PyThreadState_Get: no current thread
Abort trap: 6
NOTE:
If I change A to a pure Python module, everything works.
There must be something (a step) missing related to loading that .so file.
Sorry for a newbie question like this. Any help will be greatly appreciated. | 0 | python,swig,pybuilder | 2015-01-06T21:18:00.000 | 0 | 27,807,343 | Is it possible you built the .so library for another Python version?
PyBuilder does not do anything special about shared objects, especially not when running unit tests.
So try running ldd _A.so and see if that matches the interpreter you're using with pyb. | 0 | 53 | false | 0 | 1 | PyBuilder broken for Swig-Python generated wrapper project | 27,814,800
1 | 2 | 0 | 0 | 0 | 1 | 0 | 0 | I made a login to a python program I created. Is there any way to make the characters show up as stars or something other than the actual character? I am using Python version 3.4.2, I need something I can put in my script. Thanks! | 0 | python,login | 2015-01-07T00:37:00.000 | 0 | 27,809,691 | I suggest you read passwords either using no-echo from a TTY or just a direct stdin line read, only as a fallback.
Instead call a "password helper" program that is provided via an environment variable such as "TTY_ASKPASS". ("ssh" and even "sudo" can do this!)
This means not only can a user provide a 'ask password with stars' but they can also input the password from other sources like a keyring daemon, or GUI popup.
Do not limit your users! | 0 | 487 | false | 0 | 1 | Special Characters For Python | 67,327,565 |
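A minimal Python sketch of that idea; note that TTY_ASKPASS is the convention this answer proposes (modeled on ssh's SSH_ASKPASS), not a pre-existing standard, and the helper protocol here (prompt as argv[1], password on stdout) is an assumption:

```python
import getpass
import os
import subprocess

def read_password(prompt="Password: "):
    """Use a helper program named by TTY_ASKPASS if set; else a no-echo TTY read."""
    helper = os.environ.get("TTY_ASKPASS")
    if helper:
        # Delegate to the user's chosen helper (stars, keyring, GUI popup, ...).
        out = subprocess.check_output([helper, prompt])
        return out.decode().rstrip("\n")
    # Fallback: standard no-echo read from the controlling terminal.
    return getpass.getpass(prompt)
```

This way the program never hardcodes how the password is obtained.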
1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | I have a python script that sends a mail from Yahoo including an attachment. This script runs on a Freebsd 7.2 client and uses firefox : Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.9.0.10) Gecko/2009072813 Firefox/3.0.10. The script fails with the error - Element xpath=//input[@value="Send"] not found. Checked the Page source, the x-Path exists. However, it is not visible in the compose page.
Kindly help me sort out this issue. | 0 | python,firefox,freebsd,yahoo-mail | 2015-01-07T07:54:00.000 | 0 | 27,814,764 | Check if the button is inside an iframe. If it is, then switch to that frame and try again. | 0 | 673 | false | 1 | 1 | Send button in yahoo mail page is not not visible - Firefox, Freebsd 7.2 | 27,815,406
1 | 1 | 0 | 0 | 2 | 1 | 0 | 0 | I would like to port a semi-HPC code scriptable with Python to Xeon Phi, to try out the performance increase; it cannot be run in offload mode (data transfers would be prohibitive), the whole code must be run on the co-processor.
Can someone knowledgeable confirm that it means I will have to "cross-compile" all the libraries (including Python) for the Xeon Phi arch, have those libs mounted over NFS on the Xeon Phi, and then execute it all there?
For cross-compilation: what is the target arch? Of course, for numerics the xeon-phi is a must due to extended intrinsics, but for e.g. Python, would the binaries and libs be binary-compatible with amd64? That would make it much easier, essentially only changing some flags for the number-crunching parts.
UPDATE: For the record, we've had very bad support from Intel on the forums; realizing the poor technical state of the software stack (Yocto could not compile, and so on), and with very little documentation available, we abandoned this path. Goodbye, Xeon Phi. | 0 | python,xeon-phi | 2015-01-07T10:42:00.000 | 0 | 27,817,594 | Why not first port it away from Python? Python is bytecode for a virtual machine (a software emulation of a CPU), which is then translated and executed on the real hardware CPU. You could port to C++ or so, which, when compiled for the target platform, produces machine code that runs natively on the target. That should improve execution speed, so you may not even need a Xeon Phi. | 0 | 1,664 | false | 0 | 1 | Running Python on Xeon Phi | 32,990,976
1 | 1 | 0 | 0 | 0 | 0 | 1.2 | 0 | I'm not savvy with Python or server programming at all. My AVG blocked Python from running SimpleHTTPServer. I was able to install Python 3.4.2 successfully, but noticed that SimpleHTTPServer has been moved into http.server.
How can I set up my machine or Python 3.4.2 so that I can just type python -m SimpleHTTPServer when working on my AngularJS projects locally?
I'm running Windows 7 64.
Thanks, | angularjs,python-3.4,simplehttpserver | 2015-01-07T16:09:00.000 | 0 | 27,823,606 | Resolved it by re-installing Python on my machine. | 0 | 192 | true | 1 | 1 | python simple server with 3.4.2 | 31,328,765
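For the record, SimpleHTTPServer's functionality lives in the http.server module in Python 3, so the command-line equivalent of python -m SimpleHTTPServer is python -m http.server; the same handler is also available programmatically:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Bind to any free port; call serve_forever() instead of server_close()
# to actually serve the current directory, like the old one-liner did.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
print("would serve on port", server.server_port)
server.server_close()
```

Run from the project folder, `python -m http.server 8000` serves it on port 8000, just like the old Python 2 command.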
1 | 2 | 0 | 2 | 3 | 1 | 1.2 | 0 | I am creating a conda recipe, and have added run_test.py . These are unittest classes.
Unfortunately, when there are errors, the package is still created.
My question: how do I inform conda that the tests failed, so that it does not continue with the package build?
run_test.py contains:
import unittest
suite = unittest.TestLoader().discover("../tests/unitTest")  #, pattern="test[AP][la]*[sr].py")
unittest.TextTestRunner(verbosity=2).run(suite)
I do add the files in meta.yaml
test:
files:
- ../tests/unittest/
This is the output:
Ran 16 tests in 2.550s
FAILED (errors=5)
===== PACKAGE-NAME-None-np18py27_0 OK ====
I want to stop the build | 0 | python,anaconda,conda | 2015-01-07T18:10:00.000 | 0 | 27,825,854 | The script needs to exit nonzero. If the tests fail, call sys.exit(1) in the script. | 0 | 541 | true | 0 | 1 | How to fail a conda package build when there are errors on run_test.py | 28,181,251 |
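One way to do that, keeping the structure of the run_test.py shown above, is to capture the TestResult and exit nonzero when it was not successful; the Demo case below stands in for the discovered suite so the snippet is self-contained:

```python
import sys
import unittest

class Demo(unittest.TestCase):  # stand-in for the tests found by discover()
    def test_ok(self):
        self.assertTrue(True)

suite = unittest.TestLoader().loadTestsFromTestCase(Demo)
# In the real run_test.py you would keep:
# suite = unittest.TestLoader().discover("../tests/unitTest")
result = unittest.TextTestRunner(verbosity=2).run(suite)

# conda only stops the build if the test script exits nonzero,
# so propagate the unittest outcome as the exit status.
if not result.wasSuccessful():
    sys.exit(1)
```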
1 | 2 | 0 | 0 | 4 | 0 | 0 | 0 | Pytest is able to provide nice traceback errors for the failed tests but is doing this after all the tests were executed and I am interested in displaying the errors progressively.
I know that one workaround would be to make it fail fast at the first error, but I do not want that; I want it to continue. | 0 | python,pytest | 2015-01-08T12:47:00.000 | 0 | 27,840,512 | Use the fail flag:
py.test -r f
pytest will then report just the names of the failing tests, the line numbers where the failures occurred, and the type of error that caused each failure. | 0 | 1,385 | false | 0 | 1 | How can I quickly display failure details while using pytest? | 27,842,432
1 | 2 | 0 | 2 | 1 | 0 | 0.197375 | 0 | I have tried ./configure --enable-unicode and ./configure --enable-unicode=ucs4 but the command import sys; print sys.maxunicode is still 65535.
How should I fix this and compile Python with 4-byte unicode enabled? | 0 | python,python-2.7,unicode,compilation | 2015-01-09T03:16:00.000 | 0 | 27,853,300 | You can re-compile an already installed Python with either 4-byte or 2-byte Unicode.
Full flow:
1. After downloading Python 2.7.x and extracting it, go to the Python-2.7.x directory (in my case it is Python-2.7.10).
2. Run "sudo ./configure --enable-unicode=ucs4" or "sudo ./configure --enable-unicode=ucs2", whichever you want, then rebuild and install with make and sudo make install.
Now you can check whether it is UCS2 or UCS4 as below:
1. Go to a terminal, type python and press Enter.
2. Enter the following commands:
import sys
print sys.maxunicode
If the output is 1114111 it is UCS4; if the output is 65535 it is UCS2. | 0 | 3,667 | false | 0 | 1 | How to recompile Python2.7 with 4-byte unicode enabled? | 33,888,200
1 | 2 | 0 | 0 | 4 | 1 | 0 | 0 | All search results return with "how-to" information rather than "what-it-is" information. I'm looking for a simple explanation of what this feature even is. | 0 | python,unit-testing,testing,discovery | 2015-01-10T23:53:00.000 | 0 | 27,882,449 | Test discovery is the process by which the test runner automatically finds your tests: unittest walks the directories below a top-level directory, imports every module whose filename matches a pattern (test*.py by default), and collects the test cases it finds there, so you do not have to list each test module by hand. | 0 | 1,658 | false | 0 | 1 | What does test discovery mean as it relates to Python unit testing? | 69,606,469
2 | 3 | 0 | 0 | 0 | 1 | 0 | 0 | To avoid typing long path names, I am trying to create a folder to put all my .py files in. And I want it to be some sort of "default" folder, so that every time I run a .py file, the system will search this folder for that file.
One solution I figured out is to put my .py files in a module folder like "python\lib", so I can call python -m filename.
But I do not want to make a mess in the lib folder.
Is there any other way to do it? Thanks! | 0 | python,module,system | 2015-01-11T04:47:00.000 | 0 | 27,884,051 | It's not possible to do that without a path. The only thing you can do is put all the modules you want to use in the same directory; you don't have to put them in python\lib, you can put them in a folder on your desktop, for example. Then run your scripts from that folder, and always be sure to start your scripts with #!/usr/bin/env python. | 0 | 5,561 | false | 0 | 1 | How do I run python file without path? | 27,884,145
2 | 3 | 0 | 1 | 0 | 1 | 0.066568 | 0 | To avoid typing long path names, I am trying to create a folder to put all my .py files in. And I want it to be some sort of "default" folder, so that every time I run a .py file, the system will search this folder for that file.
One solution I figured out is to put my .py files in a module folder like "python\lib", so I can call python -m filename.
But I do not want to make a mess in the lib folder.
Is there any other way to do it? Thanks! | 0 | python,module,system | 2015-01-11T04:47:00.000 | 0 | 27,884,051 | For example, first add:
sys.path.append("/home/xxx/your_python_folder/")
Then you can import your own .py files. | 0 | 5,561 | false | 0 | 1 | How do I run python file without path? | 27,884,333
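A self-contained sketch of that approach; the folder and the greet module are created on the fly purely for illustration:

```python
import os
import sys
import tempfile

# Stand-in for your "default" scripts folder.
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "greet.py"), "w") as f:
    f.write("def hello():\n    return 'hi'\n")

sys.path.append(folder)  # make the folder searchable for imports
import greet

print(greet.hello())  # hi
```

For something that survives across sessions, you can instead add the folder to the PYTHONPATH environment variable.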
1 | 2 | 0 | 279 | 185 | 0 | 1.2 | 0 | I am running unit tests on a CI server using py.test. The tests use external resources fetched over the network. Sometimes the test runner takes too long, causing it to be aborted. I cannot reproduce the issues locally.
Is there a way to make py.test print out the execution times of (slow) tests, so that pinning down problematic tests becomes easier? | 0 | python,pytest | 2015-01-11T05:54:00.000 | 0 | 27,884,404 | I'm not sure this will solve your problem, but you can pass --durations=N to print the slowest N tests after the test suite finishes.
Use --durations=0 to print all. | 0 | 37,228 | true | 0 | 1 | Printing test execution times and pinning down slow tests with py.test | 27,899,853 |
1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | I am doing an AI project based on Keyboard Analytics. In part 1 of the project, I have to build a python based application which will record keyboard inputs. I have some requirements.
I require a breakdown of the input. For example, 'I' is CapsLock + 'i' or Shift + 'i'.
I also want to be able to find the duration of a keypress.
I need to do this globally. Not restricted to an application.
I have considered the pyHook + win32 combo, but I don't think it gives keypress duration.
I have also considered pygame, but it's limited to its own application window.
Is there any module that will help me do this? Or any way I can combine existing modules to get the job done? | 0 | python,pygame,artificial-intelligence,keypress,pyhook | 2015-01-12T14:35:00.000 | 0 | 27,904,399 | I believe you can simply use pygame: check for key events and use pygame.time to measure how long each key is held. | 0 | 193 | false | 0 | 1 | Getting global KeyPress duration using Python | 28,001,301
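The duration bookkeeping the answer hints at can be kept separate from pygame itself. In a real pygame loop you would feed this function ("down"/"up", event.key, pygame.time.get_ticks()) tuples built from KEYDOWN/KEYUP events; the sample events below are made up:

```python
def key_durations(events):
    """events: iterable of (kind, key, tick_ms) with kind 'down' or 'up'.
    Returns {key: [held_ms, ...]} for every completed press."""
    down_at, held = {}, {}
    for kind, key, tick in events:
        if kind == "down":
            down_at.setdefault(key, tick)  # ignore auto-repeat "down"s
        elif kind == "up" and key in down_at:
            held.setdefault(key, []).append(tick - down_at.pop(key))
    return held

sample = [("down", "i", 100), ("down", "shift", 120),
          ("up", "i", 350), ("up", "shift", 400)]
print(key_durations(sample))  # {'i': [250], 'shift': [280]}
```

Note the caveat from the question still applies: pygame only sees keys while its own window has focus, so this is not a global hook.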