Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
26,621,636
2014-10-29T02:00:00.000
1
1
0
0
python,command-line,cron,scrapy,twython
26,708,022
1
true
1
0
Use a cron job. This allows you to run a command line command at a time interval. You can combine commands with && and as a result can change directories with the normal bash command cd. So in this case you can call: cd /directory/folder && scrapy crawl scrape_site && python twitter.py. To run this every fifteen minutes, make the cron job run on the schedule */15 * * * *. So the full cron job would look like: */15 * * * * cd /directory/folder && scrapy crawl scrape_site && python twitter.py
1
0
0
I wrote a web scraper in Scrapy that I call with scrapy crawl scrape_site and a twitter bot in Twython that I call with python twitter.py. I have all the proper files in a directory called ScrapeSite. When I execute these two commands on the command line while in the directory ScrapeSite they work properly. However, I would like to move the folder to a server and have a job run the two commands every fifteen minutes. I've looked into doing a cron job to do so, but the cronjobs are located in a different parent directory, and I can only call scrapy in a directory with Scrapy files (e.g. ScrapeSite). Can I make a cron job to run a file in the ScrapeSite directory that in turn can call the two commands at the proper level? How can I programmatically execute command line commands at a different leveled directory at a certain time interval?
How can I programmatically execute command line commands at a different leveled directory at a certain time interval?
1.2
0
1
190
26,621,818
2014-10-29T02:24:00.000
1
0
0
1
python,linux,terminal
26,621,902
1
false
0
0
URL support is hard coded in the individual terminal emulator. The terminal may support arbitrary URIs as registered in whichever environment it calls home, so that you can e.g. write a Gnome extension for myapp://something and have it work in gnome-terminal, but this is entirely terminal specific. It's also possible for a terminal program in any terminal to receive mouse events and it can then do whatever it wants with them (like how elinks lets you click non-URL links to browse). However, this requires the program to be running in the foreground and controlling everything that appears on that terminal.
1
1
0
I've noticed that hyperlinks printed in my Debian/Linux terminal are clickable and open the browser when clicked. I was wondering if this could be used for other things or if this was just hard-coded in the terminal for hyperlinks only. Is it possible to print out a line in Python that when clicked will launch another process, for example?
Clicking on element in terminal
0.197375
0
0
155
26,625,845
2014-10-29T08:37:00.000
6
1
0
0
python,amazon-web-services,amazon-ec2,webserver
26,626,875
2
true
1
0
You should open port 8080 and relax the IP restriction in the security groups, for example: All TCP, TCP, 0 - 65535, 0.0.0.0/0. The last item means this server will accept requests from any IP and port.
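For completeness, a minimal Python 2 sketch of serving on all interfaces on port 8080 with the standard library (matching the SimpleHTTPServer approach in the question); remote access still requires the inbound rule in the security group.

```python
# Minimal Python 2 sketch: serve the current directory on all interfaces, port 8080.
# The security group must still allow inbound TCP on 8080 for remote access.
import BaseHTTPServer
import SimpleHTTPServer

server = BaseHTTPServer.HTTPServer(
    ("0.0.0.0", 8080),  # bind to all interfaces, not just localhost
    SimpleHTTPServer.SimpleHTTPRequestHandler,
)
server.serve_forever()
```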
1
4
0
I work on a project in which I need a python web server. This project is hosted on Amazon EC2 (ubuntu). I have made two unsuccessful attempts so far: run python -m SimpleHTTPServer 8080. It works if I launch a browser on the EC2 instance and head to localhost:8080 or <ec2-public-IP>:8080. However I can't access the server from a browser on a remote machine (using <ec2-public-IP>:8080). create a python class which allows me to specify both the IP address and port to serve files. Same problem as 1. There are several questions on SO concerning Python web server on EC2, but none seems to answer my question: what should I do in order to access the python web server remotely ? One more point: I don't want to use a Python web framework (Django or anything else): I'll use the web server to build a kind of REST API, not for serving HTML content.
Accessing Python webserver remotely on Amazon EC2
1.2
0
1
1,858
26,626,347
2014-10-29T09:09:00.000
-2
1
0
0
python,ssh,paramiko
26,708,001
1
false
0
0
Load host keys from a local host-key file. Host keys read with this method will be checked after keys loaded via load_system_host_keys, but will be saved back by save_host_keys (so they can be modified). The missing host key policy AutoAddPolicy adds keys to this set and saves them, when connecting to a previously-unknown server. This method can be called multiple times. Each new set of host keys will be merged with the existing set (new replacing old if there are conflicts). When automatically saving, the last hostname is used. Read a file of known SSH host keys, in the format used by openssh. This type of file unfortunately doesn't exist on Windows, but on posix, it will usually be stored in os.path.expanduser("~/.ssh/known_hosts").
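A small paramiko sketch showing where these calls usually sit; the host name, username and key paths are placeholders.

```python
import paramiko

client = paramiko.SSHClient()
# Load the system-wide known_hosts first (typically ~/.ssh/known_hosts on POSIX).
client.load_system_host_keys()
# Optionally merge in a local, writable host-key file as well (path is a placeholder).
client.load_host_keys("/home/me/.ssh/my_known_hosts")
# For previously unknown servers, add their key instead of raising an error.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

client.connect("example.com", username="me", key_filename="/home/me/.ssh/id_rsa")
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read())
client.close()
```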
1
4
0
I am connecting to a host for the first time using its private key file. Do I need to call the load_host_keys function before connecting to the host, or can I just skip it? I have AutoAddPolicy for missing host keys, but how can Python know the location of the host key file? Hence my question: when should I use load_host_keys?
When and why to use load_host_keys and load_system_host_keys?
-0.379949
0
1
3,777
26,627,776
2014-10-29T10:20:00.000
5
0
0
0
python,pyqt,qmessagebox
26,627,916
1
true
0
1
Finally found an answer. Just used MsgBox.done(1) instead of close. Thanks
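A minimal PyQt4 sketch of the approach described (calling done() once the work has finished); QMessageBox inherits QDialog, so done() is available. The title and text are placeholders, and in a real app the long-running work must keep the event loop alive (e.g. with processEvents) for the box to repaint.

```python
from PyQt4 import QtGui

app = QtGui.QApplication([])

msg = QtGui.QMessageBox()
msg.setWindowTitle("Please wait")
msg.setText("Working...")
msg.setStandardButtons(QtGui.QMessageBox.NoButton)  # no user-clickable buttons
msg.show()

# ... long-running work happens here ...

# QMessageBox inherits QDialog, so done() closes it with a result code.
msg.done(1)
```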
1
2
0
So I have a QMessageBox that is not closable by the user. I want it to stay active until some work is done, and then close automatically. I tried MsgBox.close(), but it doesn't work. How do I close that MsgBox? Thanks in advance
PyQt close QMessageBox
1.2
0
0
3,442
26,628,565
2014-10-29T10:57:00.000
-1
1
0
0
python,ibm-mq,pymqi
26,640,062
1
false
0
0
You should never need to check the status of a channel or start it, from an application. This is probably why the documentation does not have much coverage on this as it is not an expected thing for an application to do. Instead, your channel should be configured to automatically start when messages need to be moved across it, so it is always running when you need it to be. This is known as Triggering. If there is an issue with the network connection the channel is using, the channel will retry and remake the connection again too, so you do not need to check the status of the channel from your application.
1
0
0
I am interfacing with a WebSphere MQ system, using Python/pymqi. From time to time, I will need to: check the status of an MQ channel, and start/restart a channel that is not running. How can I achieve the above? The pymqi documentation doesn't appear to cover this, despite having very good coverage of dealing with MQ queues.
How can I restart an MQ channel using pymqi?
-0.197375
0
0
195
26,629,379
2014-10-29T11:39:00.000
15
0
1
1
python,windows-7,uninstallation,python-3.4,system-restore
27,374,542
2
false
0
0
Just had this problem and solved it by hitting repair first then uninstall.
2
8
0
A couple of days after uninstalling Python 3.4.2 I had to carry out a system restore (I'm using Windows 7) due to accidentally installing a bunch of rubbish-ware that was messing with my computer even after installation. This system restore effectively "reinstalled" Python, or rather a broken version of it. I now can't uninstall it via the usual Control Panel -> Uninstall Programs tool, nor can I reinstall it using the original installer. Unfortunately Windows has not saved an earlier system snapshot that I could restore to. Both the uninstall and reinstall processes make a fair bit of progress before stopping with a warning error that says: "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor" Does anyone have any suggestions on how I might succeed in this uninstallation?
Can't uninstall Python 3.4.2 from Windows 7 after system restore
1
0
0
14,672
26,629,379
2014-10-29T11:39:00.000
12
0
1
1
python,windows-7,uninstallation,python-3.4,system-restore
26,629,632
2
true
0
0
Just delete the c:\Python3.4\ directory, Reinstall python 3.4 (any sub version, just has to be 3.4), and uninstall that. Python is, for the most part, totally self-contained in the Python3.4 directory. Reinstalling python is only needed so you can get a fresh uninstaller to remove the registry keys installation creates.
2
8
0
A couple of days after uninstalling Python 3.4.2 I had to carry out a system restore (I'm using Windows 7) due to accidentally installing a bunch of rubbish-ware that was messing with my computer even after installation. This system restore effectively "reinstalled" Python, or rather a broken version of it. I now can't uninstall it via the usual Control Panel -> Uninstall Programs tool, nor can I reinstall it using the original installer. Unfortunately Windows has not saved an earlier system snapshot that I could restore to. Both the uninstall and reinstall processes make a fair bit of progress before stopping with a warning error that says: "There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor" Does anyone have any suggestions on how I might succeed in this uninstallation?
Can't uninstall Python 3.4.2 from Windows 7 after system restore
1.2
0
0
14,672
26,632,250
2014-10-29T13:57:00.000
0
0
1
0
python,cython,conda
26,639,123
1
false
0
0
In the conda build script, you need to install the files, not just build them. For Python, this typically means running python setup.py install in the build.sh, and including python in your build dependencies so that the python will install into the build environment.
1
0
0
I have built a module using "conda build packagename". However, the built module ends up in "\Anaconda\conda-bld\work". The module can only be imported (using "import packagename") if I cd into this directory, then run Python. I have tried placing the files in "\Anaconda\conda-bld\work" in "\Anaconda\Lib\site-packages", however I am not able to import the module from any directory; I must be in "\Anaconda\Lib\site-packages". Is the only solution to put the .PYD file/.SO file next to the executable Python file, or is there a way to let Python know there is a new module installed? Thank you for your help.
Conda Cython Build PYD/SO Files
0
0
0
992
26,637,631
2014-10-29T18:06:00.000
1
0
0
1
python,django,celery,django-celery
26,644,301
1
true
1
0
Eventually you will have duplicates. Many people ignore this issue because it is a "low probability", and then are surprised when it hits them. And then a story leaks about how someone was logged into another user's Facebook account. If you require them to always be unique then you will have to prefix each ID with something that will never repeat - like the current date and time with microseconds. And if that is not good enough, because there still is an even tinier chance of a collision, you can create a small application that will generate those prefixes, and will add a counter (incremented after each hash request, and reset every couple of seconds) to the date and microseconds. It will have to work in single-threaded mode, but this will be a guarantee to generate unique prefixes that won't collide.
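A small sketch of the prefix idea, assuming the id is passed explicitly when the task is sent; the helper name and client label are placeholders.

```python
import uuid
from datetime import datetime

def make_task_id(client_name):
    """Build a task id that should not collide across clients or databases.

    The client name and a timestamp (with microseconds) form the prefix,
    and a random UUID makes the remainder effectively collision-free.
    """
    stamp = datetime.utcnow().strftime("%Y%m%d%H%M%S%f")
    return "{0}-{1}-{2}".format(client_name, stamp, uuid.uuid4().hex)

# Example: pass the id explicitly when sending the task.
# my_task.apply_async(args=[...], task_id=make_task_id("client_a"))
```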
1
0
0
I'm using celery with django and am storing the task results in the DB. I'm considering having a single set of workers reading messages from a single message broker. Now I can have multiple clients submitting celery tasks and each client will have tasks and their results created/stored in a different DB. Even though the workers are common, they know which DB to operate upon for each task. Can I have duplicate task ids generated because they were submitted by different clients pointing to different DBs? Thanks,
Common celery workers for different clients having different DBs
1.2
1
0
412
26,638,048
2014-10-29T18:30:00.000
1
0
0
0
python,gevent
27,197,021
1
false
0
0
Monkey patch at the earliest possible moment in your code (i.e. before any of your third party modules have been imported). Then, when the third party modules are imported, they will use the monkey-patched versions of the standard libraries.
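A minimal sketch of the ordering, assuming a hypothetical third-party websocket module; anything imported after patch_all() sees the already-patched socket and threading modules.

```python
# monkey-patch before anything else pulls in socket, threading, ssl, etc.
from gevent import monkey
monkey.patch_all()

# Only now import third-party code; it will pick up the patched modules,
# because "import socket" inside it returns the already-patched module object.
import websocket   # hypothetical third-party client that uses socket/threading
```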
1
0
0
The gevent library documentation suggests using the gevent.monkey.patch_all() function to make standard library modules cooperative. As I understand it, this method only works for my code (written by me), because I can explicitly monkey-patch the standard library before importing standard library modules. What about third-party libraries (a websocket client for example), which import the threading and socket modules internally? Is there a way for these libraries to use the patched versions of the threading and socket modules?
gevent.patch_all() and third-party libraries
0.197375
0
1
194
26,638,665
2014-10-29T19:10:00.000
1
0
0
0
python,lldb
26,642,160
1
true
0
0
There isn't a good way to do #1. It does seem gross to have to parse the output of break list... You can sort of do #2 by making a python method callback with a bunch of "HandleCommand" lines in it. It wouldn't be hard to add SB API methods that do either of these tasks. We already have SBStringList as a convenient container for the command text either coming in or going out. If you want to try your hand at lldb hacking, a patch to this effect would be warmly accepted. Otherwise, file a bug with the lldb.llvm.org bugzilla, and somebody will get around to it when they have a free moment.
1
1
0
I am attempting to write a python command extension for lldb which can export the current set of breakpoints to a plist file, and restore the exported breakpoints from the file complete with conditions and commands (presumably in a new session.) I looked over the lldb python API and searched the web (and stack overflow) but have not found answers to the following issues: Is there any way to get the list of breakpoint commands associated with an SBBreakpoint object via the Python API? (I overcame this issue by issuing a command line style "breakpoint list" to the HandleCommand API and parsing the results for commands - but it would be nice to be able to do it via the API.) Is there any way to set multiple commands on an SBBreakPoint object via the python API? The command line alternative only has an affordance for a singe line command. Other than setting an python method callback, there does not appear to be a way to set multiple lldb command line style breakpoint commands (non python) on a breakpoint object?
lldb python API access to getting and setting breakpoint commands (of the non python variety)
1.2
0
0
246
26,646,118
2014-10-30T06:04:00.000
2
0
1
0
python,perl
26,655,292
1
true
0
0
Create a CPAN autobundle file. It records the specific version of each module. Use a module installer that can target versions, such as Carton, perlrocks, or cpanminus. Example: cpanm HTTP::[email protected]
1
1
0
In Python I can use pip to dump a list of versions of installed modules pip list > modules.txt and use it to install that same list of versions of modules in another installation. pip install -r modules.txt How is this done with Perl? (Note: I understand that I can install standalone Perl installations using perlbrew, which makes it the equivalent of Python's virtualenv, so I'm just missing the pip piece.)
How can I install the same version of the modules from one Perl installation in another installation, like you can do with pip in Python?
1.2
0
0
84
26,649,495
2014-10-30T09:38:00.000
-1
0
0
0
python,web.py
26,651,096
2
false
1
0
web.py runs CherryPy as the web server and it has support for handling requests with chunked transfer coding. Have you misread the documentation?
1
1
0
I am using web.py to run a server. I need to get a request from a remote server, however, the request sends me a data with Chunked Transfer Coding. I can use web.ctx.env['wsgi.input'].read(1000) to get the data. But this is not what I need since I don't know the length of the data (because it is chunked). But if I use web.ctx.env['wsgi.input'].read() the server would crash. Can anybody tell me how to get the chunked data in a request?
Python: how to read 'Chunked Transfer Coding' from a request in web.py server
-0.099668
0
1
599
26,650,427
2014-10-30T10:20:00.000
1
0
1
0
python,shebang
26,650,500
5
false
0
0
No, only the main Python file needs the shebang. The shebang is only needed if you want to execute it as ./your_file.py or as your_file.py if it's in your $PATH. So unless the other files should also be executable by themselves (you can always execute using python your_file.py) you don't need the shebang.
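A tiny illustration of the point: only the entry-point file carries the shebang, while modules that are only imported do not need one. The file contents and names are placeholders.

```python
#!/usr/bin/env python
# main.py - the only file that needs the shebang, because only this one
# is executed directly (./main.py); imported modules never need it.

def main():
    print("hello")

if __name__ == "__main__":
    main()
```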
3
9
0
I am working on a medium sized python (2.7) project with multiple files I import. I have one main python file which starts the program. Other files contain class definitions, functions, etc. I was wondering if I should put the shebang line in every python file or only the one I run in order to start my program?
Should I put the shebang line in every python file?
0.039979
0
0
6,656
26,650,427
2014-10-30T10:20:00.000
2
0
1
0
python,shebang
26,650,505
5
false
0
0
You only need it in the file you execute, though it might help code editors tell what kind of code they are looking at if you have it in other files too.
3
9
0
I am working on a medium sized python (2.7) project with multiple files I import. I have one main python file which starts the program. Other files contain class definitions, functions, etc. I was wondering if I should put the shebang line in every python file or only the one I run in order to start my program?
Should I put the shebang line in every python file?
0.07983
0
0
6,656
26,650,427
2014-10-30T10:20:00.000
0
0
1
0
python,shebang
70,532,237
5
false
0
0
Python scripts with "executable" permissions (or any executable script files on unix, linux, mac, etc) can use a shebang line. It's important to understand that Python does not read any lines marked as a comment (prefixed with a "#" sign) and that includes a shebang line. Actually, your OS reads the shebang line and sends the script off to the appropriate interpreter as instructed by the shebang line. lots of scripts do this: "sh" scripts "bash" scripts "python" scripts, all of which are interpreted. So, only the executable Python files that you want to run as command/programs, including package files coded with: if __name__ == '__main__' somewhere in the script, which are also intended to be executed directly by a user. (the objective of the shebang line it to simplify the scripts execution. Rather than forcing the user to run the program by typing the interpreter to use, and then typing the script file as an argument. The user can just type in the command only, as if it were any other command, or possible just click it on the desktop. The OS will then read the script first, find the shebang line, then call the interpreter found in the shebang)
3
9
0
I am working on a medium sized python (2.7) project with multiple files I import. I have one main python file which starts the program. Other files contain class definitions, functions, etc. I was wondering if I should put the shebang line in every python file or only the one I run in order to start my program?
Should I put the shebang line in every python file?
0
0
0
6,656
26,652,617
2014-10-30T12:08:00.000
0
0
0
1
python,google-app-engine,parsing,google-cloud-storage,urllib
26,660,144
2
false
1
0
They will likely be very close. The AppEngine cloud storage library uses the URL fetch service, just like urllib. Nonetheless, like any performance tuning, I'd suggest measuring on your own.
1
0
0
I'm developing an app (with Python and Google App Engine) that requires to load some content (basically text) stored in a bucket inside the Google Cloud Storage. Everything works as expected but I'm trying to optimize the application performance. I have two different options: I can parse the content via the urllib library (the content is public) and read it or I can load the content using the cloudstorage library provided by Google. My question is: in terms of performance, which method is better? Thank you all.
urllib vs cloud storage (Google App Engine)
0
0
1
199
26,657,334
2014-10-30T15:39:00.000
0
0
0
0
python,numpy,scipy,windows64
62,499,396
14
false
0
0
Follow these steps: open CMD as administrator; enter this command: cd.. then cd.. then cd Program Files\Python38\Scripts; download the package you want and put it in the Python38\Scripts folder; run pip install packagename.whl. Done. You can write your Python version instead of "38".
2
31
1
I found out that it's impossible to install NumPy/SciPy via installers on Windows 64-bit, that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything. I tried to install everything via Pip and most things worked. But when I came to SciPy, it complained about missing a Fortran compiler. So I installed Fortran via MinGW/MSYS. But you can't install SciPy right away after that, you need to reinstall NumPy. So I tried that, but now it doesn't work anymore via Pip nor via easy_install. Both give these errors: There are a lot of errors about LNK2019 and LNK1120,. I get a lot of errors in the range of C: C2065,C2054,C2085,C2143`, etc. They belong together I believe. There is no Fortran linker found, but I have no idea how to install that, can't find anything on it. And many more errors which are already out of the visible part of my cmd-windows... The fatal error is about LNK1120: build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd : fatal error LNK1120: 7 unresolved externals error: Setup script exited with error: Command "C:\Users\me\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\BLAS /LIBPATH:C:\Python27\libs /LIBPATH:C:\Python27\PCbuild\amd64 /LIBPATH:build\temp.win-amd64-2.7 lapack.lib blas.lib /EXPORT:initlapack_lite build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_litemodule.obj /OUT:build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd /IMPLIB:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.lib /MANIFESTFILE:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.pyd.manifest" failed with exit status 1120 What is the correct way to install the 64-bit versions NumPy and SciPy on a 64-bit Windows machine? Did I miss anything? Do I need to specify something somewhere? There is no information for Windows on these problems that I can find, only for Linux or Mac OS X, but they don't help me as I can't use their commands.
Installing NumPy and SciPy on 64-bit Windows (with Pip)
0
0
0
131,918
26,657,334
2014-10-30T15:39:00.000
0
0
0
0
python,numpy,scipy,windows64
44,685,941
14
false
0
0
For Python 3.6, the following worked for me: launch cmd.exe as administrator, then pip install numpy-1.13.0+mkl-cp36-cp36m-win32 followed by pip install scipy-0.19.1-cp36-cp36m-win32
2
31
1
I found out that it's impossible to install NumPy/SciPy via installers on Windows 64-bit, that's only possible on 32-bit. Because I need more memory than a 32-bit installation gives me, I need the 64-bit version of everything. I tried to install everything via Pip and most things worked. But when I came to SciPy, it complained about missing a Fortran compiler. So I installed Fortran via MinGW/MSYS. But you can't install SciPy right away after that, you need to reinstall NumPy. So I tried that, but now it doesn't work anymore via Pip nor via easy_install. Both give these errors: There are a lot of errors about LNK2019 and LNK1120,. I get a lot of errors in the range of C: C2065,C2054,C2085,C2143`, etc. They belong together I believe. There is no Fortran linker found, but I have no idea how to install that, can't find anything on it. And many more errors which are already out of the visible part of my cmd-windows... The fatal error is about LNK1120: build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd : fatal error LNK1120: 7 unresolved externals error: Setup script exited with error: Command "C:\Users\me\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\amd64\link.exe /DLL /nologo /INCREMENTAL:NO /LIBPATH:C:\BLAS /LIBPATH:C:\Python27\libs /LIBPATH:C:\Python27\PCbuild\amd64 /LIBPATH:build\temp.win-amd64-2.7 lapack.lib blas.lib /EXPORT:initlapack_lite build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_litemodule.obj /OUT:build\lib.win-amd64-2.7\numpy\linalg\lapack_lite.pyd /IMPLIB:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.lib /MANIFESTFILE:build\temp.win-amd64-2.7\Release\numpy\linalg\lapack_lite.pyd.manifest" failed with exit status 1120 What is the correct way to install the 64-bit versions NumPy and SciPy on a 64-bit Windows machine? Did I miss anything? Do I need to specify something somewhere? There is no information for Windows on these problems that I can find, only for Linux or Mac OS X, but they don't help me as I can't use their commands.
Installing NumPy and SciPy on 64-bit Windows (with Pip)
0
0
0
131,918
26,660,542
2014-10-30T18:22:00.000
1
0
0
0
python,django,gunicorn,paramiko,f5
29,862,343
1
false
1
0
Total guess, but perhaps this will be helpful in debugging. Basically, ensure you've removed all output buffering, which can often be hiding what is really happening when layering multiple big frameworks (like you are doing here). Ensure that you disable all output buffering in Python, both for your foreground webserver process and for any worker processes (setting PYTHONUNBUFFERED is an easy way to ensure that none of your python scripts have buffering, at least on the standard library functions). The terminal can also introduce buffers that make debugging exceptionally hard. Consider prefixing your command with stdbuf -o0 -e0 to disable buffers on stdout and stderr (your command could still re-enable them, but most programs do not).
1
10
0
I have a Django project running behind Nginx, and Gunicorn. One of the applications interacts with network devices using Exscript, which is in turn using Paramiko. Some devices do not work properly when they are behind Gunicorn. The same exact code will work fine from within the django-admin shell. It will also work if I launch the built in django server, but I still get the error if I bypass nginx, and connect directly to Gunicorn. I tried moving the functionality to a celery task, it had the same exact problem, but only behind Gunicorn. I wrote a script using django-extensions that works from the command line, but will fail if called via subprocess. But only behind Gunicorn. The devices that are failing all seem to be F5 LTMs, and it looks like the buffer on the exscript object is being modified somehow. If I had to guess I would say that Gunicorn, and Exscript/Paramiko are somehow stepping on each others memory, or perhaps Gunicorn is setting some environment variable that Exscript is picking up on. In any case I am thoroughly stumped, and would appreciate any guidance on how to troubleshooot this.
Could gunicorn cause an issue with exscript/paramiko?
0.197375
0
0
436
26,661,358
2014-10-30T19:07:00.000
0
0
0
0
android,python,sockets,tcp,real-time
26,661,653
1
true
0
0
Most TCP implementations delay sending small buffers for a short period of time (~.2 seconds) hoping that more data will be presented before adding the expense of sending the TCP segment. You can use the TCP_NODELAY option to (mostly) eliminate that delay. There are several other factors that get in the way, such as where the stack happens to be in an ACK sequence, but it's a reasonably good way to get prompt delivery. Latency depends on many factors including other traffic on the line and whether a given segment is dropped and needs to be retransmitted. I'm not sure what a solid number would be. Sometimes real time data is "better never than late", making UDP datagrams a good option. update: A TCP connection stays open until you close it with shutdown(), a client or server level socket timeout hits, or the underlying stack finally gets bored and closes it. So normally you just connect and send data periodically over time. A common way to deal with a timed out socket is to reconnect if you hit a send error.
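A minimal Python sketch of setting TCP_NODELAY on a client socket, as described above; the address and payload are placeholders.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm so small writes are sent promptly
# instead of being coalesced while waiting for more data.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("192.168.0.10", 5000))   # placeholder address/port

# Keep the connection open and push one character whenever the data changes.
sock.sendall(b"A")
```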
1
0
0
I am pretty new to TCP networking and would like to use TCP for real time transfer of data. Essentially, I need my PC python to send data (single character) to Android application. Data to be send changes in real time, and upon change to data (usually about 0.5 - 1sec apart), it has to send this new data to Android app and will be displayed on the app immediately. My question is 1) If I am using TCP, is it possible to keep the socket connection open even after sending one data to anticipate the subsequent transfers. Or do I need to close the connection after every single data transfer and set up another socket connection. 2) What is the latency of a TCP in the event I am performing something like these? Any form of advice is greatly appreciated!
Use of TCP for real time small packet data over a period of time
1.2
0
1
851
26,662,422
2014-10-30T20:12:00.000
5
0
0
0
python,django,multithreading,mqtt,paho
26,665,495
1
true
1
0
Use the threaded interface to paho-mqtt instead. This starts a background thread to handle the network processing and can be accessed with loop_start(). Alternatively you could make your own thread and just call loop() yourself.
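A minimal paho-mqtt sketch using loop_start(), as suggested; the broker address and topic are placeholders.

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)   # placeholder broker
client.subscribe("devices/#")                # placeholder topic

# loop_start() spawns a background thread that handles the network traffic,
# so the Django process is not blocked the way loop_forever() would block it.
client.loop_start()
```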
1
7
0
I am working on a website and am using Django for development. I have a few devices that communicate to the website using MQTT, and I plan to use paho-mqtt client. My issue is that for paho-mqtt to function I must call a function that loops forever while paho-mqtt continues to listen for messages. What is the best way to have this loop called and contained in it's own thread? Can I just create a new thread, or should I use something like celery?
How to handle mqtt loop_forever function when using Django?
1.2
0
0
3,352
26,665,058
2014-10-30T23:16:00.000
1
0
1
0
python,python-2.7,pygame
26,694,657
1
true
0
1
Changing the sounds to .wav seemed to fix the problem. I tested many different other solutions, but this was the only one that seemed to actually work.
1
1
0
I have a project that I'm compiling with py2exe. After compiling, everything seems to be working fine except for the sound, which for some reason isn't playing anything except a single pop sound in place of the actual sound. I know that relative paths are working, since all my other files (images, data-files, etc) are being loaded without any problems relative to the location of the application, and I'm not getting any error messages in the console (I compile with it showing, rather than with just the Pygame window alone). This is likely a path problem more than a Pygame problem, or possibly I'm missing a .dll but I don't know what the exact cause of this problem is.
Why do sounds work with Pygame before compiling with py2exe, but not after?
1.2
0
0
64
26,667,008
2014-10-31T03:13:00.000
0
0
0
1
python,linux,sungridengine
26,760,309
1
false
0
0
Instead of using qsub, use qrsh.
1
3
0
I am submitting a job(script) to Sun Grid Engine. The job is a python program. It can take many hours to run, but it will periodically write to stdout and stderr to inform me its status (like how many iterations is finished, etc). The problem is that SGE is buffering the output and only writes to file at the end, which means that I cannot see the output on the screen or by tailing the file in real time. I can only get to know the status after the job is finished. Is there a way to get around this by configuring SGE (qsub, etc.)?
SGE: How to see the output in real time
0
0
0
1,042
26,667,750
2014-10-31T04:53:00.000
0
1
0
1
python,raspberry-pi,raspbian
26,667,793
1
false
0
0
You must not add it to the crontab, which will start it on a time-scheduled basis. Instead write an init script or (more simply) add it to /etc/rc.local!
1
0
0
I am starting a python code once my system (debian based raspberry pi) starts by adding the statement sudo python /path/code.py in crontab -e. On boot up it does start. But I would like to know, how can I stop the thing from running using the command line once it starts up.
Stopping a autostarted python program started using crontab
0
0
0
525
26,669,244
2014-10-31T07:11:00.000
-3
0
1
0
python,parallel-processing,pip
26,671,262
2
false
0
0
I think that the best approach to getting better speed is to see where the bottleneck is. Try to analyse which processes are taking place when you use the pip command. Probably most of the time is spent downloading from PyPI and compiling the libraries to native code (such as PIL). You could try to make your own PyPI repository and pre-compile the sources that need to be compiled. In the past there has been much talk of this, but there isn't really a speed-up from launching pip in parallel.
1
8
0
I have a lot of packages to install in my pip requirements and I'd like to process them in parallel. I know, for example, that if I want n parallel jobs from make I have to write make -j n; is there an equivalent command for pip requirements? Thanks!
How to install/compile pip requirements in parallel (make -j equivalent)
-0.291313
0
0
3,552
26,672,090
2014-10-31T10:10:00.000
1
0
0
0
python,xml,compiler-construction
26,672,298
1
true
0
0
To execute Python code from Python you can use the exec function. As for reading from XML: you didn't provide any information about the XML file structure.
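A hedged sketch of the exec approach, assuming the Python source sits in the text of a <code> element; the tag name and file layout are assumptions, since the question does not describe the XML structure.

```python
import xml.etree.ElementTree as ET

def run_xml(path):
    """Read Python source stored in an XML file and execute it.

    Assumes the code lives in the text of a <code> element; adjust the tag
    name to match the real file structure. exec() runs whatever it is given,
    so only use this on files you control.
    """
    tree = ET.parse(path)
    code = tree.getroot().findtext("code")
    if code is not None:
        exec(code)

# run_xml("script.xml")   # e.g. <root><code>print('Hello World')</code></root>
```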
1
0
0
What I want to do is to read an XML file that contains some Python code inside, and then run that code. For example, the XML file contains print 'Hello World'. I want to use a function such as def RunXML(xml): This function will read the code from the XML file and execute it. In this case, the function RunXML will print 'Hello World'. Appreciate any help.
How to read a xml file and convert it to python code and run?
1.2
0
1
222
26,673,342
2014-10-31T11:13:00.000
0
1
1
0
python,unicode,python-idle
26,674,682
3
false
0
0
1) Navigate to the folder with your current version of python. Mine is: /System/Library/Frameworks/Python.framework/Versions/2.7/Resources 2) View the python application package, as this is IDLE. 3) You should see a file called Info.plist containing the line <?xml version="1.0" encoding="UTF-8"?> 4) Here you can modify the encoding; keep in mind Python does not support all source encoding options.
1
4
0
How do I make IDLE use UTF-8 as the default encoding for my Python files ? There is no "Encoding" option in IDLE settings.
Configure IDLE to use Unicode
0
0
0
7,034
26,684,292
2014-10-31T22:23:00.000
3
0
1
1
java,python,bash,awk
26,684,332
1
true
0
0
As long as you process your file line by line and you assemble some statistics, it doesn't really matter what tool you choose. Java has some advantage in terms of speed, compared to scripting languages, but in the end it will be a difference only by a constant factor. What matters the most is the algorithm that you use to process the file.
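A minimal Python sketch of the line-by-line pattern the answer assumes, where only running statistics are kept in memory; the particular statistics shown are placeholders.

```python
def process(path):
    """Stream a huge text file line by line, keeping only running statistics."""
    line_count = 0
    total_chars = 0
    with open(path) as handle:        # the file is never loaded fully into memory
        for line in handle:
            line_count += 1
            total_chars += len(line)
    return line_count, total_chars

# counts, chars = process("input.txt")
```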
1
0
0
I am working on a project which uses text files (.txt) for input, reading them line by line, but these files can be as large as 1 terabyte. I know some languages/technologies which I used for similar problems; those are Java, Bash, Awk, and Python. But I don't know which one can work with such large files, and what kind of tricks and tweaks will be needed.
choosing language to work on very large text files (up to some terabytes)
1.2
0
0
103
26,687,188
2014-11-01T07:42:00.000
0
0
1
0
python,multithreading
26,687,598
2
false
0
0
It is not possible to interrupt the acquire() method on a lock by any system call. acquire(): Acquire a lock, blocking or non-blocking. When acquire is invoked without arguments (consider t1): if this thread already owns the lock, increment the recursion level by one, and return immediately. Otherwise, if another thread owns the lock, block until the lock is unlocked. Once the lock is unlocked (not owned by any thread), then grab ownership, set the recursion level to one, and return. If more than one thread is blocked waiting until the lock is unlocked, only one at a time will be able to grab ownership of the lock. There is no return value in this case.
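A small sketch of the pattern from the question: two threads sharing one Lock around file writes, with acquire()/release() handled by the with statement. The file name and data are placeholders.

```python
import threading

lock = threading.Lock()
output = open("results.txt", "a")

def worker(lines):
    for line in lines:
        # acquire() blocks this thread until the lock is free; the OS keeps
        # the thread suspended, the interpreter does not poll it in a loop.
        with lock:
            output.write(line + "\n")

t1 = threading.Thread(target=worker, args=(["a", "b"],))
t2 = threading.Thread(target=worker, args=(["c", "d"],))
t1.start(); t2.start()
t1.join(); t2.join()
output.close()
```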
2
1
0
When I use two threads t1 and t2 to crawl something from the Internet, and write some filtered data into a single file, I use a Lock() instance to guarantee that there's only one thread that is writing to the file. What I know is when t1.aquire() is called, t2.aquire()will put t2 into blocking state. What I wanna know is, what's going on in the Python interpreter with thread t2 now. Will the interpreter check the state of the thread for every period of time? And further, is the interpreter controlling the CPU time assigned to a single thread or is the host OS?
What's going on when a thread blocks in Python?
0
0
0
112
26,687,188
2014-11-01T07:42:00.000
0
0
1
0
python,multithreading
26,687,606
2
false
0
0
The answer to this question is implementation-specific (that is, it depends on the interpreter and OS). In CPython, each Python thread is mapped directly to an OS thread, so scheduling is controlled by the OS. Locking/unlocking is also handled by the OS, not the interpreter. In Jython, everything runs under the JVM, which also maps threads directly to native threads. So I don't know any implementation where the Python interpreter handles scheduling and locking by itself.
2
1
0
When I use two threads t1 and t2 to crawl something from the Internet, and write some filtered data into a single file, I use a Lock() instance to guarantee that there's only one thread that is writing to the file. What I know is when t1.aquire() is called, t2.aquire()will put t2 into blocking state. What I wanna know is, what's going on in the Python interpreter with thread t2 now. Will the interpreter check the state of the thread for every period of time? And further, is the interpreter controlling the CPU time assigned to a single thread or is the host OS?
What's going on when a thread blocks in Python?
0
0
0
112
26,688,504
2014-11-01T10:47:00.000
0
0
1
0
python,ansi-escape
26,688,531
1
true
0
0
Here you are not printing the contents of the file at all. You can do it with the help of the read() function: doing it like print(o.read()) would work fine.
1
0
0
Python 2.7.8 (with colorama) doesn't seem to load ANS files correctly. When I print an ANS file out, the output is blank. The code is: o = open("C:\ANSI\DATA\MENU1.AMS", 'r') print(o) Am I forgetting a special character or something?
How do i open ANS files in python?
1.2
0
0
395
26,692,052
2014-11-01T19:07:00.000
2
0
1
1
python,python-3.x,pycharm
26,692,407
1
true
0
0
When you go to Settings > Console > Python Console you can choose the standard interpreter for your console. The standard there is the chosen Project Interpreter you select under Settings > Project Interpreter. Don't forget to restart Pycharm. Or you can assign a different interpreter to each project. Go to Settings > Project:myproject > Project Interpreter.
1
2
0
I am a new PyCharm user who switched from Wing. In Wing, if I configure Wing to use the "Python3" interpreter, the console would also run Python3. However, in PyCharm Community Version, even if I configure the project to use the Python 3.4 interpreter, the console would still use 2.7.5. (The program runs properly with Python 3.4) Is there a way that I can use the console with Python3. Platform: Mac OS X 10.7.5 Python 2.7.5 and 3.4 installed. Thanks!
Running Python 3 interpreter in PyCharm Community Edition
1.2
0
0
4,308
26,694,646
2014-11-02T00:09:00.000
1
0
1
0
python,web-scraping,lxml
26,694,820
1
true
0
0
pip is just a python script, you can run /some/other/python $(which pip) install lxml
1
0
0
My Mac (as all Macs) has a default version of Python installed, but I installed Python 2.7 manually. The issue is that now when I install lxml, I believe that my py2.7 version is not getting it; instead it was installed for the preinstalled Python, since it throws a "no module named lxml" error even after installation.
Installing lxml for an specific version of python?
1.2
0
0
116
26,696,393
2014-11-02T05:34:00.000
1
0
1
0
python,file-io
26,697,443
4
false
0
0
To read all lines up to the last X lines you need to know where the last X lines begin, so you will need this information somewhere. There are several ways to get it: when you write the file, save the position where the last X lines begin, and stop reading when you reach that position; store the positions of the line beginnings somewhere (this also allows appending to the file); or, if you know the size of the lines (each line could have the same size), compute the position from the file size. Since each line has at least one character, you at least do not need to read the last X characters.
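For comparison, a single-pass sketch that buffers N lines in a deque so the last N are simply never emitted; this avoids knowing the positions in advance, at the cost of holding N lines in memory. The function name is illustrative.

```python
from collections import deque

def all_but_last(path, n):
    """Yield every line of the file except the final n lines, in one pass.

    A deque acts as a sliding buffer: a line is only emitted once n further
    lines have been read after it, so the last n lines are never emitted.
    """
    buffer = deque()
    with open(path) as handle:
        for line in handle:
            buffer.append(line)
            if len(buffer) > n:
                yield buffer.popleft()

# for line in all_but_last("data.txt", 5):
#     print(line)
```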
1
2
0
I'd like to read a file line by line, except for the last N lines. How do I know where to stop, without reaching the end of the file and back tracking / discarding the last N lines, in Python? Is asking for # lines = X, and looping (X-N) a good way to go about this? What's the simplest / most Pythonic way of doing this?
Simple Way of NOT reading last N lines of a file in Python
0.049958
0
0
1,698
26,697,891
2014-11-02T09:33:00.000
0
0
0
1
python,python-2.7,environment-variables,ubuntu-14.04,volatility
26,698,001
2
false
0
0
It looks like you added vol.py itself to your PATH, which is incorrect. You only need to add the directory, such as /mydir/volatility/, without the vol.py in it.
1
0
0
I'm trying to setup volatility so I can execute commands regardless of what directory I happen to be in at the time. I'm not sure what I'm doing wrong, I've set the environmental variables with the export command. I've double checked my my ~/.bashrc and even added the directory to /etc/enviroment. Running echo $PATH=vol.py returns /mydir/volatility/vol.py. But when I run python vol.py I get "python: can't open file 'vol.py': [Errno 2] No such file or directory" . So I guess my question. How can I set evironmental variables for python so that when I run python vol.py it executes on whatever image file I point it to without being in the volatility directory? Or even better just type vol.py -f whatever/imagefile, the system recognizes it as a python script and executes. I figure its probably something simple, but I'm still learning, so any help is much appreciated. My system : Kubuntu 14.04LTS; Python 2.7; Volatility 2.4
How do I set python environmental variables for Volatility
0
0
0
1,614
26,698,805
2014-11-02T11:30:00.000
0
0
1
0
python,c++,windows,boost-python
26,714,852
2
false
0
0
I just got an answer from one of my colleagues who told me had the exact same problem. The solution was indeed downloading and installing a version of vcredist_x86.exe, but the trick is to find the exact right one. Apparently you can get to a page somewhere from where you can choose the right version. Sorry for not being able to give more exact information, I just have the file now and it works, but it doesn't even say the version number in the file name. This is all very obscure for my taste, but then I'm not a Windows guy.
1
1
0
I've downloaded pythonxy (2.7.6.1) on my new 64 bit Windows machine (Windows 7 Enterprise, SP1). When I try to run python, I get an error saying the side-by-side configuration was incorrect. WinPython 32 bit (2.7.6.3) shows the same behavior, WinPython 64 bit is fine. However, I badly need to compile Python modules with boost and found myself taking the first few steps into what I believe will be searching-the-internet/configuration/compilation hell for 64 bit, so I'd rather try to make the 32-bit python work, for which I have my whole MinGW procedure set up and working. Does anybody know what I need to do in order to fix the side-by-side error? Install some redristributable package or something like that?
32 bit python on 64 bit windows machine
0
0
0
1,442
26,700,204
2014-11-02T14:08:00.000
0
0
0
0
python,sockets,port,zeromq
26,716,452
2
false
0
0
Having read details about ZeroRPC-python's current state, the safest option to solve the task would be to create a central LotterySINGLETON that would receive <-REQ/REP-> send the next free port# upon an instance's request. This approach is isolated from ZeroRPC-dev/mod(s) modifications of use of the otherwise stable ZeroMQ API and gives you full control over the port#-s pre-configured/included-in/excluded-from the LotterySINGLETON's draws. The other way around would be to try to bypass the ZeroRPC layer and ask ZeroMQ directly for the next random port, but the ZeroRPC documentation discourages bypassing its own controls imposed on the (otherwise pure) ZeroMQ framework elements (which is quite reasonable to emphasise, as it erodes the ZeroRPC-layer consistency of its add-on operations & services, so it shall rather be "obeyed" than "challenged" in trial/error ...)
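A hedged sketch of the socket.bind(('', 0)) trick mentioned in the question, which can feed a free port number to whatever then binds the ZeroRPC endpoint; the zerorpc usage line is only indicative, and the approach has a small race window, which is why the central port-lottery service described above is the safer design.

```python
import socket

def pick_free_port():
    """Ask the OS for an unused TCP port, then release it immediately.

    There is a small race window between closing the probe socket and the
    server binding the port.
    """
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    probe.bind(("", 0))          # port 0 means "any free port"
    port = probe.getsockname()[1]
    probe.close()
    return port

endpoint = "tcp://0.0.0.0:%d" % pick_free_port()
# server = zerorpc.Server(MyService()); server.bind(endpoint)   # illustrative usage
```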
1
3
0
I am using ZeroRPC for a project, where there may be multiple instances running on the same machine. For this reason, I need to be able to auto-assign unused port numbers. I know how to accomplish this with regular sockets using socket.bind(('', 0)) or with ZeroMQ using the bind_to_random_port method, but I cannot figure out how to do this with ZeroRPC. Since ZeroRPC is based on ZeroMQ, it must be possible. Any ideas?
ZeroRPC auto-assign free port number
0
0
1
346
26,703,279
2014-11-02T19:17:00.000
0
0
0
0
python,twitter,tweepy
27,247,595
2
true
0
0
Solved. You can never get banned from the Streaming API :)
1
0
0
I'm using Tweepy and I don't find any option to add a delay between each request to make sure I'm not getting banned from Twitter APIs. I think 1 request each 5 seconds should be fine. How can I do this using StreamListener?
How to add a delay between each request in Tweepy StreamListener?
1.2
0
1
397
26,708,170
2014-11-03T05:14:00.000
1
0
0
0
python,wsdl,pysimplesoap
26,923,483
1
false
1
0
That is probably because you must be using version 1.10. Try (a) commenting out the line where you set trace = True or trace = False, since trace has probably been removed in version 1.10, or (b) reverting back to version 1.05a.
1
0
0
Not able to view debug output, and when tried with trace, it gives an error: init() got an unexpected keyword argument 'trace'
Using wsdl in pysimplesoap
0.197375
0
0
448
26,708,916
2014-11-03T06:28:00.000
1
0
0
0
python,linux,nagios
26,887,365
2
false
0
0
I think it's not possible to automatically add a client to the monitoring system, but you can add it through a web browser using Nconf.
2
1
0
I have a Linux environment which has more than 50 servers and which is monitored by Nagios. Now we are creating new servers using a Python web-based GUI and we need to add them to the Nagios server manually. We would like to add new servers automatically to Nagios. Is there any method to add the new servers to Nagios automatically? Thanks in advance
automatically add clients to nagios server
0.099668
0
1
815
26,708,916
2014-11-03T06:28:00.000
1
0
0
0
python,linux,nagios
26,962,382
2
false
0
0
Nagios doesn't support this out of the box. That said, it would be fairly easy to create a Python script to automate the task. At the least, you'd have to provide it with a list of machine names and IP addresses, then let the script do all the .cfg file updates. I did this with Perl, taking less than a day to write; it added the host and several basic infra checks.
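As a rough illustration of the script described, here is a minimal sketch that appends standard Nagios host definitions to a config file; the template, file path and host list are placeholders, and Nagios still has to be reloaded afterwards.

```python
HOST_TEMPLATE = """
define host {{
    use        linux-server
    host_name  {name}
    alias      {name}
    address    {address}
}}
"""

def add_hosts(cfg_path, hosts):
    """Append a basic Nagios host definition for each (name, address) pair."""
    with open(cfg_path, "a") as cfg:
        for name, address in hosts:
            cfg.write(HOST_TEMPLATE.format(name=name, address=address))

# add_hosts("/usr/local/nagios/etc/objects/auto_hosts.cfg",
#           [("web01", "10.0.0.11"), ("web02", "10.0.0.12")])
```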
2
1
0
I have a Linux environment which has more than 50 servers and which is monitored by Nagios. Now we are creating new servers using a Python web-based GUI and we need to add them to the Nagios server manually. We would like to add new servers automatically to Nagios. Is there any method to add the new servers to Nagios automatically? Thanks in advance
automatically add clients to nagios server
0.099668
0
1
815
26,709,879
2014-11-03T07:47:00.000
1
0
0
0
python,flask
26,717,670
1
false
1
0
Flask is a microframework; it is used for developing webapps and related stuff. Flask has its own way of building responses, and you cannot control response packets from Flask. What you are talking about is a different layer: the networking layer.
1
2
0
I am using flask for a very simple app. The response is right but split into multiple TCP packets. It seems that flask puts every HTTP header in its own TCP packet. Why is the flask response split into multiple TCP packets? How do I disable this behaviour?
How to make flask response in one tcp packet?
0.197375
0
1
299
26,710,816
2014-11-03T09:00:00.000
1
0
1
0
python,ipython
26,720,306
1
true
0
0
The config object itself should be stored as the .config attribute of any IPython configurable object. If you don't already have such an object, calling get_ipython() should get you the InteractiveShell instance that controls IPython. However, the values from the config are transferred to the configurable objects when they're instantiated, and other things may change those options without going through the config. So to see the actual values in use, just look at the attributes of the objects you're interested in, following the names used in the config.
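A small illustration, assuming it is run inside an active IPython session where get_ipython() is defined.

```python
# Run inside an IPython session.
ip = get_ipython()            # the active InteractiveShell instance
print(ip.config)              # the Config object assembled from ipython_config.py etc.

# Values actually in use live on the configurable objects themselves, e.g.:
print(ip.colors)              # corresponds to c.InteractiveShell.colors in the config
```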
1
0
0
How can I get the active config object (as defined in ipython_config.py) of an ipython instance? I'd like to inspect this to check vars are set.
ipython: echo active config object (as defined in ``ipython_config.py``) of an ipython instance?
1.2
0
0
90
26,712,229
2014-11-03T10:27:00.000
0
1
0
1
python,macos,setuptools,easy-install
40,178,814
5
false
0
0
You can add "sudo" before "python setup.py ..." in the install.sh.
3
3
0
I'm trying to setup easy_install on my mac. But I'm getting the following error. Installing Setuptools running install Checking .pth file support in /Library/Python/2.7/site-packages/ error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-789.pth' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: /Library/Python/2.7/site-packages/
setuptools easy_install mac error
0
0
0
5,730
26,712,229
2014-11-03T10:27:00.000
8
1
0
1
python,macos,setuptools,easy-install
26,712,371
5
true
0
0
Try again using sudo python ... to be able to write to '/Library/Python/2.7/site-packages/
3
3
0
I'm trying to setup easy_install on my mac. But I'm getting the following error. Installing Setuptools running install Checking .pth file support in /Library/Python/2.7/site-packages/ error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-789.pth' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: /Library/Python/2.7/site-packages/
setuptools easy_install mac error
1.2
0
0
5,730
26,712,229
2014-11-03T10:27:00.000
0
1
0
1
python,macos,setuptools,easy-install
39,312,073
5
false
0
0
Try curl bootstrap.pypa.io/ez_setup.py -o - | sudo python for access related issues.
3
3
0
I'm trying to setup easy_install on my mac. But I'm getting the following error. Installing Setuptools running install Checking .pth file support in /Library/Python/2.7/site-packages/ error: can't create or remove files in install directory The following error occurred while trying to add or remove files in the installation directory: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/test-easy-install-789.pth' The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was: /Library/Python/2.7/site-packages/
setuptools easy_install mac error
0
0
0
5,730
26,712,949
2014-11-03T11:08:00.000
1
0
1
0
python,python-3.x,python-imaging-library
69,050,601
2
false
0
0
I mean, why not use both? It's trivial to convert PIL images into OpenCV images and vice-versa, and both have niche functions that can make your life easier. Pair them up with sklearn and numpy, and you're cooking with gas.
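A small sketch of the round trip between the two libraries, assuming a BGR image as returned by cv2.imread; the file name is a placeholder.

```python
import cv2
import numpy as np
from PIL import Image

# OpenCV image (BGR numpy array) -> PIL image
bgr = cv2.imread("photo.jpg")                      # placeholder file name
pil_img = Image.fromarray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))

# PIL image (RGB) -> OpenCV image
rgb = np.array(pil_img)
bgr_again = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
```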
1
13
0
Here is the effect I am trying to achieve - Imagine a user submits an image, then a python script to cycle through each JPEG/PNG for a similar image in the current working directory. Close to how Google image search works (when you submit your image and it returns similar ones). Should I use PIL or OpenCV? Preferably using Python3.4 by the way, but Python 2.7 is fine. Wilson
Python - Best way to find similar images in a directory
0.099668
0
0
13,182
26,713,443
2014-11-03T11:36:00.000
-1
0
0
1
python,django
67,849,420
7
false
1
0
There is no way to delete it from the Terminal (unfortunately), but you can delete it directly. Just log into the admin page, click on the user you want to delete, scroll down to the bottom and press delete.
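For completeness, a commonly used alternative route is the Django shell (python manage.py shell) and the ORM; this is only a minimal sketch and the username is a placeholder.

```python
# Run inside `python manage.py shell`.
from django.contrib.auth import get_user_model

User = get_user_model()
# username is a placeholder for the superuser to remove
User.objects.get(username="old_admin", is_superuser=True).delete()
```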
2
73
0
This may be a duplicate, but I couldn't find the question anywhere, so I'll go ahead and ask: Is there a simple way to delete a superuser from the terminal, perhaps analogous to Django's createsuperuser command?
Django delete superuser
-0.028564
0
0
57,795
26,713,443
2014-11-03T11:36:00.000
4
0
0
1
python,django
62,485,147
7
false
1
0
No need to delete the superuser... just create another superuser. You can create another superuser with the same name as the previous one. I had forgotten the password of the superuser, so I created another superuser with the same name as previously.
2
73
0
This may be a duplicate, but I couldn't find the question anywhere, so I'll go ahead and ask: Is there a simple way to delete a superuser from the terminal, perhaps analogous to Django's createsuperuser command?
Django delete superuser
0.113791
0
0
57,795
26,720,195
2014-11-03T18:04:00.000
1
0
1
0
python,rgb,color-coding
26,720,253
1
true
0
0
Basically you want to keep the ratio between the components constant. So, to make it darker, simply multiply all 3 components by a constant, e.g. 0.9, again and again, until they're all 0, meaning black. To make it brighter, multiply by a number larger than 1, and clamp at 255.
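A rough sketch of the idea; the shade half multiplies the components as described, while the tint half interpolates toward white (pure multiplication can never brighten a zero component). The function names are illustrative.

```python
def shades(rgb, steps):
    """Darken towards black by scaling all three components equally."""
    r, g, b = rgb
    out = []
    for i in range(steps):
        factor = 1.0 - (i + 1) / float(steps)   # 1.0 -> 0.0
        out.append((int(r * factor), int(g * factor), int(b * factor)))
    return out

def tints(rgb, steps):
    """Lighten towards white by moving each component a fraction closer to 255."""
    r, g, b = rgb
    out = []
    for i in range(steps):
        factor = (i + 1) / float(steps)         # 0.0 -> 1.0
        out.append((int(r + (255 - r) * factor),
                    int(g + (255 - g) * factor),
                    int(b + (255 - b) * factor)))
    return out

# shades((255, 0, 0), 8) ends on (0, 0, 0); tints((255, 0, 0), 5) ends on (255, 255, 255)
```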
1
0
0
I am trying to write a program where the user gives you the rgb value for a color and how many shades and tints they want to print out. Basically I just want to print out a grid with the left most color being the base color the user entered and then depending on if its shades or tints get lighter or darker until it reaches black or white. For example: User input: r = 255, g = 0, b = 0, numTints = 5, numShades = 8 Print Shades: red, darker red, darker red, darker red, black. Same for Tint only getting lighter till it gets to white. I'm new to programming and python, so I am having trouble figuring out how to calculate the shades and tints and have it stop on black and white.
How to calculate a number of shades and tints using a rgb value in python
1.2
0
0
841
26,722,127
2014-11-03T20:00:00.000
2
0
0
1
python,google-app-engine,google-bigquery,google-cloud-datastore
26,722,516
2
false
1
0
There is no full working example (as far as I know), but I believe that the following process could help you : 1- You'd need to add a "last time changed" to your entities, and update it. 2- Every hour you can run a MapReduce job, where your mapper can have a filter to check for last time updated and only pick up those entities that were updated in the last hour 3- Manually add what needs to be added to your backup. As I said, this is pretty high level, but the actual answer will require a bunch of code. I don't think it is suited to Stack Overflow's format honestly.
1
2
0
Currently, I'm using Google's 2-step method to backup the datastore and than import it to BigQuery. I also reviewed the code using pipeline. Both methods are not efficient and have high cost since all data is imported everytime. I need only to add the records added from last import. What is the right way of doing it? Is there a working example on how to do it in python?
Import Data Efficiently from Datastore to BigQuery every Hour - Python
0.197375
1
0
541
26,722,621
2014-11-03T20:29:00.000
1
0
0
0
python,openerp,openerp-7,odoo
26,728,276
1
true
1
0
You have to create a parser for your report, and it is this parser that returns the list of products.
1
1
0
I made a new report on OpenERP, for stock module, which uses the product.product model, like the default one which is Stock Level Forecast. Now, what is the problem I have, I need this report to be shown on "Location Structure->Physical Locations->Warehouse->Analyze current inventory" It is showing there, no errors, everything's fine, but my problem is, because this report is tied to product.product I need to show some product fields (already declared on report) and the location_id of the warehouse I'm querying. Obviously if I loop over the current location on my report, it throws an error, because this is a stock object, and not from product. So, my question is, how can I bring all product fields into stock, to effectively report all product fields AND stock location at the same time? I guess I should make a related field, or a one2many there, but I'm very confused at this point. Any ideas? I hope I explained myself, if you need the code please let me know I'll edit my question then. Thanks in advance!
Relate stock location to product model - OpenERP report
1.2
0
0
462
26,723,964
2014-11-03T21:55:00.000
0
0
0
0
python,whoosh
26,740,422
1
false
0
0
I have come up with a solution, and it works. First off, I redefined my schema so that autograph was an ID field in Whoosh. Then I added a filter to the search call using a Regex query. This works, but I am not going to accept it as the answer in hopes that there is a more elegant solution for filtering results.
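A minimal sketch of that approach, assuming a schema where autograph is an ID field; the field names, documents and index directory are illustrative, and a Term filter is shown here as the simplest exact match (the poster used a Regex filter instead):

    import os
    from whoosh.fields import Schema, ID, TEXT
    from whoosh.index import create_in
    from whoosh.qparser import QueryParser
    from whoosh.query import Term

    # ID fields are indexed as one exact term, so "partial autograph" will not
    # match a filter asking for exactly "autograph".
    schema = Schema(title=TEXT(stored=True), autograph=ID(stored=True))
    if not os.path.exists("indexdir"):
        os.mkdir("indexdir")
    ix = create_in("indexdir", schema)

    writer = ix.writer()
    writer.add_document(title=u"signed ball", autograph=u"autograph")
    writer.add_document(title=u"signed ball", autograph=u"partial autograph")
    writer.commit()

    with ix.searcher() as searcher:
        query = QueryParser("title", ix.schema).parse(u"ball")
        results = searcher.search(query, filter=Term("autograph", u"autograph"))
        print([hit["autograph"] for hit in results])   # only the exact match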
1
2
0
Is there a way, using Whoosh, to return the documents that have a field matching exactly the terms in a query? For example, say I have a schema that has an autograph field with three possible values: Autograph, Partial autograph, and No Autograph. If I do a standard query autograph:autograph, I get all the records, because the term autograph is in all records. I have tried doing something like Term('autograph', 'autograph') and applying that to the filter keyword argument of the search function, but I end up getting the same results. Am I doing something wrong?
Whoosh: matching terms exactly
0
1
0
257
26,726,169
2014-11-04T01:12:00.000
0
0
0
0
python,django,forms,django-admin
26,726,763
1
false
1
0
Regardless of the method of input chosen, the data would be stored as a text column in the backing database. In that case, I would use a regular CharField; it just makes more sense. In the form, use two non-required fields instead: key_text and key_file. Override clean() to require at least one of them. Then override save(...).
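A short sketch of that form-level validation with plain Django forms; the form name and error message are illustrative, the field names follow the answer:

    from django import forms

    class CertificateForm(forms.Form):
        # both fields optional on their own; clean() enforces "at least one"
        key_text = forms.CharField(widget=forms.Textarea, required=False)
        key_file = forms.FileField(required=False)

        def clean(self):
            cleaned = super(CertificateForm, self).clean()
            if not cleaned.get('key_text') and not cleaned.get('key_file'):
                raise forms.ValidationError(
                    'Either paste the key or upload a key file.')
            return cleaned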
1
0
0
I'm trying to see if this is possible: a Field object that is rendered as both a file upload prompt as well as a text box. Since my app deals with SSL certificates and keys, it makes sense to allow the user to either upload a key/cert file, or paste the key/cert in directly. Regardless of the method of input chosen, the data would be stored as a text column in the backing database. This would purely be for convenience's sake.
Django field that is a hybrid of text file upload and/or text box
0
0
0
121
26,726,950
2014-11-04T02:49:00.000
-2
0
0
0
python,numpy,matrix,indexing,pygame
26,727,002
2
false
0
0
Set a boolean that is checked every turn to see if someone has won. If it returns True, then whoever's turn it is has won. So, for instance, it is X's turn, he plays the winning move, the check reports that someone has won and returns True; print out that the player whose turn it is has won, and end the game.
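The summation the question asks about can be done directly with NumPy; a small sketch (the function and array names are invented for illustration):

    import numpy as np

    def has_won(player_board):
        """player_board is a 3x3 array of 0/1 marks for one player."""
        rows = player_board.sum(axis=1)                         # sum of every row
        cols = player_board.sum(axis=0)                         # sum of every column
        diags = np.array([np.trace(player_board),               # main diagonal
                          np.trace(np.fliplr(player_board))])   # anti-diagonal
        return (rows == 3).any() or (cols == 3).any() or (diags == 3).any()

    x_board = np.zeros((3, 3), dtype=int)
    x_board[0, :] = 1          # X fills the top row
    print(has_won(x_board))    # True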
1
1
1
My assignment is Tic-Tac-Toe using pygame and numpy. I have almost all of the program done. I just need help understanding how to find whether there is a winner. A winner is found if the summation of ANY row, column, or diagonal is equal to 3. I have two 3x3 matrices filled with 0's. Let's call them xPlayer and oPlayer. The matrices get filled with 1 every time player X or player O chooses a certain location. So if player X selects [0,0], the matrix location at [0,0] gets a 1 value. This should continue until the summation of any row, column, or diagonal is 3. If all the places in both matrices are 1, then there is no winner. I need help finding the winner. I'm really new to Python so I don't know much about indexing through a matrix. Any help would be greatly appreciated! EDIT: Basically, how do you find the summation of every row, column, and diagonal to check if ANY of them are equal to 3?
Summation of every row, column and diagonal in a 3x3 matrix numpy
-0.197375
0
0
1,887
26,733,418
2014-11-04T10:57:00.000
2
0
0
0
python,algorithm,classification,extraction
29,713,740
1
false
0
0
One approach would be to take the RMS energy value of the signal as a parameter for classification. You should use a music segment rather than the whole music file. Theoretically, a 30-second part of the music, starting after the first 30 seconds, is most representative for genre classification. So instead of taking the whole array, consider only the part that corresponds to this time window, 30 s to 59 s. Calculate the RMS energy of the signal separately for every music file, averaged over the whole window. You may also take other features into account, e.g. MFCC. To use MFCC, you can take the average value over all signal windows for a particular music file and make a feature vector out of it. You may use the difference between the feature vectors as the distance between the data points for classification.
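A tiny sketch of the per-song averaging step, using a hypothetical per-window feature matrix (the shape and values are placeholders; yaafe's actual output object differs):

    import numpy as np

    # hypothetical output for one song: one 12-value MFCC vector per analysis window
    mfcc_frames = np.random.rand(1500, 12)

    song_vector = mfcc_frames.mean(axis=0)      # 12 values per song instead of 1500x12
    # extra per-song statistics can be appended to the same feature vector
    song_vector = np.concatenate([song_vector, mfcc_frames.std(axis=0)])
    print(song_vector.shape)                    # (24,)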
1
5
1
I'm developing a little tool which is able to classify musical genres. To do this, I would like to use a K-nn algorithm (or another one, but this one seems to be good enough) and I'm using python-yaafe for the feature extraction. My problem is that, when I extract a feature from my song (example: mfcc), since my songs are sampled at 44100 Hz, I retrieve a lot of 12-value arrays (one per sample window), and I really don't know how to deal with that. Is there an approach to get just one representative value per feature and per song?
Processing musical genres using K-nn algorithm, how to deal with extracted feature?
0.379949
0
0
449
26,735,790
2014-11-04T13:00:00.000
0
1
0
0
python,pycharm,pytest
27,827,622
2
true
0
0
Answer to my own question: I installed PyCharm again (for other reasons) and now it uses utrunner.py. It is much faster if I use Run 'Unittest test_foo', since this does not collect all tests before running the test. Problem solved.
1
0
0
If I do Run Unittest .... test_foo in PyCharm it takes quite long to run the test, since all tests get collected first. PyCharm uses py.test -k to run the test. Since we have more than 1000 tests, collecting them takes some time (about 1.2 seconds). Often the test itself needs less time to execute! Since I use this very often, I want to speed this up. Any idea how to get this done?
py.test -k: collecting tests takes too much time
1.2
0
0
1,778
26,736,981
2014-11-04T13:57:00.000
0
0
1
0
python,cmake,pip,setuptools
26,767,387
1
false
0
0
It depends. For a "manual" install, you definitely should detect whether all required (build) tools are installed, and issue an error if they are not; then use execute_process() to run pip and whatever else you want. On the other hand, if you are going to produce a real package for some particular Linux, you just pack your binaries and declare (via the corresponding syntax of the particular package format, like *.rpm or *.deb) that your package depends on some other packages. That way you can be sure they will be installed with (or even before) your package.
1
4
0
I've had a quick look around, but because of terminology like dependencies and packages being used in different ways, it's quite tricky to pin down an answer. I'm building a mixed-language source (Fortran, some C and Python) and the Fortran calls a Python script which depends on the networkx Python package in the PyPI. Normally, I just have networkx installed anyway, so it isn't a problem for me when rebuilding. However, for distribution, I want the best way to: Install pip or equivalent, if it is not installed. Possibly install virtualenv and create a virtual environment, if appropriate. Download and install networkx using the --user option with pip. Is there a standard way? Or should I just use CMake dependencies with custom commands that install pip etc.?
Automatically installing Python dependencies using CMake
0
0
0
1,464
26,737,396
2014-11-04T14:18:00.000
2
0
0
0
python,variables,indexing,tkinter,text-widget
26,738,506
1
false
0
1
The solution is simple: don't ever treat the index as a number, because it is not a number. It is a string that happens to look like a number. If you have code that is rounding the value (or truncating trailing zeros), somewhere you're treating it as a number.
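A small illustration of keeping the index as a string; the widget setup and variable names are hypothetical, and comparisons go through the widget's compare method rather than float arithmetic:

    import Tkinter as tk     # the module is called tkinter on Python 3

    root = tk.Tk()
    text = tk.Text(root)
    text.pack()
    text.insert("1.0", "some example text that is long enough to have many columns")

    saved_index = text.index(tk.INSERT)   # e.g. "1.20"; keep it as a string, never float()
    if text.compare(saved_index, ">=", "1.0"):     # compare positions via the widget
        line, column = saved_index.split(".")      # still strings, e.g. "1" and "20"
        print(line, column)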
1
0
0
I am working on a program that gets the user's index position when they are typing in a Text widget in Tkinter and saves it into a variable. But whenever the index is 1.20 it ends up turning into 1.2. Is there a way to make sure the variable holding the index 1.20 will stay 1.20? I have tried using float(), but that did not seem to work. Any ideas? And thank you in advance.
How to prevent a number from being rounded down Python Tkinter
0.379949
0
0
91
26,743,435
2014-11-04T19:30:00.000
0
0
0
0
python,django
26,743,769
2
false
1
0
By others, do you mean someone on your local network or someone on the internet? On a local network it's very easy: instead of starting the local development server with python manage.py runserver, you can do python manage.py runserver 10.1.0.123:8000 (assuming 10.1.0.123 is your system's IP); then people on your local network can access http://10.1.0.123:8000 to see your site. If you want to show it to someone on the internet, then either host it on something like Heroku, or, as another cheap and quick method, configure your router to forward the specific port to your machine and give that person your public IP. This only applies if you have a router, as in a typical home dev setup. You can go to Google and just type "what is my ip" to get your public IP.
1
0
0
A bit of a beginner question. I've just started learning Django and can pretty much create basic stuff. Now, when I want to access my website on my computer, I just type in the local URL and I can access the site, other links, etc. If I want to show this to someone else, how would I do it? They wouldn't be able to just type in the local URL, so what would they need to do to access it? Also, if someone asks me to create an API for them, what exactly does that mean? I'm a beginner with web technologies so any help would be appreciated! Thanks.
Django site-external access
0
0
0
290
26,745,198
2014-11-04T21:18:00.000
1
0
1
0
c#,python,visual-studio,ide,ptvs
26,745,836
1
true
0
0
It is possible to expose documents from non-file sources in VS, but PTVS itself assumes a file system, because it needs to implement Python's rules of what constitutes a package and a module, and how to locate them for imports (so it needs to be aware of subdirectories and parent directories, __init__.py files, etc.). So even if you were to expose a document directly from the database in a VS text editor with Python content type, you'd basically get syntax highlighting and rudimentary code completion, but the type inference engine that drives advanced completion in PTVS won't work. So if you want full-fledged editing capabilities, you will need to have the code present in the file system.
1
0
0
I have written an application which can be extended with IronPython. All scripts are stored in a database and can be edited with ScintillaNET. Everything works fine, but I don't like ScintillaNET as the script editor because of its poor autocomplete and so on. Now I want to use the Visual Studio Shell with the PTVS plugin for editing my Python code. What is the best way to do this? The first option I thought about is creating a kind of plugin which can connect to my database so I can open and edit the scripts from there. But then I would have to write a complete VS 2013 plugin... The second option is to save the scripts to the file system and then open them in Visual Studio. When a script is changed I can write the changes back to the database (maybe I could detect the changes with FileSystemWatcher). But these two ways don't seem very good. Does anyone do something similar and have a great idea? Thank you!
Use PTVS in VS2013 shell for own application
1.2
0
0
78
26,746,379
2014-11-04T22:37:00.000
1
0
1
0
python,pyqt,multiprocessing,signals,pyside
30,091,826
5
false
0
1
I had the same problem in C++. From a QApplication, I spawn a Service object. The object creates the GUI widget, but it is not its parent (the parent is the QApplication then). To control the GUI widget from the service object, I just use signals and slots as usual and it works as expected. Note: the thread of the GUI widget and the one of the service are different. The service is a subclass of QObject. If you need a multi-process signal/slot mechanism, then try Apache Thrift, or use a Qt monitoring process which spawns 2 QProcess objects.
1
20
0
Context: In Python a main thread spawns a 2nd process (using multiprocessing module) and then launches a GUI (using PyQt4). At this point the main thread blocks until the GUI is closed. The 2nd process is always processing and ideally should emit signal(s) to specific slot(s) in the GUI in an asynchronous manner. Question: Which approach/tools are available in Python and PyQt4 to achieve that and how? Preferably in a soft-interrupt manner rather than polling. Abstractly speaking, the solution I can think of is a "tool/handler" instantiated in the main thread that grabs the available slots from the GUI instance and connects with the grabbed signals from the 2nd process, assuming I provide this tool some information of what to expect or hard coded. This could be instantiated to a 3rd process/thread.
How to signal slots in a GUI from a different process?
0.039979
0
0
11,747
26,746,620
2014-11-04T22:55:00.000
0
0
0
1
python,bash
30,111,058
1
false
0
0
I had a similar problem (installed numpy with pip on macosx, but got f2py not found). In my case f2py was indeed in a location on my $PATH (/Users/username/Library/Python/2.7/bin), but had no execute permissions set. Once that was fixed all was fine.
1
1
0
Hi, I am trying to use f2py on macOS. I have a Homebrew Python installation and I have installed numpy using pip. If I type f2py in the terminal I get -bash: f2py: command not found, but if I write import numpy.f2py in a Python script it works well. How can I solve this problem and run f2py directly from the terminal? Thank you!
How to run f2py in macosx
0
0
0
2,795
26,747,296
2014-11-04T23:50:00.000
1
0
1
0
python,loops,iterator,iteration
26,747,459
3
false
0
0
What comes to mind is to keep a small buffer of the last two elements, in two separate variables, a tuple, a list, etc., and compare with the current element from the iterator.
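Alternatively, if the goal is to look at the next element without losing it, itertools.tee can keep an independent copy of the iterator; a small sketch with invented names:

    import itertools

    numbers = iter([1, 2, 3, 4])

    # a plain iterator cannot be reset, but tee() gives two independent copies,
    # so one can be consumed now and the other kept for later
    main_it, backup_it = itertools.tee(numbers)

    first = next(main_it)        # 1, taken from main_it only
    print(first)
    print(list(backup_it))       # backup_it still yields 1, 2, 3, 4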
1
1
0
Can you reset iterators? Or is there a way to save the next element without iterating through it?
Python Iterator: reset iterator?
0.066568
0
0
7,220
26,749,349
2014-11-05T03:43:00.000
2
0
1
0
python,linked-list
26,749,431
1
true
0
0
It sounds like you have some sort of hash to get a shortlist of possibilities, so, you hash your key to a small-ish number, e.g. 0-256 (as an example, it might hash to 63). You can then go directly to your data at index 63. Because you might have more than one item that hashes to 63, your entry for 63 will contain a list of (key,value) pairs, that you would have to search one by one - effectively, you've reduced your search area by 255/256th of the full list. Optionally, when the collisions for a particular key exceeds a threshold, you could repeat the process - so you get mydict[63][92], again reducing the problem size by the same factor. You could repeat this indefinitely.
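A compact sketch of the lookup described above, using Python lists of (key, value) tuples as the collision buckets (the assignment's linked list would take the place of the inner list; all names here are made up):

    class TupleDict(object):
        """Minimal hash table: each bucket is a list of (key, value) tuples."""

        def __init__(self, num_buckets=256):
            self.buckets = [[] for _ in range(num_buckets)]

        def put(self, key, value):
            bucket = self.buckets[hash(key) % len(self.buckets)]
            for i, (k, _) in enumerate(bucket):
                if k == key:                  # key already present: overwrite
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))       # collision or new key: append

        def get(self, key):
            bucket = self.buckets[hash(key) % len(self.buckets)]
            for k, v in bucket:               # walk the collision list
                if k == key:
                    return v
            raise KeyError(key)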
1
0
0
My teacher wants us to recreate the dict class in Python using tuples and linkedlists (for collisions). One of the methods is used to return a value given a key. I know how to do this in a tuple ( find the key at location[0] and return location[1]) but I have no idea how I would do this in the case of a collision. Any suggestions? If more info is needed please let me know
Using tuples in a linked list in python
1.2
0
0
1,152
26,749,980
2014-11-05T04:56:00.000
0
0
1
1
file,python-2.7,batch-file,directory,call
26,750,103
2
false
0
0
Do you mean calling a script without specifying the exact location from the command line? There are two ways: add its directory to your PATH environment variable, or set up an alias or some sort of shortcut in your bashrc / whatever CLI you are using (since you are using Windows, one example would be to set up a cmdlet in Windows PowerShell or something).
1
0
0
I need to call a .bat file from anywhere in the directory without including the specific directory in the script. You'll just need to specify the name of .bat file you want to call and then run. Is this possible?
Call .bat file from anywhere in the directory using python
0
0
0
120
26,751,015
2014-11-05T06:30:00.000
0
0
1
0
python,methods
26,751,037
2
false
0
0
You can import script1 and use functions from it, because script1.py is cached by the interpreter. If I understand the question correctly.
1
0
0
I want to call a method in a running Python script from another script. For example: script1.py is continuously running. In this script there are methods. From another script, script2.py, I want to call a method of script1.py.
Calling methods on already running python script
0
0
0
693
26,751,140
2014-11-05T06:41:00.000
11
0
1
0
python,spyder
44,462,838
2
false
0
0
This is an update for Spyder version 3.1.4: Tools -> Preferences (the shortcut is Ctrl+Alt+Shift+P) -> IPython console (in the left menu of the Preferences utility) -> Source code -> Buffer: 500 lines (in the right menu under the "Display" tab). Increase the buffer to view more lines in the IPython console. Then close and restart Spyder.
1
12
0
How do you see the print history in the console in the Spyder IDE? If more data gets printed, it does not show in the console, and not even the scroll bar shows.
Spyder IDE Console History
1
0
0
21,534
26,751,776
2014-11-05T07:27:00.000
0
0
0
0
python,signalr
37,733,962
2
false
0
0
I just wanted to offer another approach. Maybe the simpler solution would be to execute the SignalR JavaScript library in Python and just create a translation layer to get the results back into Python objects. I do not have code proving this, but I was hoping this would encourage someone in the future to try it.
1
1
0
I want to consume the events/Signals exposed by the Application via .NET SignalR. My requirement is to receive those signals using Python. Kindly help me out.
How to receive the Events from .NET using Python
0
0
0
653
26,751,800
2014-11-05T07:30:00.000
1
1
1
0
python
26,751,874
3
false
0
0
No, it shouldn't be faster, and that shouldn't matter anyway: importing things is not usually considered a performance-critical operation, so you can expect it to be fairly slow compared to other things you can do in Python. If you require importing to be very fast, probably something is wrong with your design.
3
0
0
Is importing a specific function from a module faster than importing the whole module? That is, does from module import x debug faster than import module?
Debugging issues in importing modules in Python
0.066568
0
0
34
26,751,800
2014-11-05T07:30:00.000
1
1
1
0
python
26,751,880
3
true
0
0
I would say there is little or no performance difference, as importing a module for the first time will execute the entire module: all classes, variables and functions are built, regardless of the actual symbol you need. The second time you import the module in the same program it will be much quicker, as the module is not reloaded and all existing definitions are reused.
3
0
0
Is importing a specific function from a module faster than importing the whole module? That is, does from module import x debug faster than import module?
Debugging issues in importing modules in Python
1.2
0
0
34
26,751,800
2014-11-05T07:30:00.000
1
1
1
0
python
26,751,885
3
false
0
0
The whole module has to compile before you can import the specific function either way; the difference is just one of namespace (i.e. you call module_x.function_y vs just calling function_y).
3
0
0
Is importing a specific function from a module faster than importing the whole module? That is, does from module import x debug faster than import module?
Debugging issues in importing modules in Python
0.066568
0
0
34
26,754,378
2014-11-05T10:07:00.000
0
0
0
0
python,django,web-services
26,754,714
1
false
1
0
Usually you have a webserver (Apache, nginx, ...) in front of that. Depending on the webserver and OS it can be rather easy to deactivate a site. For example, with Ubuntu + Apache: configure a default host that returns only 503 Service Unavailable; configure a site configuration with a VirtualHost holding the actual WSGI setup for the webservice; activate the website with a2ensite <name> and service apache2 reload; deactivate the website with a2dissite <name> and service apache2 reload. But there are very many other solutions to your problem.
1
0
0
I'm new to all of this web services stuff, and even more so to web services with Python. I tested Bottle, Ladon, Flask and Django REST framework, with different opinions but all working more or less fine. Now I need to have something like an admin view to start or stop these services. Looking at these frameworks I can't find anything about it, and I couldn't find anything searching here either. So I was looking for something like the Django admin view where I can start or stop a service. Which framework or technology do you recommend? Thank you.
Control activated webservices
0
0
0
39
26,755,660
2014-11-05T11:06:00.000
2
0
1
0
python-3.x,python-asyncio
26,763,233
1
true
0
0
There is no way to ask the reader whether it has incoming data or not. I suggest creating an asyncio.Task that reads data from the asyncio stream reader in a loop. If you need to write data asynchronously, feel free to call StreamWriter.write() from any task that has outgoing data. I strongly don't recommend using protocols directly: they are a low-level abstraction useful for flow control, but for application code it is better to use the high-level streams.
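A rough sketch of that layout, written with the later async/await syntax rather than the yield from style current when this was asked; the host, port and payloads are placeholders:

    import asyncio

    async def reader_loop(reader):
        # a dedicated task may block on read() without stopping the rest of the program
        while True:
            data = await reader.read(4096)
            if not data:
                break
            print('received:', data)

    async def main():
        reader, writer = await asyncio.open_connection('127.0.0.1', 8888)
        read_task = asyncio.ensure_future(reader_loop(reader))
        writer.write(b'hello')      # writes can happen whenever data is ready to send
        await writer.drain()
        await read_task

    asyncio.get_event_loop().run_until_complete(main())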
1
2
0
So I want to implement a simple comms protocol where reads & writes are completely asynchronous. That means that client sends some data and then server may or may not respond with an answer. So I can't just call reader.read() because that blocks until at least something is returned. And I may have something more to send in the mean time. So is there a way to check if reader has something to read? (please note that I'm talking specifically about the streams version: I'm fully aware that protocols version has separate handlers for reading and writing and does not suffer from this issue)
asyncio streams check if reader has data
1.2
0
1
719
26,756,182
2014-11-05T11:32:00.000
0
0
1
0
python,ruby,audio,signal-processing
46,366,112
2
false
0
0
from numpy import *
# x is assumed to be the 2 x N array holding the two mixed microphone recordings
U, S, Vh = linalg.svd(dot(tile(sum(x * x, 0), (x.shape[0], 1)) * x, x.T))
1
1
0
I am looking for a code snippet that I have seen demonstrated as an inspiration to students that will process 2 audio files, recorded with 2 microphones that are spaced apart recording a 'cocktail' party, which will produce 2 or more separate outputs to isolate different voices on the basis of differential delay. The example I have seen used a single formula in python script to produce this effect, any pointers would be much appreciated.
Audio Source Signal Separation of 'Cocktail Party'
0
0
0
1,970
26,759,118
2014-11-05T14:03:00.000
1
0
1
0
python,return,wait
26,759,193
2
false
0
0
You could simply print the "Loading..." before entering the function, or on the first line after you enter the function. However, this is not doing something "else"; it is sequential execution. If you want to do something else in parallel, yes, threads would be the way to go. You should post your code and perhaps tell us what exactly it is that you want to do, for a precise answer!
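A minimal sketch of the threaded variant, where the message prints immediately while the work continues; the worker function and names are placeholders for the real 5-second computation:

    import threading
    import time

    def slow_computation():
        time.sleep(5)          # stand-in for the real 5-second function
        return 42

    result = {}
    worker = threading.Thread(target=lambda: result.update(answer=slow_computation()))
    worker.start()

    print("Loading...")        # shown immediately, while the work continues
    worker.join()              # wait for the function to finish
    print(result['answer'])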
1
0
0
I have a function which processes and returns some data. When I run my code, it takes about 5 seconds to return the answer. Is it possible to do something else at the same time? For instance, I'd like to write a simple line like "Loading...". Would a thread work?
Write a string when I wait for the return answer of a function
0.099668
0
0
78
26,760,398
2014-11-05T15:05:00.000
1
1
0
1
google-cloud-storage,google-api-python-client
27,596,094
1
false
0
0
Because the compile errors occurred and could not be solved easily, I finally used the previous pyOpenSSL to solve this problem.
1
0
0
I am porting the GCS Python client lib and running into problems with dependencies. Because I want to use GCS on a NAS without glibc, I get an error at the line: from oauth2client.client import SignedJwtAssertionCredentials. The error shows the reason is the lack of gcc. I traced the code and it seems to generate cryptography-related files (like _Cryptography_cffi_36a40ff0x2bad1bae.so) at run time from crypto.verify. Since my build machine does have gcc, is there any way to replace the cryptography library, or could I pre-compile and generate the files on my build machine? Thanks!
python cryptography run-time bindings files in GCS
0.197375
0
0
35
26,760,746
2014-11-05T15:22:00.000
0
1
0
0
android,python,python-import
26,783,905
1
true
0
0
You can't import android.broadcast because the module doesn't exist. I've unzipped /data/data/com.hipipal.qpy3/files/lib/python3.2/python32.zip and there is no trace of broadcast. If you can find the module, you can put it under /data/data/com.hipipal.qpy3/files/lib/python3.2/site-packages.
1
0
0
I'm running Python 3 on Android and I can import android, but I can't import any android submodules. My goal is to have my scripts react to events, such as plugging/unplugging the headset, but I'm struggling to follow the examples found online. One group seems to think that you should import jnius and use its autoclass() helper, while the others think you should directly import android.broadcast. I'm struggling because Python cannot find either jnius or android.broadcast installed, yet android.Android() works fine. How do you properly import the android.broadcast.BroadcastListener object in Python?
How do you import android modules on android?
1.2
0
0
835
26,763,250
2014-11-05T17:20:00.000
0
0
0
0
python,indexing,tkinter,text-widget
26,766,737
1
false
0
1
This is just how the tkinter text widget works. The index values are based on logical lines -- lines that end with a newline. Even though the line may wrap visually on the screen, the index won't switch from 1.x to 2.x until the first newline. Note that when computing an index relative to some other index, you have the option of counting lines by logical lines or display lines. For example, the relative index "1.0 + 2 display lines" may result in a different answer than "1.0 + 2 lines" depending on if the text is wrapped or not.
1
0
0
I am using Tkinter's Text widget with the wrap option set to WORD, which wraps a word to the next line when it hits the end while you are still typing it. But when it does this, the row part of the index stays 1 until you hit the Enter key, when it should be 2 because it is the second row. Does anyone know why it does this, or how to fix it?
Python Tkinter text widget index acting weird
0
0
0
226
26,763,448
2014-11-05T17:30:00.000
0
0
1
0
python,random
26,763,840
1
false
0
0
You are almost correct: you need a generator not with a period of 400!, but with an internal state of more than log2(400!) bits (which will also have a period larger than 400!, but the latter condition is not sufficient). So you need at least 361 bytes of internal state. CryptGenRandom doesn't qualify, but it ought to be sufficient to generate 361 or more bytes with which to seed a better generator. I think Marsaglia has versions of MWC with 1024 and 4096 bytes of state.
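The 361-byte figure can be checked with a couple of lines of arithmetic (a quick verification, not part of the original answer):

    import math

    # bits of generator state needed so that all 400! orderings are even reachable
    bits_needed = sum(math.log(i, 2) for i in range(2, 401))
    print(bits_needed)                      # roughly 2886 bits
    print(int(math.ceil(bits_needed / 8)))  # roughly 361 bytes, as stated above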
1
0
1
I would like to shuffle a relatively long array (length ~400). While I am not a cryptography expert, I understand that using a random number generator with a period of less than 400! will limit the space of the possible permutations that can be generated. I am trying to use Python's random.SystemRandom number generator class, which, in Windows, uses CryptGenRandom as its RNG. Does anyone smarter than me know what the period of this number generator is? Will it be possible for this implementation to reach the entire space of possible permutations?
Can CryptGenRandom generate all possible permutations?
0
0
0
431
26,766,803
2014-11-05T20:48:00.000
0
0
1
0
python,pandas,hierarchy,multi-index
26,769,689
1
false
0
0
In general, in my experience it is more difficult to compare different DataFrames, so I would suggest using one. With a practical example I could try to give better advice. However, I personally prefer to use an extra column instead of many MultiIndex levels, but that's just my personal opinion.
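A small sketch of the single-frame layout, with the scenario kept as one level of a column MultiIndex so cross-scenario comparisons reduce to groupby/mean calls; all names, dates and thresholds here are invented for illustration:

    import pandas as pd
    import numpy as np

    hours = pd.date_range('2014-07-01', periods=24, freq='H')
    cols = pd.MultiIndex.from_product(
        [['base', 'insulated'], ['temperature'], ['room1', 'room2']],
        names=['scenario', 'variable', 'room'])
    df = pd.DataFrame(np.random.uniform(18, 30, (24, 4)), index=hours, columns=cols)

    # average temperature per scenario / variable / room over the whole period
    print(df.mean())

    # hours above a threshold, aggregated per scenario
    print((df > 26).sum().groupby(level='scenario').sum())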
1
1
1
I am trying to analyze results from several thermal building simulations. Each simulation produces hourly data for several variables and for each room of the analyzed building. Simulations can be repeated for different scenarios and each one of these scenarios will produce a different hourly set of data for each room and each variable. At the moment I create a separate dataframe for each scenario (Multiindex with variables and rooms). My goal is to be able to compare different scenarios along different dimensions: same room, rooms average, time average, etc.. From what I have understood so far there are two options: create a dictionary of dataframes where the keys represents the scenarios add an additional level (3rd) to the multiindex in the same dataframe representing the scenario Which of the above will give me the best results in terms of performance and flexibility. Typical questions could be: in which scenario the average room temperature is below a threshold for more hours in which scenario the maximum room temperature is below a threshold what's the average temperature in July for each room As you can see I need to perform operations at different hierarchical levels: within a scenario and also comparison between different scenarios. Is it better to keep everything in the same dataframe or distribute the data?
Multiindex or dictionaries
0
0
0
173
26,767,777
2014-11-05T21:48:00.000
2
0
0
0
python
26,767,850
1
false
0
0
You can have the user press Ctrl+C, and in your program catch the KeyboardInterrupt exception and write your output there.
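A minimal sketch of that pattern; the brute-force loop and the database write are placeholders:

    import time

    results = []

    def flush_to_database(batch):
        print('writing %d results to the database' % len(batch))   # placeholder

    try:
        while True:                       # stand-in for the brute-force search loop
            results.append(time.time())
            time.sleep(0.01)
    except KeyboardInterrupt:             # the user pressed Ctrl+C at the prompt
        flush_to_database(results)
        raise SystemExit(0)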
1
0
0
In one of my data analysis applications I am using a brute-force search method for finding similar patterns; the number of possible conditions is in the millions. For better performance all the results are stored in memory and written to the database every hour. But right now I can't stop the application in the middle of a run. So how can I give an exit command (like a command or shortcut key) to the application through the command prompt, so that it writes all the results from memory into the database and calls an exit function (like sys.exit())? Can I accomplish this by using argparse or modules like click? I couldn't find any examples.
stop execution of python script by user command?
0.379949
0
0
1,117
26,772,089
2014-11-06T05:04:00.000
-1
0
1
0
python,binary
26,772,317
4
false
0
0
A MAC address is 48 bits long. If a MAC address is presented in binary, it takes 6 bytes (8 bits (1 byte) * 6 = 48 bits). If a MAC address is presented in ASCII, it takes 17 ASCII characters (17 bytes). So, if the data is 6 bytes, it may contain a MAC address in binary; if the data is 17 bytes or more, it may contain a MAC address in ASCII.
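A short sketch of that length check combined with the b2a_hex conversion mentioned in the question; it is written for Python 2 byte strings, which is what the examples in the question look like:

    from binascii import b2a_hex

    def to_ascii_mac(mac):
        if len(mac) == 6:                                  # 6 raw bytes: binary form
            pairs = [b2a_hex(mac[i]) for i in range(6)]    # Python 2 str indexing
            return ':'.join(pairs)
        return mac                                         # already the 17-char ascii form

    print(to_ascii_mac('\x00\x04\x96\x82Q\xbb'))   # 00:04:96:82:51:bb
    print(to_ascii_mac('c8:be:19:c6:a0:e0'))       # unchanged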
2
0
0
I have a string variable for a MAC address. It can be binary - '\x00\x04\x96\x82Q\xbb' - or ASCII - 'c8:be:19:c6:a0:e0'. If it is binary I need to convert it to an ASCII string with the b2a_hex function. Is it possible to know which type of string I am holding?
How to know binary string or ascii string in python
-0.049958
0
0
110
26,772,089
2014-11-06T05:04:00.000
0
0
1
0
python,binary
26,772,406
4
false
0
0
You can also use a simple find. If string.find(":") == -1 it is binary, else it is ASCII.
2
0
0
I have a string variable for a MAC address. It can be binary - '\x00\x04\x96\x82Q\xbb' - or ASCII - 'c8:be:19:c6:a0:e0'. If it is binary I need to convert it to an ASCII string with the b2a_hex function. Is it possible to know which type of string I am holding?
How to know binary string or ascii string in python
0
0
0
110
26,775,558
2014-11-06T09:13:00.000
1
0
0
1
python,dot,pygraphviz
49,885,873
2
false
0
0
Your example is missing some information (a description of the nodes). Assuming those are somewhere and have just been omitted from your example, maybe the problem is that using node [shape=record] doesn't work with the port HTML attribute. For example, try node [shape=plaintext].
1
1
0
I am trying to create a PNG from DOT file dot -Tpng -o temp.png and I am getting the below errors: Warning: node s1, port eth2 unrecognized Warning: node s2, port eth2 unrecognized Warning: node s2, port eth3 unrecognized Warning: node s3, port eth2 unrecognized Warning: node s4, port eth4 unrecognized Warning: node s3, port eth3 unrecognized DOT FILE 1: graph G { node [shape=record]; graph [hostidtype="hostname", version="1:0", date="04/12/2013"]; edge [dir=none, len=1, headport=center, tailport=center]; "s1":"eth2" -- "s2":"eth2"; "s2":"eth3" -- "s3":"eth2"; "s4":"eth4" -- "s3":"eth3"; } When I try with the below topology file, it works. DOT FILE 2 graph G { node [shape=record]; graph [hostidtype="hostname", version="1:0", date="04/12/2013"]; edge [dir=none, len=1, headport=center, tailport=center]; "R1":"swp1" -- "R3":"swp3"; "R1":"swp2" -- "R4":"swp3"; } What is the difference here. For what reason is DOT FILE 1 giving errors ?
Port unrecognized in DOT file?
0.099668
0
0
1,514
26,778,828
2014-11-06T11:52:00.000
2
0
0
0
python,django,rest,django-rest-framework,tastypie
26,779,279
1
false
1
0
For the exact same reasons that you used the django-admin instead of writing views, forms and templates from scratch: less work; tested and approved by many other developers, and therefore more secure. DRF has a really nice architecture. When you use it, you'd think "that's how I would have done it" (the truth is "this is how I wish I would have done it"). DRF also lets you browse/test the API via HTML. 'External developers' who want to use the data from my website on their own website? Yes. For me to get a nice overview of my data (even though the admin does this job well)? The admin is indeed better for that. Use the create, read, update and delete functions in AJAX calls on different pages of my website? Yes. Or should all views that get data from a model get the data through an API to check permissions etc.? You don't have to get data through your API (is that what you mean?); I've never set up an application this way. That being said, you can do it: start by building an API both for you and for external developers, especially if you're going to use a lot of JavaScript.
1
0
0
I have been looking at django-tastypie and django-rest-framework. What is the advantage of using an API? I've thought of the following use cases. Which use cases is an API primarily intended for? 'External developers' who want to use the data from my website on their own website? For me to get a nice overview of my data (even though the admin does this job well)? Using the create, read, update and delete functions in AJAX calls on different pages of my website? Or should all views that get data from a model get the data through an API to check permissions etc.? I have read some of the documentation for both APIs but it's still not completely clear to me. Django has request.is_ajax() and Django 1.7 introduced JsonResponse, so I cannot see why a big complex framework would do a better job of sending and receiving JSON, but I guess I'm wrong, based on the number of developers who use the API frameworks :-D
Why should I use an API as django-tastypie or django-rest-framework?
0.379949
0
0
768
26,780,925
2014-11-06T13:39:00.000
41
0
1
0
python,spyder
39,300,994
1
false
0
0
In Spyder v3.0.0, go to Source --> Fix Indentation. That worked for me.
1
16
0
I am using the Spyder IDE for Python coding. I copied the code from some source and it works fine, but when I make edits in it an indentation error occurs. There is always some mismatch in the alignment between my edit and the other part of the code. The code is quite large, so I can't re-indent it all by hand.
Spyder Python indentation
1
0
0
33,712
26,781,438
2014-11-06T14:06:00.000
0
0
0
0
python,scrapy,robots.txt
29,877,623
1
true
1
0
Scrapy uses the Python standard library robots.txt parser, which doesn't support wildcards.
1
0
0
So I have a Scrapy project set up and I have enabled the ROBOTSTXT_OBEY middleware, which works fine on robots.txt files in the following format: User-agent: * Disallow: /tools/. But when the same spider runs on a site with a robots.txt file in the following format it doesn't work: User-agent: * Disallow: *?next. This results in pages still being crawled that should be blocked by robots.txt, which by the way is completely valid markup for a robots.txt file. Just wondered if anyone could shed any light on why this might be?
Scrapy ROBOTSTXT_OBEY not working in all cases
1.2
0
0
890
26,781,772
2014-11-06T14:22:00.000
0
0
0
0
python,xlsx,openpyxl,xlsxwriter
26,808,401
2
false
0
0
You can use openpyxl to edit existing xlsx files or save them as new files.
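A minimal sketch of that read-template-then-save-as-new workflow; the file names and the cell edited are placeholders:

    from openpyxl import load_workbook

    wb = load_workbook('template.xlsx')       # the existing template file
    ws = wb.active
    ws['A1'] = 'value filled in from Python'  # change whatever cells are needed
    wb.save('new_file.xlsx')                  # writes a new file; the template is untouched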
1
0
0
I have an xlsx file that I use as a template. I am trying to create a new Excel file using this template. How can I read this file and write it out to a new xlsx file in Python 2.7?
Which object I can use to read xlsx tempate
0
0
0
143
26,783,510
2014-11-06T15:44:00.000
0
0
0
0
python,google-app-engine,google-cloud-datastore
26,784,116
1
true
1
0
When you create your entity, do this: MyModel(id=emailAddress).put(). Then use get_by_id: user = MyModel.get_by_id(emailAddress).
1
0
0
I define my ndb model in Python. I want to use the email address of a user as the key; how do I do that? The user is passing in the email address through an HTML form. I have everything set up and working. I just don't know how to specify that the email address string is the key.
use email address as datastore key using python app engine
1.2
0
0
98
26,783,752
2014-11-06T15:56:00.000
0
1
0
1
ipython,ipython-notebook
27,171,998
1
true
0
0
I never did figure out the answer to my question -- why the port matters. However, I found that my ROI widgets had a rookie mistake on the JavaScript side (I'm fairly new to JS programming) that, when fixed, made all the problems go away. Ironically, the puzzle now is why it worked when I was using the default port!
1
0
0
I'm having a problem with running an ipython notebook server. I've written a series of custom ROI (Region Of Interest) widgets for the notebook that allow a user to draw shapes like rectangles and ellipses on an image displayed in the notebook, then send information about the shapes back to python running on the server. All information is passed via widget traitlets; the shape info is in a handles traitlet of type object. When I run this locally on port 8888 (the default) and access it with firefox running on the same computer, everything works. (The system in this case is a Mac running OSX Yosemite). Now I tried to access it remotely by making an ssh connection from another computer (ubuntu linux, in this case) and forwarding local port 8888 to 8888 on the host. This almost works: firefox running on the client is able to access the ipython notebook server, execute code in notebooks, etc. The ROI widgets also display and seem to work properly, except for one thing: no information about the shapes drawn makes it back to the server. This is not just an issue of remote access (although that's the most important for my intended use). I have exactly the same problem if I run locally, but use a port other than 8888. For instance, if I set the port to 9999 in ipython_notebook_config.py, run the notebook server and access it with a local firefox, I get exactly the same problem. Similarly, if I run ipython notebook twice with all default settings, the second instance binds port 8889, because 8888 was bound by the first. When I access the server running at 8888 with a local firefox, everything works; when I access the simultaneously running server running at 8889, my widgets once more fail to send info back to the server. If I use --debug, I can see all the comm_msgs passed. The server running on 8888 receives messages that contain shape info, as expected. These messages simply don't show up in the log of the server running at 8889. Any thoughts?
port usage in the ipython notebook
1.2
0
0
327
26,787,018
2014-11-06T18:42:00.000
2
0
1
0
python,python-2.7,python-3.x,list-comprehension
26,787,085
4
false
0
0
for x in lst: f(x) looks about as short (it's actually one character shorter) as [f(x) for x in lst]. Or is that not what you were trying to do?
2
0
0
I see that using list comprehension provides a very simple way to create new lists in Python. However, if instead of creating a new list I just want to call a void function for each argument in a list without expecting any sort of return value, should I use list comprehension or just use a for loop to iterate? Does the simplicity in the code justify creating a new list (even if it remains empty) for each set of iterations? Even if this added cost is negligible in small programs, does it make sense to do it in large-scale programs/production? Thanks!
When does using list comprehension in Python become inefficient?
0.099668
0
0
142
26,787,018
2014-11-06T18:42:00.000
0
0
1
0
python,python-2.7,python-3.x,list-comprehension
26,787,130
4
false
0
0
Are you asking if it is inefficient to create a list you don't need? Put that way, the answer should be obvious. (To satisfy the answer police: yes, it is less efficient to create a list you don't need.)
2
0
0
I see that using list comprehension provides a very simple way to create new lists in Python. However, if instead of creating a new list I just want to call a void function for each argument in a list without expecting any sort of return value, should I use list comprehension or just use a for loop to iterate? Does the simplicity in the code justify creating a new list (even if it remains empty) for each set of iterations? Even if this added cost is negligible in small programs, does it make sense to do it in large-scale programs/production? Thanks!
When does using list comprehension in Python become inefficient?
0
0
0
142
26,790,720
2014-11-06T22:36:00.000
2
0
1
1
python,stream,twisted,frp
26,817,585
1
true
0
0
I don't know of any neat tricks that will help you with this. I think you probably just have to implement the re-ordering (or order-maintaining, depending on how you look at it) logic in your Stream.map implementation. If operation i + 1 completes before operation i then Stream.map will probably just have to hold on to that result until operation i completes. Then it can add results i and i + 1 to the output Stream. This suggests you may also want to support back-pressure on your input. The re-ordering requirement means you have an extra buffer in your application. You don't want to allow that buffer to grow without bounds so when it reaches some maximum size you probably want to tell whoever is sending you inputs that you can't keep up and they should back off. The IProducer and IConsumer interfaces in Twisted are the standard way to do this now (though something called "tubes" has been in development for a while to replace these interfaces with something easier to use - but I won't suggest that you should hold your breath on that).
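A framework-agnostic sketch of the hold-back-and-release logic described above; in a real Twisted application each result would arrive from a Deferred callback carrying its sequence index, and all names here are invented:

    class ReorderBuffer(object):
        """Releases results in input order even if they complete out of order."""

        def __init__(self, on_ready):
            self.on_ready = on_ready      # called with each result, in order
            self.next_index = 0
            self.pending = {}

        def result(self, index, value):
            self.pending[index] = value
            while self.next_index in self.pending:
                self.on_ready(self.pending.pop(self.next_index))
                self.next_index += 1

    def show(value):
        print(value)

    buf = ReorderBuffer(on_ready=show)
    buf.result(1, 'b')    # held back: result 0 has not arrived yet
    buf.result(0, 'a')    # prints 'a' then 'b'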
1
2
0
I'm currently designing an application using the Twisted framework, and I've hit a bit of a roadblock in my planning. Formal Description of the Problem My application has the following constraints: Data arrive in-order, but asynchronously. I cannot know when the next piece of my data will arrive The order in which data arrive must be preserved throughout the lifespan of the application process. Additional asynchronous operations must be mapped onto this "stream" of data. The description of my problem may remind people of the Functional Reactive Programming (FRP) paradigm, and that's a fair comparison. In fact, I think my problem is well-described in those terms and my question can be pretty accurately summarized thusly: "How can I leverage Twisted in such a way as to reason in terms of data streams?" More concretely, this is what I have figured out: A datum arrives and is unpacked into an instance of a custom class, henceforth referred to as "datum instance" The newly-arrived datum instance is appended to a collections.deque object, encapsulated by a custom Stream class. The Stream class exposes methods such as Stream.map that apply non-blocking computations asynchronously to: All elements already present in the Stream instance's deque. All future elements, as they arrive. Results of the operations performed in item 3 are appended to a new Stream object. This is because it's important to preserve the original data, as it will often be necessary to map several callable's to a given stream. At the risk of beating a dead horse, I want to insist upon the fact that the computations being mapped to a Stream instance are expected to return instances of Deferred. The Question Incidentally, this precisely where I'm stuck: I can implement items 1, 2 & 3 quite trivially, but I'm struggling with how to handle populating the results Stream. The difficulty stems from the fact that I have no guarantees of stream length, so it's completely possible for data to arrive while I'm waiting for some asynchronous operations to complete. It's also entirely possible for async operation Oi to complete after Oi + n, so I can't just add deque.append as a callback. So how should I approach this problem? Is there some nifty, hidden feature of Twisted I have yet to discover? Do any twisty-fingered developers have any ideas or patterns I could apply?
How can I map asynchronous operations to an ordered stream of data and obtain an identically-ordered result?
1.2
0
0
58
26,792,337
2014-11-07T01:03:00.000
1
0
1
0
python,list,syntax
26,792,371
4
false
0
0
list() is better - it's more readable. Other than that there is no difference.
1
0
0
If we have a list s, is there any difference between calling list(s) versus s[:]? It seems to me like they both create new list objects with the exact elements of s.
Difference between list() and [:]
0.049958
0
0
101
26,792,751
2014-11-07T01:50:00.000
4
0
0
0
python,django,http,authentication,cookies
26,803,136
1
false
1
0
No, only cookies are stored persistently by the browser. All other headers are transient, by definition of the HTTP protocol.
1
4
0
I am trying to set up token-based authentication in Python, using Django/DRF, but this is more about HTTP in general I think. When users put in username/password I return their token to them via JSON. The client can then post the token in an HTTP header for me to check. My problem is I want the token to persist in the header automatically, just like cookies. When the server says "set-cookie" to the browser (Chrome/FF), the browser will automatically send up the cookie without me actually doing anything. Is there something I can do with this token? I have tried storing it in the "Authorization" header, but the browser didn't return it. Is there something like "Set-Authorization"? Thanks
How to get Authentication header to persist
0.664037
0
1
364
26,793,585
2014-11-07T03:30:00.000
1
0
0
0
python,cluster-analysis,k-means
26,826,242
3
false
0
0
K-means is indeed sensitive to noise, BUT investigate your data! Have you pre-processed your "real data" before applying the distance measure to it? Are you sure your distance metric represents proximity as you expect? There are a lot of possible "bugs" that may cause this scenario; it is not necessarily k-means' fault.
3
0
1
I use the k-means algorithm to cluster a set of documents (parameters: number of clusters = 8, number of runs with different centroids = 10). The number of documents is 5800. Surprisingly, the result of the clustering is: 90% of documents belong to cluster 7 (the final cluster), 9% of documents belong to cluster 0 (the first cluster), and each of the remaining 6 clusters has only a single sample. What might be the reason for this?
What can be the reasons for 90% of samples belong to one cluster when there is 8 clusters?
0.066568
0
0
272
26,793,585
2014-11-07T03:30:00.000
1
0
0
0
python,cluster-analysis,k-means
26,817,383
3
false
0
0
K-means is highly sensitive to noise! Noise, which is farther away from the data, becomes even more influential when you square its deviations; this makes k-means really sensitive to it. Produce a data set with 50 points distributed N(0; 0.1), 50 points distributed N(1; 0.1), and 1 point at 100. Run k-means with k=2 and you are bound to get that one point as its own cluster, with the two real clusters merged. It's just how k-means is supposed to work: it finds a least-squares quantization of the data; it does not care whether there are "clumps" in your data set or not. Now it may often be beneficial (with respect to the least-squares objective) to make one-element clusters if there are outliers (here, you apparently have at least 6 such outliers). In such cases, you may need to increase k by the number of such one-element clusters you get. Or use outlier detection methods, or a clustering algorithm such as DBSCAN, which is tolerant wrt. noise.
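The experiment described above can be reproduced in a few lines, for example with scikit-learn (not mentioned in the answer, just used here for illustration):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.RandomState(0)
    data = np.concatenate([rng.normal(0.0, 0.1, 50),
                           rng.normal(1.0, 0.1, 50),
                           [100.0]]).reshape(-1, 1)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    print(np.bincount(labels))   # typically [100, 1]: the outlier gets its own cluster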
3
0
1
I use the k-means algorithm to cluster a set of documents (parameters: number of clusters = 8, number of runs with different centroids = 10). The number of documents is 5800. Surprisingly, the result of the clustering is: 90% of documents belong to cluster 7 (the final cluster), 9% of documents belong to cluster 0 (the first cluster), and each of the remaining 6 clusters has only a single sample. What might be the reason for this?
What can be the reasons for 90% of samples belong to one cluster when there is 8 clusters?
0.066568
0
0
272