Dataset columns (name: type, observed range or string length):
Q_Id: int64, 337 to 49.3M
CreationDate: string, length 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, length 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, length 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, length 15 to 29k
Title: string, length 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
31,468,947
2015-07-17T05:31:00.000
0
0
0
0
wxpython
31,476,854
1
false
0
1
Not really. But you can create your own about box by rolling your own dialog with wx.Dialog. Then you can put any widgets you want in your custom about box.
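A minimal wxPython sketch of such a custom about dialog; the labels and the beta-channel check box are illustrative, not from the original answer:

```python
import wx

class AboutDialog(wx.Dialog):
    def __init__(self, parent):
        super(AboutDialog, self).__init__(parent, title="About MyApp")  # title is hypothetical
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(wx.StaticText(self, label="MyApp 1.0"), 0, wx.ALL, 10)
        # The check box the asker wanted; read its value before the dialog closes.
        self.beta = wx.CheckBox(self, label="Receive beta updates")
        sizer.Add(self.beta, 0, wx.ALL, 10)
        sizer.Add(self.CreateButtonSizer(wx.OK), 0, wx.ALL | wx.CENTER, 10)
        self.SetSizerAndFit(sizer)

# usage from a frame: dlg = AboutDialog(self); dlg.ShowModal(); use_beta = dlg.beta.GetValue()
```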
1
0
0
I have a desktop application in wxPython. Users get a notification whenever there is a new release available. I would like to create a new beta update channel for specific customers. I was thinking of adding a check box in the AboutBox; if the check box is checked, then the user will get updates from the beta location. There is no way to add a check box or other controls in AboutBox. Is there any way to achieve this?
How to add check box control in AboutBox wxpython?
0
0
0
30
31,472,881
2015-07-17T09:42:00.000
-2
0
1
1
python,python-2.7,py2exe
31,472,969
1
false
0
0
Just unpack this exe with a tool like 7-Zip and you can run py2exe from the resulting folder.
1
1
0
I want to transform a Python script into an executable file; that is why I want to install py2exe. When I try to install the file "py2exe-0.6.9.win32-py2.7.exe", I get the message "Python version 2.7 required, which was not found in the registry". I suspect that py2exe is not finding my python.exe file (the installer asks me for the Python directory but I cannot enter anything). Python 2.7.9 is installed on my laptop in the folder My Documents (and I cannot move the path)! I use Windows 8. Thank you a lot for your help and for your answer.
Installation of py2
-0.379949
0
0
83
31,477,842
2015-07-17T14:11:00.000
2
0
0
1
python,google-app-engine,google-bigquery,google-cloud-datastore,google-prediction
31,478,203
1
true
1
0
Such a system is often called "frecency", and there are a number of ways to do it. One way is to have votes 'decay' over time; I've implemented this in the past on App Engine by storing a current score and a last-updated; any vote applies an exponential decay to the score based on the last-updated time, before storing both, and a background process runs a few times a day to update the score and decay time of any posts that haven't received votes in a while. Thus, a post's score always tends towards 0 unless it consistently receives upvotes. Another, even simpler, system is to serial-number posts. Whenever someone upvotes a post, increment its number. Thus, the natural ordering is by creation order, but votes serve to 'reshuffle' things, putting more upvoted posts ahead of newer but less voted posts.
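A rough sketch of the decay idea from the first part of the answer, with an assumed one-week half-life (the constant and function names are illustrative, not from the answer):

```python
import math
import time

HALF_LIFE = 7 * 24 * 3600.0  # assumed: a post's score halves every week

def decayed(score, last_updated, now=None):
    """Exponentially decay a stored score based on time since last update."""
    now = time.time() if now is None else now
    return score * math.pow(0.5, (now - last_updated) / HALF_LIFE)

def apply_vote(score, last_updated, vote=1):
    """Decay the current score, add the new vote, and return (score, now)."""
    now = time.time()
    return decayed(score, last_updated, now) + vote, now
```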
1
0
0
I'm working on a Google App Engine (Python) based site that allows for user-generated content, and voting (like/dislike) on that content. Our designer has, rather nebulously, spec'd that the front page should be a balance between recent content and popular content, probably with the assumption that this just means creating a score value that weights likes/dislikes vs. time-since-creation. Ultimately, the goals are (1) bad content gets filtered out somewhat quickly, (2) content that continues to be popular stays up longer, and (3) new content has a chance at staying long enough to get enough votes to determine if it's good or bad. I can easily compute a score based on likes/dislikes. But incorporating the time factor to produce a single score that can be indexed doesn't seem feasible. I would essentially need to reindex all the content every day to adjust its score, which seems cost-prohibitive once we have any sizable amount of content. So, I'm at a loss for potential solutions. I've also suggested something where we time-box it (all time, daily, weekly), but he says users are unlikely to look at tabs other than the default view. Also, if I filtered based on the last week, I'd need to sort on time, and then the secondary popularity sort would essentially be meaningless, since submission times would be virtually unique. Any suggestions on solutions that I might be overlooking? Would something like Google's Prediction API or BigQuery be able to handle this better?
Computing an index that accounts for score and date within Google App Engine Datastore
1.2
0
0
68
31,478,137
2015-07-17T14:25:00.000
0
0
0
0
python,opencv,video
71,733,031
7
false
0
1
Download DroidCam. It can be used over Wi-Fi; after that, use cv2.VideoCapture(n), where n can be 1 or 2 (for me it is 2), and you can use the mobile camera in Python with OpenCV.
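If you stay with the IP Webcam app mentioned in the question instead, OpenCV can usually open the app's HTTP stream directly; a sketch, assuming a hypothetical stream URL (use the address the app shows on screen):

```python
import cv2

STREAM_URL = "http://192.168.1.5:8080/video"  # hypothetical; shown by the IP Webcam app

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("phone camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```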
1
15
0
I have just started learning OpenCV using Python, and the first tutorial starts with capturing video using either a built-in laptop webcam or an external webcam. As it happens, I have neither. So I thought it might be possible to use the camera of my Android smartphone and then capture that video over IP for further processing. My smartphone: Moto E. OS: Windows 7. Language: Python. Android application: IP Webcam. I have searched the net extensively but am unable to find any working solution, so can anyone please guide me on how to capture the video from my smartphone using IP Webcam? Sorry for posting no code, as I am just getting into this field and am completely clueless. Thanks.
Capturing Video from Android Smartphone using OpenCV Python
0
0
0
29,328
31,479,763
2015-07-17T15:49:00.000
0
1
0
1
python,linux,ssh,sftp
31,479,959
1
false
0
0
You can try a client-server or sockets approach. Your remote PC has a server running, listening for commands or data coming in. Your client, or local computer, can send commands on the port and IP that the remote PC is listening on. The server then parses the incoming data, looks at whatever commands you have defined, and executes them accordingly.
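A minimal Python 3 sketch of that server side, assuming a hypothetical port and a one-command-per-line protocol (the I2C dispatch is left as a comment, since it depends on the asker's objects):

```python
import socketserver

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One command per line; the connection can stay open for many commands.
        for line in self.rfile:
            command = line.decode().strip()
            # Dispatch to the long-lived I2C objects here (asker-specific).
            self.wfile.write(("ack: " + command + "\n").encode())

if __name__ == "__main__":
    # From the local machine: ssh remote 'echo do_task | nc localhost 9000'
    with socketserver.TCPServer(("127.0.0.1", 9000), CommandHandler) as server:
        server.serve_forever()
```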
1
0
0
I have this Python setup using objects that can perform specific tasks for the I2C protocol. Rather than having the script create objects and run a single task when run from a command-line command, is there a way to have the objects 'stay alive' in the background and somehow feed the program new data from the command line? The general idea is to have something running on a remote PC and use ssh to send commands (new data) over to the program. One idea I had was to have the program constantly check (infinite loop) for a data file containing a set of tasks to perform, and run those when it exists. But it seems like it could go awry if I were to sftp a new data file over, because the program could be reading the one that already exists, causing undesirable effects. I'm sure there are many better ways to go about a task like this; any help will be appreciated.
Running python in the background and feeding data
0
0
0
49
31,479,882
2015-07-17T15:56:00.000
4
0
0
0
python,flask
31,480,666
1
true
1
0
It is not Flask that turns * into %2A, it is the browser. The character * is not legal in a URL and there is nothing you can do about it. Browsers must escape illegal characters in sent requests. A browser might leave * in the address bar (and escape it silently), but you should not expect browsers to do so.
1
1
0
I am working on a RESTful API in Flask. It allows wildcards to be used. The problem is that when a URL is entered, such as mysite.com/get/abc*, Flask turns this URL into mysite.com/get/abc%2A, both on the backend and in the browser's URL bar. This is easy enough to handle on the backend, but how can I prevent the browser's URL bar from containing ugly things like '%2A'?
Flask: Prevent HTML escaping in the browser's URL bar
1.2
0
0
217
31,480,224
2015-07-17T16:14:00.000
1
0
0
0
user-interface,wxpython
31,480,387
1
true
0
1
When you say "the objects", what do you mean? If you mean a wx Frame, then you can call Frame.Freeze() to disable the frame, and Frame.Thaw() to unfreeze it. If you want to create a new dialog that must be interacted with, making all background windows unusable, you can call Dialog.ShowModal(). Finally, many widgets have a Widget.Enable() function, to which you can pass True or False depending on whether you want to enable the widget or not.
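The Enable approach is usually what "inactive/greyed-out" widgets means in practice; a small self-contained sketch (the widget layout is illustrative):

```python
import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super(MainFrame, self).__init__(None, title="Enable/Disable demo")
        panel = wx.Panel(self)
        self.text = wx.TextCtrl(panel, pos=(10, 10))
        button = wx.Button(panel, label="Toggle", pos=(10, 50))
        button.Bind(wx.EVT_BUTTON, self.on_toggle)

    def on_toggle(self, event):
        # Greys the control out so the user can no longer interact with it.
        self.text.Enable(not self.text.IsEnabled())

app = wx.App()
MainFrame().Show()
app.MainLoop()
```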
1
0
0
Right now I'm working on a program using wxPython, and I want to make a GUI in which under certain conditions, the widgets become inactive and the user can't interact with them. I'm pretty sure I have seen this in programs before, but I can't come up with a specific example. Is there a way to do this in wxPython? I have no idea what the technical name for this is, so even giving me that would be helpful.
wxPython inactive state?
1.2
0
0
85
31,481,253
2015-07-17T17:14:00.000
2
0
1
0
python,python-2.7
31,481,601
2
false
0
0
The trouble is, strip is not defined in any module. It is not part of the standard library at all, but a method on str, which in turn is a built-in class. So there isn't really any way of iterating through modules to find it.
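You can, however, ask the method object itself where it lives; a quick Python 3 sketch (names differ slightly on Python 2, where the module is __builtin__):

```python
import inspect

print(type("".strip))           # <class 'builtin_function_or_method'>
print(str.strip.__qualname__)   # 'str.strip' - the owning class
print(inspect.getmodule(str))   # <module 'builtins' (built-in)>
```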
1
1
0
Given a method name, how can I determine which module(s) in the standard library contain this method? E.g. if I am told about a method called strip(), but told nothing about how it works or that it is part of str, how would I go about finding out which module it belongs to? I obviously mean using Python itself to find out, not Googling "Python strip" :)
How to determine which Python standard library module(s) contain a certain method?
0.197375
0
0
71
31,482,397
2015-07-17T18:27:00.000
0
0
0
1
python,linux,subprocess,popen
31,484,229
4
false
0
0
Fork the subprocesses using nohup, so they ignore the hangup signal.
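A sketch of one way to do that with the standard library on *nix; "long_task" is a hypothetical command, and os.setsid puts the child in its own session so the parent's crash does not take it down:

```python
import os
import subprocess

with open(os.devnull, "r") as devnull, open("child.log", "w") as log:
    proc = subprocess.Popen(
        ["nohup", "long_task"],   # hypothetical command to detach
        stdin=devnull,
        stdout=log,
        stderr=subprocess.STDOUT,
        preexec_fn=os.setsid,     # detach from the parent's session
    )
print("detached child pid:", proc.pid)
```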
1
3
0
I am trying to open a subprocess but have it be detached from the parent script that called it. Right now, if I call subprocess.Popen and the parent script crashes, the subprocess dies as well. I know there are a couple of options for Windows, but I have not found anything for *nix. I also don't need to call this using subprocess. All I need is to be able to call another process detached and get the pid.
subprocess.popen detached from master (Linux)
0
0
0
5,358
31,485,333
2015-07-17T21:57:00.000
0
1
1
0
python,unit-testing,continuous-integration,functional-testing
31,485,412
1
true
0
0
CI is intended to be a system which provides a framework for all your unit tests and functional tests. The CI server will kick off the unit tests, build and run the functional tests, and take appropriate actions as specified by you.
1
0
0
I understand that we use CI to test software after any changes are made to it. It will kick off unit tests and system-level tests as soon as someone checks in. Now, where do the unit and functional test scripts we wrote fit in here? Am I right that CI won't have any built-in tests (unit, functional, system); "we" write all those test scripts but have CI kick them off?
Continuous Integration vs Software automation test engineer
1.2
0
0
113
31,485,636
2015-07-17T22:30:00.000
0
0
0
0
python,pygame
31,485,743
1
false
0
1
There are some answers already here. Anyway, use PGU (Pygame GUI Utilities); it's available on pygame's site. It turns pygame into a GUI toolkit, and there is an explanation on how to combine it with your game. Otherwise, program it yourself using key events. It's not hard, but it is time-consuming and boring.
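A bare-bones sketch of the key-event approach, collecting typed characters and drawing them on the display surface:

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 100))
font = pygame.font.Font(None, 32)
text, running = "", True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_BACKSPACE:
                text = text[:-1]
            elif event.key == pygame.K_RETURN:
                print("entered:", text)
                text = ""
            else:
                text += event.unicode   # the printable character typed
    screen.fill((0, 0, 0))
    if text:
        screen.blit(font.render(text, True, (255, 255, 255)), (10, 35))
    pygame.display.flip()
pygame.quit()
```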
1
0
0
I need user input for my pygame program, but I need it in my GUI (pygame.display.set_mode etc.), not just like: var = input("input something"). Does anybody have suggestions on how to do this?
Pygame, user input on a GUI?
0
0
0
113
31,491,583
2015-07-18T13:20:00.000
3
0
0
0
python,scikit-learn,random-forest,cross-validation,grid-search
31,493,214
1
true
0
0
As a first step, adding the verbose parameter to the RandomForestClassifier as well could let you see if the search is really stuck. It will display progress in fitting the trees (building tree 88 out of 100 ...). I don't really know why your search got stuck, but thinking about it, removing the search on n_estimators should enable you to grid search the entire space of parameters you specified here in just 8 iterations (2 values of max_features times 4 of min_samples_split).
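A sketch of the suggested change, with verbose set on the estimator itself. The imports follow a modern scikit-learn; in versions of that era the search class lived in sklearn.grid_search, and X, y stand in for the asker's data:

```python
from scipy.stats import randint as sp_randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# verbose on the estimator prints per-tree progress inside each fit
rfc = RandomForestClassifier(n_jobs=-1, verbose=1)
param_grid = {
    "max_features": ["auto", None],
    "min_samples_split": sp_randint(2, 6),
}
search = RandomizedSearchCV(rfc, param_distributions=param_grid,
                            n_iter=8, verbose=10, n_jobs=-1)
# search.fit(X, y)
```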
1
2
1
I'm running a relatively large job, which involves doing a randomized grid search on a dataset, which (with a small n_iter_search) already takes a long time. I'm running it on a 64-core machine, and for about 2 hours it kept 2000 threads active working on the first folds. It then stopped reporting into stdout completely. Its last report was: [Parallel(n_jobs=-1)]: Done 4 out of 60 | elapsed: 84.7min remaining: 1185.8min I've noticed in htop that almost all cores are at 0%, which would not happen when training random forests. No feedback or errors from the program; if it weren't for htop I would assume it is still training. This has happened before, so it is a recurring problem. The machine is perfectly responsive and the process seems alive. I already have verbose = 10. Any thoughts on how I can diagnose what is going on inside the RandomizedSearchCV? The grid search I'm doing: rfc = RandomForestClassifier(n_jobs=-1) param_grid = { 'n_estimators': sp_randint(100, 5000), 'max_features' : ['auto', None], 'min_samples_split' : sp_randint(2, 6) } n_iter_search = 20 CV_rfc = RandomizedSearchCV(estimator=rfc, param_distributions=param_grid, n_iter = n_iter_search, verbose = 10,n_jobs = -1)
How can I get randomized grid search to be more verbose? (seems stopped, but can't diagnose)
1.2
0
0
6,252
31,493,324
2015-07-18T16:39:00.000
4
0
0
0
python-3.x,pyqt,qtablewidget
31,641,703
1
true
0
1
This code worked fine for me:
self.tablewidget_preview.horizontalHeader().setStretchLastSection(True)
self.tablewidget_preview.horizontalHeader().setSectionResizeMode(QHeaderView.Stretch)
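A self-contained PyQt5 sketch of the same idea (the answer's tablewidget_preview attribute belongs to the asker's own UI):

```python
import sys

from PyQt5.QtWidgets import QApplication, QHeaderView, QTableWidget

app = QApplication(sys.argv)
table = QTableWidget(3, 4)
# Stretch gives every column the same width and fills the widget's full width.
table.horizontalHeader().setSectionResizeMode(QHeaderView.Stretch)
table.show()
sys.exit(app.exec_())
```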
1
0
0
I have an instance of QTableWidget with some columns, a few rows, and some data in it. Now I am searching for a way to make the columns fill the full width of the table widget. Also, I want all the columns to have the same width. Is there a way to achieve this?
Make columns fill the full width of QTableWidget
1.2
0
0
2,439
31,493,649
2015-07-18T17:24:00.000
4
0
1
0
python,numpy,pip,python-wheel
31,493,846
1
true
0
0
This is probably not worth the hassle, but it's up to you to make that trade-off. numpy.random.choice is not implemented in Python but in a .pyx file which needs to be compiled to C using Cython. You could refactor it and construct a new package which implements only that functionality (possibly with a few related data structures). But with recent improvements in Python wheel files, installation of numpy should be much easier than in the past. So I reckon it's easier to install numpy as it is and accept that you have it as a dependency.
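If the asker does decide to rewrite just the sampling, a small pure-Python stand-in for weighted choice is straightforward; a sketch (cumulative-sum search, not the alias method, so each draw is O(log n)):

```python
import bisect
import random

def weighted_choice(items, weights):
    """Pure-Python stand-in for numpy.random.choice(items, p=weights)."""
    cumulative, total = [], 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    r = random.random() * total
    return items[bisect.bisect_right(cumulative, r)]

# weighted_choice(['a', 'b', 'c'], [0.2, 0.5, 0.3])
```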
1
6
1
I need to use a function in the numpy package, say numpy.random.choice (another Python lib function, random.choice, samples the list uniformly, while I want to sample from some discrete distribution). My program will be distributed to a lot of people to develop and test, so that means they would also have to install numpy before they are able to run the code. I'm now trying to find a way to get rid of installing the whole numpy library. Definitely, rewriting the function myself is a solution (for example using the alias method). But I'm wondering: is there a way that I can install only the part of numpy related to numpy.random.choice?
How to 'partially' install a Python package
1.2
0
0
1,071
31,495,357
2015-07-18T20:39:00.000
2
0
1
0
python,packages
31,495,419
2
false
0
0
A package manager solves things like dependencies and uninstalling. Additionally, when using pip to install packages, packages are usually built with their setup.py script. While that might not be an issue for pure Python modules, if a package contains any extension modules or some other custom stuff, copying files to site-packages just won't work (I'm actually not sure why it worked in your case with numpy, since it does contain C extension modules).
2
2
0
Whenever I google 'importing X package/module' I always see a bunch of tutorials about using pip or the shell commands. But I've always just taken the downloaded file and put it in the site-packages folder, and when I just use 'import' in PyCharm it has worked just fine. The reason I was wondering was because I was downloading NumPy today, and when I just copied the file the same way I'd been doing, PyCharm didn't show any errors. I was just wondering if I'm misunderstanding this whole concept of installing packages. EDIT: Thank you for your answers! I am off to learn how to use pip now.
Installing Packages in Python - Pip/cmd vs Putting File in Lib/site-packages
0.197375
0
0
308
31,495,357
2015-07-18T20:39:00.000
2
0
1
0
python,packages
31,495,433
2
false
0
0
One of the points of using a package manager (pip) is portability. With pip, you just include a requirements.txt in your project and you can work on it on any machine, be it Windows, Linux, or Mac. When moving to a new environment/OS, pip will take care of installing the packages properly for you; note that packages can have OS-specific steps, so your copy-pasted Windows set-up might not work when you move to another OS. Moreover, with your copy-paste method, you carry the bulk of your dependencies everywhere. I imagine that if you want to switch machines (not necessarily OS), you copy everything from project code to dependencies. With pip, you can keep your working directories leaner, all at the cost of a single requirements.txt.
2
2
0
Whenever I google 'importing X package/module' I always see a bunch of tutorials about using pip or the shell commands. But I've always just taken the downloaded file and put it in the site-packages folder, and when I just use 'import' in PyCharm it has worked just fine. The reason I was wondering was because I was downloading NumPy today, and when I just copied the file the same way I'd been doing, PyCharm didn't show any errors. I was just wondering if I'm misunderstanding this whole concept of installing packages. EDIT: Thank you for your answers! I am off to learn how to use pip now.
Installing Packages in Python - Pip/cmd vs Putting File in Lib/site-packages
0.197375
0
0
308
31,496,020
2015-07-18T22:03:00.000
2
0
0
0
python,opencv,caffe
31,497,506
4
false
0
0
The problem has been solved after some experimenting. Since I installed under my ~/.local path, it should be noted that the include, bin, and lib paths should all point to the local version (by modifying .bashrc). I had only changed the lib path while the other two paths remained unchanged, still pointing to the cluster's OpenCV version 2.4.9 (mine is 2.4.11).
2
2
1
I am trying to run fast-rcnn on a cluster, where cv2.so is not installed for public use. So I directly moved cv2.so into a PATH directory, but it fails with: /lib64/libc.so.6: version `GLIBC_2.14' not found So I had to install OpenCV in my local path again; this time it says: ImportError: /home/username/.local/lib/python2.7/site-packages/cv2.so: undefined symbol: _ZN2cv11arrowedLineERNS_3MatENS_6Point_IiEES3_RKNS_7Scalar_IdEEiiid This really confused me; could anyone give me a hand?
ImportError on cv2.so
0.099668
0
0
12,670
31,496,020
2015-07-18T22:03:00.000
3
0
0
0
python,opencv,caffe
42,237,346
4
false
0
0
I know this is a little late, but I just got this same error with python 2.7 and opencv 3.1.0 on Ubuntu. Turns out I had to reinstall opencv-python. Running sudo pip install opencv-python did the trick.
2
2
1
I am trying to run fast-rcnn on a cluster, where cv2.so is not installed for public use. So I directly moved cv2.so into a PATH directory, but it fails with: /lib64/libc.so.6: version `GLIBC_2.14' not found So I had to install OpenCV in my local path again; this time it says: ImportError: /home/username/.local/lib/python2.7/site-packages/cv2.so: undefined symbol: _ZN2cv11arrowedLineERNS_3MatENS_6Point_IiEES3_RKNS_7Scalar_IdEEiiid This really confused me; could anyone give me a hand?
ImportError on cv2.so
0.148885
0
0
12,670
31,496,583
2015-07-18T23:34:00.000
1
0
0
1
python-2.7,google-app-engine,oauth-2.0
31,496,649
1
false
1
0
I think I found a better way of doing it: I just use the OAuth callback to redirect only, with no data, and then in the redirect handler I access the API data.
1
0
0
My Oauth2Callback handler is able to access the Google API data I want - I want to know the best way to get this data to my other handler so it can use the data I've acquired. I figure I can add it to the datastore, or perform a redirect with the data. Is there a "best way" of doing this? For a redirect, is there a better way than adding it to the query string?
How do I redirect and pass my Google API data after handling it in my Oauth2callback handler on Google App Engine
0.197375
0
1
37
31,497,217
2015-07-19T01:44:00.000
0
0
0
1
python-2.7,pycharm,speech
31,508,159
2
false
0
0
Figured it out - I just forgot to install Homebrew
1
4
0
When I try to use the Google Speech Recognition API I get this error message. Any help? dyld: Library not loaded: /usr/local/Cellar/flac/1.3.1/lib/libFLAC.8.dylib Referenced from: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/speech_recognition/flac-mac Reason: image not found I'm using PyCharm. I have tried copy-pasting and uninstalling and reinstalling, but to no avail. HELP :) My whole project is to get the user to say something, have Google Translate translate it, and have it say the answer. I have the translating and speaking covered, but the speech recognition is what I am having trouble with now. Thanks in advance. Here are more error messages: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/speech_recognition/__main__.py", line 12, in audio = r.listen(source) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/speech_recognition/__init__.py", line 264, in listen buffer = source.stream.read(source.CHUNK) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyaudio.py", line 605, in read return pa.read_stream(self._stream, num_frames) IOError: [Errno Input overflowed] -9981
osx - dyld: Library not loaded Reason: image not found - Python Google Speech Recognition API
0
0
0
4,422
31,498,495
2015-07-19T06:08:00.000
0
0
1
0
python,python-2.7,pip
51,034,083
7
false
0
0
If you get this error on Windows, like I did, then just run the command-line tool (cmd.exe or Powershell) as Administrator and try again.
2
16
0
Not sure what's going on here, but I am getting an error every time I try to install something using pip. I get the following error: Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/b0/5843zgyj1yz3b8q2l7wrtj8h0000gn/T/pip-build-V4hy8S/PySocks/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/b0/5843zgyj1yz3b8q2l7wrtj8h0000gn/T/pip-bIOl7C-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/b0/5843zgyj1yz3b8q2l7wrtj8h0000gn/T/pip-build-V4hy8S/PySocks
Error when installing using pip
0
0
0
65,539
31,498,495
2015-07-19T06:08:00.000
18
0
1
0
python,python-2.7,pip
42,376,149
7
false
0
0
Try sudo pip install -U setuptools. If this doesn't solve your problem, then firstly, you need the python-dev package, because Pillow needs compile headers defined: sudo apt-get install python-dev On Ubuntu 14.04 you need a few extra packages to get Pillow working. Install all of them with the command: sudo apt-get install libtiff5-dev libjpeg8-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python-tk
2
16
0
Not sure what's going on here, but I am getting an error every time I try to install something using pip. I get the following error: Command "/usr/bin/python -c "import setuptools, tokenize;__file__='/private/var/folders/b0/5843zgyj1yz3b8q2l7wrtj8h0000gn/T/pip-build-V4hy8S/PySocks/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/b0/5843zgyj1yz3b8q2l7wrtj8h0000gn/T/pip-bIOl7C-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /private/var/folders/b0/5843zgyj1yz3b8q2l7wrtj8h0000gn/T/pip-build-V4hy8S/PySocks
Error when installing using pip
1
0
0
65,539
31,498,784
2015-07-19T07:02:00.000
6
0
0
0
python,numpy
31,499,284
1
false
0
0
The answer depends on the size of your arrays. While allocating a new memory region takes a nearly fixed amount of time, the time to fill this memory region grows linearly with size. But filling newly allocated memory with numpy.zeros is nearly twice as fast as filling an existing array with numpy.fill, and three times faster than item setting x[:] = 0. So on my machine, filling vectors with fewer than 800 elements is faster than creating new vectors; with more than 800 elements, creating new vectors gets faster.
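A sketch of how to measure the three variants on your own machine (the 800-element crossover above is machine-specific):

```python
import timeit

import numpy as np

n = 800
existing = np.empty(n)

def new_array():
    np.zeros(n)

def fill_existing():
    existing.fill(0)

def slice_assign():
    existing[:] = 0

for f in (new_array, fill_existing, slice_assign):
    print(f.__name__, timeit.timeit(f, number=100000))
```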
1
6
1
In iterative algorithms, it is common to use large numpy arrays many times. Frequently the arrays need to be manually "reset" on each iteration. Is there a performance difference between filling an existing array (with nans or 0s) and creating a new array? If so, why?
Performance difference between filling existing numpy array and creating a new one
1
0
0
1,368
31,499,363
2015-07-19T08:35:00.000
0
0
0
1
python-2.7,jenkins
31,500,756
1
false
0
0
Does Jenkins also provide the information about which users, or just the number of users, so that you would have to determine which users on your own? I don't have a Jenkins installation with administrative access, so I cannot check this myself.
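If Jenkins can only supply a count, one way to let the job choose specific users is to pass the ids themselves as a build parameter; a sketch with a hypothetical --user-ids flag:

```python
import argparse

# Invoked from Jenkins as, e.g.: python single_run.py --user-ids 17,42,99
parser = argparse.ArgumentParser()
parser.add_argument("--user-ids", required=True,
                    help="comma-separated user ids from the Jenkins build parameter")
args = parser.parse_args()
user_ids = [int(u) for u in args.user_ids.split(",")]
print(user_ids)
```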
1
0
0
I am running a Python job from Jenkins... now my question is as follows: I am setting the number of users as an external parameter; for example, I am passing this command: python /home/py_version/single_run.py $number_of_users I want to be able to choose which users these are (in this case, user ids) from Jenkins or the script itself... thanks!
How can I set number of parameters from jenkins running Python script?
0
0
0
366
31,503,613
2015-07-19T16:55:00.000
0
0
0
0
python,apache-spark,pyspark
31,504,403
1
false
0
0
Although the effect depends on your data set and operations, here are the options I come up with: (1) optimize the text format (Avro, Kryo, etc.) so that the hashmap can be built quickly; (2) combine the files into one HDFS file and increase the block replication factor of the file, so that many Spark executors can read the files locally; (3) use Spark broadcast variables for the hashmap so that executors don't need to deserialize it.
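A PySpark sketch of the broadcast-variable option, assuming the small files hold "key,value" lines; the HDFS paths are hypothetical:

```python
from pyspark import SparkContext

sc = SparkContext(appName="lookup-sketch")

# Build the lookup dict on the driver from the small files.
pairs = sc.textFile("hdfs:///lookup/*.txt") \
          .map(lambda line: tuple(line.split(",", 1)))
lookup = sc.broadcast(dict(pairs.collect()))

# Executors read the broadcast dict locally instead of receiving it per task.
data = sc.textFile("hdfs:///data/input.txt")
mapped = data.map(lambda key: lookup.value.get(key, "UNKNOWN"))
print(mapped.take(5))
```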
1
1
1
I need to build a hashmap using text files and map values using that hashmap. The files are already in HDFS. I want to map data using this hashmap. The text files are fairly small (I have around 10 files each few MB that I need to use for building the hashmap). If the files are already on HDFS is there anything else that I can do to optimize the processing, so that building the hashmap and the lookup will happen in a distributed fashion?
Distributed mapping and lookup with Spark
0
0
0
502
31,504,190
2015-07-19T17:55:00.000
2
0
0
0
python,django,pinax,pythonanywhere
31,564,132
1
false
1
0
The .html at the end of /tmp/django_project_template_e1ulrY_download/master.html seems suspect to me. I'm guessing that you got an error html page instead of the archive you requested. Check the contents of that file to see what happened.
1
3
0
I'm following the exact directions for getting started with pinax-project-account. You can see them [here][1]. I just created my virtual environment and installed the requirements. The problem is when I run this command: django-admin.py startproject --template=https://github.com/pinax/pinax-project-account/zipball/master. I get this error: CommandError: couldn't extract file /tmp/django_project_template_e1ulrY_download/master.html to /tmp/django_project_template_wU3ju6_extract: Path not a recognized archive format: /tmp/django_project_template_e1ulrY_download/master.html I can get this working on my local machine, but I'm using PythonAnywhere and it doesn't seem to like this command. Any ideas?
Django, Pinax, couldn't extract file
0.379949
0
0
150
31,506,425
2015-07-19T22:06:00.000
0
0
0
0
python,django,deployment,development-environment
31,506,797
2
false
1
0
Sounds like the quickest (if not most elegant) solution would be to call 'python manage.py runserver' at the end of your script.
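For completeness, a common alternative to putting startup work at the end of a script is Django's AppConfig.ready() hook (Django 1.7+); this is a sketch with hypothetical module and function names, not what the answer above suggests:

```python
# myapp/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = "myapp"

    def ready(self):
        # Runs once when Django loads the app registry, both under
        # `manage.py runserver` and under a production WSGI server.
        from myapp import startup      # hypothetical module
        startup.fetch_facebook_data()  # hypothetical function
```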
1
0
0
There is a set of functions that I need to run during the start of my server, regardless of path, whether that be "/", "/blog/", or "/blog/post". For development purposes I'd love for this script to run every time I run python manage.py runserver, and for production purposes I would love this script to run during deployment. Does anyone know how this can be done? My script is scraping data and making a call to Facebook's Graph API with Python and some of its libraries.
Django app initialization process
0
0
0
835
31,508,612
2015-07-20T03:54:00.000
3
0
1
1
python,linux,pip
31,508,671
8
false
0
0
You need to install the development package for libffi. On RPM based systems (Fedora, Redhat, CentOS etc) the package is named libffi-devel. Not sure about Debian/Ubuntu systems, I'm sure someone else will pipe up with that.
3
90
0
I have installed libffi on my Linux server and correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find the file 'ffi.h'. I know both that ffi.h exists and which directory it is in, so how do I go about closing this gap between ffi.h and pip?
PIP install unable to find ffi.h even though it recognizes libffi
0.07486
0
0
93,372
31,508,612
2015-07-20T03:54:00.000
266
0
1
1
python,linux,pip
31,508,663
8
false
0
0
You need to install the development package as well. libffi-dev on Debian/Ubuntu, libffi-devel on Redhat/Centos/Fedora.
3
90
0
I have installed libffi on my Linux server and correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find the file 'ffi.h'. I know both that ffi.h exists and which directory it is in, so how do I go about closing this gap between ffi.h and pip?
PIP install unable to find ffi.h even though it recognizes libffi
1
0
0
93,372
31,508,612
2015-07-20T03:54:00.000
24
0
1
1
python,linux,pip
38,077,173
8
false
0
0
To add to mhawke's answer, Debian/Ubuntu-based systems usually use "-dev" rather than the "-devel" of RPM-based systems. So, for Ubuntu it will be:
apt-get install libffi libffi-dev
RHEL, CentOS, Fedora (up to v22):
yum install libffi libffi-devel
Fedora 23+:
dnf install libffi libffi-devel
OSX/macOS (assuming Homebrew is installed):
brew install libffi
3
90
0
I have installed libffi on my Linux server and correctly set the PKG_CONFIG_PATH environment variable to the correct directory, as pip recognizes that it is installed; however, when trying to install pyOpenSSL, pip states that it cannot find the file 'ffi.h'. I know both that ffi.h exists and which directory it is in, so how do I go about closing this gap between ffi.h and pip?
PIP install unable to find ffi.h even though it recognizes libffi
1
0
0
93,372
31,514,741
2015-07-20T10:58:00.000
1
1
0
1
python,emacs,gdb,gud
31,729,095
2
false
0
0
I am going to go out on a limb and say this is a bug in gud mode. The clue is the -interpreter-exec line in the error. What happens here is that gud runs gdb in a special "MI" ("Machine Interface") mode. In this mode, commands and their responses are designed to be machine-, rather than human-, readable. To let GUIs provide a console interface to users, MI provides the -interpreter-exec command, which evaluates a command using some other gdb "interpreter" (which doesn't mean what you may think and in particular has nothing to do with Python). So, gud sends user input to gdb, I believe, with -interpreter-exec console .... But, in the case of a continuation line for a python command, this is the wrong thing to do. I tried this out in Emacs and I was able to make it work for the python command when I spelled it out -- but py, pi, and python-interactive all failed.
1
1
0
I want to debug a C++ program using gdb. I use the pi and py commands to evaluate Python commands from within gdb, which works fine when I invoke gdb from the command line. However, when I invoke gdb from within emacs using M-x gdb and then gdb -i=mi file_name, the following errors occur: the pi command correctly opens an interactive Python shell, but any input to this shell yields errors like: File "stdin", line 1 -interpreter-exec console "2" SyntaxError: invalid syntax; the py command works correctly for a single command (like py print 2+2), but not for multiple commands. I can get around these problems by starting gdb with gud-gdb, but then I don't have the support for gdb-many-windows. Maybe the problem is caused by the prompt after typing pi, which is no longer (gdb) but >>> instead?
gdb within emacs: python commands (py and pi)
0.099668
0
0
810
31,515,322
2015-07-20T11:28:00.000
1
0
0
0
python-2.7,odoo-8,openerp-8
38,759,025
2
false
1
0
self._fields returns the fields available on that model as a dictionary. model._fields[fieldname] returns the field's datatype as the key and the field name with its respective model as the value. For example, self._fields['price_unit'] in sale.order.line will return Float: sale.order.line.price_unit
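A tiny sketch of that introspection, written as a helper you could call from any Odoo 8 model method (the function name is illustrative):

```python
def describe_fields(record):
    """Print each field name and its field class for an Odoo 8 recordset."""
    for name, field in sorted(record._fields.items()):
        print("%s: %s" % (name, type(field).__name__))

# e.g. inside a model method: describe_fields(self)
```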
1
2
0
I am new to Odoo. Does anyone have a tutorial on using the _fields feature in Odoo 8? In Odoo 8, _columns is deprecated. A common pattern in OpenERP was to do model fields introspection using the _columns property. From 8.0, _columns is deprecated in favour of _fields, which contains the list of consolidated fields instantiated using the old or new API. There is no clear documentation about the _fields options. Please point me to a proper tutorial on this.
How to use _fields option in odoo 8
0.099668
0
0
834
31,515,583
2015-07-20T11:41:00.000
4
0
1
0
python,debugging,python-3.4,python-idle
31,515,714
1
true
0
0
You must be opening the code window, not the shell window. Try opening the shell window; it has a Debug menu, but the code window does not have one.
1
0
0
I thought it came by default with IDLE, but I don't have it. By the way, I installed Python 3.4. A few searches on the net proved unfruitful. Any idea what's going on and how to fix this?
My Python IDLE is missing the Debugging menu
1.2
0
0
2,270
31,517,047
2015-07-20T12:52:00.000
1
0
1
0
python-2.7,inheritance,class-diagram
31,517,498
2
false
0
0
It seems to me that your generic class in this is Argument, and that your three other classes should inherit from it. That would indeed require wrapper classes for String and Integer. Your question reminds me of the Composite Design Pattern, but I am not sure whether it is what you want.
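A minimal Python sketch of the composition reading of that answer (the class bodies are illustrative):

```python
class DataItem(object):
    pass

class String(DataItem):
    def __init__(self, value):
        self.value = str(value)

class Integer(DataItem):
    def __init__(self, value):
        self.value = int(value)

class Argument(object):
    """Wraps any DataItem (String, Integer, or plain DataItem) by composition,
    instead of inheriting from one class or another."""
    def __init__(self, item):
        if not isinstance(item, DataItem):
            raise TypeError("Argument wraps a DataItem")
        self.item = item
```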
1
0
0
I have the following 'entities' in my model: DataItem, String, Integer and Argument. I am unsure how to create a class diagram which reflects the following aspects: A String is a DataItem An Integer is a DataItem An Argument can be a String, an Integer, or a DataItem (i.e. neither String nor Integer). Do I have to create the additional Classes "StringArgument", "IntegerArgument", and "DataItemArgument" or is there a better solution? I assume that making Argument inherit from String, Integer, and DataItem is not a good solution, right? In general: How do I model cases in which a class inherits either from one class, or from another? PS: The implementation will be in Python 2.7 but I am interested in the general problem so any solutions referring to other languages are fine.
How to model and implement a class that may inherit from different classes?
0.099668
0
0
54
31,517,753
2015-07-20T13:25:00.000
0
0
0
0
python,sql
31,517,891
1
false
0
0
I had simply left out sess.commit() as the next line of code.
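A runnable sketch of the fix, with a minimal stand-in for the question's Testing model (an in-memory SQLite engine is assumed):

```python
from sqlalchemy import Boolean, Column, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Testing(Base):  # minimal stand-in for the question's model
    __tablename__ = "testing"
    id = Column(Integer, primary_key=True)
    state = Column(Boolean, default=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
sess = sessionmaker(bind=engine)()
sess.add(Testing(id=1))
sess.commit()

sess.query(Testing).filter(Testing.id == 1).update({Testing.state: True})
sess.commit()  # without this line the UPDATE never reaches the database
```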
1
0
0
I am using SQLAlchemy and am trying to update a boolean column value. I have the following command: sess.query(Testing).filter(Testing.id == id).update({Testing.state: True}) I do not seem to get any errors, however, when I go to the database, nothing changes. Have I implemented something incorrectly with the command?
SQLAlchemy Update Command
0
1
0
277
31,520,331
2015-07-20T15:21:00.000
1
0
0
0
python,animation,pygame,2d,sprite
31,527,282
1
false
0
1
You should assume that when blitting a pygame.Surface, the position gets converted to an int via int().
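A sketch of keeping float positions on the sprite and rounding explicitly only at draw time, so the implicit truncation does not bias movement:

```python
import pygame

class FloatSprite(object):
    def __init__(self, image):
        self.image = image
        self.x = 0.0
        self.y = 0.0

    def draw(self, screen):
        # Round explicitly; passing floats straight to blit truncates like int().
        screen.blit(self.image, (int(round(self.x)), int(round(self.y))))
```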
1
3
0
I use Python 2.x and Pygame to code games. Pygame has a built-in rect (rectangle) class that only supports ints instead of floats, so I have made my own rect class (MyRect) which supports floats. Now my question is as follows: a 2D platformer character moves its position (x, y; both floats). When I blit the character onto the screen, is the position rounded to an int (int(round(x))) or just converted into an int (int(x))? I know this might sound a bit stupid, but I've got an issue with this and I'd like to know how this is usually handled.
Movement in 2D Games (round position when blitting?)
0.197375
0
0
68
31,522,754
2015-07-20T17:30:00.000
0
1
0
1
python,oracle
31,525,508
1
true
0
0
Well, that was pretty simple. I just had to add it to the .bashrc file in my root directory.
1
1
0
Pretty new to all this, so I apologize if I butcher my explanation. I am using Python scripts on a server at work to pull data from our Oracle database. The problem is that whenever I execute the script I get this error: Traceback (most recent call last): File "update_52w_forecast_from_oracle.py", line 3, in import cx_Oracle ImportError: libnnz11.so: cannot open shared object file: No such file or directory But if I use: export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib before executing the script, it runs fine, but only for that session. If I log back in again I have to set the path again. Anything I can do to make this permanent? I'm trying to use cron as well to automate the script once a week. It was supposed to run early Monday morning but it didn't. EDIT: I just had to add the path to my .bashrc file in the root directory.
cx_Oracle, and Library paths
1.2
0
0
841
31,524,210
2015-07-20T18:54:00.000
2
0
0
0
python,mysql
31,524,504
1
true
0
0
Python distributions do not include support for MySQL, which is only available by installing a third-party module such as PyMySQL or MySQLdb. The only relational support included in Python is for the SQLite database (in the shape of the sqlite3 module). There is, however, nothing to stop you distributing a third-party module as a part of your application, thereby including the support your project requires. PyMySQL would probably be the best choice because, being pure Python, it will run on any platform and give you best portability.
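If "test a connection" only means checking that something is listening on the MySQL port (without speaking the MySQL protocol), the standard library can at least do that; a sketch with assumed host and port, and not a substitute for the real drivers named above:

```python
import socket

def mysql_port_open(host="127.0.0.1", port=3306, timeout=3.0):
    """Return True if a TCP connection to the MySQL port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except (socket.error, socket.timeout):
        return False

print(mysql_port_open())
```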
1
0
0
I'm wondering if there is any built-in way in Python to test a MySQL server connection. I know I can use PyMySQL, MySQLdb, and a few others, but if the user does not already have these dependencies installed, my script will not work. How can I write a Python script to test a MySQL connection without requiring external dependencies?
Test MySQL Connection with default Python
1.2
1
0
244
31,526,259
2015-07-20T20:59:00.000
0
0
1
0
python,python-2.7,qpython,qpython3
32,291,741
2
false
0
1
You could copy the modules you want installed into the site-packages folder using a file manager. Open the folder /sdcard/com.hipipal.qpyplus/lib/python2.7/site-packages/ - that is where you can put the modules. I have not tried it with the pip console.
1
3
0
I downloaded a Python library package (for example uncompile2) from GitHub. How can I install it into QPython from a directory on my Android device with the pip console? Or at least try to install it...
How to install library with pip-console?
0
0
0
9,530
31,527,206
2015-07-20T22:10:00.000
1
0
0
0
python,audio,signal-processing,fft
31,528,542
1
false
0
0
FFT data is in units of normalized frequency where the first point is 0 Hz and one past the last point is fs Hz. You can create the frequency axis yourself with linspace(0.0, (1.0 - 1.0/n)*fs, n). You can also use fftfreq but the components will be negative. These are the same if n is even. You can also use rfftfreq I think. Note that this is only the "positive half" of your frequencies, which is probably what you want for audio (which is real-valued). Note that you can use rfft to just produce the positive half of the spectrum, and then get the frequencies with rfftfreq(n,1.0/fs). Windowing will decrease sidelobe levels, at the cost of widening the mainlobe of any frequencies that are there. N is the length of your signal and you multiply your signal by the window. However, if you are looking in a long signal you might want to "chop" it up into pieces, window them, and then add the absolute values of their spectra. "is it correct" is hard to answer. The simple approach is as you said, find the bin closest to your frequency and check its amplitude.
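Pulling the pieces of that answer together, a sketch of the threshold check; the file name, target frequency F, and threshold Z are assumptions:

```python
import numpy as np
from scipy.io import wavfile

fs, data = wavfile.read("tone.wav")  # hypothetical file
if data.ndim > 1:
    data = data[:, 0]                # take the first channel

n = len(data)
window = np.hamming(n)               # applied by multiplying sample-wise
spectrum = np.abs(np.fft.rfft(data * window))
freqs = np.fft.rfftfreq(n, 1.0 / fs)

F, Z = 440.0, 1e6                    # assumed target frequency and threshold
idx = np.argmin(np.abs(freqs - F))   # bin closest to F
print("energy at %.1f Hz: %g (exceeds Z: %s)"
      % (freqs[idx], spectrum[idx], spectrum[idx] > Z))
```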
1
3
1
I have a WAV file which I would like to visualize in the frequency domain. Next, I would like to write a simple script that takes in a WAV file and outputs whether the energy at a certain frequency "F" exceeds a threshold "Z" (whether a certain tone has a strong presence in the WAV file). There are a bunch of code snippets online that show how to plot an FFT spectrum in Python, but I don't understand a lot of the steps. I know that wavfile.read(myfile) returns the sampling rate (fs) and the data array (data), but when I run an FFT on it (y = numpy.fft.fft(data)), what units is y in? To get the array of frequencies for the x-axis, some posters do this, where n = len(data): X = numpy.linspace(0.0, 1.0/(2.0*T), n/2) and others do this: X = (numpy.fft.fftfreq(n) * fs)[range(n/2)] Is there a difference between these two methods, and is there a good online explanation of what these operations do conceptually? Some of the online tutorials about FFTs mention windowing, but not a lot of posters use windowing in their code snippets. I see that numpy has a numpy.hamming(N), but what should I use as the input to that method, and how do I "apply" the output window to my FFT arrays? For my threshold computation, is it correct to find the frequency in X that's closest to my desired tone/frequency and check if the corresponding element (same index) in Y has an amplitude greater than the threshold?
FFT in Python with Explanations
0.197375
0
0
1,122
31,528,166
2015-07-20T23:50:00.000
2
0
0
1
python,python-2.7
31,528,729
1
false
0
0
If you're not using shell=True, there isn't really a "command line" involved. subprocess.Popen is just passing your argument list to the underlying execve() system call. Similarly, there's no escaping, because there's no shell involved and hence nothing to interpret special characters and nothing that is going to attempt to tokenize your string. There isn't a character limit to worry about because the arguments are never concatenated into a single command line. There may be limits on the maximum number of arguments and/or the length of individual arguments. If you are using shell=True, you have to construct the command line yourself before passing it to subprocess.
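A small demonstration of both halves of the answer: the list form needs no escaping, and subprocess.list2cmdline shows roughly what a single (Windows-style) command line would look like if one were built:

```python
import subprocess

args = ["cat", "/path/my file.txt"]   # the space needs no escaping here

subprocess.call(args)                 # each element arrives as one argument

# Approximate single-string form (Windows quoting rules):
print(subprocess.list2cmdline(args))  # cat "/path/my file.txt"
```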
1
0
0
I'm using subprocess.call, where you just give it an array of arguments and it will build the command line and execute it. First of all, is there any escaping involved? (For example, if I pass as an argument a path to a file that has spaces in it, /path/my file.txt, will this be escaped as "/path/my file.txt"?) And is there any way to get the command line that's generated (after escaping and all) before it is executed? I need to check that the generated command line is not longer than a certain number of characters, to make sure it will not give an error when it gets executed.
Is there any way to get the full command line that's executed when using subprocess.call?
0.379949
0
0
56
31,530,442
2015-07-21T04:33:00.000
1
0
0
0
python,python-2.7,ssl,openssl
31,530,967
1
false
0
0
You'll need to install the modified OpenSSL. Python merely has bindings, which will then call the functions in the compiled OpenSSL libraries. If the modified OpenSSL library is installed and in your path completely replacing the original OpenSSL library, then Python will "use" it. This assumes that the modified library is in fact compatible with the original OpenSSL. On a side-note, using modified cryptographic libraries is a terrible idea from a security perspective.
1
0
0
I am writing a Python script that imports the ssl library; I need to create an SSL socket. However, I find that I will need to use a modified version of the OpenSSL library. The author of the modified version told me that the underlying implementation of the ssl module uses the OpenSSL library, and provided me with a file named ssl_lib.c. I searched the folder of the OpenSSL library that I installed, openssl-0.9.8k_X64, but I could not find any ssl_lib.c file. Also, the author refers to OpenSSL as openssl-1.0.1e, which is a different version from mine. My question: how can I compile my Python script with a modified version of OpenSSL? Please consider that I am using a Windows x64 system and Python 2.7.
How can I use a modified openssl library (written in C) in my python code?
0.197375
0
1
131
31,530,728
2015-07-21T05:01:00.000
1
0
1
0
python,excel
31,531,821
1
true
0
0
I wish there were straightforward support in openpyxl/xlsxwriter to copy sheets across different workbooks. However, since there isn't, you would have to mash up a recipe using a couple of libraries: one for reading the worksheet data and another for writing the data to a unified xlsx. For both of the above there are lots of options in terms of Python packages.
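A sketch of that recipe using only openpyxl (recent versions; the folder layout and file names are assumed):

```python
import glob

from openpyxl import Workbook, load_workbook

merged = Workbook()
merged.remove(merged.active)              # drop the default empty sheet

for path in glob.glob("inputs/*.xlsx"):   # assumed input folder
    source = load_workbook(path).active   # each file has one worksheet
    target = merged.create_sheet(title=source.title[:31])
    for row in source.iter_rows(values_only=True):
        target.append(row)

merged.save("combined.xlsx")
```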
1
2
0
I've been trying to do this and I really have no clue. I've searched a lot, and I know that I can merge the files easily with VBA or other languages, but I really want to do it with Python. Can anyone get me on track?
Merge(Combine) several .xlsx with one worksheet into just one workbook (Python)
1.2
1
0
262
31,532,501
2015-07-21T07:10:00.000
0
0
1
0
python,pandas,qgis
58,310,829
2
false
0
0
Navigate to C:\QGIS\apps\Python27\ or C:\QGIS\apps\Python37. Right click while holding shift and open command prompt or powershell here. Type python -m pip install pandas.
1
4
0
I'm building a QGIS plugin that uses the Python Pandas library. How do I install the Pandas library in QGIS Python? Please help me!
How to install pandas library in QGIS python?
0
0
0
5,648
31,534,583
2015-07-21T08:58:00.000
0
0
1
1
python
31,535,279
3
false
0
0
It will probably depend on file system internals. On a typical unix machine, I would expect the order of items in the return value from os.listdir to be in the order of the details in the directory's "dirent" data structure (which, again, depends on the specifics of the file system). I would not expect a directory to have the same ordering over time, if files are added and deleted. I would not expect two "directories with the same contents" on two different machines to have a consistent ordering, unless specific care was taken when copying from one to the other. Depending on a variety of specifics, the ordering may change on a single machine, over time, without any explicit changes to the directory, as various file system compacting operations take place (although I don't think I've seen a file system that would actually do this, but it's definitely something that could be done). In short, if you want any sort of ordering you can reason about, sort the results, somehow. Then you have the guarantee that the ordering will be whatever your sorting imposes.
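In code, imposing your own ordering is one line:

```python
import os

# The only portable way to get a stable, reproducible order:
entries = sorted(os.listdir("."))
print(entries)
```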
1
13
0
From Python's doc, os.listdir() returns a list containing the names of the entries in the directory given by path. The list is in arbitrary order. What I'm wondering is, is this arbitrary order always the same/deterministic? (from one machine to another, or through time, provided the content of the folder is the same) Edit: I am not trying to make it deterministic, nor do I want to use this. I was just wondering (for example, what does the order depend on?)
Is os.listdir() deterministic?
0
0
0
3,004
31,536,863
2015-07-21T10:44:00.000
0
0
1
1
python,qt,io,hard-drive,child-process
31,543,489
2
false
0
0
There are no guarantees as to fairness of I/O scheduling. What you're describing seems rather simple: the I/O scheduler, whether intentionally or not, gives a boost to new processes. Since your disk is tapped out, the order in which the processes finish is not under your control. You're most likely wasting a lot of disk bandwidth on seeks, due to parallel access from multiple processes. TL;DR: Your expectation is unfounded. When I/O, and specifically the virtual memory system, is saturated, anything can happen. And so it does.
1
1
0
Not sure this is the best title for this question, but here goes. Through Python/Qt I started multiple processes of an executable. Each process is writing a large file (~20GB) to disk in chunks. I am finding that the first process to start is always the last to finish, and it continues on much, much longer than the other processes (despite having the same amount of data to write). Performance monitors show that the process is still using the expected amount of RAM (~1GB), but the disk activity from the process has slowed to a trickle. Why would this happen? It is as though the first process started somehow gets its disk access 'blocked' by the other processes and then doesn't recover after the other processes have finished... Would the OS (Windows) be causing this? What can I do to alleviate this?
Why do multiple processes slow down?
0
0
0
1,550
31,537,752
2015-07-21T11:27:00.000
2
0
0
1
python,tornado
31,539,885
1
false
0
0
In that handler's initialize() method, call self.transforms.append(tornado.web.GZipContentEncoding)
1
2
0
How can I serve compressed responses only for a single RequestHandler from my Tornado application?
Tornado gzip compressed response for a specific RequestHandler
0.379949
0
0
1,628
31,537,841
2015-07-21T11:30:00.000
1
0
1
1
python,visual-studio,azure,ptvs
31,542,784
1
true
0
0
After adding the PATH environment variable, all I needed to do was close Visual Studio and open it again. For anyone who struggled with the same issue, just close the programme and it might work!
1
0
0
I am having an issue with Visual Studio. I have everything set up in my project in the Python Environments including Platformio, which I would like to use. When I do os.system("platformio init") it fails and produces this error: 'platformio' is not recognized as an internal or external command, operable program or batch file. I added the platformio folder in the python library Search Paths, but still no success. I do not have python or platformio installed on the local machine, only in the PTVS. The python program works fine without installing it on the local machine, so I would like to maintain it that way if possible. Please anyone, help!
PTVS using os.system fails
1.2
0
0
69
31,540,347
2015-07-21T13:24:00.000
1
0
0
1
python-2.7,dronekit-python,dronekit
31,924,023
3
false
0
0
I think Sony Nguyen is asking about running vehicle_state.py outside the MAVProxy command prompt, just like running a .py file normally. I'm also looking for a solution.
1
2
0
I managed to run the examples in the command prompt after running mavproxy.py and loading droneapi. But when I double-click on my script, it throws "'local_connect' is not defined". It runs in the terminal as told above, but I cannot run it with just a double click. So my question is: is there any way to run a script using droneapi with only a double click? Using Windows 8.1. Thanks in advance.
Run python script with droneapi without terminal
0.066568
0
0
617
31,540,437
2015-07-21T13:27:00.000
0
0
0
0
python
31,540,754
3
false
0
0
I might, but as a former user of the excellent xlrd package, I really would recommend switching to openpyxl. Quite apart from other benefits, each worksheet has a columns attribute that is a list of columns, each column being a list of cells (there is also a rows attribute). Converting your code would be relatively painless as long as there isn't too much and it's reasonably well-written. I believe I've never had to do anything other than pip install openpyxl to add it to a virtual environment. I observe that there's no code in your question, and it's harder (and more time-consuming) to write examples than to point out required changes in code, so since you are an xlrd user I'm going to assume that you can take it from here. If you need help, edit the question and add your problem code. If you get through to what you want, submit it as an answer and mark it correct. Suffice to say I recently wrote some code to extract Pandas datasets from UK government health statistics, and openpyxl was amazingly helpful in my investigations and easy to use. Since it appears from the comments this is not a new question, I'll leave it at that.
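For reference, the asker's stated goal is also short in xlrd itself (the file name is assumed):

```python
import xlrd

book = xlrd.open_workbook("data.xlsx")   # assumed file name
sheet = book.sheet_by_index(0)

# One list per column: columns[0] holds all 234 values of column A, etc.
columns = [sheet.col_values(c) for c in range(sheet.ncols)]
```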
1
1
1
I have an Excel file with 234 rows and 5 columns. I want to create an array for each column so that I can read each column separately with xlrd. Can anyone help, please?
Creating arrays in Python by using excel data sheet
0
1
0
898
31,541,685
2015-07-21T14:18:00.000
0
0
0
1
python,supervisord
31,541,908
2
false
0
0
Maybe you should try restarting your supervisord process as user stavros.
1
6
0
I have supervisord run a program as user stavros, and I would like to give the same user permission to restart it using supervisorctl. Unfortunately, I can only do it with sudo, otherwise I get a permission denied error in socket.py. How can I give myself permission to restart supervisord processes?
Allow user other than root to restart supervisorctl process?
0
0
0
3,601
31,541,746
2015-07-21T14:20:00.000
22
0
1
0
python,multithreading,python-asyncio
31,543,062
1
true
0
0
When the worker thread's queue (or, more generally, any thread's queue) is empty, should it be stopped until is has something to do again, or is it okay to leave continuously running? Do concurrent threads take up a lot of processing power when they aren't doing anything other than watching its queue? You should just use a blocking call to queue.get(). That will leave the thread blocked on I/O, which means the GIL will be released, and no processing power (or at least a very minimal amount) will be used. Don't use non-blocking gets in a while loop, since that's going to require a lot more CPU wakeups. Should the two threads' queues be combined? Since the watcher thread is continuously running a single method, I guess the worker thread would be able to just pull tasks from the single queue that the watcher thread puts in. If all the watcher is doing is pulling things off a queue and immediately putting it into another queue, where it gets consumed by a single worker, it sounds like its unnecessary overhead - you may as well just consume it directly in the worker. It's not exactly clear to me if that's the case, though - is the watcher consuming from a queue, or just putting items into one? If it is consuming from a queue, who is putting stuff into it? I don't think it'll matter since I'm not multiprocessing, but is this setup affected by Python's GIL (which I believe still exists in 3.4) in any way? Yes, this is affected by the GIL. Only one of your threads can run Python bytecode at a time, so won't get true parallelism, except when threads are running I/O (which releases the GIL). If your worker thread is doing CPU-bound activities, you should seriously consider running it in a separate process via multiprocessing, if possible. Should the watcher thread be running continuously like that? From what I understand, and please correct me if I'm wrong, asyncio is supposed to be used for event-based multithreading, which seems relevant to what I'm trying to do. It's hard to say, because I don't know exactly what "running continuously" means. What is it doing continuously? If it spends most of its time sleeping or blocking on a queue, it's fine - both of those things release the GIL. If it's constantly doing actual work, that will require the GIL, and therefore degrade the performance of the other threads in your app (assuming they're trying to do work at the same time). asyncio is designed for programs that are I/O-bound, and can therefore be run in a single thread, using asynchronous I/O. It sounds like your program may be a good fit for that depending on what your worker is doing. The main thread is basically always just waiting for the user to press a key to access a different part of the menu. This seems like a situation asyncio would be perfect for, but, again, I'm not sure. Any program where you're mostly waiting for I/O is potentially a good for for asyncio - but only if you can find a library that makes curses (or whatever other GUI library you eventually choose) play nicely with it. Most GUI frameworks come with their own event loop, which will conflict with asyncio's. You would need to use a library that can make the GUI's event loop play nicely with asyncio's event loop. You'd also need to make sure that you can find asyncio-compatible versions of any other synchronous-I/O based library your application uses (e.g. a database driver). That said, you're not likely to see any kind of performance improvement by switching from your thread-based program to something asyncio-based. 
It'll likely perform about the same. Since you're only dealing with 3 threads, the overhead of context switching between them isn't very significant, so switching from that to a single-threaded, asynchronous I/O approach isn't going to make a very big difference. asyncio will help you avoid thread synchronization complexity (if that's an issue with your app - it's not clear that it is), and at least theoretically, would scale better if your app potentially needed lots of threads, but it doesn't seem like that's the case. I think for you, it's basically down to which style you prefer to code in (assuming you can find all the asyncio-compatible libraries you need).
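A minimal sketch of the blocking-get worker pattern described above; the queue name and handle() are placeholders, not anything from the question:

```python
import queue
import threading

work_queue = queue.Queue()

def worker():
    while True:
        task = work_queue.get()   # blocks (releasing the GIL) until a task arrives
        if task is None:          # sentinel value used to shut the worker down
            break
        handle(task)              # placeholder for the actual work
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
```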
1
30
0
I have a pretty basic understanding of multithreading in Python and an even basic-er understanding of asyncio. I'm currently writing a small Curses-based program (eventually going to be using a full GUI, but that's another story) that handles the UI and user IO in the main thread, and then has two other daemon threads (each with their own queue/worker-method-that-gets-things-from-a-queue): a watcher thread that watches for time-based and conditional (e.g. posts to a message board, received messages, etc.) events to occur and then puts required tasks into... the other (worker) daemon thread's queue which then completes them. All three threads are continuously running concurrently, which leads me to some questions: When the worker thread's queue (or, more generally, any thread's queue) is empty, should it be stopped until it has something to do again, or is it okay to leave continuously running? Do concurrent threads take up a lot of processing power when they aren't doing anything other than watching their queues? Should the two threads' queues be combined? Since the watcher thread is continuously running a single method, I guess the worker thread would be able to just pull tasks from the single queue that the watcher thread puts in. I don't think it'll matter since I'm not multiprocessing, but is this setup affected by Python's GIL (which I believe still exists in 3.4) in any way? Should the watcher thread be running continuously like that? From what I understand, and please correct me if I'm wrong, asyncio is supposed to be used for event-based multithreading, which seems relevant to what I'm trying to do. The main thread is basically always just waiting for the user to press a key to access a different part of the menu. This seems like a situation asyncio would be perfect for, but, again, I'm not sure. Thanks!
When should I be using asyncio over regular threads, and why? Does it provide performance increases?
1.2
0
0
7,368
31,542,307
2015-07-21T14:44:00.000
0
0
1
0
python,sql,mongodb,memory-management,mongoengine
31,543,147
1
false
0
0
The only chance you have is to take some sample documents and calculate their average size. The more difficult part is to know what the available memory is, keeping in mind that there are other processes that consume RAM in parallel! So even if you take this road, you need to keep an amount of RAM free. I doubt that the effort is worth it.
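A rough sketch of that estimate with pymongo; the 100-document sample size and the memory budget are assumptions you would tune yourself:

```python
import bson

# Sample some documents and measure their encoded (BSON) size
sample = list(collection.find().limit(100))   # `collection` is your pymongo collection
avg_doc_size = sum(len(bson.BSON.encode(d)) for d in sample) / float(len(sample))

memory_budget = 100 * 1024 * 1024             # e.g. allow ~100 MB per batch (assumption)
batch_size = max(1, int(memory_budget // avg_doc_size))
```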
1
0
0
I'm using Python 2.7 with MongoDB as my database (actually it doesn't matter which database I use). In my database I have millions of documents, and from time to time I need to iterate over all of them. It's not realistic to pull all the documents in one query because that would kill the memory; instead I pull 1000 documents per iteration and process them, and when I finish I pull another 1000, and so on. I was wondering if there is any formula to calculate the best number of documents to pull from the database in each iteration. I couldn't find anything on the internet that answers my issue. Basically my question is: what is the best way of finding the best number to pull from the database in each iteration?
Decide how many documents to pull from Database for memory utilization
0
1
0
48
31,545,025
2015-07-21T16:49:00.000
1
0
0
0
python,django,django-models
31,545,842
2
true
1
0
You can dump the db directly with mysqldump as allcaps suggested, or run manage.py migrate first and then it should work. It's telling you there are migrations that you have yet to apply to the DB.
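For example, assuming a stock Django 1.8 project (dumpdata and loaddata are the built-in management commands for moving data as fixtures):

```
python manage.py migrate                  # apply pending migrations first
python manage.py dumpdata > backup.json   # serialize the whole DB to a fixture
python manage.py loaddata backup.json     # load it on the target machine
```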
1
1
0
I used to use manage.py sqlall app to dump the database to sql statements. While, after upgrading to 1.8, it doesn't work any more. It says: CommandError: App 'app' has migrations. Only the sqlmigrate and sqlflush commands can be used when an app has migrations. It seems there is not a way to solve this. I need to dump the database to sql file, so I can use it to clone the whole database else where, how can I accomplish this?
Django: How to dump the database in 1.8?
1.2
1
0
309
31,545,637
2015-07-21T17:21:00.000
2
0
0
0
python,graph-tool
31,548,775
1
false
0
0
There are several ways. During compilation you can pass the option --disable-openmp to the configure script. If you just want to disable OpenMP at runtime you can either: set the environment variable OMP_NUM_THREADS=1, or call the graph-tool function openmp_set_num_threads(1).
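For instance, a minimal runtime sketch:

```python
import graph_tool as gt

gt.openmp_set_num_threads(1)   # restrict graph-tool's OpenMP code paths to one thread
```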
1
0
0
I want to disable OpenMP in graph-tool, but I can't find anything about OpenMP in graph-tool's official documentation. Is there any way to turn it off?
How to disable OpenMP support in graph-tool
0.379949
0
0
362
31,547,234
2015-07-21T18:47:00.000
9
0
0
0
python,django,sqlite
31,547,325
2
false
1
0
I have had a lot of these problems with Sqlite before. Basically, don't have multiple threads that could, potentially, write to the db. If this is not acceptable, you should switch to Postgres or something else that is better at concurrency. Sqlite has a very simple implementation that relies on the file system for locking. Most file systems are not built for low-latency operations like this. This is especially true for network-mounted filesystems and the virtual filesystems used by some VPS solutions (that last one got me BTW). Additionally, you also have the Django layer on top of all this, adding complexity. You don't know when Django releases connections (although I am pretty sure someone here can give that answer in detail :) ). But again, if you have multiple concurrent writers, you need a database layer that can do concurrency. Period. I solved this issue by switching to postgres. Django makes this very simple for you, even migrating the data is a no-brainer with very little downtime.
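For reference, switching the Django side is mostly a settings change; a minimal sketch with hypothetical credentials:

```python
# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',        # hypothetical database name
        'USER': 'myuser',      # hypothetical credentials
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```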
1
3
0
I've been struggling with "sqlite3.OperationalError database is locked" all day.... Searching around for answers to what seems to be a well known problem I've found that it is explained most of the time by the fact that sqlite does not work very nicely with multithreading, where a thread could potentially time out waiting for more than 5 (default timeout) seconds to write into the db because another thread has the db lock. So, having several threads that play with the db, one of them using transactions and frequently writing, I began measuring the time it takes for transactions to complete. I've found that no transaction takes more than 300 ms, thus rendering the above explanation implausible - unless the thread that uses transactions makes ~21 (5000 ms / 300 ms) consecutive transactions while any other thread desiring to write gets ignored all this time. So what other hypothesis could potentially explain this behavior?
Django sqlite database is locked
1
1
0
4,653
31,551,135
2015-07-21T23:05:00.000
0
0
0
0
python,excel,pivot-table,xlsx,openpyxl
31,556,316
1
false
0
0
This is currently not possible with openpyxl.
1
0
0
I'm working with XLSX files with pivot tables and writing an automated script to parse and extract the data. I have multiple pivot tables per spreadsheet with cost categories, their totals, and their values for each month etc. Any ideas on how to use openpyxl to parse each pivot table?
Extracting data from excel pivot tables using openpyxl
0
1
0
1,122
31,553,081
2015-07-22T02:55:00.000
0
0
0
0
python,websocket,apache-kafka
31,553,699
1
false
0
0
Your server websockets have callbacks for error as well as close. Are you monitoring both? Sockets do not have to tell the other end when they go away - naturally so, because you can close a browser window and the server will never know. When you open them you can set a time-out that will cause the socket to close if it doesn't see activity in that time. You could also 'ping' the connections to force an error/close if they have disconnected. Lastly you could have a session GUID that both the client and server know about (cookie or localStorage). If a client reconnects and the GUID shows an active connection on the server you can close that connection before opening a fresh one.
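If the server side is Python, newer versions of the `websockets` library can do the ping/timeout housekeeping for you; a minimal sketch, with `handler` as a placeholder echo handler:

```python
import asyncio
import websockets

async def handler(ws, path):
    async for message in ws:   # loop ends (and the connection is freed) when the client drops
        await ws.send(message)

start_server = websockets.serve(handler, '0.0.0.0', 8765,
                                ping_interval=20,  # ping each client every 20 s
                                ping_timeout=20)   # close the connection if no pong arrives

asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
```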
1
0
0
I use WebSockets to connect 10000 clients to the server. But when some of the clients drop the connection, the server does not detect this and keeps the connection open. So when those clients connect to the server again, a new connection is established and the number of connections on the server becomes very large. If I don't restart the server, the connection count just keeps increasing...
When using WebSocket connections, sometimes clients drop the connection but the server doesn't notice
0
0
1
98
31,553,322
2015-07-22T03:27:00.000
1
0
1
0
python,django,directory,pycharm
31,554,899
1
true
1
0
Open the File > Settings menu, then go to Project: foo > Project Structure and press Add Content Root, then select the destination directory. After the folder is added to the list, right-click on the folder and set it as a source root; in the last step press OK...
1
1
0
I want to use a folder that is not in the base directory of my django project without adding it in to the base directory.
PyCharm - how to use a folder that is not in the base directory
1.2
0
0
83
31,559,238
2015-07-22T09:31:00.000
0
0
0
0
python,pyramid,pylons
31,573,526
1
true
1
0
Search form is a classical example of a form which should use GET. Just use GET and get the correct behaviour for free :) I don't see anything requiring POST in your question.
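A minimal sketch of what the GET-based search view could look like in Pyramid (the route name, template, and run_search() are illustrative, not from the question):

```python
from pyramid.view import view_config

@view_config(route_name='search', renderer='templates/search.pt')
def search(request):
    query = request.params.get('q', '')           # read straight from the query string
    results = run_search(query) if query else []  # placeholder for your search logic
    return {'query': query, 'results': results}
```

Because the form submits with GET, the resulting URL (e.g. /search?q=foo) can be copied and pasted elsewhere to redo the search.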
1
1
0
I have a mini-search form on a Pyramid app webpage, where contents are read and processed upon POST request when user presses a Search button. I selected POST method of submitting since the web form is otherwise complex and processing them this way plays well with WTForms as well as it seems default and convenient way of handling forms in Pyramid (if request.method == 'POST': ... etc). But that gets me a problem - I do not have query string (available in request.params) anymore to form an URL that can be copied and pasted elsewhere to redo the search. request.params is a read-only NestedMultiDict, so I can't add query parameters in there. Web forms are rendered using Chameleon and in typical way (return {..} for Chameleon template engine to get them and use for rendering HTML). Is there a way of passing query string explicitly to the next request so that after pressing Search the user gets search query string added to URL? (I do not want to use kludges like HTTPFound redirect to the same view, etc).
Setting query_string for next request / sending search urls around
1.2
0
0
55
31,559,638
2015-07-22T09:49:00.000
1
0
1
0
python,pip,pypi
51,655,921
2
false
0
0
This is what worked for me: unset all_proxy (optional in case none are set) pip install 'requests[socks]' inside the venv
1
7
0
I am getting the following exception while trying to install using pip: Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', error(111, 'Connection refused'))': /simple/<package>/ Where does pip install the packages from? How do I proxy, or use an alternate internal site, to get these packages?
pip install and custom index url
0.099668
0
1
21,181
31,565,422
2015-07-22T14:03:00.000
0
0
0
0
python,scrapy
31,590,603
3
true
1
0
Scrapy's CrawlSpider has an internal _follow_links member variable which is not yet documented (experimental as of now). Setting self._follow_links = False will tell Scrapy to stop following more links, but it will continue to finish up all the Request objects it has already created.
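A sketch of how that might look inside a callback; is_too_old() stands in for the pre-defined date-time check from the question:

```python
def parse_item(self, response):
    if is_too_old(response):        # placeholder: your date/time condition
        # undocumented/experimental: stop extracting new links to follow,
        # while already-scheduled Requests still get processed
        self._follow_links = False
    # ... continue scraping this response as usual
```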
1
1
0
Basically I have a regex rule for following pages. Each page has 50 links. When I hit a link that is too old (based on a pre-defined date-time), I want to tell Scrapy to stop following more pages, but NOT stop it entirely: it must continue to scrape the links it has already decided to scrape (complete all Request objects created). JUST that it must NOT follow any more links. So the program will eventually grind to a stop (when it's done scraping all the links). Is there any way I can do this inside the spider?
How to tell scrapy crawler to STOP following more links dynamically?
1.2
0
1
433
31,566,168
2015-07-22T14:31:00.000
1
0
1
0
python,centos
31,566,317
1
true
0
0
Import the time module: import time; and use: time.sleep(1)
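For example, inside the scraping loop (urls and scrape() are placeholders for whatever your FOR loop actually iterates over and does):

```python
import time

for url in urls:     # whatever your FOR loop iterates over
    scrape(url)      # placeholder for your scraping code
    time.sleep(1)    # pause one second before the next iteration
```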
1
0
0
I have a "FOR" loop in the python 2.7 web scraping programme and I am going to insert a time gap of 1 second at the end of the FOR loop. How can I do that? Thanks.
How to insert a time gap of 1 second between execution of two statements in python
1.2
0
0
1,092
31,566,675
2015-07-22T14:51:00.000
5
0
0
0
python,django,django-rest-framework
31,571,773
2
false
1
0
The obvious answer is that HyperlinkedIdentityField is meant to point to the current object only, whereas HyperlinkedRelatedField is meant to point to something that the current object references. I suspect under the hood the two are different only in that the identity field has less work to do in order to find the related model's URL routes (because the related model is the current model), while the related field has to actually figure out the right URLs for some other model. In other words, HyperlinkedIdentityField is lighter-weight (more efficient), but won't work for models other than the current model.
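A minimal sketch showing the two side by side; the model and view names are assumptions, not from the question:

```python
from rest_framework import serializers

class PostSerializer(serializers.HyperlinkedModelSerializer):
    # Identity field: a link to the Post being serialized itself
    url = serializers.HyperlinkedIdentityField(view_name='post-detail')
    # Related field: a link to a model the Post references
    author = serializers.HyperlinkedRelatedField(view_name='user-detail', read_only=True)

    class Meta:
        model = Post   # assumes a Post model with an `author` relation
        fields = ('url', 'author')
```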
1
15
0
I've of course reviewed the docs, but was wondering if anyone could more succinctly explain the difference in use case and application between these fields. Why would one use one field over the other? Would there be a difference between these fields for a OneToOne relationship?
For Django Rest Framework, what is the difference in use case for HyperLinkedRelatedField and HyperLinkedIdentityField?
0.462117
0
0
3,157
31,568,442
2015-07-22T16:07:00.000
0
0
0
0
python,tkinter
31,597,337
2
false
0
1
If you are using tkinter, you can call iconbitmap on your root window: root.iconbitmap(yourIconFilePath). The call is the same for Python 2 (Tkinter) and Python 3 (tkinter); it is a method of the window, not a module-level function. NOTE: Icon files have the *.ico extension, and the transparent regions are defined inside the icon file itself (via its alpha channel), not in the tkinter code.
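A small runnable sketch, with a placeholder icon path:

```python
import tkinter as tk   # on Python 2: import Tkinter as tk

root = tk.Tk()
root.iconbitmap('myicon.ico')   # placeholder path to your .ico file
root.mainloop()
```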
1
1
0
I wanted to insert an icon in my GUI. I have already tried inserting one but I need help and want to make it transparent. Can I make the icon transparent in any way? Any help would be appreciated.
Make the logo transparent in Python tkinter 3.4
0
0
0
1,083
31,572,540
2015-07-22T19:41:00.000
0
0
0
0
python,html,virtual-machine,anaconda,userappdatapath
31,594,070
1
true
1
0
The problem was that Internet Explorer on the VM was very old and therefore not running the html code properly. Updated to Firefox and it worked!
1
0
0
I am trying to run my python files from a Remote Desktop Connection (virtual machine?). I copied over a few folders I thought would be relevant and ran Anaconda to install python and the add-ons. My code runs, but the output is html files and in the VM they are empty. I checked the code for the html and it looks like it writes information from my local C:\ drive. For example, this is a snippet from the html: BEGIN C:\Users\jbyrusb\AppData\Local\Continuum\Anaconda\lib\site-packages\bokeh\server\static\css/bokeh.min.css I tried to copy the AppData folder over to the VM. Still, the html files come up empty. Does anyone know why/ a better way to move my things onto a VM? This is my first time using one.
Using a Remote Desktop Connection to run Python, empty html files
1.2
0
0
45
31,572,781
2015-07-22T19:54:00.000
3
0
1
0
python,probability
31,572,925
3
true
0
0
Yes. From docs: Almost all module functions depend on the basic function random(), which generates a random float uniformly in the semi-open range [0.0, 1.0). Python uses the Mersenne Twister as the core generator.
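You can also check this empirically:

```python
import random

trials = 10 ** 6
below = sum(random.random() < 0.5 for _ in range(trials))
print(below / float(trials))   # prints something very close to 0.5
```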
1
0
0
I know that we use random.random() to generate a random decimal between 0 and 1. Is the probability that the random number is less than .5 equal to 50%? And that the random number is greater than .5 equal to 50%?
Python - probability of a random number between 0 and 1
1.2
0
0
3,152
31,574,425
2015-07-22T21:36:00.000
1
0
1
0
python,pip
31,576,628
1
false
0
0
I had a similar issue, it was resolved for me by manually installing the libraries that had the most dependencies with sudo apt-get install. If you manually go for : sudo apt-get install python-scipy sudo apt-get install python-numpy sudo apt-get install pandas Once these large packages which have many dependencies are installed using apt-get, installing the rest via pip will be easy and will go smoothly. N.B. - Use pip3 and sudo apt-get install python3-scipy commands if you're using Python3.
1
0
0
I have a very long requirements.txt file, and every time a single package fails during pip install -r requirements.txt and I relaunch the same command, it restarts from the very beginning of the list. Is there any option to keep the successful installs and not start all over from scratch (some take very long to compile)? Thanks a lot.
pip install -r requirements delete all packages on failure
0.197375
0
0
86
31,575,359
2015-07-22T22:49:00.000
0
0
1
0
python,ipython-notebook
31,575,455
2
true
0
0
No, installing a Python module in the terminal is sufficient. There is no such thing as installing within the IPython notebook. Simply restart your IPython notebook after the install. If it still does not work, you are probably not using the same Python interpreter between the two: check the Python version (which python), make sure you are not using a virtual environment, and that $PYTHON_PATH is not somehow different, etc.
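A quick way to check which interpreter a notebook is actually using, from inside a cell:

```python
import sys

print(sys.executable)   # compare with `which python` in your terminal
print(sys.path)         # the module search path this kernel uses
```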
2
3
0
I recently downloaded some software that requires one to change to the directory with python files, and run python setup.py install --user in the Terminal. One then checks whether the code is running correctly by trying from [x] import [y] This works on my Terminal. However, when I then try from [x] import [y] in the notebook, it never works. So, this makes me think I must install the setup.py file within the iPython notebook. How does one do this?
Executing "python setup.py install" inside an iPython notebook
1.2
0
0
3,817
31,575,359
2015-07-22T22:49:00.000
0
0
1
0
python,ipython-notebook
31,576,569
2
false
0
0
You might be using the wrong version of the iPython notebook. Perhaps you've been using ipython3 notebook instead of ipython notebook or vice-versa. If Python2.7 has the package you want, it won't work if you try to import it into the ipython3 notebook. If there's a version mismatch then you can usually just get the relevant package using a sudo pip3 install package_name or sudo pip install package_name depending on the version you want. Of course pip can be obtained by sudo apt-get install python3-pip for Python 3 and sudo apt-get install python-pip for Python2.7.
2
3
0
I recently downloaded some software that requires one to change to the directory with python files, and run python setup.py install --user in the Terminal. One then checks whether the code is running correctly by trying from [x] import [y] This works on my Terminal. However, when I then try from [x] import [y] in the notebook, it never works. So, this makes me think I must install the setup.py file within the iPython notebook. How does one do this?
Executing "python setup.py install" inside an iPython notebook
0
0
0
3,817
31,576,343
2015-07-23T00:40:00.000
4
0
0
0
javascript,python,firefox,selenium
31,576,491
2
false
0
0
Selenium effectively already runs in a private mode by default: every time you start any driver via Selenium it creates a brand new anonymous profile. This is, of course, unless you have specified an already created profile.
1
1
0
Is there a way in Selenium WebDriver (Python, Firefox) to check whether the current window is in private mode (a private window, so the cookies won't be cached) or is just a normal window?
Selenium: Check if the current window is a private window or normal window?
0.379949
0
1
452
31,576,911
2015-07-23T01:52:00.000
1
0
1
0
python,python-2.7
31,576,988
4
false
0
0
Just create 4 loops, one for each side of the array, that counts through the values of the index that changes for that side. For example, the first side, whose x index is always 0, could vary the y from 0 to n-2 (from the top-left corner to just shy of the bottom-left); repeat for the other sides.
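One way to abstract the traversal without passing functions around is a generator of (row, col) positions; because it yields indices rather than values, the same traversal can drive both the read pass and the write pass. A sketch, counterclockwise from the top-left (all names are illustrative):

```python
def border_indices(rows, cols):
    """Yield (r, c) pairs counterclockwise around the edge, starting top-left."""
    if rows == 1:
        for c in range(cols):
            yield (0, c)
        return
    if cols == 1:
        for r in range(rows):
            yield (r, 0)
        return
    for r in range(rows - 1):            # left side, top to bottom
        yield (r, 0)
    for c in range(cols - 1):            # bottom side, left to right
        yield (rows - 1, c)
    for r in range(rows - 1, 0, -1):     # right side, bottom to top
        yield (r, cols - 1)
    for c in range(cols - 1, 0, -1):     # top side, right to left
        yield (0, c)
```

Reading is then [m[r][c] for r, c in border_indices(len(m), len(m[0]))], and writing back is a zip of the same generator with your shifted values.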
1
1
0
I'm traversing a two-dimensional list (my representation of a matrix) in an unusual order: counterclockwise around the outside starting with the top-left element. I need to do this more than once, but each time I do it, I'd like to do something different with the values I encounter. The first time, I want to note down the values so that I can modify them. (I can't modify them in place.) The second time, I want to traverse the outside of the matrix and modify the values of the matrix as I go, perhaps getting my new values from some generator. Is there a way I can abstract this traversal to a function and still achieve my goals? I was thinking that this traverse-edge function could take a function and a matrix and apply the function to each element on the edge of the matrix. However, the problems with this are two-fold. If I do this, I don't think I can modify the matrix that's given as an argument, and I can't yield the values one by one because yield isn't a function. Edit: I want to rotate a matrix counterclockwise (not 90 degrees) where one rotation moves, for example, the top-left element down one spot. To accomplish this, I'm rotating one "level" (or shell) of the matrix at a time. So if I'm rotating the outermost level, I want to traverse it once to build a list which I can shift to the left, then I want to traverse the outermost level again to assign it those new values which I calculated.
Custom list traversal and modification
0.049958
0
0
58
31,579,293
2015-07-23T06:01:00.000
0
0
1
0
python,exception
31,579,506
1
true
0
0
If you do not want the exception to be thrown back to the user, then you should handle the exception in Y. I believe it's better to give a meaningful message back to the user after handling an exception rather than throwing the exception back to the user. Unless what you are building is actually a framework, in which case your users would be other developers, and then exceptions are the way to go.
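A sketch of handling it at the call site in Y (this assumes XException is importable from X's module m, per the question):

```python
import logging

from X.m import func, XException

try:
    result = func()
except XException as exc:
    logging.error('X.m.func failed: %s', exc)   # log it, then show a meaningful message
```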
1
0
0
I have a project X from which I am calling a function from module m. In module m I have defined a custom exception called XException(Exception); if an error happens in X.m.func I raise XException. So the question is: as I am calling the func from X.m, should I handle the raised exception in this project Y, since in project Y I am just calling the function like X.m.func() in Y.module.function? When the exception is raised, execution of Y.module.function stops, which is OK and is what I want, but I'm not sure whether I should handle the exception in project Y and log the message?
catching exception raised in different project method
1.2
0
0
36
31,580,276
2015-07-23T07:00:00.000
0
0
0
1
python,twitter,apache-kafka
31,580,503
1
false
0
0
Don't see how that would be possible, but instead you can: Use Kafka's API to obtain an offset that is earlier than a given time (getOffsetBefore). Note that the granularity depends on your storage file size IIRC and thus you can get an offset that is quite a bit earlier than the time you specified Keep a timestamp in the message itself and use it in conjunction with above to skip messages Keep an external index of time->offset yourself and use that
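A sketch of the second option; consume(), process(), and the message layout are hypothetical, since the details depend on your Kafka client:

```python
import json
import time

cutoff = time.time() - 24 * 60 * 60    # e.g. one day ago

for raw in consume('tweets'):           # hypothetical: yields raw messages from Kafka
    msg = json.loads(raw)
    if msg['timestamp'] < cutoff:       # assumes you embedded a timestamp when producing
        continue                        # skip messages older than the cutoff
    process(msg)                        # placeholder for your consumer logic
```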
1
0
0
So I'm using Apache Kafka as a message queue to relay a Twitter Stream to my consumers. If I want to go back, I want to have a value (offset) which I can send Kafka. So, for eg, if I want to go back one day, I have no idea what the offset would be for that. Hence, can I set the offset manually? Maybe a linux/epoch timestamp?
Apache Kafka: Can I set the offset manually
0
0
0
294
31,580,478
2015-07-23T07:10:00.000
1
0
0
0
python,file,data-structures,disk
31,581,102
2
true
0
0
There's quite a number of problems you have to solve; some are quite straightforward and some are a little bit more elaborate, but since you want to do it yourself I assume you won't mind filling out the details yourself (so I'll skip some parts). The first simple step is to serialize and deserialize nodes (in order to be able to store on disk at all). That could be done in an ad hoc manner by having your nodes have a serialize/deserialize method - in addition you might want the serialized data to have a type indicator so you can know which class's deserialize you should use to deserialize the data. Note that the on-disk representation of a node must reference other nodes by file offset (either directly or indirectly). The actual reading or writing of the data is done by ordinary (binary) file operations, but you have to seek to the right position in the file first. The second step is to have the possibility to allocate space in the file. If you only want write-once behaviour it's quite straightforward to just grow the file, but if you want to modify the data in the file (adding and removing nodes or even replacing them) you will have to cope with situations where regions in the file are no longer in use, and either reuse these or even repack the layout of the file. Further steps could involve making the update atomic in some sense. One solution is to have a region where you write enough information so that the update can be completed (or abandoned) if it were terminated prematurely; in its simplest form it might just be a list of idempotent operations (operations that yield the same result if you repeat them, e.g. writing particular data to a particular place in the file). Note that while (some of) the builtin solutions do indeed handle writing and reading the entire graph to/from disk, they do not really handle the situation where you want to read only part of the graph or modify the graph very efficiently (you have to read mostly the whole graph and write the complete graph in one go). Databases are the exception, where you may read/write smaller parts of your data in a random manner.
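To make the first step concrete, here is one purely illustrative way a node could serialize itself, with children referenced by file offsets rather than object references:

```python
import struct

class Node(object):
    TYPE = 1  # type indicator so the reader knows which class's deserialize to use

    def __init__(self, value, child_offsets):
        self.value = value                  # assume a 64-bit integer payload
        self.child_offsets = child_offsets  # children as file offsets, not references

    def serialize(self):
        header = struct.pack('<BqI', self.TYPE, self.value, len(self.child_offsets))
        children = struct.pack('<%dq' % len(self.child_offsets), *self.child_offsets)
        return header + children

    @classmethod
    def deserialize(cls, f):
        # assumes f has already been seek()ed to the node's offset in the file
        type_, value, n = struct.unpack('<BqI', f.read(struct.calcsize('<BqI')))
        offsets = list(struct.unpack('<%dq' % n, f.read(8 * n)))
        return cls(value, offsets)
```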
1
1
1
I couldn't find any resources on this topic. There are a few questions with good answers describing solutions to problems which call for data stored on disk (pickle, shelve, databases in general), but I want to learn how to implement my own. 1) If I were to create a disk based graph structure in Python, I'd have to implement the necessary methods by writing to disk. But how do I do that? 2) One of the benefits on disk based structures is having the efficiency of the structure while working with data that might not all fit on memory. If the data does not fit in memory, only some parts of it are accessed at once. How does one access only part of the structure at once?
Creating a disk-based data structure
1.2
0
0
1,607
31,582,012
2015-07-23T08:28:00.000
0
1
0
0
python,curl
31,584,263
1
false
0
0
Can you stay on the command line? If yes, try the Python lib named "pexpect". It's pretty useful, and lets you run commands as on a terminal from a Python program, and interact with the terminal!
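For example, pexpect can run the exact curl command from the question (a sketch):

```python
import pexpect

# Run the curl command as-is and capture its output
output = pexpect.run('curl -b /tmp/admin.cookie '
                     '--cacert /some/cert/location/serverapache.crt '
                     '--header "X-Requested-With: XMLHttpRequest" '
                     '--request POST "https://www.test.com"')
print(output)
```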
1
1
0
Using python 2.7, I need to convert the following curl command to execute in python. curl -b /tmp/admin.cookie --cacert /some/cert/location/serverapache.crt --header "X-Requested-With: XMLHttpRequest" --request POST "https://www.test.com" I am relatively new to Python and am not sure how to use the urllib library, or whether I should use the requests library instead. The curl options are especially tricky for me to convert. Any help will be appreciated.
curl and curl options to python conversion
0
0
1
205
31,582,768
2015-07-23T09:02:00.000
1
0
1
1
python,python-2.7
33,733,485
1
true
0
0
Since nobody answered, I'll post what I found here. These instructions are for an 'offline' build machine, e.g. download/obtain everything you need prior to setting up the build environment. I don't connect my build machines to the internet. The instructions assume you downloaded the 2.7.10 PSF source release. This may have been made easier in git. I'm only showing the 32-bit build here, the 64-bit build needs some extra steps. Pre-reqs: Microsoft Windows 7 Professional with service pack 1 (64-bit) Install Microsoft Visual Studio Team System 2008 development edition, service pack 1 ActivePython 2.7.8.10 32-bit. Note: Needs to be 32-bit to get access to msm.merge2.1 which is a 32-bit COM object. put Nasm.exe 2.11.06 in path Install ActiveState Perl 64-bit, including Perl v5.20.2 Set the environment variable HOST_PYTHON to c:\python27\python.exe Set the environment variable PYTHON to python For building documentation, install the following. If you are connected to the internet you can let pip download these as they are dependencies of Sphinx. pip install alabaster-0.7.6-py2-none-any.whl install MarkupSafe-0.23 (no wheel available) by the usual route of python setup.py install from the source directory pip install Jinja2-2.8-py2.py3-none-any.whl pip install Pygments-2.0.2-py2-none-any.whl pip install pytz-2015.4-py2.py3-none-any.whl Install Babel-2.0, as above no wheel or egg, so needs to be from source. pip install --no-deps sphinx_rtd_theme-0.1.8-py2.py3-none-any.whl (due to circular dependency with Sphinx) pip install Sphinx-1.3.1-py2.py3-none-any.whl Go to tools/buildbot/build.bat and edit the file, change the 'Debug' build targets to 'Release'. Remove '_d' from the kill_python exe name. Go to the 'Doc' directory. Type 'make.bat htmlhelp' to build the help. Go to file Tools/buildbot/buildmsi.bat, and change the help workshop command line to point to what you created in the previous step, e.g.: "%ProgramFiles%\HTML Help Workshop\hhc.exe" Doc\build\htmlhelp\python2710.hhp Edit Tools/buildbot/external.bat, stop the build being a debug build by changing as follows: if not exist tcltk\bin\tcl85g.dll ( @rem all and install need to be separate invocations, otherwise nmakehlp is not found on install cd tcl-8.5.15.0\win nmake -f makefile.vc INSTALLDIR=..\..\tcltk clean all nmake -f makefile.vc INSTALLDIR=..\..\tcltk install cd ..\.. ) if not exist tcltk\bin\tk85g.dll ( cd tk-8.5.15.0\win nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 clean nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 all nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 install cd ..\.. ) if not exist tcltk\lib\tix8.4.3\tix84g.dll ( cd tix-8.4.3.5\win nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk clean nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk all nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk install cd ..\.. ) In buildbot/external-common.bat, simply remove the clause building Nasm as we are already providing that as a binary. I haven't documented the build of the wininst*.exe stubs from distutils, but the PSF ones are binary-identical to the ones in the ActiveState Python distribution 2.7.8.10, so you can just copy from there. Finally, from the root directory run tools\buildbot\buildmsi.bat. This will build the 32-bit installer.
1
0
0
I mean all of it, starting from all sources, and ending up with the .MSI file on the Python website. This includes building the distutils wininst*.exe files. I have found various READMEs that get me some of the way, but no comprehensive guide.
How do I build the latest Python 2 for Windows?
1.2
0
0
132
31,582,821
2015-07-23T09:04:00.000
0
0
0
0
python,excel,import
31,582,822
3
false
0
0
exFile = pd.ExcelFile(f) #load file f data = exFile.parse(sheet_name) #this creates a dataframe out of the given sheet (the default is the first sheet); exFile.sheet_names gives you the list of sheet names to loop over, one dataframe per sheet
1
4
1
I have an excel file composed of several sheets. I need to load them as separate dataframes individually. What would be a similar function as pd.read_csv("") for this kind of task? P.S. due to the size I cannot copy and paste individual sheets in excel
How to open an excel file with multiple sheets in pandas?
0
1
0
15,976
31,582,861
2015-07-23T09:07:00.000
0
0
0
0
python,django,sqlite
31,583,131
2
false
0
0
Sqlite needs to access the provided file. So this is more of a filesystem question rather than a python one. You have to find a way for sqlite and python to access the remote directory, be it sftp, sshfs, ftp or whatever. It entirely depends on your remote and local OS. Preferably mount the remote subdirectory on your local filesystem. You would not need to make a copy of it although if the file is large you might want to consider that option too.
2
1
0
I have a Django application that runs on apache server and uses Sqlite3 db. I want to access this database remotely using a python script that first ssh to the machine and then access the database. After a lot of search I understand that we cannot access sqlite db remotely. I don't want to download the db folder using ftp and perform the function, instead I want to access it remotely. What could be the other possible ways to do this? I don't want to change the database, but am looking for alternate ways to achieve the connection.
Remotely accessing sqlite3 in Django using a python script
0
1
0
1,226
31,582,861
2015-07-23T09:07:00.000
3
0
0
0
python,django,sqlite
31,583,957
2
true
0
0
Leaving aside the question of whether it is sensible to run a production Django installation against sqlite (it really isn't), you seem to have forgotten that, well, you are actually running Django. That means that Django can be the main interface to your data; and therefore you should write code in Django that enables this. Luckily, there exists the Django REST Framework that allows you to simply expose your data via HTTP interfaces like GET and POST. That would be a much better solution than accessing it via ssh.
2
1
0
I have a Django application that runs on apache server and uses Sqlite3 db. I want to access this database remotely using a python script that first ssh to the machine and then access the database. After a lot of search I understand that we cannot access sqlite db remotely. I don't want to download the db folder using ftp and perform the function, instead I want to access it remotely. What could be the other possible ways to do this? I don't want to change the database, but am looking for alternate ways to achieve the connection.
Remotely accessing sqlite3 in Django using a python script
1.2
1
0
1,226
31,585,809
2015-07-23T11:18:00.000
1
0
1
0
python,code-generation,distutils
31,618,879
1
true
0
0
I solved this by subclassing build_py instead of build. It turns out build_py has a build_lib attribute that will be the path to the "build" directory. By looking at the source code I think there is no better way.
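A minimal sketch of that approach; generate_into() and the package name 'mypackage' are placeholders:

```python
import os
from distutils.command.build_py import build_py


class BuildPyWithGenerated(build_py):
    def run(self):
        build_py.run(self)   # copy the regular package sources first
        # self.build_lib is the path to the build output directory
        target = os.path.join(self.build_lib, 'mypackage')  # 'mypackage' is an assumption
        generate_into(target)   # placeholder: write your generated .py files here

# then in setup(): cmdclass={'build_py': BuildPyWithGenerated}
```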
1
2
0
I am generating some Python files in my setup.py as part of the build process. These files should be part of the installation. I have successfully added my code generator as a pre-build step (by implementing my own Command and overriding the default build to include this). How do I copy my generated files from the temporary directory into the build output? Should I copy it myself using e.g. copy_file? If so, how do I get the path to the build output? Or should I declare it as part of the build somehow? I'd rather not clutter the source directory tree with my generated files, hence I prefer to avoid copying the files there and then declaring them as part of the package.
Add generated Python file as part of build
1.2
0
0
67
31,593,267
2015-07-23T16:37:00.000
1
0
0
1
python,sockets,networking
31,597,835
1
false
0
0
So it looks like buffer or memoryview will do the trick. Although there are some discrepancies in the sites I found regarding whether Python 2.7 supports this or not, so I will have to test it out to make sure.
1
1
0
Is there a PACKET_MMAP or similar flag for python sockets? I know in C one can use a zero-copy/circular buffer with the previous mention flag to avoid having to copy buffers from kernel space to user space but I cannot find anything similar in the python documentation. Thanks for any input on docs or code to look into.
Python socket with PACKET_MMAP
0.197375
0
0
178
31,598,119
2015-07-23T21:12:00.000
0
0
0
0
python,html,forms
31,598,300
2
true
1
0
Flask+Jinja should work well for what you're trying to do. Essentially, your first page would be a form with a number of form elements. When your form is submitted, that data gets passed back to your flask app where you can extract the users selections. Using these selections you can generate/populate the next html page. Since the user can seemingly select any combination of fields, the template for your second html page should contain all the possible tables and then only show the selected ones using an if...else statement
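A rough sketch of the Flask side; the field, route, and template names are assumptions:

```python
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route('/report', methods=['POST'])
def report():
    selected = request.form.getlist('fields')   # every checkbox the user ticked
    # the template renders a table only when its name appears in `selected`,
    # e.g. {% if 'costs' in selected %} ... {% endif %} in report.html
    return render_template('report.html', selected=selected)
```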
1
0
0
For starters - I am using a combination of HTML, Python+Flask/jinja I have an HTML page which contains a basic form. When users input data to that form, it is passed through my Python/flask script and populates a different HTML template with the inputted form values. What I need to accomplish is creating variations of the final HTML template based on what fields users choose in the beginning form. e.g. The flow would appear as: User selects fields to use in HTML form > data is passed through flask app > data is populated in final HTML template, which is designed around the fields selected in the original form. The final HTML template is essentially a series of tables. Based on which fields the user selects in the form, some tables will be needed and others not. Whether the user selects a field should determine whether or not the table appears in the final HTML templates code or not. I'm not entirely sure what tools I can use to accomplish this, and whether I will need something to supplement flask/jinja. Thanks for any input.
How can I create an HTML page that depends on form inputs?
1.2
0
0
72
31,600,127
2015-07-24T00:08:00.000
-2
0
1
1
python,windows,tarfile
31,600,239
1
true
0
0
A quick test tells me that a (forward) slash is always used. In fact, the tar format stores the full path of each file as a single string, using slashes (try looking at a hex dump), and Python just reads that full path without any modification. Likewise, at extraction time Python hard-replaces slashes with the local separator (see TarFile._extract_member). ... which makes me think that there are surely some nonconformant implementations of tar for Windows that create tarfiles with backslashes as separators!?
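A quick way to see this for yourself; the path added is hypothetical, but the printed member names should use '/' on any platform:

```python
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    tar.add('some_dir/some_file.txt')   # hypothetical existing path

buf.seek(0)
with tarfile.open(fileobj=buf, mode='r') as tar:
    print(tar.getnames())   # e.g. ['some_dir/some_file.txt'], with forward slashes
```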
1
2
0
Is the path separator employed inside a Python tarfile.TarFile object a '/' regardless of platform, or is it a backslash on Windows? I basically never touch Windows, but I would kind of like the code I'm writing to be compatible with it, if it can be. Unfortunately I have no Windows host on which to test.
Does os.path.sep affect the tarfile module?
1.2
0
0
242
31,600,249
2015-07-24T00:23:00.000
0
1
1
0
python,json,reddit
31,689,325
1
false
0
0
The number of comments you get from the API has a hard limit, for performance reasons; to ensure you're getting all comments, you have to parse through the child nodes and make additional calls as necessary. Be aware that the subreddit listing will only include the latest 1000 posts, so if your target subreddit has more than that, you probably won't be able to obtain a full backup anyways.
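If you end up using PRAW, newer versions expose this child expansion directly; a sketch, with authentication omitted and `reddit` standing for an authenticated praw.Reddit instance:

```python
submission = reddit.submission(id='abc123')   # hypothetical post id
submission.comments.replace_more(limit=None)  # resolve every MoreComments stub
for comment in submission.comments.list():    # flattened list of all comments
    print(comment.body)
```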
1
2
0
I'm looking to backup a subreddit to disk. So far, it doesn't seem to be easily possible with the way that the Reddit API works. My best bet at getting a single JSON tree with all comments (and nested comments) would seem to be storing them inside of a database and doing a pretty ridiculous recursive query to generate the JSON. Is there a Reddit API method which will give me a tree containing all comments on a given post in the expected order?
Get a JSON tree of all comments of a post?
0
0
1
429
31,601,238
2015-07-24T02:28:00.000
5
0
0
0
python,urllib
54,285,222
7
false
0
0
Change from urllib.request import urlopen to from urllib import urlopen. I was able to solve this problem by changing it like this (for Python 2.7 on macOS 10.14).
3
22
0
I opened Python code from GitHub. I assumed it was Python 2.x and got the above error when I tried to run it. From the reading I've seen, Python 3 has deprecated urllib itself and replaced it with a number of libraries including urllib.request. It looks like the code was written in Python 3 (a confirmation from someone who knows would be appreciated). At this point I don't want to move to Python 3 - I haven't researched what it would do to my existing code. Thinking there should be a urllib module for Python 2, I searched Google (using "python2 urllib download") and did not find one. (It might have been hidden in the many answers, since urllib includes downloading functionality.) I looked in my Python27/lib directory and didn't see it there. Can I get a version of this module that runs on Python 2.7? Where and how?
Python 2.7.10 error "from urllib.request import urlopen" no module named request
0.141893
0
1
108,467
31,601,238
2015-07-24T02:28:00.000
0
0
0
0
python,urllib
37,034,337
7
false
0
0
For now, it seems that I could get over that by adding a ? after the URL.
3
22
0
I opened Python code from GitHub. I assumed it was Python 2.x and got the above error when I tried to run it. From the reading I've seen, Python 3 has deprecated urllib itself and replaced it with a number of libraries including urllib.request. It looks like the code was written in Python 3 (a confirmation from someone who knows would be appreciated). At this point I don't want to move to Python 3 - I haven't researched what it would do to my existing code. Thinking there should be a urllib module for Python 2, I searched Google (using "python2 urllib download") and did not find one. (It might have been hidden in the many answers, since urllib includes downloading functionality.) I looked in my Python27/lib directory and didn't see it there. Can I get a version of this module that runs on Python 2.7? Where and how?
Python 2.7.10 error "from urllib.request import urlopen" no module named request
0
0
1
108,467
31,601,238
2015-07-24T02:28:00.000
7
0
0
0
python,urllib
34,079,450
7
false
0
0
Instead of using urllib.request.urlopen(), remove request for Python 2: use urllib.urlopen(). You do not need the request submodule in Python 2.x for what you are trying to do. Hope it works for you. This was tested using Python 2.7; I was receiving the same error message and this resolved it.
3
22
0
I opened Python code from GitHub. I assumed it was Python 2.x and got the above error when I tried to run it. From the reading I've seen, Python 3 has deprecated urllib itself and replaced it with a number of libraries including urllib.request. It looks like the code was written in Python 3 (a confirmation from someone who knows would be appreciated). At this point I don't want to move to Python 3 - I haven't researched what it would do to my existing code. Thinking there should be a urllib module for Python 2, I searched Google (using "python2 urllib download") and did not find one. (It might have been hidden in the many answers, since urllib includes downloading functionality.) I looked in my Python27/lib directory and didn't see it there. Can I get a version of this module that runs on Python 2.7? Where and how?
Python 2.7.10 error "from urllib.request import urlopen" no module named request
1
0
1
108,467
31,601,820
2015-07-24T03:43:00.000
1
1
0
0
python,django,code-coverage
31,603,369
3
false
1
0
On a well tested project, coverage would be ideal, but with some untested legacy code I don't think there is a magical tool. You could write a big test loading all the pages and run coverage to get some indication. Cowboy style: if it's not critical code (i.e. not handling payments, etc.) and you're fairly sure it's unused, comment it out, check that the tests pass, deploy, and wait a week or so before removing it for good (or putting it back if you got a notification).
1
5
0
We have been using Django for a long time. Some old code is not being used now. How can I find which code is not being used any more and remove it? I used coverage.py with unit tests, which works fine and shows which parts of the code are never used, but the test coverage is very low. Is there any way to use it with a WSGI server to find which code has never served any web request?
How to find unused code in Python web site?
0.066568
0
0
1,412
31,606,659
2015-07-24T09:17:00.000
1
0
0
1
python,c,unix,gcc
31,606,702
1
false
0
0
Answer to your first paragraph: Use MinGW for the compiler (google it, there is a -w64 version if you need that) and MSYS for a minimal environment including shell tools the Makefile could need.
1
0
0
I have a C program which includes a make file that works fine on Unix systems. I would like to compile the program for Windows using this make file; how can I go about doing that? Additionally, I have Python scripts that call this C program using ctypes. I don't imagine I will have too much of an issue getting ctypes working on Windows, but I heard it's possible to include all the Python and C scripts in one .exe for Windows - has anyone heard of that?
Compiling a unix make file for windows
0.197375
0
0
69
31,607,458
2015-07-24T09:53:00.000
1
0
0
0
python,matplotlib,plot,scipy,clipboard
63,382,231
4
false
0
0
The last comment is very useful. Install the package with pip install addcopyfighandler Import the module after importing matplotlib, for instance: import matplotlib.pyplot as plt import matplotlib.font_manager as fm from matplotlib.cm import get_cmap import addcopyfighandler Use Ctrl + C to copy the figure to the clipboard And enjoy.
1
13
1
In MATLAB, there is a very convenient option to copy the current figure to the clipboard. Although Python/numpy/scipy/matplotlib is a great alternative to MATLAB, such an option is unfortunately missing. Can this option easily be added to Matplotlib figures? Preferably, all MPL figures should automatically benefit from this functionality. I'm using MPL's Qt4Agg backend, with PySide.
How to add clipboard support to Matplotlib figures?
0.049958
0
0
12,554
31,611,075
2015-07-24T12:54:00.000
0
0
0
0
python,machine-learning,scikit-learn,random-forest,pruning
66,269,162
2
false
0
0
You could try ensemble pruning. This boils down to removing from your random forest a number of the decision trees that make it up. If you remove trees at random, the expected outcome is that the performance of the ensemble will gradually deteriorate with the number of removed trees. However, you can do something more clever, like removing those trees whose predictions are highly correlated with the predictions of the rest of the ensemble and thus do not significantly modify the outcome of the whole ensemble. Alternatively, you can train a linear classifier that uses as inputs the outputs of the individual trees, and include some kind of l1 penalty in the training to enforce sparse weights on the classifier. The weights with 0 or very small values will hint at which trees could be removed from the ensemble with a small impact on accuracy.
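In scikit-learn you can experiment with dropping whole trees simply by truncating the fitted ensemble; a rough sketch:

```python
# `rf` is an already-fitted RandomForestRegressor; keep only the first 100 trees
rf.estimators_ = rf.estimators_[:100]
rf.n_estimators = len(rf.estimators_)
```

A smarter selection would keep the subset of trees chosen by one of the criteria above, rather than simply the first k.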
2
4
1
I have an sklearn random forest regressor. It's very heavy, 1.6 GB, and takes a very long time when predicting values. I want to prune it to make it lighter. As far as I know, pruning is not implemented for decision trees and forests. I can't implement it by myself since the tree code is written in C and I don't know it. Does anyone know the solution?
Random Forest pruning
0
0
0
5,467
31,611,075
2015-07-24T12:54:00.000
3
0
0
0
python,machine-learning,scikit-learn,random-forest,pruning
31,611,419
2
false
0
0
The size of the trees can be a solution for you. Try to limit the size of the trees in the forest (max_leaf_nodes, max_depth, min_samples_split, ...).
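For example (the specific values are illustrative and should be tuned):

```python
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100,
                           max_depth=12,          # cap tree depth
                           min_samples_split=10,  # don't split very small nodes
                           max_leaf_nodes=256)    # cap the number of leaves per tree
```

Smaller trees mean both a smaller pickled model and faster prediction.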
2
4
1
I have an sklearn random forest regressor. It's very heavy, 1.6 GB, and takes a very long time when predicting values. I want to prune it to make it lighter. As far as I know, pruning is not implemented for decision trees and forests. I can't implement it by myself since the tree code is written in C and I don't know it. Does anyone know the solution?
Random Forest pruning
0.291313
0
0
5,467
31,611,089
2015-07-24T12:54:00.000
0
0
0
0
python,tree,kdtree
31,646,627
1
false
0
0
A typical KD tree node contains a reference to the data point. A KD tree that only keeps the coordinates is much less useful. This way, you can easily identify them.
1
1
1
I've been studying KD Trees and KNN searching in 2D & 3D space. The thing I cannot seem to find a good explanation of is how to identify which objects are being referenced by each node of the tree. Example would be an image comparison database. If you generated descriptors for all the images, would you push all the descriptor data on to one tree? If so, how do you know which nodes are related to which original images? If not, would you generate a tree for each image, and then do some type of KD-Tree Random Forest nearest neighbor queries to determine which trees are closest to each other in 3-D space? The image example might not be a good use case for KD-Trees since it's highly dimensional space, but I'm more using it to help explain the question I'm asking. Any guidance on practical applications of KD-Tree KNN queries for comparing objects is greatly appreciated. Thanks!
How to identify objects related to KD Tree data?
0
0
0
245
31,612,074
2015-07-24T13:39:00.000
0
0
0
0
python,theano,deep-learning
31,778,280
1
false
0
0
When pickling models, it is always better to save the parameters, and when loading, re-create the shared variables and rebuild the graph from them. This allows swapping the device between CPU and GPU. But you can pickle Theano functions. If you do that, pickle all associated functions at the same time. Otherwise, each of them will have a different copy of the shared variables: each call to load() will create new shared variables if they were pickled separately. This is a limitation of pickle.
1
0
1
I am looking for some suggestions about how to do continue training in theano. For example, I have the following: classifier = my_classifier() cost = () updates = [] train_model = theano.function(...) eval_model = theano.function(...) best_accuracy = 0 while (epoch < n_epochs): train_model() current_accuracy = eval_model() if current_accuracy > best_accuracy: save classifier or save theano functions? best_accuracy = current_accuracy else: load saved classifier or save theano functions? if we saved classifier previously, do we need to redefine train_model and eval_model functions? epoch+=1 #training is finished save classifier I want to save the current trained model if it has higher accuracy than previously trained models, and load the saved model later if the current trained model accuracy is lower than the best accuracy. My questions are: When saving, should I save the classifier, or theano functions? If the classifier needs to be saved, do I need to redefine theano functions when loading it, since classifier is changed. Thanks,
Theano continue training
0
0
0
246
31,612,336
2015-07-24T13:51:00.000
0
0
0
0
python,pygame,event-loop
32,638,692
1
false
0
1
Okay, I have got a solution which works like a charm. Since we have a server for handling some game events, we decided to grab a focus inside one game and share all keyboard events to other games through the server using web sockets.
1
0
0
There are two games (A and B), on the left and right screen sides. The game A responds to mouse clicks, the game B — to left/right keydowns. There is also a server which handles games behaviour: start(A), start(B) pause(A), pause(B) unpause(A) user clicks on window A, window A gets focus pause(A), unpause(B) user tries to press left/right keys, it obviously doesn't focus window B and I can't read keydown events from the game B I run games as two separate subprocesses (using subprocess.Popen I start a.py and b.py). Where a.py and b.py are simple pygame games that listen my server events inside their game loop. Is it possible to share event loop between two different pygame windowed apps? Or may be it's better to change focus in some OS-specific way?
How to switch focus between multiple pygame apps?
0
0
0
198
31,619,813
2015-07-24T21:08:00.000
5
0
1
0
ipython-notebook,autosave,jupyter
53,081,276
18
false
0
0
For me, it happens when all the cell's output is too long. Just clear some output to solve this.
12
48
0
I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours. The error says: "Last Checkpoint: 3 hours ago Autosave Failed!" I try to manually File>>Save and Checkpoint, and nothing changes. Help! Next to my Python 2 kernel name, there is a yellow box that says forbidden instead of edit. It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked.
iPython Notebook/Jupyter autosave failed
0.055498
0
0
51,059
31,619,813
2015-07-24T21:08:00.000
0
0
1
0
ipython-notebook,autosave,jupyter
65,117,945
18
false
0
0
Had the same problem. What worked for me was removing the "COALESCE" statement from one of the SQL queries that were part of the notebook. Super weird stuff, no idea how it makes sense.
12
48
0
I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours. The error says: "Last Checkpoint: 3 hours ago Autosave Failed!" I try to manually File>>Save and Checkpoint, and nothing changes. Help! Next to my Python 2 kernel name, there is a yellow box that says forbidden instead of edit. It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked.
iPython Notebook/Jupyter autosave failed
0
0
0
51,059
31,619,813
2015-07-24T21:08:00.000
13
0
1
0
ipython-notebook,autosave,jupyter
32,612,328
18
false
0
0
The problem is that the notebook was started with two different users. The most common scenario is the following: Start with an elevated user/root: sudo ipython notebook Do some work and then start with ipython notebook From #1 a hidden directory was created called .ipynb_checkpoints with root privileges. As a result you will not be able to save updates unless the notebook is running as root. To fix this simply delete the .ipynb_checkpoints directory
12
48
0
I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours. The error says: "Last Checkpoint: 3 hours ago Autosave Failed!" I try to manually File>>Save and Checkpoint, and nothing changes. Help! Next to my Python 2 kernel name, there is a yellow box that says forbidden instead of edit. It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked.
iPython Notebook/Jupyter autosave failed
1
0
0
51,059
31,619,813
2015-07-24T21:08:00.000
0
0
1
0
ipython-notebook,autosave,jupyter
57,022,240
18
false
0
0
I just had this problem. All I did was quit/logout of my multiple notebooks. Then closed the anaconda dashboard. Then relaunched everything. The only thing you gotta worry about is losing the work you have already done. For that I copied my code into notepad and just copied it right back.
12
48
0
I am working in iPython 3/Jupyter running multiple kernels and servers. As such, I often forget to personally save things as I jump around a lot. The autosave has failed for the past 3 hours. The error says: "Last Checkpoint: 3 hours ago Autosave Failed!" I try to manually File>>Save and Checkpoint, and nothing changes. Help! Next to my Python 2 kernel name, there is a yellow box that says forbidden instead of edit. It goes away when I click on it. I don't know if that has anything to do with the failure to save, but it doesn't change once clicked.
iPython Notebook/Jupyter autosave failed
0
0
0
51,059