Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
40,457,403 | 2016-11-07T03:31:00.000 | 0 | 0 | 1 | 0 | macos,python-2.7 | 40,571,852 | 1 | false | 0 | 0 | OK, so the solution for this problem is... when you download the Python updater for OS X from the Python website, the folder "Extras" is not included. Manually copying the folder from the previous version, e.g. /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/, to /System/Library/Frameworks/Python.framework/Versions/2.7/ will solve the problem. Don't forget to restore the access rights after copying the folder. | 1 | 0 | 0 | Some functions of apps require Python 2.7.12, others require 2.7.5. Is it possible to use both of them? The directory in Python's Framework (/System/Library/Frameworks/Python.framework/Versions) does not indicate the individual build, only the main build (2.7 = 2.7.5). How do apps handle different builds? | Multiple Python Versions on OS X | 0 | 0 | 0 | 53
40,465,084 | 2016-11-07T12:22:00.000 | 1 | 0 | 0 | 0 | python,matrix,vector,machine-learning,scikit-learn | 40,473,264 | 1 | true | 0 | 0 | In short - no, it is not possible. Mostly because some arithmetical operations are not even performed in Python when you use scikit-learn - they are actually performed by C-based extensions (like the libsvm library). You could monkey-patch numpy's .dot to do what you want, but you have no guarantee that scikit-learn will still work, since it performs some operations using numpy and others using C extensions. | 1 | 0 | 1 | I've looked through the documentation over at scikit-learn and I haven't seen a straightforward way to replace the matrix-vector product evaluations during propagation with a custom evaluation call.
Is there a way to do this that's already part of the API...or are there any tricks that would allow me to inject a custom matrix vector product evaluator? | Python & scikit Learn: replace matrix vector product during training with custom call | 1.2 | 0 | 0 | 38 |
40,467,306 | 2016-11-07T14:14:00.000 | 1 | 0 | 0 | 0 | python,pdf,printing,barcode,barcode-printing | 40,477,133 | 2 | false | 1 | 0 | Just FWIW, Code-128 is NOT a 2D barcode, it is a "simple" 1D barcode.
That said, there are Code-128 fonts around, which means you can use them in PDF form fields, which you can fill, maybe flatten the document, and send to the printer. No need to fiddle around with layout after you have created your base PDF.
To fill, you could use command line tools, such as FDFMerge by Appligent, where you can easily create data files from your database system, and merge that data with the base PDF. | 2 | 1 | 0 | What tools do I need to render 60,000+ unique Code-128 barcodes and arrange them in a grid in a PDF file for volume printing? Printing this many barcodes digitally seems like a challenge on its own, so there must be some lore from folks who have dealt with warehousing and bulk labelling.
Existing projects and commercial products focus on barcode generation instead of layout and printing. I messed around with some Python that renders a PDF, but the tough part is dealing with various labelling templates and understanding that printers print better or worse barcodes depending on the rotation of the heads.
Should I even be using PDF for this? I have spent too much time already trying to line up the output of an HTML page for a crappy labelling template. I would appreciate a link to an open source library or even a commercial tool for laying out barcodes at this scale. | Generate 2D barcodes and arrange on a grid for bulk printing | 0.099668 | 0 | 0 | 657
40,467,306 | 2016-11-07T14:14:00.000 | 0 | 0 | 0 | 0 | python,pdf,printing,barcode,barcode-printing | 41,408,481 | 2 | false | 1 | 0 | We use LaTeX with the textpos package to get absolute positioning. To create the actual barcode symbols we use the pst-barcode package. We generate the LaTeX source file in a scripting language and then run pdflatex to get the PDF with the symbols. It is really easy when using LaTeX. | 2 | 1 | 0 | What tools do I need to render 60,000+ unique Code-128 barcodes and arrange them in a grid in a PDF file for volume printing? Printing this many barcodes digitally seems like a challenge on its own, so there must be some lore from folks who have dealt with warehousing and bulk labelling.
Existing projects and commercial products focus on barcode generation instead of layout and printing. I messed around with some Python that renders a PDF, but the tough part is dealing with various labelling templates and understanding that printers print better or worse barcodes depending on the rotation of the heads.
Should I even be using PDF for this? I have spent too much time already trying to line up the output of an HTML page for a crappy labelling template. I would appreciate a link to an open source library or even a commercial tool for laying out barcodes at this scale. | Generate 2D barcodes and arrange on a grid for bulk printing | 0 | 0 | 0 | 657
40,468,726 | 2016-11-07T15:24:00.000 | 0 | 0 | 1 | 0 | python,pyinstaller | 40,638,224 | 1 | false | 0 | 0 | I think your code has a function that prints some data with a codec which the Windows shell does not support displaying. Remove it and try again. (I cannot comment because I don't have enough rep, so I wrote it here.) | 1 | 0 | 0 | I get a UnicodeDecodeError when I try to install PyInstaller.
The error message reads:
UnicodeDecodeError: 'cp949' codec can't decode byte 0xe2 in position 208687: illegal multibyte sequence
When I google this error, it looks like an error with the codec used to read the file.
I tried some of the solutions found online, but they didn't work.
How can I fix this? | python pyinstaller UnicodeDecodeError cp949 | 0 | 0 | 0 | 725 |
40,468,809 | 2016-11-07T15:28:00.000 | 0 | 0 | 1 | 1 | python,bash,shell,debugging,pycharm | 40,469,976 | 1 | true | 0 | 0 | Does the script need to be run from bash? If not, you could add a new Python run configuration (Run -> Edit configurations...). This can be run in PyCharm's debug mode and will stop at breakpoints defined in the GUI. Rather than having to use set_trace, you can toggle the 'Show Python Prompt' button in the console view to get a prompt so you can interact with the program at the breakpoint. | 1 | 2 | 0 | I wonder if there is any possibility to use the PyCharm inline Debugger when I run a script out of the Terminal.
So I hope to do the following:
Set breakpoint in PyCharm Editor
Run ./script.sh -options from Terminal
When script.sh calls a pyfile.py Python script with the breakpoint in it, it should stop there
Giving me the possibility to use the visual debugging features of PyCharm
The above does not work right now. My only chance is to do:
import pdb
pdb.set_trace()
Then I could work with the pdb - but clearly I don't want to miss the great visual capabilities of the PyCharm Debugger.
I saw that PyCharm uses pydevd instead of pdb. Is there maybe a similar possibility to invoke pydevd and work with the visual debugging then?
Thank you for your help in advance.
Best regards,
Manuel | Using the inline Debugger of PyCharm when running a bash-Script (.sh) within the PyCharm Terminal | 1.2 | 0 | 0 | 1,370 |
40,469,724 | 2016-11-07T16:17:00.000 | 0 | 0 | 0 | 0 | python,outlook,vsto,exchangewebservices | 40,484,743 | 1 | false | 0 | 0 | OK,
I have found a kind of solution:
Outlook only pulls the message again if the message is moved to a different folder,
so I moved the message to another folder (Junk) and back to the original folder,
and then Outlook fetched the updated message.
I know it's not the best solution though. | 1 | 0 | 0 | I am using Exchange Web Services (EWS) with Python.
I used "UpdateItem" (a SOAP request) to update a message's (IPF.Note) body and subject.
I can see the changes in OWA, but Outlook is not fetching the updated message under any circumstances.
Is there any property or another method I need to use to make Outlook notice the change and download the message again?
I tried to use the Update Folder button and still nothing.
I am using Outlook 2016 with Exchange Online (Office 365). | EWS updating message body not triggering outlook redownload of the message | 0 | 0 | 1 | 77
40,470,825 | 2016-11-07T17:16:00.000 | 3 | 0 | 0 | 0 | android,python,kivy,buildozer | 40,470,942 | 1 | false | 0 | 1 | Use buildozer android_new debug instead; you are using android, which builds with the old toolchain and does not support Python 3. | 1 | 2 | 0 | I'm trying to build my Kivy application against Python 3.
First I downloaded the Crystax NDK and set ANDROIDNDK to its location. I added python3crystax to my requirements in the buildozer.spec and launched the build with:
buildozer android debug deploy run logcat
This command results in the following error:
Command failed: pip install --target=/home/cedric/Documents/Development/python/kivyapp/.buildozer/applibs python3crystax
If I try to install python3crystax manually with pip, it seems that this package doesn't even exist.
Trying it with
buildozer android debug deploy run logcat
causes the following error:
ERROR: The colorama Python module could not be found, please install
version 0.3.3 or higher
ERROR: The appdirs Python module could not be found, please install
it.
ERROR: The sh Python module could not be found, please install version
1.10 or higher
ERROR: The jinja2 Python module could not be found, please install it.
All modules are installed with their current version.
Can anybody help me to solve this problem?
Thanks, Cedric | kivy buildozer can't compile application targeting python3 | 0.53705 | 0 | 0 | 929
40,471,023 | 2016-11-07T17:27:00.000 | 0 | 0 | 1 | 0 | python,google-app-engine | 40,473,578 | 1 | true | 1 | 0 | One case would be inside a transaction in which you want to read some related entity values but you don't care whether those particular entities are accessed consistently or not (in the context of that transaction).
In such a case, reading from the datastore would unnecessarily include those related entities in the transaction, which contributes to datastore contention and could potentially cause various per-transaction limits to be exceeded.
Reading memcached values for those related entities instead would not include the entities in the transaction itself. Now, I'm not 100% certain if this is applicable to ndb's memcache copy of an entity (I don't even know how to access that); I used my own memcache copies of such entities, updated whenever I modify these entities. | 1 | 0 | 0 | If reads/writes into the ndb datastore are automatically cached both in-context and via memcache, in what cases would you want to call the memcache API directly (in the context of the datastore)?
To elaborate, would I ever need to set the memcache for a particular datastore read/write and get reads from the memcache instead of the datastore directly? | Google App Engine ndb memcache when to use memcache | 1.2 | 0 | 0 | 148
40,471,132 | 2016-11-07T17:33:00.000 | 5 | 0 | 0 | 0 | python,django | 40,474,452 | 2 | true | 1 | 0 | Yes, generally POST is a better way of submitting data than GET. There is a bit of confusion about terminology in Django: while Django is indeed MVC, models are models, but views are in fact controllers and templates are views. Since you are going to use AJAX to submit and retrieve the data, you don't care about templates. So what you most likely want is something like this
in your urls.py as part of your urlpatterns variable
url(r'mything/$', MyView.as_view())
in your views.py
from django.views import View
from django.http import HttpResponse
class MyView(View):
    def post(self, request):
        data = request.POST
        ... do your thing ...
        return HttpResponse(results)
and in your javascript
jQuery.post('/mything/', data, function() { whatever you do here }) | 1 | 2 | 0 | I am working on my first Django project, which is also my first backend project. In the tutorials/reading I have completed, I haven't come across passing information back to Django without a ModelForm.
My intention is to calculate a value on a page using JavaScript and pass it to Django when a user hits a submit button on that page. The submit button will also be a link to another page. I know I could process the information in a view via the URL if I knew how to pass the information back to Django.
I'm aware that Django uses MVC and, as I have my models and views in place, I am led to believe that this has something to do with controllers.
Basically, I would like to know how to pass information from a page to Django as a user follows a link to another page. I understand that this isn't the place for long step-by-step tutorials on specific topics, but I would appreciate any links to resources on this subject. I don't know what this process is even called, so I can't search documentation for it.
EDIT:
From further reading, I think that I want to be using the submit button to GET or POST the value. In this particular case, POST is probably better. Could someone confirm that this is true? | Django - beginner- what is the process for passing information to a view via a url? | 1.2 | 0 | 0 | 47 |
40,471,747 | 2016-11-07T18:10:00.000 | 2 | 0 | 0 | 0 | python,algorithm,graph-theory,networkx,independent-set | 40,473,685 | 1 | false | 0 | 0 | Unfortunately, the problem of finding the minimum independent dominating set is NP-complete. Hence, any known algorithm which is sound and complete will be inefficient.
A possible approach is to use an incomplete algorithm (aka local search).
For example, the following algorithm is known to have a factor (1 + log|V|) approximation:
1. Choose a node with the maximal number of neighbors and add it to the dominating set.
2. Remove the node and all of its neighbors from the graph.
3. Repeat until there are no more nodes in the graph. | 1 | 1 | 1 | I developed an algorithm that finds the minimum independent dominating set of a graph based on a distance constraint. (I used Python and NetworkX to generate graphs and get the pairs)
The algorithm uses a brute force approach:
Find all possible pairs of edges
Check which nodes satisfy the distance constraint
Find all possible independent dominating sets
Compare the independent dominating sets found and find the minimum dominating set
For a small number of nodes it wouldn't make a difference, but for large numbers the program is really slow.
Is there any way that I could make it run faster using a different approach?
Thanks | Finding minimum independent dominating set using a greedy algorithm | 0.379949 | 0 | 0 | 1,579
40,472,585 | 2016-11-07T19:01:00.000 | 0 | 0 | 0 | 0 | android,python,tensorflow | 40,672,019 | 3 | false | 0 | 1 | I tried to use Python in my Android application with some 3rd-party terminals like SL4A and QPython. Those support running Python files directly in an Android application, so we have to install the SL4A APKs and call that intent. But these only work up to a point, I guess.
When I tried to import tensorflow in that terminal, it showed that the module was not found. So I think tensorflow will not work in these terminals.
So I am trying to create a .pb file from the Python files, which work on the Unix platform. We then need to include that output .pb file in our Android application and change the C++ code to work with that .pb file. That is my current plan; let's see whether it works or not. I will update soon if it does. | 1 | 1 | 1 | The sample app given by Google for TensorFlow on Android is written in C++.
I have a TensorFlow application written in Python. This application currently runs on the desktop. I want to move the application to the Android platform. Can I use Bazel to build the application, which is written in Python, directly for Android? Thanks.
Also, a sample TensorFlow app in Python on Android would be much appreciated. | Tensorflow on android: directly build app in python? | 0 | 0 | 0 | 534
40,474,538 | 2016-11-07T21:01:00.000 | 1 | 0 | 1 | 0 | python,multiprocessing,python-multiprocessing | 40,474,832 | 1 | true | 0 | 0 | This depends almost entirely on the problem you try to tackle. If you distribute a large task to several workers and one unpredictably gets a much larger chunk than the others, you will have this situation.
There are several options to avoid it:
Try to estimate the effort for each chunk more precisely. Depending on your task, this might be possible. The chunks with the most predicted effort should be split.
A very common way to approach this is to split the task into lots of very small chunks, many more than there are workers. Then feed all the chunks into a queue and let your workers eat their chunks from the queue. This way, when a worker receives an easy chunk it will finish it fast and take the next chunk from the queue at once, thus not ending up idle while other workers seem to be "stuck" with their harder chunk.
A real deadlock will of course not be fixed by any of these approaches. | 1 | 0 | 0 | Currently I am working on an asynchronous gradient algorithm with the Python multiprocessing module; the main idea is that I run multiple processes that update an array of global parameters asynchronously. I have finished most of the framework, but I have a problem: some processes seem to "get stuck" sometimes while others are still running, which makes this algorithm less effective. So I am wondering if there are good ways to make sure that they use roughly the same amount of time?
Thanks! | How to make sure each process uses roughly same amount of time when using multiprocessing module in Python? | 1.2 | 0 | 0 | 37 |
40,481,810 | 2016-11-08T08:03:00.000 | 2 | 0 | 0 | 0 | python,pyqt,qtablewidget | 40,492,017 | 2 | false | 0 | 1 | len(tablewidget.selectedIndexes()) should probably do what you want. | 1 | 2 | 0 | In the Python plugin I'm developing, I need to retrieve the number of selected rows in a QTableWidget. I can loop through each row of the QTableWidget and check whether it is selected or not. Instead, is there a straightforward way to get the selected-rows count of a QTableWidget in PyQt?
Something like:
QTableWidget.selectedRowsCount() | Getting Selected Rows Count in QTableWidget - PyQt | 0.099668 | 0 | 0 | 10,499 |
40,482,242 | 2016-11-08T08:31:00.000 | 0 | 0 | 0 | 1 | python,angularjs,templates,tornado | 40,517,136 | 1 | false | 1 | 0 | This is not really a Tornado question, as this is simply how Web works.
One possible solution is to have only one form, but display its fields so that they look like two forms; in addition, have two separate submit buttons, each with its own name and value. Now, when you click on either button the whole form will be submitted, but in the handler you can process only the fields associated with the clicked button, while still displaying values in all the fields. | 1 | 0 | 0 | I have two forms, when I submit form#1 I get some corresponding file, but when I submit form#2 thenafter, the corresponding file gets shown but form#1 goes empty. So basically I want some thing like a SPA(e.g angular) but I am taking form#1 and form#2 as separate requests routes and each render my index.html every time, so form#2 is wiped off when I submit form#1 and vice-versa.
I dont want a working code but any ideas on how I do that with Tornado (not angular, or say Tornado + Angular ? )
I think one way for example is to handle these requests via a controller and do an AJAX post to corresponding Tornado Handler, which after the file is rendered, displays / serves the very file back again. But this uses AngularJS as a SPA. Any other solution possible?
Thanks in Advance | ways to avoid previous reload tornado | 0 | 0 | 0 | 47 |
40,485,285 | 2016-11-08T11:06:00.000 | 4 | 0 | 0 | 0 | python,pandas,scikit-learn,random-forest | 40,523,772 | 4 | true | 0 | 0 | Based on the documentation and previous experience, there is no way to get a list of the features considered in at least one of the splits.
Is your concern that you do not want to use all your features for prediction, just the ones actually used for training? In that case I suggest listing the feature_importances_ after fitting and eliminating the features that do not seem relevant. Then train a new model with only the relevant features and use those features for prediction as well. | 1 | 12 | 1 | Is there a way to retrieve the list of feature names used for training of a classifier, once it has been trained with the fit method? I would like to get this information before applying it to unseen data.
The data used for training is a pandas DataFrame and in my case, the classifier is a RandomForestClassifier. | Retrieve list of training features names from classifier | 1.2 | 0 | 0 | 17,091 |
40,485,380 | 2016-11-08T11:10:00.000 | 1 | 0 | 1 | 1 | python,ubuntu,pip | 47,112,240 | 12 | false | 0 | 0 | I could not install pip 9 for Python 3 on Ubuntu 16 with pip or pip3.
I solved it with: sudo apt-get upgrade python3-pip (you may need to run apt update first).
pip3 -V
pip 9.0.1 from /home/roofe/.local/lib/python3.5/site-packages (python 3.5)
roofe@utnubu:~$ pip install --upgrade pip
Collecting pip
Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB)
100% |████████████████████████████████| 1.3MB 14kB/s
Installing collected packages: pip
Successfully installed pip-9.0.1
Note: the above command only installed successfully for Python 2.
roofe@utnubu:~$ pip3 install --upgrade pip3
Collecting pip3
Could not find a version that satisfies the requirement pip3 (from versions: )
No matching distribution found for pip3
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
roofe@utnubu:~$ pip install --upgrade pip3
Collecting pip3
Could not find a version that satisfies the requirement pip3 (from versions: )
No matching distribution found for pip3
You are using pip version 8.1.1, however version 9.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command. | 3 | 25 | 0 | OS: ubuntu 16.04LTS
Python: 2.7.12 + Anaconda2-4.2.0 (64 bit)
I typed pip install --upgrade $TF_BINARY_URL to install tensorflow, but the terminal showed that my pip version was 8.1.1, while version 9.0.1 is available.
Then I typed pip install --upgrade pip to upgrade but it showed
Requirement already up-to-date: pip in ./anaconda2/lib/python2.7/site-packages,
I still can't use pip version 9.0.1 to install tensorflow. Does anyone know what's going on? | can't upgrade pip to the newest version 9.0.1 (OS: ubuntu 16.04LTS) | 0.016665 | 0 | 0 | 84,686
40,485,380 | 2016-11-08T11:10:00.000 | 3 | 0 | 1 | 1 | python,ubuntu,pip | 49,766,580 | 12 | false | 0 | 0 | If you're only installing things to one user account it is also possible to use pip install --user --upgrade pip avoiding the question of to sudo or not to sudo... just be careful not to use that account with system wide installation of pip goodies. | 3 | 25 | 0 | OS: ubuntu 16.04LTS
Python: 2.7.12 + Anaconda2-4.2.0 (64 bit)
I typed pip install --upgrade $TF_BINARY_URL to install tensorflow, but the terminal showed that my pip version was 8.1.1, while version 9.0.1 is available.
Then I typed pip install --upgrade pip to upgrade but it showed
Requirement already up-to-date: pip in ./anaconda2/lib/python2.7/site-packages,
I still can't use pip version 9.0.1 to install tensorflow. Does anyone know what's going on? | can't upgrade pip to the newest version 9.0.1 (OS: ubuntu 16.04LTS) | 0.049958 | 0 | 0 | 84,686
40,485,380 | 2016-11-08T11:10:00.000 | 44 | 0 | 1 | 1 | python,ubuntu,pip | 41,006,251 | 12 | false | 0 | 0 | sudo -H pip install --upgrade pip
sudo is "super user do". This will allow you to execute commands as a super user. The H flag tells sudo to keep the home directory of the current user. This way when pip installs things, like pip itself, it uses the appropriate directory. | 3 | 25 | 0 | OS: ubuntu 16.04LTS
Python: 2.7.12 + Anaconda2-4.2.0 (64 bit)
I typed pip install --upgrade $TF_BINARY_URL to install tensorflow, but the terminal showed that my pip version was 8.1.1, while version 9.0.1 is available.
Then I typed pip install --upgrade pip to upgrade but it showed
Requirement already up-to-date: pip in ./anaconda2/lib/python2.7/site-packages,
I still can't use pip version 9.0.1 to install tensorflow. Does anyone know what's going on? | can't upgrade pip to the newest version 9.0.1 (OS: ubuntu 16.04LTS) | 1 | 0 | 0 | 84,686
40,486,931 | 2016-11-08T12:25:00.000 | 0 | 0 | 1 | 0 | python | 40,493,309 | 1 | false | 0 | 0 | In short: you can't do that.
Longer answer: this apparently simple task involves not only what would be somewhat complicated code (15-50 lines) just to separate your original image into arbitrarily shaped parts, paste them on a default background, choose an appropriate name, and save each part.
It involves object recognition - just like humans do - so it is a bit strong to say "you can't do that", but it involves A.I. techniques for image recognition and a lot, and I mean a lot, of work in getting your neural-networking tool correctly coupled and trained to identify distinct objects. (A lot like: years with a team, not hours for a person.)
And anyway, even if you knew where to start from, PIL alone does not fit that - PIL is a quick image manipulation library with a handful of image methods - although most of the machinery for Python A.I. image processing can use PIL to convert an image to a NumPy array representation where the "real algorithms" will be run.
Now, if you don't care that the program knows which objects are in the image, just that distinct objects in the image are saved to different files, one could come up with a series of image processing filters to segment the image without resorting to A.I. at all - that would still be a lot of work, like 3 days up to two weeks for someone with some experience in image processing, or a couple of hours if the person's skills are really sharp and the images have good contrast with the background and no object superposition. | 1 | 0 | 0 | I would like to know what is the best way to split a given image into multiple images where each image includes one object.
For example, in case the image includes a tree, a dog, a cat, and a ball.
The output should be four different images of the tree, dog, cat, and ball.
BTW, I'm using the PIL module in Python.
Thanks | Python - How to split image using PIL | 0 | 0 | 0 | 578 |
40,490,923 | 2016-11-08T15:41:00.000 | 2 | 0 | 0 | 1 | python,google-app-engine,data-modeling | 40,512,917 | 1 | true | 0 | 0 | Per comments above it looks like what you need is isinstance(user, Employee) / isinstance(user, Manager). | 1 | 1 | 0 | It looks like the ndb.polymodel.PolyModel class used to have a class_name() method but as far as I can tell it has been deprecated.
I have a data structure using polymodel that is in the form of a parent User class with two child classes - Employee and Manager, and I want to do some basic checks throughout to determine if the User object is of the class Employee or class Manager.
At the moment, I am just calling the object's .__class__.__name__ attribute directly, but I am wondering why the PolyModel.class_name() method was deprecated. Is there a better way to determine class inheritance? | Determining object class in ndb PolyModel Google App Engine | 1.2 | 0 | 0 | 42 |
40,491,145 | 2016-11-08T15:51:00.000 | 8 | 0 | 0 | 0 | python,flask | 40,491,639 | 1 | true | 1 | 0 | Flask does nothing to request data besides parsing it from the raw HTTP request. It has no way to know what constraints an arbitrary function has. It's up to you to check any constraints. All data will be strings by default. Don't use eval or exec. Use your database driver's parametrized queries to avoid SQL injection. If you render a template with Jinja it will escape data for use in HTML by default. | 1 | 7 | 0 | Should data which comes from the user (like cookie values, variable parts in a route, query args) be treated as insecure and processed in a particular way? Does Flask already sanitize/escape input data so that passing it to a function test(input_data) is secure? | Is request data already sanitized by Flask? | 1.2 | 0 | 0 | 3,019
40,491,298 | 2016-11-08T15:57:00.000 | 5 | 0 | 1 | 0 | python,statistics | 43,329,072 | 3 | false | 0 | 0 | I have a simple statistical solution :
Confidence intervals are based on the standard error.
The standard error in your case is the standard deviation of your 1000 bootstrap means. Assuming a normal distribution of the sampling distribution of your parameter (the mean), which should be warranted by the properties of the Central Limit Theorem, just multiply the equivalent z-score of the desired confidence interval by the standard deviation. Therefore:
lower boundary = mean of your bootstrap means - 1.96 * std. dev. of your bootstrap means
upper boundary = mean of your bootstrap means + 1.96 * std. dev. of your bootstrap means
95% of cases in a normal distribution sit within 1.96 standard deviations from the mean
Hope this helps. | 2 | 9 | 1 | I'm trying to calculate the confidence interval for the mean value using the bootstrap method in Python. Let's say I have a vector a with 100 entries and my aim is to calculate the mean value of these 100 values and its 95% confidence interval using bootstrap. So far I have managed to resample 1000 times from my vector using the np.random.choice function. Then for each bootstrap vector with 100 entries I calculated the mean. So now I have 1000 bootstrap mean values and a single sample mean value from my initial vector, but I'm not sure how to proceed from here. How could I use these mean values to find the confidence interval for the mean value of my initial vector? I'm relatively new to Python and it's the first time I have come across the bootstrap method, so any help would be much appreciated. | How to calculate 95% confidence intervals using Bootstrap method | 0.321513 | 0 | 0 | 15,818
40,491,298 | 2016-11-08T15:57:00.000 | 8 | 0 | 1 | 0 | python,statistics | 40,491,405 | 3 | true | 0 | 0 | You could sort the array of 1000 means and use the 50th and 950th elements as the 90% bootstrap confidence interval.
Your set of 1000 means is basically a sample of the distribution of the mean estimator (the sampling distribution of the mean). So, any operation you could do on a sample from a distribution you can do here. | 1 | 9 | 1 | I'm trying to calculate the confidence interval for the mean value using the bootstrap method in Python. Let's say I have a vector a with 100 entries and my aim is to calculate the mean value of these 100 values and its 95% confidence interval using bootstrap. So far I have managed to resample 1000 times from my vector using the np.random.choice function. Then for each bootstrap vector with 100 entries I calculated the mean. So now I have 1000 bootstrap mean values and a single sample mean value from my initial vector, but I'm not sure how to proceed from here. How could I use these mean values to find the confidence interval for the mean value of my initial vector? I'm relatively new to Python and it's the first time I have come across the bootstrap method, so any help would be much appreciated. | How to calculate 95% confidence intervals using Bootstrap method | 1.2 | 0 | 0 | 15,818
40,491,707 | 2016-11-08T16:17:00.000 | 5 | 0 | 0 | 0 | python,machine-learning,scikit-learn | 40,492,219 | 1 | true | 0 | 0 | I believe you are asking for the cluster assignment of each item in your dataset, X.
You can use the labels_ attribute. db.labels_ Each index here corresponds to the same index in X, so you can see the assignments. | 1 | 3 | 1 | I use dbscan scikit-learn algorithm for clustering.
db = DBSCAN().fit(X) returns 8 clusters, for example. My goal is to recover the components cluster by cluster. X is a vector of vectors, and what I mean by cluster members is the sub-vectors of X.
Is there anyone who can help me? | Get cluster members/elements clustering with scikit-learn DBSCAN | 1.2 | 0 | 0 | 3,915
40,492,518 | 2016-11-08T17:01:00.000 | 11 | 1 | 1 | 0 | python,python-2.7,python-3.x,python-import,dynamic-linking | 40,492,802 | 1 | true | 0 | 0 | No, loading a pure-Python module is not considered a form of dynamic linking.
Traditional dynamic linking loads machine code into a new chunk of memory, and multiple executable processes can be given access (the dynamically linked library only needs to be loaded once, virtual memory takes care of the rest). The linker connects the executable and the dynamic library at runtime.
Loading a Python module, on the other hand, loads the bytecode for the module into the Python process itself (Python will also compile the source code at this point if no bytecode cache is available). The loaded modules are not shared between processes. No translation has to take place; running the bytecode produces new objects in the Python heap that all existing code in the interpreter can interact with.
No linker is involved in this process and there is no separate memory; to the OS there are no separate sections of memory to be managed, as the module is simply part of the Python process memory. | 1 | 11 | 0 | In the parlance of POSIX and general technical software development: does an import of a purely Python (not Cython or C compiled libraries) module constitute dynamic linking? | Is an import in python considered to be dynamic linking? | 1.2 | 0 | 0 | 7,778
40,492,606 | 2016-11-08T17:05:00.000 | 2 | 1 | 0 | 1 | python,linux,copy-paste | 40,493,312 | 1 | false | 0 | 0 | Short answer: no, you can't.
Long answer: the component that does "copy&paste" is not defined by the distribution alone. This is a function of the desktop system / window manager. In other words: there is no such thing as "the" default system file copier for "Linux".
There are file managers like Dolphin for KDE or Nautilus on GNOME that all come with their own implementation of file copy. Some good, some not so much (try copying a whole directory with thousands of files with Nautilus).
But the real question here: why do you want to do that? What makes you think that your file-copy implementation that requires an interpreter to run ... is suited to replace the defaults that come with Linux? Why do you think that your re-invention of an existing wheel will be better at doing anything?!
Edit: if your reason to "manage" system copy is some attempt to prevent the user from doing certain things ... you should rather look into file permissions and such ideas. Within a Linux environment, you simply can't manage what the user is doing in the first place by manipulating some tools. Instead: understand the management capabilities that the OS offers to you, and use those! | 1 | 0 | 0 | I have coded a Python app to manage file copies on Linux, and I want to know how I can get it to process copy/paste calls (like those launched by pressing Ctrl+C / Ctrl+V, right click / Copy..., or drag and drop) instead of using the system copier.
Can I do this for all deb-based Linux distributions, or does it work differently on Ubuntu, Mint, Debian, and so on?
Forgive my English and thanks in advance! | Change default system file copier in Linux | 0.379949 | 0 | 0 | 205 |
40,503,284 | 2016-11-09T09:12:00.000 | 0 | 0 | 1 | 0 | python,text,editor | 40,503,592 | 2 | false | 0 | 0 | Try installing the Terminal plugin and run Python from Sublime Text. | 1 | 0 | 0 | I want to be able to run the Python interpreter from inside Sublime Text so I can run simple snippets of code. Is there any easy way of doing this? Or do I have to open it in another window? | Is there any way to run the python interpreter from sublime text 3? | 0 | 0 | 0 | 5,838
40,505,396 | 2016-11-09T11:02:00.000 | 0 | 1 | 0 | 0 | python,python-3.x,environment-variables,telegram-bot,python-telegram-bot | 40,550,658 | 1 | false | 0 | 0 | You can use several instances of your bot: one for development and tests, one for production. When you are done with new features in the development branch, build and run it and stop your production bot at the same time. If you are satisfied with the development version, run it in production.
You can copy your code from the production bot, create a new bot, and work with it in development and testing. | 1 | 2 | 0 | I have several telegram bots which are running in production.
But I also have to develop new features at the same time. Are there any best practices for using environments (like development, test, and production) in telegram bots that will allow me to develop and test new features without corrupting the stable version's behaviour?
I am using python3 and the python-telegram-bot library. | Are there any best practices to use environments in telegram bot? | 0 | 0 | 0 | 441
40,507,261 | 2016-11-09T12:40:00.000 | 0 | 0 | 0 | 1 | python,luigi | 40,868,113 | 1 | true | 0 | 0 | I would go ahead and create an unique output for the task, even if the output is not used in your further processing. It would just be a marker that the task with the particular set of inputs has completed successfully. You could do a simple FileTarget, a PostgresTarget, etc. | 1 | 2 | 0 | As part of a Luigi pipeline we want to notify microservices waiting for the data being computed using a POST request.
Until now we were using the RunAnywayTarget but it is a problem if we fire up Luigi faster than the rate of data change. So my question is,
what is the best pattern to create a task that does something in the pipeline but doesn't create any piece of data, like making a POST request to a REST service, sending a message to Kafka, etc.?
I know that I could create a task with no output that does the request in the run method, but then how should this NotificationTask be re-run again if for some reason the end service failed during the first run? The dependencies will be there and it won't be run again. | How to create non persisted tasks in Luigi? | 1.2 | 0 | 0 | 232 |
40,510,025 | 2016-11-09T15:02:00.000 | 1 | 0 | 1 | 1 | python,python-2.7,logic,command-prompt | 40,510,507 | 1 | true | 0 | 0 | If you are in Linux, you could pause the program with Ctrl-Z (and either resume it with fg, or send it to continue its work in background with bg).
Considering you use the command prompt, I assume you are on Windows; there's no method I know of there. You might try to use a new 'cmd' window and minimize it (and maybe change its priority from Task Manager). | 1 | 0 | 0 | Is there any combination of keys to pause and resume the program execution in the command prompt?
As I have a big program to run that takes 30 minutes to complete, it would be helpful if I could pause in the middle of the program and resume it when needed. | To pause and resume the program execution in the command prompt | 1.2 | 0 | 0 | 1,809
40,511,920 | 2016-11-09T16:40:00.000 | 3 | 0 | 1 | 0 | python-2.7,modelica,fmi | 40,543,983 | 3 | false | 0 | 0 | The problem is that pyfmi.fmiFMUModelCS2 is a Cython class dependent on external libraries which makes it unpickable. So it is not possible unfortunately.
If you want to use multiprocessing, the only way forward that I see is that you first create the processes and then load the FMUs into the separate processes. In this way you do not need to pickle the classes. | 1 | 4 | 1 | I am trying to simulate multiple Modelica FMUs in parallel using python/pyfmi and multiprocessing. However, I am not able to return any pyfmi FMI objects from the subprocesses once the FMUs are initialized. It seems that pyfmi FMI objects (e.g. pyfmi.fmi.FMUModelCS2 or pyfmi.fmi.FMUState2) are not picklable. I also tried dill to pickle, which doesn't work for me either. With dill the objects are picklable though, meaning no error, but somehow corrupted if I try to reload them afterwards. Does anyone have an idea of how to solve this issue? Thanks! | Using pyfmi with multiprocessing for simulation of Modelica FMUs | 0.197375 | 0 | 0 | 1,150
40,513,466 | 2016-11-09T18:15:00.000 | 0 | 0 | 0 | 0 | python,tensorflow | 40,789,964 | 3 | false | 0 | 0 | Please check your sample's version. I met the same problem and finally solved it. I found my tf version is 0.11, but I downloaded the master one,
then I compare the code asyntax difference. | 2 | 3 | 1 | I am trying to run the Tensorflow for Poets sample. I pass the following:
python examples/image_retraining/retrain.py --bottlenext_dir=tf_files/bottlenecks --how_many_training_steps 500 --model_dir=tf_files/inception --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --image_dir tf_files/flower_photos
I get the error
File "examples/image_retraining/retrain.py", line 1013, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'
When I check the source of app.py it shows argv as an argument. According to t.version I am running 0.11.0rc0
Any ideas? | tensorflow retrain.py app.run() got unexpected keyword argument 'argv' | 0 | 0 | 0 | 3,375 |
40,513,466 | 2016-11-09T18:15:00.000 | -1 | 0 | 0 | 0 | python,tensorflow | 40,622,206 | 3 | false | 0 | 0 | You can also specifically checkout just the working fully_connected_feed.py file from the r0.11 branch by using the git command:
git checkout 5b18edb fully_connected_feed.py
NOTE: You need to be in the mnist/ directory to use this command | 2 | 3 | 1 | I am trying to run the Tensorflow for Poets sample. I pass the following:
python examples/image_retraining/retrain.py --bottlenext_dir=tf_files/bottlenecks --how_many_training_steps 500 --model_dir=tf_files/inception --output_graph=tf_files/retrained_graph.pb --output_labels=tf_files/retrained_labels.txt --image_dir tf_files/flower_photos
I get the error
File "examples/image_retraining/retrain.py", line 1013, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
TypeError: run() got an unexpected keyword argument 'argv'
When I check the source of app.py it shows argv as an argument. According to t.version I am running 0.11.0rc0
Any ideas? | tensorflow retrain.py app.run() got unexpected keyword argument 'argv' | -0.066568 | 0 | 0 | 3,375 |
40,513,969 | 2016-11-09T18:48:00.000 | 1 | 0 | 0 | 0 | python,sockets | 40,514,101 | 2 | false | 0 | 0 | A socket is like two unidirectional pipes. You won't ever read back data that you wrote. You'll only get data written by the other side. | 1 | 1 | 0 | Which is the better analogy for describing the communication channel between two INET sockets:
one two-directional "pipe"
two unidirectional "pipes"
If I'm sending something to a two-directional "pipe" and then right away try to receive something from there, I'm expecting to get back what I just sent (unless the other end managed to consume it in the meanwhile).
If there are two unidirectional pipes, one for sending and other for receiving (and vice versa for the other end), then I expect writes in one end don't affect the reads in the same end.
I'm new to sockets and after reading Python Socket HOWTO I wasn't able to tell which model is being used. I tried to deduce it by an experiment, but I'm not sure I set it up correctly.
So, can sending in one end affect receiving in the same end, or are these directions separated as if there were two "pipes"? | Sockets analogy: a pipe or two pipes? | 0.099668 | 0 | 1 | 98 |
40,514,084 | 2016-11-09T18:57:00.000 | 5 | 0 | 1 | 0 | python,selenium,phantomjs,pip | 40,514,085 | 1 | false | 1 | 0 | Here are the answers:
1) sudo apt-get install python-pip
2) sudo pip install selenium
3) sudo apt-get install phantomjs
tested working. i hope it helps you. | 1 | 0 | 0 | I run a python program that uses selenium and phantomjs and got these errors 2) and 3) then when I run pip install selenium i got error 1):
1) The program 'pip' is currently not installed.
2) ImportError: No module named 'selenium'
3) selenium.common.exceptions.WebDriverException: Message: 'phantomjs' executable needs to be in PATH.
All done on Ubuntu 14.04 x64 | How to install pip and selenium and phantomjs on ubuntu | 0.761594 | 0 | 1 | 4,788 |
40,515,674 | 2016-11-09T20:42:00.000 | 0 | 0 | 1 | 0 | python,django,installation,virtualenv | 40,517,103 | 1 | false | 1 | 0 | You must activate that new venv (...Scripts/activate.py) and enjoy,
...and install Django inside the venv; after that, go to your project and run 'python manage.py runserver'.
thats all | 1 | 0 | 0 | Sorry if this is an obvios or dump question but the thing is that i've been having problems with the installation of Djandgo, and the virtualenvs.
I'm a Windows 10 user and I've been following a series of Django tutorials in which they create a virtualenv and, inside of it, using pip, they proceed with the installation of the framework.
The problem is that I dropped the old project/virtualenv which had Django installed and started a new one, a new virtualenv (creating a new folder and typing virtualenv .), and reinstalled Django in it, but now, when I go through the cmd to the directory
J:\project2\Scripts\django-admin.py
I receive an error:
Traceback (most recent call last): File
"J:\project2\Scripts\django-admin.py", line 2, in
from django.core import management ImportError: No module named django.core
Is it because I re-installed Django in another new virtualenv?
Thanks to all :) | Installing Django in a Virtualenv means that I have to re-install it every time I make another project? | 0 | 0 | 0 | 499
40,521,509 | 2016-11-10T06:50:00.000 | 0 | 0 | 0 | 0 | python,django,messages | 40,524,619 | 1 | true | 1 | 0 | You don't have to iterate over messages to expire them. Django does that for you.
When one request gets a message it's iterated over with the next request, gets displayed if the template allows it and is removed from request data. That means it's shown once and is removed.
The only way to get a message from your email module to be displayed in the account module is to redirect the user to an account page directly after the action that adds the message has been completed (after an email has been sent, for example). You have complete control over this from your views. | 1 | 0 | 0 | I am currently using the built-in django-messages framework of django version 1.10.
However, since the messages are stored in the request, and therefore not "namespaced" as it were for different modules, I am concerned that this might lead to potential circumstances where messages created by one module (e.g. a messaging framework "your message has been sent") might bleed into another.
Is there a way to "namespace" these messages so we don't have this unintended effect?
In addition, the documentation says that messages expire if they are iterated over. Does that mean that if I forget to iterate over them, they have the potential to build up over multiple requests? | How to prevent Django messages from leaking out to other modules? | 1.2 | 0 | 0 | 42
40,521,812 | 2016-11-10T07:11:00.000 | 0 | 1 | 1 | 0 | python,python-3.4,cd | 40,521,903 | 2 | false | 0 | 0 | You can either pass the path to my_data.dat as a parameter to the script, or (if the data file is always in the same directory as the script) you can use the variable __file__ to determine where the script is and look for the data file in that directory. | 1 | 0 | 0 | So I have a python script my_script.py which is inside the folder Test and I am outside this folder. The script requires to access some input data from the file my_data.dat stored inside the folder Test.
Now, when I try to run this script from outside the folder Test, simply using python my_script.py, it tries to locate my_data.dat in the folder from which I am running the script, and so it fails. How can I tell it to use the correct path without actually modifying the path in my_script.py?
I don't want to modify the paths in the script because the script is to be used generically for various paths and the whole thing would be automated later. | Running python codes from outside the directory when script has filepaths | 0 | 0 | 0 | 375 |
40,522,177 | 2016-11-10T07:33:00.000 | 1 | 0 | 0 | 0 | django,python-2.7,django-rest-framework,jwt | 70,026,089 | 4 | false | 1 | 0 | Do this jwt.decode(token,settings.SECRET_KEY, algorithms=['HS256']) | 1 | 7 | 0 | I have started using djangorestframework-jwt package instead of PyJWT , I just could not know how to decode the incoming token (I know there is verify token methode).... All I need to know is how to decode the token and get back info encoded...... | How to decode token and get back information for djangorestframework-jwt packagefor Django | 0.049958 | 0 | 0 | 11,026 |
40,522,186 | 2016-11-10T07:34:00.000 | 0 | 0 | 0 | 0 | python,numpy,vector,scipy | 40,522,400 | 1 | false | 0 | 0 | Use the Euclidean metric? I think the precomputed clusters have a center, so you can calculate the distance between every center and the new vector. | 1 | 0 | 0 | I am clustering a bunch of feature vectors using scipy linkage with the Ward method.
I want a predictive model that works in two steps:
Training data is clustered
A new vector comes, the distance between the vector and each cluster is computed, the new vector is assigned the "nearest" cluster's label
How can I compute the distances between a new vector and the precomputed clusters? | Scipy hierarchical clustering - clusterize new vector | 0 | 0 | 0 | 27
40,526,541 | 2016-11-10T11:29:00.000 | 2 | 0 | 0 | 0 | python-2.7,gpu,caffe,pycaffe | 40,559,136 | 3 | true | 0 | 0 | The surest way I know is to properly configure the solver.prototxt file.
Include the line
solver_mode: GPU
If you have any specifications for the engine to use in each layer of your model, you'll want to also make sure they refer to GPU software. | 1 | 0 | 0 | Is there any way to ensure Caffe is using the GPU? I compiled Caffe after installing the CUDA driver and without the CPU_ONLY flag in CMake, and while compiling, CMake logged detection of CUDA 8.0.
But while training a sample, I doubt it is using the GPU, according to the nvidia-smi result. How can I make sure? | How can I ensure caffe is using the GPU? | 1.2 | 0 | 0 | 2,861
40,527,057 | 2016-11-10T11:58:00.000 | 0 | 0 | 0 | 0 | python,tkinter,pygame | 40,528,039 | 3 | false | 0 | 1 | Pygame is the best choice for games; Tkinter is friendlier for making utility software. Tkinter has limitations: you will get stuck at some steps and take twice the time to figure things out. You can also use Pyglet, which is easier, but Pygame is still the best choice: fast, with lots of functionality; you can do everything you want. | 3 | 0 | 0 | I'm about to start programming a game for my computing A level. The game will be a version of Scrabble but won't have a board; it will be about how many words you can make in an amount of time. The game will also have menus, buttons and logins for different users to access the game. I want to know if it would be better to use Tkinter or Pygame for this, or if I can use aspects of both, e.g. Tkinter for menus and Pygame for the main loop. Any help would be much appreciated; I'm quite new to both these ideas, so please explain any specialist terminology. Thanks a lot | Tkinter or Pygame which should I use | 0 | 0 | 0 | 3,886
40,527,057 | 2016-11-10T11:58:00.000 | 2 | 0 | 0 | 0 | python,tkinter,pygame | 40,528,139 | 3 | false | 0 | 1 | SO, expanding the issue:
I love Pygame, in that it offers a simple API for one to draw things on a screen-canvas, and a nice O.O. hierarchy and tooling for sprites and game objects on screen.
On the other hand, it did not evolve to have a nice installer for Python 3 - on some platforms it is nearly impossible to get it working with Python 3 (despite the Pygame code itself being Python 3 ready).
It also does not offer any support for menus, or buttons, or even text entry - you have to either use a third-party module for that, or create it all by hand yourself. You have to implement things like reading the keyboard code and drawing the corresponding glyph at the correct location on the canvas - and the keyboard reading is raw, and won't give you things like the character composition provided by the O.S. - which might be important in a word-related application.
In short: you need a fully featured app, and should be using Tkinter for that. As for the mainloop: you have to use Tkinter's loop and implement after event calls to get control to the parts of your code that have to initiate actions.
Pygame gives you full control of the mainloop - and I like that for learning purposes - but most gaming or GUI toolkits have their own mainloop, and you have to register your callbacks.
It is even possible to have an application where the "control screens" - menus, buttons, logins, and so on - are written in Tkinter and the main game screen, where the action takes place, is made in Pygame. That won't solve Pygame's hard-to-install problems, and may look awkward to the players themselves.
For multimedia stuff I am moving my projects to Pyglet, since it is a well-behaved Python module and has some capabilities Pygame lacks. But Tkinter can do pretty nice things in its Canvas widget, and sure enough could hold your game.
(You should have noted I emphasized Python3 a lot - so, while you did not ask, you should definitely use Python 3.5 (or 3.6) for your project - even if you choose Pygame - the abyss between Python versions is widening and Python 2 has a date to be discontinued)
update: I just tried "pip install pygame" on a Python 3.5 virtualenv, and it did install flawlessly - so the project is alive and kicking, and installing may not be hard anymore.
That said, you'd still have to create all the code for menus and buttons. | 3 | 0 | 0 | I'm about to start programming a game for my computing A level. The game will be a version of scrabble but won't have a board. It will be how many words can you make in an amount of time. The game will also have menus, buttons and logins for different users to access the game. I'm wanting to know if it would be better to use Tkinter or Pygame for this or if I can use aspects of both: eg Tkinter for menus and Pygame for the main loop. Any help would be much appreciated I'm quite new to both these ideas so please explain any specialist terminology. Thanks a lot | Tkinter or Pygame which should I use | 0.132549 | 0 | 0 | 3,886 |
40,527,057 | 2016-11-10T11:58:00.000 | 0 | 0 | 0 | 0 | python,tkinter,pygame | 40,527,429 | 3 | false | 0 | 1 | Have you ever used any of them ? If the answer to that is yes, then you should pick the more familiar one. The pick does not really matter if it is going to be a simple scrabble game with not a lot of animations. Personally, I think that Tkinter is easier. | 3 | 0 | 0 | I'm about to start programming a game for my computing A level. The game will be a version of scrabble but won't have a board. It will be how many words can you make in an amount of time. The game will also have menus, buttons and logins for different users to access the game. I'm wanting to know if it would be better to use Tkinter or Pygame for this or if I can use aspects of both: eg Tkinter for menus and Pygame for the main loop. Any help would be much appreciated I'm quite new to both these ideas so please explain any specialist terminology. Thanks a lot | Tkinter or Pygame which should I use | 0 | 0 | 0 | 3,886 |
40,527,528 | 2016-11-10T12:21:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 40,527,574 | 1 | false | 0 | 0 | If you have pip installed on your system, execute
pip install numpy | 1 | 0 | 1 | I'm working on a code to run it in abaqus. I need in my code to use numpy module. I have python 2.7.11 on my computer. I have installed it on windows 8.1.
I have downloaded numpy-1.11.Zip already.
I look for an easy detailed guide for installing it on my python
Thank You! | installing numpy on a python 2.7 with system windows 8.1 | 0 | 0 | 0 | 45 |
40,532,355 | 2016-11-10T16:21:00.000 | 0 | 0 | 1 | 0 | python,spyder | 65,472,739 | 4 | false | 0 | 0 | I was having this problem and in keyboard shortcuts clicked 'restore to default' and after saving restarted Spyder and it worked for me again. | 3 | 2 | 0 | I am new to python and programming in general but now I try to learn python-programming in spyder(Python 3.5).
I have a very simple question: to run a command and advance to the next line I should be able to simply click the button on top or use shift+enter, neither of these work. It does work in Jupyter but not in spyder.
Is there something wrong with my run settings? | shift+enter/return for run does not work in Spyder | 0 | 0 | 0 | 5,598 |
40,532,355 | 2016-11-10T16:21:00.000 | 3 | 0 | 1 | 0 | python,spyder | 40,532,768 | 4 | false | 0 | 0 | There are two possible answers to your question.
If you don't have anything selected in the Spyder Editor and press F9, Spyder will evaluate the current line and automatically move to the next one.
If you want to run things in a similar way to the Jupyter notebook, you can break a file into cells by introducing comments of the form # %% (see the sketch after this record). After that you'll see that the active cell is colored differently. Then you can evaluate those cells with Shift+Enter (to move to the next cell) or Ctrl+Enter (to stay in the current one). | 3 | 2 | 0 | I am new to Python and programming in general, but now I am trying to learn Python programming in Spyder (Python 3.5).
I have a very simple question: to run a command and advance to the next line I should be able to simply click the button on top or use Shift+Enter, but neither of these works. It does work in Jupyter but not in Spyder.
Is there something wrong with my run settings? | shift+enter/return for run does not work in Spyder | 0.148885 | 0 | 0 | 5,598 |
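A minimal sketch of the cell markers that answer describes (the file contents are hypothetical):

```python
# cells.py -- each "# %%" comment starts a new cell in Spyder's Editor
# %% first cell: build some data
import numpy as np
data = np.arange(10)

# %% second cell: run with Ctrl+Enter (stay) or Shift+Enter (advance)
print(data.mean())
```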
40,532,355 | 2016-11-10T16:21:00.000 | 1 | 0 | 1 | 0 | python,spyder | 52,599,373 | 4 | false | 0 | 0 | If you want to use this combination (Shift+Enter), first you have to divide your code into cells. You can mark the cells with # %% comments, and then you can use Shift+Enter. Otherwise you can only use F9. | 3 | 2 | 0 | I am new to Python and programming in general, but now I am trying to learn Python programming in Spyder (Python 3.5).
I have a very simple question: to run a command and advance to the next line I should be able to simply click the button on top or use Shift+Enter, but neither of these works. It does work in Jupyter but not in Spyder.
Is there something wrong with my run settings? | shift+enter/return for run does not work in Spyder | 0.049958 | 0 | 0 | 5,598 |
40,534,098 | 2016-11-10T17:54:00.000 | 1 | 0 | 0 | 0 | python,machine-learning,tensorflow,deep-learning | 40,534,978 | 1 | false | 0 | 0 | When you create your network and attach some loss, you call minimize on the optimizer, which (under the hood) calls "apply_gradients". This function adds gradient-computing ops to your graph. All you have to do now is request the op responsible for your partial derivative and pass the precomputed partial derivative in through the feed_dict option. Use TensorBoard to visualize your graph and investigate the names of the gradients you are interested in. By default they will be in the "gradients" name scope, and the naming of each op will be analogous to your operations, so something along the lines of gradients/output_op:0, etc. | 1 | 0 | 1 | What I want to do is to simulate the back-propagation process on different machines. From one machine I get the gradient from layer3, d(layer3_output)/d(layer2_output), as a numpy array; how am I able to get d(layer3_output)/d(layer1_output) efficiently, given the gradient I received, and pass it to the previous layer? | Can I apply calculated gradient in tensorflow? | 0.197375 | 0 | 0 | 351
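A hedged sketch of what that answer describes, in the TF 1.x-era graph API (the layer shapes are assumptions): grad_ys lets tf.gradients start backpropagation from an upstream gradient you feed in.

```python
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])
W1 = tf.Variable(tf.random_normal([4, 8]))
layer1 = tf.nn.relu(tf.matmul(x, W1))
W2 = tf.Variable(tf.random_normal([8, 8]))
layer2 = tf.nn.relu(tf.matmul(layer1, W2))

# Gradient received from the other machine: d(layer3_out)/d(layer2_out).
upstream = tf.placeholder(tf.float32, [None, 8])

# Backpropagate the fed-in gradient through layer2 to get
# d(layer3_out)/d(layer1_out).
grad_wrt_layer1 = tf.gradients(layer2, layer1, grad_ys=upstream)[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(grad_wrt_layer1,
                 feed_dict={x: np.ones((2, 4), np.float32),
                            upstream: np.ones((2, 8), np.float32)})
```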
40,534,282 | 2016-11-10T18:07:00.000 | 1 | 0 | 0 | 0 | python,django,multithreading | 40,536,028 | 2 | false | 1 | 0 | Don't share in memory objects if you're going to mutate them. Concurrency is super hard to do right and premature optimization is evil. Give each user their own view of the data and only share data via the database (using transactions to make your updates atomic). Keep and increment counters in your database every time you make an update, make transactions fail if those number have changed since the data was read (as somebody else has mutated it).
Also, don't make important architectural decisions when tired! :) | 2 | 0 | 0 | Let's say that I have a Django web application with two users. My web application has a global variable that exist on the server (a Pandas Dataframe created from data from an external SQL database).
Let's say that a user makes an update request to that Dataframe and now that Dataframe is being updated. As the Dataframe is being updated, the other user makes a get request for that Dataframe. Is there a way to 'lock' that Dataframe until user 1 is finished with it and then finish the request made by user 2?
EDIT:
So the order of events should be:
User 1 makes an update request, Dataframe is locked, User 2 makes a get request, Dataframe is finished updating, Dataframe is unlocked, User 2 gets his/her request.
Lines of code would be appreciated! | Django - Two Users Accessing The Same Data | 0.099668 | 0 | 0 | 401 |
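Since the question asks for lines of code: a hedged sketch of the counter idea the answer above describes, with Django's ORM (the model and field names are assumptions, not from the question):

```python
from django.db import models, transaction

class Snapshot(models.Model):
    payload = models.TextField()
    version = models.IntegerField(default=0)

def save_if_unchanged(pk, version_read, new_payload):
    # The UPDATE only matches if nobody bumped `version` since we read it;
    # zero rows updated means our copy of the data was stale.
    with transaction.atomic():
        updated = (Snapshot.objects
                   .filter(pk=pk, version=version_read)
                   .update(payload=new_payload, version=version_read + 1))
    if not updated:
        raise RuntimeError("Stale read: the row was changed by another user")
```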
40,534,282 | 2016-11-10T18:07:00.000 | 2 | 0 | 0 | 0 | python,django,multithreading | 40,534,608 | 2 | false | 1 | 0 | Ehm... Django is not a server. It has a single-threaded development server in it, but it should not be used for anything beyond development and maybe not even for that. Django applications are deployed using WSGI. WSGI server running your app is likely to start several separate worker threads and will be killing and restarting these threads according to the rules in its configuration.
This means, that you cannot rely on multiple requests hitting the same process. Django app lifecycle is between getting a request and returning a response. Anything that is not explicitly made persistent between those two events should be considered gone.
So, when one of your users updates a global variable, this variable only exists in the one process this user randomly accessed. The second user might or might not hit the same process and therefore might or might not get the same copy of the variable. More than that, the process will sooner or later be killed by the WSGI server and all the updates will be gone.
What I am getting at is that you might want to rethink your architecture before you bother with the atomic update problems. | 2 | 0 | 0 | Let's say that I have a Django web application with two users. My web application has a global variable that exist on the server (a Pandas Dataframe created from data from an external SQL database).
Let's say that a user makes an update request to that Dataframe and now that Dataframe is being updated. As the Dataframe is being updated, the other user makes a get request for that Dataframe. Is there a way to 'lock' that Dataframe until user 1 is finished with it and then finish the request made by user 2?
EDIT:
So the order of events should be:
User 1 makes an update request, Dataframe is locked, User 2 makes a get request, Dataframe is finished updating, Dataframe is unlocked, User 2 gets his/her request.
Lines of code would be appreciated! | Django - Two Users Accessing The Same Data | 0.197375 | 0 | 0 | 401 |
40,534,871 | 2016-11-10T18:45:00.000 | 1 | 0 | 0 | 0 | python,scikit-learn,regression,knn | 40,537,920 | 1 | true | 0 | 0 | I'm afraid not. In part, this is due to some algebraic assumptions that the relationship is symmetric: A is a neighbour to B iff B is a neighbour to A. If you give different k values, you're guaranteed to break that symmetry.
I think the major reason is simply that the algorithm is simpler with a fixed quantity of neighbors, yielding better results in general. You have a specific case that KNN doesn't fit so well.
I suggest that you stitch together your two models, switching between them depending on the imputed second derivative (a sketch follows this record).
Is there a way to vary the n_neighbors parameter?
I could fit two models and stitch them together, but that would be inefficient. It would be preferable to either prescribe 2-3 values for n_neighbors or, worse, send in a list of n_neighbors. | Varying n_neighbors in scikit-learn KNN regression | 1.2 | 0 | 0 | 198
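A minimal sketch of the stitching suggestion with scikit-learn (the two k values and the peak test are placeholders):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def stitched_fit_predict(X, y, X_query, near_peak):
    """near_peak: boolean mask over X_query, True close to a known peak."""
    smooth = KNeighborsRegressor(n_neighbors=300).fit(X, y)
    sharp = KNeighborsRegressor(n_neighbors=30).fit(X, y)
    pred = smooth.predict(X_query)
    pred[near_peak] = sharp.predict(X_query[near_peak])
    return pred
```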
40,536,560 | 2016-11-10T20:36:00.000 | 2 | 0 | 1 | 0 | python,jupyter-notebook,ipython | 61,906,508 | 10 | false | 0 | 0 | just below the Python logo there is a button saying
not trusted
click on it and set it as trusted notebook. | 2 | 83 | 0 | I am very new to this and don't know why the autocomplete is not working. I tried modifying the iPython config file, installed readline, but still nothing. | IPython and Jupyter autocomplete not working | 0.039979 | 0 | 0 | 77,907 |
40,536,560 | 2016-11-10T20:36:00.000 | 1 | 0 | 1 | 0 | python,jupyter-notebook,ipython | 65,983,688 | 10 | false | 0 | 0 | I use JupyterLab 3.0.6. I have ipython 7.19.0 and jedi 0.18 installed. As @DaveHalter indicated, better than <% config Completer.use_jedi = False> is to use the previous version of the jedi <pip install jedi == 0.17.2>.
In 2021-01-31 it worked perfectly for me. | 2 | 83 | 0 | I am very new to this and don't know why the autocomplete is not working. I tried modifying the iPython config file, installed readline, but still nothing. | IPython and Jupyter autocomplete not working | 0.019997 | 0 | 0 | 77,907 |
40,536,821 | 2016-11-10T20:55:00.000 | 3 | 0 | 1 | 0 | python,constructor,destructor,with-statement,contextmanager | 40,537,209 | 3 | false | 0 | 0 | By using these instead of __init__ / __del__ I appear to be creating an implicit contract with callers that they must use with, yet there's no way to enforce such a contract
You have a contract either way. If users use your object without realizing it requires cleanup after use, they'll screw things up no matter how you implement the cleanup. They might keep a reference to your object forever, for example, preventing __del__ from running.
If you have an object that requires special cleanup, you need to make this requirement explicit. You need to give users both with support and an explicit close (or similar) method, to let them control when the cleanup occurs (a sketch follows this record). You can't hide the cleanup requirement inside a __del__ method. You might want to implement __del__ too, as a safety measure, but you can't use __del__ in place of with or an explicit close.
With that said, Python makes no promises that __del__ will run, ever. The standard implementation will run __del__ when an object's refcount drops to 0, but that might not happen if a reference survives to the end of the script, or if the object is in a reference cycle. Other implementations don't use refcounting, making __del__ even less predictable. | 1 | 32 | 0 | I have searched and I'm unable to come up with any good reason to use python's __enter__ /__exit__ rather than __init__ (or __new__ ?) / __del__ .
I understand that __enter__ / __exit__ are intended for use with the with statement as context managers, and the with statement is great. But the counterpart to that is that any code in those blocks is only executed in that context. By using these instead of __init__ / __del__ I appear to be creating an implicit contract with callers that they must use with, yet there's no way to enforce such a contract, and the contract is only communicated via documentation (or reading the code). That seems like a bad idea.
I seem to get the same effect using __init__ / __del__ inside of a with block. But by using them rather than the context management methods my object is also useful in other scenarios.
So can anybody come up with a compelling reason why I would ever want to use the context management methods rather than the constructor/destructor methods?
If there's a better place to ask a question like this, please let me know, but it seems like there's not much good information about this out there.
Follow Up:
This question was based on a bad (but likely common) assumption because I always used with to instantiate a new object, in which case __init__/__del__ come very close to the same behavior as __enter__/__exit__ (except that you can't control when or if __del__ will be executed, it's up to garbage collection and if the process is terminated first it may never be called). But if you use pre-existing objects in with statements they are of course quite different. | Python __enter__ / __exit__ vs __init__ (or __new__) / __del__ | 0.197375 | 0 | 0 | 16,693 |
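A minimal sketch of the contract that answer describes: an explicit close(), with-statement support built on top of it, and __del__ kept only as a best-effort safety net:

```python
class Resource(object):
    def __init__(self, path):
        self._fh = open(path)

    def close(self):                 # explicit cleanup for non-with callers
        if self._fh is not None:
            self._fh.close()
            self._fh = None

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def __del__(self):               # may never run; safety net only
        self.close()
```

Callers can then write with Resource("data.txt") as r: ... or call r.close() explicitly, and the cleanup requirement stays visible either way.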
40,538,429 | 2016-11-10T22:53:00.000 | 0 | 1 | 0 | 0 | python,arduino,raspberry-pi,temperature | 40,538,698 | 2 | true | 0 | 0 | You can actually to it all with the Raspberry Pi. Control you fan with a PWM value that will then turn on and off a MOSFET.
If you really have to do it with the Arduino, the "simple noob" way of doing this is to send a PWM pulse output from the Pi and analog read it from the Arduino. From there, do the same as you would do with the Pi. | 2 | 0 | 0 | I am working on a school project that includes both my raspberry pi and aruduino, I have made a temperature script that allows me to monitor the pi's current temperature and I am wondering if I can read this output from my arduino in order to control a fan's speed. | How to read raspberry pi temperature from script on an Arduino to control fan | 1.2 | 0 | 0 | 267 |
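A hedged sketch of the Pi-only option (the pin number and the temperature-to-duty mapping are assumptions):

```python
import RPi.GPIO as GPIO

FAN_PIN = 18  # BCM pin wired to the MOSFET gate

def read_temp_c():
    # The Pi exposes its SoC temperature in millidegrees Celsius here.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0

GPIO.setmode(GPIO.BCM)
GPIO.setup(FAN_PIN, GPIO.OUT)
pwm = GPIO.PWM(FAN_PIN, 100)   # 100 Hz PWM on the MOSFET gate
pwm.start(0)

duty = min(100, max(0, (read_temp_c() - 40.0) * 5.0))  # 0% at 40C, 100% at 60C
pwm.ChangeDutyCycle(duty)
```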
40,538,429 | 2016-11-10T22:53:00.000 | 0 | 1 | 0 | 0 | python,arduino,raspberry-pi,temperature | 40,539,016 | 2 | false | 0 | 0 | You could also use serial communication. If you have to use the Arduino, then Dat Ha is right: PWM is the easiest. | 2 | 0 | 0 | I am working on a school project that includes both my Raspberry Pi and Arduino. I have made a temperature script that allows me to monitor the Pi's current temperature and I am wondering if I can read this output from my Arduino in order to control a fan's speed. | How to read raspberry pi temperature from script on an Arduino to control fan | 0 | 0 | 0 | 267
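If you go the serial route this answer mentions, a hedged sketch of the Pi side with pyserial (the port name and one-line protocol are assumptions; the Arduino sketch would read each line and set the fan speed):

```python
import time
import serial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def read_temp_c():
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0

while True:
    ser.write(("%.1f\n" % read_temp_c()).encode())  # e.g. b"47.2\n"
    time.sleep(30)
```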
40,539,039 | 2016-11-10T23:51:00.000 | 1 | 1 | 1 | 0 | python,openstack-neutron | 40,539,058 | 1 | true | 0 | 0 | Did you make sure that the folder that contains the file you are trying to import has an __init__.py file? | 1 | 0 | 0 | I have a python file abc.py which was part of a repo (say repo-old). This abc.py was imported by xyz.py (which is also part of repo-old).
Now due to some reason abc.py is removed from repo-old, but I need to test code in repo-old, so I manually copied it back.
Now when the import statement in xyz.py is hit, it fails, saying "not found".
What could I be missing?
P.S - new to python packaging. | Python - import fails to load a module | 1.2 | 0 | 0 | 29 |
40,539,438 | 2016-11-11T00:35:00.000 | 0 | 0 | 1 | 0 | python,spyder | 40,539,516 | 1 | true | 0 | 0 | Projects that have been deleted in Spyder can't be restored at the moment, i.e. they are completely removed from the operating system.
This will be fixed in Spyder 3.0.2 (not released yet) because for this version we added the following functionality:
Projects deleted through Spyder won't be removed from the operating system.
We added a Delete project action to the Projects menu to do that. That action will delete the current project from Spyder but won't remove it from the operating system.
We show a warning when users try to delete a project in the Project Explorer.
We don't allow a project to be removed from the File Explorer.
We hide the project configuration directory (called .spyproject) from the Project and File explorers to prevent its removal. | 1 | 1 | 0 | I deleted a project from my project explorer in spyder.I checked recycle bin or its existing directory whether I can bring it back but it isn't there.Is it possible to bring the project back or know where it is? | How to restore deleted projects in spyder | 1.2 | 0 | 0 | 3,394 |
40,547,622 | 2016-11-11T12:02:00.000 | 0 | 0 | 1 | 0 | python,pip,egg | 40,779,958 | 2 | false | 0 | 0 | If you want to reinstall a particular version you can specify it with pip install xxx==1.0. | 2 | 4 | 0 | I installed a package xxx-1.0 using pip and it installed the egg file /usr/local/lib/python2.7/dist-packages/xxx-1.0-py2.7.egg.
After upgrading the package, the file xxx-1.0-py2.7.egg was replaced by xxx-2.0-py2.7.egg.
But the upgrade failed. When I tried upgrade again, pip saw the file xxx-1.0-py2.7.egg and complained that it's already installed.
By removing the egg file manually, I can upgrade but I don't want to do that.
Is there a way to let pip restore the original egg file when an upgrade fails? | How to restore old egg file when upgrade fails? | 0 | 0 | 0 | 202 |
40,547,622 | 2016-11-11T12:02:00.000 | 0 | 0 | 1 | 0 | python,pip,egg | 40,785,427 | 2 | false | 0 | 0 | If you have an .egg, you effectively have a distribution. So,
remove the 2.0 installation
what is needed to do that depends on how the upgrade "failed"
then pip install the old .egg
not needed if the upgrade "failed" in such a manner that it's still listed as installed | 2 | 4 | 0 | I installed a package xxx-1.0 using pip and it installed the egg file /usr/local/lib/python2.7/dist-packages/xxx-1.0-py2.7.egg.
After upgrading the package, the file xxx-1.0-py2.7.egg was replaced by xxx-2.0-py2.7.egg.
But the upgrade failed. When I tried upgrade again, pip saw the file xxx-1.0-py2.7.egg and complained that it's already installed.
By removing the egg file manually, I can upgrade but I don't want to do that.
Is there a way to let pip restore the original egg file when an upgrade fails? | How to restore old egg file when upgrade fails? | 0 | 0 | 0 | 202 |
40,548,608 | 2016-11-11T13:01:00.000 | 1 | 0 | 0 | 0 | python,database,dataset,zodb,object-oriented-database | 40,549,472 | 1 | true | 0 | 0 | You must store the object on the filesystem and add a reference to it in the ZODB, like you would with a regular database (a blob-based sketch follows this record). | 1 | 2 | 0 | I was using ZODB for large data storage, which was in the form of a typical dictionary format (key, value).
But while storing in ZODB I got the following warning message:
C:\python-3.5.2.amd64\lib\site-packages\ZODB\Connection.py:550: UserWarning: The object you're saving is large. (510241658 bytes.)
Perhaps you're storing media which should be stored in blobs.
Perhaps you're using a non-scalable data structure, such as a
PersistentMapping or PersistentList.
Perhaps you're storing data in objects that aren't persistent at all.
In cases like that, the data is stored in the record of the containing
persistent object.
In any case, storing records this big is probably a bad idea.
If you insist and want to get rid of this warning, use the
large_record_size option of the ZODB.DB constructor (or the
large-record-size option in a configuration file) to specify a larger
size.
warnings.warn(large_object_message % (obj.__class__, len(p)))
Please suggest how I can store large data in ZODB, or suggest another library for this purpose. | ZODB or other database for large data storage in python | 0.099668 | 1 | 0 | 901
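A hedged sketch of the blob route the warning points at: keep the large payload as a ZODB Blob on the filesystem and store only a reference in the object graph (the file names are placeholders):

```python
import transaction
import ZODB
import ZODB.FileStorage
from ZODB.blob import Blob

big_bytes = b"\x00" * (10 * 1024 * 1024)   # stand-in for your large payload

storage = ZODB.FileStorage.FileStorage("data.fs", blob_dir="blobs")
db = ZODB.DB(storage)
conn = db.open()
root = conn.root()

blob = Blob()
with blob.open("w") as f:
    f.write(big_bytes)        # bytes go to a blob file, not a pickled record
root["big_data"] = blob       # only the reference lives in the object graph
transaction.commit()
```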
40,550,207 | 2016-11-11T14:34:00.000 | 0 | 0 | 1 | 0 | python,32bit-64bit,conda | 40,661,895 | 1 | true | 0 | 0 | I found that I have to delete the package caches first. Or force conda to install Python with -f option. | 1 | 0 | 0 | I installed both 32-bit conda and 64-bit conda for different projects. I created a new environment and specified python 3 in
conda create --name ..name.. python=3
The command picked up Python 3.5.2 but in 64-bit, rather than 32. But when I changed the command to
conda create --name ..name.. python=3.4
it picked up the 32-bit Python correctly. My question is how to force conda to pick up 32-bit Python 3.5.2, so I can use some of the packages that only support Python 3.5.
Here's what I did and none of them work:
installed both 32-bit and 64-bit pythons
installed both 32-bit and 64-bit condas
set 32-bit Miniconda to come before 64-bit Miniconda in PATH
launched 32-bit conda prompt
set CONDA_FORCE_32BIT=1
Thanks! | conda 32-bit keep installing 64-bit python 3.5.2 | 1.2 | 0 | 0 | 1,344 |
40,550,998 | 2016-11-11T15:21:00.000 | 0 | 0 | 0 | 0 | python,sql,testing | 40,551,282 | 1 | false | 0 | 0 | I do a lot of this. And although it sounds like a cop out, the answer is "it depends":
If the dataset is very large I would keep referring back to the database, as loading it into memory could be a resource issue.
If the dataset is not too large then loading it into memory and referring to it can really improve performance.
I tend to test and see what the performance is like. | 1 | 1 | 0 | I am using SQL to pull in values from a 'lookup' table. I will use a cursor and fetchall, and then loop through the values and place them into a dictionary. I do not see a reason to keep querying the database (open conn, query, close conn) for every lookup performed when a dictionary of a subset of the data should suffice. Is this 'standard' practice, to use a dictionary in lieu of a table?
Is there a way to test this with different sets of values without connecting to the database? I would prefer at least unit testing without connecting to the data store. Some framework or some pattern? Not sure what to investigate. | Python Testing without sql connection | 0 | 1 | 0 | 56
40,552,453 | 2016-11-11T16:41:00.000 | 0 | 0 | 1 | 0 | python,email,pandas,import | 43,399,098 | 1 | false | 0 | 0 | I also had the same problem today. You're missing a specific path. I found that if you start your Python interpreter and do import os, you can inspect os.environ. You'll notice that there are several paths set in the PATH variable. Copy/paste the entire PATH line into your script. That worked for me. Also, remember to remove the string's single quotes (e.g., '). | 1 | 0 | 1 | I've been working with anaconda3/python3.5 and pandas for over a year, and all of a sudden when I run my script outside the console, I get an import error for pandas, particularly for the dependency email.parser. I get No module named 'email.parser'; 'email' is not a package. However, importing in the console works fine. I'm not running any other environment | Python - Importing pandas in console works but not when running script | 0 | 0 | 0 | 831
40,552,937 | 2016-11-11T17:10:00.000 | 0 | 0 | 0 | 0 | python-2.7,scikit-learn,cluster-analysis | 40,555,741 | 1 | false | 0 | 0 | Rather than recycling clustering for this, treat it as a regular optimization problem. You don't want to "discover structure", but optimize cost.
Beware that the earth is not flat, and Euclidean distance (i.e. k-means) is a bad idea: 1 degree north is approximately the same distance as 1 degree east only at the equator. If your data is e.g. in New York, you have a non-negligible distortion, and your solution will not even be a local optimum.
If you absolutely insist on abusing kmeans, it's easy to do (a sketch follows this record).
Choose n-1 centers at random and the predefined one.
Then run 1 iteration of k-means only. Then replace that center with the desired center again. Repeat with the next iteration. | 1 | 0 | 1 | I have a table of shipment destinations in lat, long. I have one fixed origination point (also lat, long). I would like to find other optimal origin locations using clustering. In other words, I want to assign one cluster centroid (keep it fixed) and find 1, 2, 3 . . . N other cluster centroids. Is this possible with the scikit learn cluster module? | Input one fixed cluster centroid, find N others (python) | 0 | 0 | 0 | 512 |
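A minimal sketch of that trick with scikit-learn (it inherits the Euclidean caveat above; parameter values are placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_fixed_center(X, n_clusters, fixed_center, n_iter=20, seed=0):
    rng = np.random.RandomState(seed)
    idx = rng.choice(len(X), n_clusters - 1, replace=False)
    centers = np.vstack([np.asarray(fixed_center)[None, :], X[idx]])
    for _ in range(n_iter):
        # One Lloyd iteration starting from the current centers...
        km = KMeans(n_clusters=n_clusters, init=centers, n_init=1,
                    max_iter=1).fit(X)
        centers = km.cluster_centers_.copy()
        centers[0] = fixed_center      # ...then pin the fixed centroid back
    return centers
```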
40,553,249 | 2016-11-11T17:30:00.000 | 1 | 0 | 1 | 0 | python,mysql | 40,553,308 | 1 | false | 0 | 0 | It might be the problem with your PyCharm Interpreter. Go to PyCharm-Preferences-Project Interpreter, choose Python 2.7, which is the version that you installed. Make sure the package that you are looking for is on the list of installed packages below. | 1 | 1 | 0 | I am writing a program that imports MySQLdb in PyCharm, but it reports that "No Module Named MySQLdb".
I googled and tried many solutions within stackoverflow. However, it has not been solved.
When I run the command "pip install MySQL-python", the terminal reports "Requirement already satisfied: MySQL-python in /usr/local/lib/python2.7/site-packages", and MySQL-python indeed exists there.
How can I make PyCharm successfully find and import MySQLdb? | Requirement already satisfied: mysql-python in /usr/local/lib/python2.7/site-packages | 0.197375 | 0 | 0 | 874
40,554,446 | 2016-11-11T18:56:00.000 | 0 | 0 | 1 | 0 | ipython,jupyter-notebook,jupyter | 56,153,455 | 4 | false | 0 | 0 | Comment out (highlight and press Ctrl-/) the instruction(s) responsible for the long run -- or, faster, comment out the whole cell -- and re-run the cell (Ctrl-Enter). This stops the execution and, of course, the output. You can then un-comment the affected part.
This is much less painful than killing and restarting the kernel.
(Note that just clearing the output with [Esc]+'O' won't stop it.) | 3 | 19 | 0 | Suppose I executed all cells in a Jupyter Notebook, and want to interrupt the computation in the middle. How can I stop the execution of all cells?
"Kernel interrupt" only interrupts the execution of the current cell, but then immediately continues with all remaining cells. Instead, I do not want to execute all remaining cells after hitting "Kernel interrupt". How can I do that?
I am running version 4.2.3 with Python 3.5.2, packaged by conda-forge | How to stop execution of all cells in Jupyter Notebook | 0 | 0 | 0 | 28,901 |
40,554,446 | 2016-11-11T18:56:00.000 | -3 | 0 | 1 | 0 | ipython,jupyter-notebook,jupyter | 61,571,975 | 4 | false | 0 | 0 | One simple trick to get rid of this problem is to press "Ctrl+A" to select all the code of the particular cell whose execution you want to stop, and press "Ctrl+X" to cut the entire cell code.
Now the cell is empty and just the empty cell is executed.
Afterwards just paste the code back with "Ctrl+V" and your problem is solved.
"Kernel interrupt" only interrupts the execution of the current cell, but then immediately continues with all remaining cells. Instead, I do not want to execute all remaining cells after hitting "Kernel interrupt". How can I do that?
I am running version 4.2.3 with Python 3.5.2, packaged by conda-forge | How to stop execution of all cells in Jupyter Notebook | -0.148885 | 0 | 0 | 28,901 |
40,554,446 | 2016-11-11T18:56:00.000 | 0 | 0 | 1 | 0 | ipython,jupyter-notebook,jupyter | 70,905,803 | 4 | false | 0 | 0 | I'm using exit() since to me it is cleaner than raising an exception | 3 | 19 | 0 | Suppose I executed all cells in a Jupyter Notebook, and want to interrupt the computation in the middle. How can I stop the execution of all cells?
"Kernel interrupt" only interrupts the execution of the current cell, but then immediately continues with all remaining cells. Instead, I do not want to execute all remaining cells after hitting "Kernel interrupt". How can I do that?
I am running version 4.2.3 with Python 3.5.2, packaged by conda-forge | How to stop execution of all cells in Jupyter Notebook | 0 | 0 | 0 | 28,901 |
40,555,625 | 2016-11-11T20:22:00.000 | 0 | 0 | 1 | 1 | python,python-2.7,python-3.x,centos,anaconda | 48,596,296 | 2 | false | 0 | 0 | If you are looking to change the python interpreter in anaconda from 3.5 to 2.7 for the user, try the command conda install python=2.7 | 1 | 3 | 0 | Without root access, how do I change the default Python from 3.5 to 2.7 for my specific user? Would like to know how to run Python scripts with Python 2 as well.
If I start up Python by running simply python then it runs 3.5.2. I have to specifically run python2 at the terminal prompt to get a version of python2 up.
If I run which python, then /data/apps/anaconda3/bin/python gets returned and I believe Python 2.7 is under /usr/bin/python.
This is on CentOS if that helps clarify anything | How do I change default Python version from 3.5 to 2.7 in Anaconda | 0 | 0 | 0 | 6,310 |
40,555,711 | 2016-11-11T20:29:00.000 | 1 | 0 | 1 | 0 | python,python-2.7 | 40,575,952 | 3 | false | 0 | 0 | You really want to run an interactive python shell. That is really what you are doing in MATLAB, all the scripts are running inside the same MATLAB shell. That is why the variable persist across runs, because the shell is preserving them.
The difference with python is that it has two ways to run scripts: in a python shell or standalone. You are running them standalone in python, which isn't an option in MATLAB.
If you want something as close as possible to what you are doing in MATLAB, just wrap the code in your script in a function, fire up a python shell, and run the function.
However, this isn't usually the best approach for long-running, repeated code like you are doing. The much better approach would be to use the IPython/Jupyter notebook. This interface allows you to run parts of your code selectively, reorganize parts, and many other useful features. It also has a feature that lets you automatically preserve specific variables across sessions (a sketch follows this record). | 1 | 1 | 0 | I wrote some code in MATLAB where the scope of variables executed in scripts is retained in the "workspace". The data I'm working on is very large, and so each execution of a script requires 20-30 minutes to produce the necessary results. The advantage of MATLAB was that after execution, if I desired to add code to the end of the script, I could execute it against the result variables rather than having to re-run the code from the start.
How can I do something similar in Python where the values of variables are retained from one script execution to the next? | Retaining namespace and variable values after script execution | 0.066568 | 0 | 0 | 92 |
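The session-persistence feature mentioned at the end of that answer is presumably IPython's %store magic; a hedged sketch (this is IPython-session syntax, not plain Python):

```python
# In one IPython/Jupyter session:
result = expensive_computation()   # hypothetical 20-30 minute step
%store result                      # persist it to IPython's store

# In a later session:
%store -r result                   # restore without recomputing
```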
40,557,678 | 2016-11-11T23:16:00.000 | -1 | 0 | 0 | 0 | python,web-scraping,facebook-apps | 68,362,541 | 2 | false | 1 | 0 | To use their API, you'll need to "verify" your app to get access to the "pages_read_user_content" or "Page Public Content Access" permissions.
At first, using the API, you might GET the page id, a page post id, or the permalink to a post on your own page, but to scrape the comments with the API you'll need a verified business account. | 1 | 1 | 0 | Is there a way to scrape Facebook comments and IDs from a Facebook page like nytimes or the guardian for analytical purposes? | Is there a way to scrape Facebook comments and IDs from a Facebook page like nytimes or the guardian for analytical purposes? | -0.099668 | 0 | 1 | 827
40,560,439 | 2016-11-12T07:04:00.000 | 3 | 0 | 1 | 0 | python,django,pythonanywhere | 40,560,572 | 2 | true | 1 | 0 | On a production server your print statements will be written to your web server's log files.
In the case of PythonAnywhere there are three log files:
Access log: yourusername.pythonanywhere.com.access.log
Error log: yourusername.pythonanywhere.com.error.log
Server log: yourusername.pythonanywhere.com.server.log
Those logs are accessible from your Web tab page.
The logs you are looking for will be in server.log | 1 | 3 | 0 | I've gotten used to using print in my Python code to show the contents of variables and check the shell output.
But I have now migrated all my work onto an online server, PythonAnywhere.
I don't have the foggiest idea how to do the same now.
Can someone point me in the right direction?
Print to web console? To a file? Or even to the shell session?
Thanks | Django print in prod' server | 1.2 | 0 | 0 | 1,138 |
40,562,116 | 2016-11-12T10:57:00.000 | 0 | 0 | 1 | 0 | python,macos,python-3.x,path,global-variables | 40,562,461 | 3 | false | 0 | 0 | This is not Python specific, but if you want to share a config globally among your programs you could set up an environment variable like MYPROJECT_DATA_PATH, and all your scripts check this variable before loading the data. Or you could write a config file whose location all your programs know. Or both: an environment variable with the path of the config file, where you can fine-tune it for your needs (a sketch follows this record). | 1 | 1 | 0 | In a python project, how do I set up a project-wide "data" folder, accessible from every module? I don't have a single entry point in my program, so I cannot do something like global (dataFolderPath). I would like every module to know where the data folder is (without hardcoding the path in every module!), so it can load and write the data it needs. I'm using Python 3.5 on a Mac.
Thanks! | How to set up data folder to be accessible from everywhere in a python project | 0 | 0 | 0 | 710 |
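A minimal sketch of the environment-variable approach from that answer, e.g. in a small config module every other module imports (the fallback path is an assumption):

```python
# config.py
import os

DATA_DIR = os.environ.get(
    "MYPROJECT_DATA_PATH",
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "data"))

def data_file(name):
    """Build a path inside the shared data folder."""
    return os.path.join(DATA_DIR, name)
```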
40,565,660 | 2016-11-12T17:31:00.000 | 0 | 0 | 0 | 0 | python,html,web-scraping,yahoo-finance,google-finance | 40,565,682 | 2 | false | 1 | 0 | It's always best to use the provided API if you can get all the information you need from it. If the API doesn't exist or is not good enough, then you go down the scraping path, which is usually more work than using an API.
So I would definitely try using APIs first. | 1 | 0 | 0 | I'm relatively new to Python, hence the perhaps low level of my question. Anyway, I am trying to create a basic program for just displaying a couple of key statistics for different stocks (beta value, 30-day high/low, p/e, p/s etc.). I have the GUI finished, but I'm not sure how to proceed with my project. I have been researching for a few hours but can't seem to decide which way to go.
Would you recommend HTML scraping or the Yahoo/Google Finance API or anything else for downloading the data? After I have it downloaded I am pretty much just going to print it on the GUI. | Creation of basic "stock-program" | 0 | 0 | 1 | 204
40,567,190 | 2016-11-12T20:14:00.000 | 0 | 0 | 0 | 0 | wxpython,wxwidgets | 40,645,047 | 1 | false | 0 | 1 | I'm afraid RTL support simply is not implemented under Mac. I'll update the documentation to at least mention this for now, but this is all I can do. | 1 | 0 | 0 | On Windows with wxPython 3.0.2 SetLayoutDirection works as expected in htmlWindow. SetLayoutDirection(2) will allow me to display RTL texts. However, with the same version on Mac it does not work. It is always the default LTR text flow. Plus GetLayoutDirection always returns 0 even after SetLayoutDirection(2) is supposedly set.
Was this feature missed on the Mac build? Is there a fix or workaround? | wxPython SetLayoutDirection Doesn't Work On Mac | 0 | 0 | 0 | 54 |
40,570,092 | 2016-11-13T02:55:00.000 | 0 | 1 | 0 | 0 | python,heroku | 40,571,033 | 1 | true | 1 | 0 | Define a "worker" process type in your Procfile that invokes your script. | 1 | 0 | 0 | Basically, I have a Python script which, using the python-twitter API, fetches tweets for a particular hashtag and stores them in a database. The script does this every 30 seconds. How do I deploy the script to run on Heroku? | How to deploy a python client script on heroku? | 1.2 | 0 | 0 | 45
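A minimal sketch of that answer (the script name is an assumption): the Procfile would contain the single line worker: python fetch_tweets.py, and after deploying you would start the process with heroku ps:scale worker=1.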
40,574,548 | 2016-11-13T13:49:00.000 | 0 | 0 | 0 | 0 | python,svg | 41,472,508 | 1 | false | 0 | 0 | The svgwrite package only creates SVG; it does not read an SVG file. I have not tried any packages that read and process SVG files. | 1 | 1 | 0 | I want to read an existing SVG file, traverse all elements and remove them if they match certain conditions (e.g. remove all objects with a red border).
There is the svgwrite library for Python2/3 but the tutorials/documentation I found only show how to add some lines and save the file.
Can I also manipulate/remove existing elements inside an SVG document with svgwrite? If not - is there an alternative for Python? | manipulating SVGs with python | 0 | 0 | 1 | 414 |
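Since SVG is XML, one alternative the answer doesn't name is the standard library's xml.etree.ElementTree; a hedged sketch (the stroke="red" test is an assumption about how the red border is encoded in the file):

```python
import xml.etree.ElementTree as ET

tree = ET.parse("drawing.svg")
root = tree.getroot()

# remove() must be called on the parent, so walk parent/child pairs
for parent in root.iter():
    for child in list(parent):
        if child.get("stroke") == "red":
            parent.remove(child)

tree.write("filtered.svg")
```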
40,578,207 | 2016-11-13T20:00:00.000 | 0 | 1 | 0 | 0 | python,x509,verification,pyopenssl | 40,578,492 | 1 | true | 0 | 0 | But I was wondering if the function also checks the signatures along the certificate chain
Of course it does. Otherwise what is the purpose of chain verification? From the OpenSSL documentation (man 1ssl verify on linux):
The final operation is to check the validity of the certificate chain. The validity period is checked against the current system time and the notBefore and notAfter dates in the certificate. The certificate signatures are also checked at this point. | 1 | 0 | 0 | I use PyOpenSSL verify_certificate() to verify certificate chains. My code seems to work. But I was wondering if the function also checks the signatures along the certificate chain. Lets assume we have the chain ca_cert -> i_ca_cert -> s_cert. Thus ca_cert signed i_ca_cert and i_ca_cert signed s_cert. Does verify_certificate() check whether the signer's (RSA) key was used to sign the certificate and whether the signature is correct, for every certificate along the chain? | Does PyOpenSSL verify_certificate() do signature verification | 1.2 | 0 | 0 | 559 |
40,578,278 | 2016-11-13T20:08:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 40,579,156 | 1 | true | 0 | 1 | No, there is nothing specifically available to make the window border look different. Your only choice is to remove the border completely (eg: root.overrideredirect(True)) and draw the border yourself using a canvas and/or images. | 1 | 0 | 0 | I would like to make the frame of my window look like the old windows 95 style.
Right now when I create a window, Tkinter automatically adopts the style of my OS (Windows 10).
Is there a way to change this? | Change tkinter window border style | 1.2 | 0 | 0 | 2,232 |
40,579,173 | 2016-11-13T21:43:00.000 | 1 | 0 | 1 | 0 | python,share,packages,conda,multi-user | 40,600,218 | 1 | true | 0 | 0 | Option 1 is probably the standard/best option. It shouldn't be a problem doing this on a server as long as you have access to the Internet. If you want it in a specific directory, you can specify this with the --prefix (-p) parameter in your call to conda create. | 1 | 1 | 0 | I have 30 users that have Anaconda installed. What is the best way to create shared directories that contain a group of packages (with their dependencies)? Let's say that for some project we need to have a common list of packages on top of the standard Anaconda version. What is the best way to achieve that?
conda list --export > package-list.txt
and reinstall packages from an export file:
conda create -n myenv --file package-list.txt
but each user will need to install these packages on their PC
and it could be an issue to do that on a server for example
Use a channel and put all our packages there, but I didn't find an automatic way to do that and to publish a list of Python packages with their dependencies.
I am not sure that conda env will help, since I want the installation to be done in a specific directory (on a shared disk).
Is there a better option? I have never done that before, so maybe I am not looking in the right direction. The other constraint is that we are using Windows 7. | How to share a group of python module using Conda | 1.2 | 0 | 0 | 495
40,582,281 | 2016-11-14T05:17:00.000 | 4 | 0 | 1 | 0 | python,string | 40,582,316 | 2 | true | 0 | 0 | One liner list comprehension
['%s %s' % (a[:i], a[i:]) for i in range(1, len(a))] | 1 | 3 | 0 | Say I have the string "BigJon".
Is there a way to iterate through and slice it into two different words, like
B igJon, Bi gJon, Big Jon, BigJ on and so on, and then have all these separate pieces in a list? | String slicing, iterations and list question | 1.2 | 0 | 0 | 54
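A quick check of that one-liner with the string from the question:

```python
a = "BigJon"
print(['%s %s' % (a[:i], a[i:]) for i in range(1, len(a))])
# ['B igJon', 'Bi gJon', 'Big Jon', 'BigJ on', 'BigJo n']
```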
40,592,116 | 2016-11-14T15:19:00.000 | 1 | 0 | 1 | 1 | python,virtual-machine,ipc,virtualization | 41,398,029 | 1 | true | 0 | 0 | Host<->VM communication on a Windows host can be implemented in several ways, independently of the hypervisor you are using:
Host-only network - just assign static IPs to the host and the VM, and use the sockets API to transfer your data over the virtual network (a sketch follows this record). Very good for large amounts of data, but requires a little time for configuration.
Virtual COM ports - if you don't want to use the sockets API and want to write data to files (on the Linux VM) / named pipes (on the Windows host). This can be simpler because it requires almost zero configuration, but it will not work very well with large amounts of data.
Choose what fits your needs. | 1 | 0 | 0 | How can I set up a virtualized Ubuntu on real Windows so I can have two apps communicating simple messages between them? The VM can be offline, no internet access. The real system is probably offline too. | Python communicate into VM windows app | 1.2 | 0 | 0 | 688
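A minimal sketch of option 1 with the stdlib socket module (the host-only IP and port are assumptions; the two functions run on different machines):

```python
import socket

HOST_ONLY_IP, PORT = "192.168.56.101", 5000   # the VM's static address

def vm_server():                   # runs on the Linux VM
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST_ONLY_IP, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    print(conn.recv(1024))

def host_client():                 # runs on the Windows host
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect((HOST_ONLY_IP, PORT))
    cli.sendall(b"hello from the host")
```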
40,595,961 | 2016-11-14T19:03:00.000 | 209 | 0 | 1 | 0 | python,themes,spyder | 40,684,400 | 18 | true | 0 | 0 | If you're using Spyder 3, please go to
Tools > Preferences > Syntax Coloring
and select there the dark theme you want to use.
In Spyder 4, a dark theme is used by default. But if you want to select a different theme you can go to
Tools > Preferences > Appearance > Syntax highlighting theme | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 1.2 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 0 | 0 | 1 | 0 | python,themes,spyder | 56,964,742 | 18 | false | 0 | 0 | 1. Click Tools
2. Click Preferences
3. Select Syntax Coloring | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 0 | 0 | 309,596
40,595,961 | 2016-11-14T19:03:00.000 | -5 | 0 | 1 | 0 | python,themes,spyder | 41,008,022 | 18 | false | 0 | 0 | Yes, that's the intuitive answer. Nothing in Spyder is intuitive. Go to Preferences/Editor and select the scheme you want. Then go to Preferences/Syntax Coloring and adjust the colors if you want to.
tcebob | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | -1 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 2 | 0 | 1 | 0 | python,themes,spyder | 46,890,965 | 18 | false | 0 | 0 | I tried the option Tools > Preferences > Syntax coloring > dark spyder,
but it is not working.
You should rather use the path:
Tools > Preferences > Syntax coloring > spyder
and then make the modifications you want so your editor appears the way you like. | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0.022219 | 0 | 0 | 309,596
40,595,961 | 2016-11-14T19:03:00.000 | 1 | 0 | 1 | 0 | python,themes,spyder | 50,761,910 | 18 | false | 0 | 0 | On mine it's Tools --> Preferences --> Editor and "Syntax Color Scheme" dropdown is at the very bottom of the list. | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0.011111 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 2 | 0 | 1 | 0 | python,themes,spyder | 52,020,233 | 18 | false | 0 | 0 | I think some of the people answering this question don't actually try to do what they recommend, because there is something wrong with the way the Mac OS version handles the windows.
When you choose the new color scheme and click OK, the preferences window looks like it closed, but it is still there behind the main spyder window. You need to switch windows with command ~ or move the main spyder window to expose the preferences window. Then you need to click Apply to get the new color scheme. | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0.022219 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 0 | 0 | 1 | 0 | python,themes,spyder | 56,276,123 | 18 | false | 0 | 0 | First click on Preferences (Ctrl+Shift+Alt+P), then click the Syntax Coloring option and change the scheme to "Monokai". Now apply it and you will get the dark scheme. | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 0 | 0 | 309,596
40,595,961 | 2016-11-14T19:03:00.000 | 1 | 0 | 1 | 0 | python,themes,spyder | 58,119,453 | 18 | false | 0 | 0 | I've seen some people recommending installing additional software, but in my opinion the best way is to use the built-in skins; you can find them at:
Tools > Preferences > Syntax Coloring | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0.011111 | 0 | 0 | 309,596 |
40,595,961 | 2016-11-14T19:03:00.000 | 0 | 0 | 1 | 0 | python,themes,spyder | 60,023,463 | 18 | false | 0 | 0 | In Spyder 4.1, you can change background color from:
Tools > Preferences > Appearance > Syntax highlighting scheme | 9 | 106 | 0 | I've just updated Spyder to version 3.1 and I'm having trouble changing the colour scheme to dark. I've been able to change the Python and iPython console's to dark but the option to change the editor to dark is not where I would expect it to be. Could anybody tell me how to change the colour scheme of the Spyder 3.1 editor to dark? | How to change the Spyder editor background to dark? | 0 | 0 | 0 | 309,596 |
40,597,058 | 2016-11-14T20:15:00.000 | 1 | 0 | 0 | 0 | python,selenium | 40,597,145 | 1 | false | 0 | 0 | Try installing it through PyCharm:
File -> Settings -> Project:your_project -> Project Interpreter -> green '+' -> find 'selenium' -> install | 1 | 0 | 0 | I'm using PyCharm Community Edition 2.2 with Python 2.7.
I have installed Selenium WebDriver through the pip install selenium command, but whenever I import the selenium module (from selenium import webdriver) I'm hitting this error: "from selenium import webdriver
ImportError: No module named selenium"
Please help me. | i have getting this error 'from selenium import webdriver ImportError: No module named selenium" even though i have installed selenium module | 0.197375 | 0 | 1 | 1,262
40,602,640 | 2016-11-15T05:28:00.000 | 0 | 0 | 0 | 0 | python,django,mongodb,sqlite,mongoengine | 40,752,618 | 2 | true | 1 | 0 | I found a solution, and it's very simple. If you want your model to use the MongoDB database, just create the model class with Document as its base (or EmbeddedDocument), for example class Magazine(Document):. But if you prefer the default database, just create the class as in the Django documentation, for example class Person(models.Model):. A sketch follows this record. | 1 | 0 | 0 | I'm developing a project in Django, something to manage assets in warehouses. I want to use two databases for this. The first is an sqlite database, which contains all the data about users. The second is a MongoDB database, in which I want to store all the data related to assets. The question is: how do I tell my model classes which database they should use (models responsible for user registration etc. - sqlite; models responsible for managing asset data - MongoDB)? I read about DATABASE_ROUTERS and using Meta classes, but those are solutions for databases supported by Django (or maybe I'm missing something); I don't know if it's good and possible to integrate that with mongoengine.
Thanks for any tip! | Managing databases in Django models, sqlite and mongoengine | 1.2 | 1 | 0 | 1,468 |
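A minimal sketch of the split that answer describes (field names are placeholders):

```python
from django.db import models
from mongoengine import Document, StringField

class Asset(Document):                 # mongoengine -> stored in MongoDB
    name = StringField(required=True)

class Person(models.Model):            # Django ORM -> default sqlite database
    name = models.CharField(max_length=100)
```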
40,602,894 | 2016-11-15T05:49:00.000 | 3 | 0 | 0 | 1 | python,glob,snakemake | 40,606,186 | 2 | false | 0 | 0 | With the upcoming release 3.9.0, you can see the corresponding log files for all output files when invoking snakemake --summary. | 1 | 0 | 0 | Is there a way to programmatically list log files created per rule from within the Snakefile? Will I have to tap into the DAG, and if yes, how?
Background: I'd like to bundle up and remove all created log files (only cluster logs are in a separate folder; some output files have correspondingly named log files). For this I want to be specific and exclude log files that might have been created by the programs that were run and that coincidentally match a log glob.
Are there alternatives, e.g. would parsing shellcmd_tracking files be easier?
Thanks,
Andreas | Access to log files created by snakemake rules | 0.291313 | 0 | 0 | 1,476 |
40,609,078 | 2016-11-15T11:46:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,class,pygame | 40,609,219 | 2 | false | 0 | 0 | Posting the code would be really useful here but based on the information provided I can give the following answer.
Give the property xPosition to your Allies class. This would be updated every frame; then you can simply loop through the AlliedList and check each ally's xPosition to get its x coordinate (a sketch follows this record).
I hope this helps; again, if you post code we can help further! | 1 | 0 | 0 | I have a class called allies and all of the allies join the group AlliedList. How do you check that whole group to see if x < 200, so that you can then change a variable? | Check if any sprites in a list have gone past a certain x coordinate? | 0 | 0 | 0 | 31
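A minimal sketch of that check (names taken from the question; rect.x assumes standard pygame sprites):

```python
def any_past_line(allied_group, x_line=200):
    """True if any sprite in the pygame Group has crossed x_line."""
    return any(ally.rect.x < x_line for ally in allied_group)
```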
40,609,839 | 2016-11-15T12:26:00.000 | 0 | 0 | 0 | 1 | python,airflow | 52,749,395 | 2 | false | 0 | 0 | This means that dag_concurrency is set to a smaller number of concurrent tasks than you are trying to use. There is a file called airflow.cfg where you can change some execution configurations; one of them is dag_concurrency. It is probably set to '6'; increase it to what you need, but make sure it doesn't exceed the parallelism setting, as that may cause problems. | 1 | 2 | 0 | As soon as Airflow starts, none of the dag runs for a particular dag are executed and it reaches the maximum number of active dags. Even after setting the dag run state to success, the scheduler doesn't seem to move the newly scheduled task to the execution state. All the active dag runs remain in the 'running' state. This happens for some dags alone. Can someone please help with this? | aiflow max active dag runs reached | 0 | 0 | 0 | 4,999
40,612,573 | 2016-11-15T14:38:00.000 | 3 | 0 | 0 | 0 | python,wxpython,wx.textctrl | 40,635,172 | 1 | true | 0 | 1 | This is not supported by the wx.TextCtrl out of the box. What you need to understand is that most of the core widgets are actually using the operating system's widgets and not drawing them themselves. So if the native widget doesn't support this sort of thing, then wxPython's core widgets won't either.
You would need to create a custom widget that you draw yourself to get this functionality. Check out the wxPython demo for examples of custom widgets. All of the widgets in AGW are custom, for example. | 1 | 1 | 0 | I want to change the shape of the wx.TextCtrl widget used in wxPython. The default shape is a square box, but I want all the corners to have rounded curves. | How to change shape of wx.TextCtrl widget in wxpython? | 1.2 | 0 | 0 | 246