Q_Id int64 (337 to 49.3M) | CreationDate stringlengths (23 to 23) | Users Score int64 (-42 to 1.15k) | Other int64 (0 to 1) | Python Basics and Environment int64 (0 to 1) | System Administration and DevOps int64 (0 to 1) | Tags stringlengths (6 to 105) | A_Id int64 (518 to 72.5M) | AnswerCount int64 (1 to 64) | is_accepted bool (2 classes) | Web Development int64 (0 to 1) | GUI and Desktop Applications int64 (0 to 1) | Answer stringlengths (6 to 11.6k) | Available Count int64 (1 to 31) | Q_Score int64 (0 to 6.79k) | Data Science and Machine Learning int64 (0 to 1) | Question stringlengths (15 to 29k) | Title stringlengths (11 to 150) | Score float64 (-1 to 1.2) | Database and SQL int64 (0 to 1) | Networking and APIs int64 (0 to 1) | ViewCount int64 (8 to 6.81M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
41,824,077 | 2017-01-24T09:07:00.000 | 1 | 0 | 1 | 0 | python-3.x,path,pycharm | 41,824,866 | 2 | false | 0 | 0 | I use various methods in my Python scripts.
set the working directory as first step of your code using os.chdir(some_existing_path)
This means all your other paths should be referenced relative to this, as you hard-set the path. You just need to make sure it works from any location, and specifically from your IDE. Obviously, another os.chdir() would change the working directory, and os.getcwd() would then return the new working directory.
set the working directory to __file__ by using os.chdir(os.path.dirname(__file__))
This is actually what I use most, as it is quite reliable, and I then reference all further paths and file operations relative to this. Alternatively, you can simply use os.path.dirname(__file__) in your code without actually changing the working directory.
get the working directory using os.getcwd()
And reference all path and file operations relative to this, knowing it will change based on how the script is launched. Note: do NOT assume that this returns the location of your script; it returns the working directory of the shell!
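A minimal sketch of the second approach above (anchoring everything to the script's own directory); the graph1.txt file name is just an example:

```python
import os

# Directory containing this script, independent of where it was launched from.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

# Option A: change the working directory once, up front.
os.chdir(SCRIPT_DIR)

# Option B: leave the working directory alone and build absolute paths instead.
data_path = os.path.join(SCRIPT_DIR, "graph1.txt")  # example file name
print("working directory:", os.getcwd())
print("data file path:", data_path)
```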
[EDIT based on new information]
By "interactive session" I mean being able to run each line
individually in a Python/IPython Console
When running interactively line-by-line in a Python console, __file__ is not defined; after all, you are not executing a file. Hence you cannot use os.path.dirname(__file__); you will have to use something like os.chdir(some_known_existing_dir) to anchor your paths. As a programmer you need to be very aware of the working directory and changes to it, and your code should reflect that.
By "running from command line" I mean creating a script my_script.py
and running python path_to_myscript/my_script.py (I actually press the
Run button at PyCharm, but I think it's the same).
In this case (both executing a .py from the command line and running it in your IDE), __file__ is populated, hence you can use os.path.dirname(__file__).
HTH | 1 | 1 | 0 | When running an interactive session, PyCharm thinks of os.getcwd() as my project's directory. However, when I run my script from the command line, PyCharm thinks of os.getcwd() as the directory of the script.
Is there a good workaround for this? Here is what I tried and did not like:
going to Run/Edit Configurations and changing the working directory manually. I did not like this solution, because I will have to do it for every script that I run.
having one line in my code that "fixes" the path for the purposes of interactive sessions and commenting it out before running from command line. This works, but feels wrong.
Is there a way to do this or is it just the way it is supposed to be? Maybe I shouldn't be trying to run random scripts within my project?
Any insight would be greatly appreciated.
Clarification:
By "interactive session" I mean being able to run each line individually in a Python/IPython Console
By "running from command line" I mean creating a script my_script.py and running python path_to_myscript/my_script.py (I actually press the Run button at PyCharm, but I think it's the same).
Other facts that might prove worth mentioning:
I have created a PyCharm project. This contains (among other things) the package Graphs, which contains the module Graph and some .txt files. When I do something within my Graph module (e.g. read a graph from a file), I like to test that things worked as expected. I do this by running a selection of lines (interactively). To read a .txt file, I have to go (using os.path.join()) from the current working directory (the project directory, ...\\project_name) to the module's directory ...\\project_name\\Graphs, where the file is located. However, when I run the whole script via the command line, the command reading the .txt file raises an error, complaining that no file was found. By looking at the name of the file that was not found, I see that the full file name is something like this:
...\\project_name\\Graphs\\Graphs\\graph1.txt
It seems that this time the current working directory is ...\\project_name\\Graphs\\, and my os.path.join() command actually spoils it. | PyCharm project path different from interactive session path | 0.099668 | 0 | 0 | 773 |
41,827,464 | 2017-01-24T11:47:00.000 | 0 | 0 | 1 | 0 | python,pip,centos7,pycparser | 43,140,645 | 3 | false | 0 | 0 | same solution worked for me
pip install setuptools==33.1.1
and then sudo pip install -r requirements.txt | 1 | 8 | 0 | I am seeing the following error while setting up pycparser on CentOS 7 via pip:
/usr/bin/python2 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-PMzCYU/pycparser/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-0bpBrX-record/install-record.txt --single-version-externally-managed --compile
Traceback (most recent call last):
File "", line 1, in init.py", line 12, in
import setuptools.version
File "/usr/lib/python2.7/site-packages/setuptools/version.py", line 1, in
import pkg_resources
File "/usr/lib/python2.7/site-packages/pkg_resources/init.py", line 72, in
import packaging.requirements
File "/usr/lib/python2.7/site-packages/packaging/requirements.py", line 59, in
MARKER_EXPR = originalTextFor(MARKER_EXPR())("marker")
TypeError: __call__() takes exactly 2 arguments (1 given) | python pycparser setup error | 0 | 0 | 0 | 6,122
41,831,529 | 2017-01-24T15:06:00.000 | 0 | 0 | 0 | 0 | python,pymc | 41,834,290 | 1 | true | 0 | 0 | There is currently no way to plot them without being saved to disk. I would recommend only plotting a few diagnostic parameters, and specifying plot=False for the others. That would at least cut down on the volume of plots being generated. There probably should be a saveplot argument, however, I agree. | 1 | 0 | 1 | I recently started experimenting with pymc and only just realised that the images being produced by pymc.Matplot.plot, which I use to diagnose whether the MCMC has performed well, are being saved to disk. This results in images appearing wherever I am running my scripts from, and it is time consuming to clear them up. Is there a way to stop figures being saved to disk? I can't see anything clearly in the documentation. | Stop images produced by pymc.Matplot.plot being saved | 1.2 | 0 | 0 | 60 |
41,832,838 | 2017-01-24T16:06:00.000 | 20 | 0 | 0 | 0 | python,python-3.x,pip,python-requests | 41,832,839 | 2 | true | 0 | 0 | Run
sudo python3 -m pip install "requests[security]"
or
sudo python -m pip install "requests[security]"
to fix this issue. | 1 | 11 | 0 | I have been having this error when trying to make web requests to various hosts. After debugging a bit I have found the solution is updating the requests[security] through pip. | Python Error 104, connection reset by peer | 1.2 | 0 | 1 | 30,212 |
41,833,790 | 2017-01-24T16:48:00.000 | 3 | 1 | 0 | 1 | python,oracle,amazon-web-services,lambda,cx-oracle | 41,837,986 | 1 | true | 0 | 0 | If you can limit yourself to English error messages and a restricted set of character sets (which does include Unicode), then you can use the "Basic Lite" version of the instant client. For Linux x64 that is only 31 MB as a zip file. | 1 | 1 | 0 | I must load the Oracle "instant client" libraries as part of my AWS lambda python deployment zip file.
The problem is that several of the essential libraries are huge (libclntsh.so.12.1 is 57 MB, libociei.so is 105 MB), while Amazon only allows deployment zip files under 50 MB.
I have tried: my script cannot connect to Oracle using cx_Oracle without those libraries in my local ORACLE_HOME and LD_LIBRARY_PATH.
How can I get that library into Lambda considering their zip file size limitation? Linux zip just doesn't compress them enough. | AWS python Lambda script that can access Oracle: Driver too big for 50MB limit | 1.2 | 1 | 0 | 846 |
41,833,928 | 2017-01-24T16:54:00.000 | 0 | 0 | 1 | 0 | java,python,eclipse,plugins,pydev | 42,008,120 | 1 | false | 0 | 0 | Take a look at com.python.pydev.refactoring.refactorer.refactorings.renamelocal.RefactoringLocalTestBase.applyRenameRefactoring(RefactoringRequest, boolean) -- used from com.python.pydev.refactoring.refactorer.refactorings.renamelocal.RenameBuiltinTest.testRename3() (you can try to do a debug session there to understand more about how it works). | 1 | 0 | 0 | I'm trying to call the pydev refactoring dialog and put the new/old string for refactoring and the file, but did not find in the tests of pydev source. | How to put strings in Pydev refactoring dialog from eclipse plugin? | 0 | 0 | 0 | 109 |
41,834,141 | 2017-01-24T17:04:00.000 | 0 | 0 | 1 | 0 | python-3.x,ipython | 41,849,001 | 1 | false | 0 | 0 | Shame on me, it was just a typo: the correct module is named sklearn.ensemble. | 1 | 0 | 1 | When I launch ipython -i script_name or load the script with %load, it fails loading sklearn.ensamble.
But it succeeds in loading, and I am able to use it, when I launch ipython alone and then run from sklearn.ensamble import *.
Why? | ipython can't load module when using magic %load, but succeed when loading interactively | 0 | 0 | 0 | 21 |
41,836,727 | 2017-01-24T18:50:00.000 | 0 | 0 | 0 | 0 | python,pandas | 41,836,728 | 1 | true | 0 | 0 | np.inf is treated the same way as np.NaN.
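A small sketch (made-up data) that makes the comparison explicit by replacing infinities with NaN before calling corr():

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, np.inf, 4.0],
                   "b": [2.0, 4.0, 6.0, 8.0]})

# Make the handling of infinities explicit instead of relying on corr()'s default.
cleaned = df.replace([np.inf, -np.inf], np.nan)
print(df.corr())
print(cleaned.corr())
```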
I replaced all the values of np.inf with np.NaN and the results were exactly the same. If there are some subtle differences, please let me know. I was looking for an answer on this and couldn't find one anywhere so I figured I would post this here. | 1 | 1 | 1 | What will happen when I use df.corr()? Will np.inf affect my results somehow? | I have a DataFrame with some values of np.inf. How does .corr() work? | 1.2 | 0 | 0 | 72
41,838,726 | 2017-01-24T20:48:00.000 | 0 | 0 | 0 | 0 | python-2.7,module,scikit-learn,grid-search | 41,839,189 | 1 | false | 0 | 0 | I just found the answer. the 0.18 sklearn has seen a number of updates. you may update your sklearn by typing "conda update scikit-learn" in your windows command line.
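If upgrading is not possible right away, a hedged fallback (assuming an older scikit-learn, where GridSearchCV lived in sklearn.grid_search before 0.18) is:

```python
try:
    # scikit-learn >= 0.18
    from sklearn.model_selection import GridSearchCV
except ImportError:
    # older scikit-learn releases
    from sklearn.grid_search import GridSearchCV
```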
If it still doesn't work, you might want to update your conda/Anaconda as well:
"conda update conda" and "conda update Anaconda" | 1 | 0 | 1 | I try to run this line:
from sklearn.model_selection import GridSearchCV
but I get an ImportError (i.e. No module named model_selection) although I have installed sklearn and I can import other packages. Here is my Python version:
2.7.12 |Anaconda 4.2.0 (64-bit)| (default, Jun 29 2016, 11:07:13) [MSC v.1500 64 bit (AMD64)]
is there a way to use "sklearn.model_selection" on my current version? | can't install model_selection on python 2.7.12 | 0 | 0 | 0 | 238 |
41,838,779 | 2017-01-24T20:52:00.000 | 4 | 0 | 1 | 0 | python,python-2.7,python-3.x,dsx,data-science-experience | 41,845,926 | 2 | true | 0 | 0 | While the method presented in another answer (look for specific environment variables) works today, it may stop working in the future. This is not an official API that DSX exposes. It will obviously also not work if somebody decides to set these environment variables on their non-DSX system.
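For illustration only, the kind of check the other answer describes would look something like the sketch below; the environment variable name here is purely hypothetical and is not an official DSX API:

```python
import os

# "DSX_PROJECT_DIR" is a hypothetical marker variable, not a documented DSX API.
# Substitute whatever variable you actually observe in your DSX environment.
def running_on_dsx(marker="DSX_PROJECT_DIR"):
    return marker in os.environ

if running_on_dsx():
    print("Assuming a DSX environment")
else:
    print("Assuming a local Jupyter environment")
```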
My take on this is that "No, there is no way to reliably determine whether the notebook is running on DSX".
In general (in my opinion), notebooks are not really designed as artifacts that you can arbitrarily deploy anywhere; there will always need to be someone wearing the "application developer" hat to transform them - how to do that is something you could put into a markdown cell inside the notebook. | 1 | 1 | 0 | How can I programmatically determine if the Python code in my notebook is running under DSX?
I'd like to be able to do different things under a local Jupyter notebook vs. DSX. | Programmatically determine if running in DSX | 1.2 | 0 | 0 | 106 |
41,843,266 | 2017-01-25T04:14:00.000 | 73 | 0 | 1 | 0 | windows,visual-studio,pycrypto,python-3.6 | 41,843,310 | 9 | true | 0 | 0 | The file include\pyport.h in Python installation directory does not have #include < stdint.h > anymore. This leaves intmax_t undefined.
A workaround for Microsoft VC compiler is to force include stdint.h via OS environment variable CL:
Open command prompt
Set up the VC environment by running vcvars*.bat (choose the file name depending on VC version and architecture)
set CL=-FI"Full-Path\stdint.h" (use real value for Full-Path for the environment)
pip install pycrypto | 2 | 42 | 0 | pip install pycrypto works fine with python3.5.2 but fails with python3.6 with the following error:
inttypes.h(26): error C2061: syntax error: identifier 'intmax_t' | Microsoft Windows Python-3.6 PyCrypto installation error | 1.2 | 0 | 0 | 54,800 |
41,843,266 | 2017-01-25T04:14:00.000 | 2 | 0 | 1 | 0 | windows,visual-studio,pycrypto,python-3.6 | 60,353,443 | 9 | false | 0 | 0 | Uninstall your current Python version
Install Python for amd64 architecture
Follow the other accepted solutions:
open "x86_x64 Cross-Tools Command Prompt for VS 2017"
Add the new environment variable for your Visual Studio MSVC install path
set CL=-FI"%VCINSTALLDIR%Tools\MSVC\14.11.25503\include\stdint.h"
pip install pycrypto | 2 | 42 | 0 | pip install pycrypto works fine with python3.5.2 but fails with python3.6 with the following error:
inttypes.h(26): error C2061: syntax error: identifier 'intmax_t' | Microsoft Windows Python-3.6 PyCrypto installation error | 0.044415 | 0 | 0 | 54,800 |
41,845,657 | 2017-01-25T07:23:00.000 | 5 | 0 | 1 | 0 | python,session,flask,shelve | 41,957,026 | 2 | true | 1 | 0 | I recommend take a look at NoSQL storage engines like Memcached or Redis.
They give you several advantages:
They can live on a separate machine, so if you need to scale your app you'll be able to do it.
Extra interface to check what is stored in them.
The ability to flush everything if you really need to.
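A minimal sketch of keeping per-user privileges in Redis (assumes a local Redis server and the redis-py package; the key naming is illustrative):

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def save_privileges(user_id, privileges, ttl=3600):
    # Store the privilege dict under a per-user key that expires with the session.
    r.setex("privileges:%s" % user_id, ttl, json.dumps(privileges))

def load_privileges(user_id):
    raw = r.get("privileges:%s" % user_id)
    return json.loads(raw) if raw else None

save_privileges("alice", {"modules": ["billing", "reports"], "role": "admin"})
print(load_privileges("alice"))
```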
You can connect other apps to the these programs, so you can share sessions across several apps (however it's not recommended for big fast developing apps and keeping complicated structures). | 1 | 3 | 0 | Writing a web based flask api application with several modules. I would like to incorporate different permissions and privileges for different user logins for the different modules. Query is whether these privileges should be stored as session dictionaries or as shelve values? Which is more efficient and faster? Why would there be a preference of one over the other in this context? | Should we store session privileges in python shelve or as session variables? | 1.2 | 0 | 0 | 226 |
41,846,630 | 2017-01-25T08:26:00.000 | 0 | 0 | 0 | 0 | python,matlab,python-2.7,parameter-passing,language-interoperability | 41,846,775 | 2 | false | 0 | 0 | Depending on what you want to do and your type of data, you could write it to a file and read from it in the other language. You could use numpy.fromfile for that in the python part. | 1 | 0 | 1 | Hello Friends,
I want to pass data between MATLAB and Python. One way would be to use matlab.engine in Python or call Python libraries from MATLAB, but this approach requires a MATLAB 2014 version, unlike mine, which is MATLAB R2011b.
So please suggest a different approach for communicating between Python and MATLAB R2011b.
Thanks in advance | Pass data between MATLAB R2011b and Python (Windows 7) | 0 | 0 | 0 | 110 |
41,850,349 | 2017-01-25T11:22:00.000 | 0 | 0 | 1 | 0 | python,scikit-learn | 41,851,421 | 2 | false | 0 | 0 | I'm not sure if there is a single method of treating class_weight for all the algorithms.
The way Decision Trees (and Forests) deal with this is by modifying the weight of each sample according to its class.
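A minimal sketch of passing explicit class weights to a tree-based estimator (toy data, made-up weights):

```python
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1], [2, 2], [2, 3]]
y = [0, 0, 0, 0, 1, 1]  # class 1 is the minority class

# Give the minority class five times the weight of the majority class.
clf = DecisionTreeClassifier(class_weight={0: 1, 1: 5}, random_state=0)
clf.fit(X, y)
print(clf.predict([[2, 2]]))
```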
You can consider weighting samples as a more general case of oversampling all the minority class samples (using weights you can "oversample" fractions of samples). | 1 | 0 | 1 | I would like to know how scikit-learn put more emphasis on a class when we use the parameter class_weight. Is it an oversampling of the minority sampling ? | How class_weight emphasis a class in in scikit-learn | 0 | 0 | 0 | 420 |
41,850,411 | 2017-01-25T11:25:00.000 | 1 | 0 | 0 | 0 | python,neo4j,authorization,graph-databases,py2neo | 42,622,083 | 2 | true | 0 | 0 | At the moment it is not possible to write procedures for custom roles to implement subgraph access control using Python. It is only possible in Java.
A workaround might be to implement it indirectly in Python by adding properties to nodes and relationships that store their security levels. Given the security level of a user, it might then be possible to use a Python visualization that checks these properties and displays only the nodes and relationships that are in agreement with the user's security level. | 1 | 2 | 0 | I have some employee data in which there are 3 different roles. Let's say CEO, Manager and Developer.
CEO can access the whole graph, managers can only access data of some people (their team) and developers can not access employee data.
How should I assign subgraph access to user roles and implement this using Python?
There are good solutions and comprehensive libraries and documentations but only in Java! | Authorization (subgraph access control) in Neo4j with python driver | 1.2 | 1 | 1 | 304 |
41,856,693 | 2017-01-25T16:28:00.000 | 1 | 0 | 1 | 1 | python,homebrew,uninstallation,removeall | 41,857,222 | 1 | true | 0 | 0 | To remove it there are 2 changes:
remove the /Users/user/anaconda2 directory
change your path to not use any /Users/user/anaconda2 directories.
However I suggest you download Anaconda again and use environments rather than your root folder for everything. Use conda to install packages when possible (most of the time really) and use conda-environments on a per project basis to install packages (Instead of cluttering up your main environment).
This way if you have this problem again you can delete the conda environment and all will be well. | 1 | 1 | 0 | I had Python 2.7 for a few months on my Mac, and after installing one module every other module got corrupted. I tried several hours of different ways to repair it, but nothing worked. Virtualenv also does not work now.
I would like to remove ALL Python modules from my Mac along with Python and reinstall it with Brew (or other recommended tool).
Packages are here: /Users/user/anaconda2/lib/python2.7/site-packages/
How do I do that?
Should I remove this whole folder above or what is the proper way?
(after reinstalling Python with just brew - it did not remove this folder and therefore same problem show up). | Delete ALL Python 2.7 modules and Python from Mac | 1.2 | 0 | 0 | 840 |
41,856,832 | 2017-01-25T16:34:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,operating-system,locks,readerwriterlock | 41,869,149 | 1 | true | 0 | 0 | Writer:
Upload a file W. If this fails, wait and try again.
Upload a file R. If this fails, wait and try again.
Do as many writes as desired.
Remove W.
Remove R.
Reader:
Upload a file R. If this fails, wait and try again.
Check for the existence of a file W. If it exists, remove R and return to step 1.
Do one read. If multiple reads are needed, return to step 2.
Remove R.
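For instance, a minimal sketch of the writer side of this protocol using the standard-library ftplib (host, credentials, and the W/R marker names are illustrative):

```python
import io
import time
from ftplib import FTP, error_perm

def acquire_marker(ftp, name, retry_delay=1.0):
    # Upload an empty marker file; if the server refuses, wait and try again.
    while True:
        try:
            ftp.storbinary("STOR " + name, io.BytesIO(b""))
            return
        except error_perm:
            time.sleep(retry_delay)

ftp = FTP("ftp.example.com")          # illustrative host
ftp.login("user", "password")         # illustrative credentials
acquire_marker(ftp, "W")              # step 1: announce a writer
acquire_marker(ftp, "R")              # step 2: block new readers
# ... perform the writes here ...
ftp.delete("W")
ftp.delete("R")
ftp.quit()
```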
You can use the Python module ftplib (or for SFTP, paramiko) to implement the above operations. | 1 | 0 | 0 | For this question's simplicity, I have two types of computers: type A and type B.
There is one computer of type A, and many of type B.
B is the type of host which can write to and read from the FTP server.
A is a computer which can only read from the FTP server.
As you might already guess, the FTP server is the shared area which needs to be protected by a readers-writers lock solution.
Does anybody know of an existing Python package which handles this scenario? If not, does anybody have an example of how it can be implemented for such a need?
I guess that some locks should be implemented as files on ftp, since we are dealing with processes from different hosts.
Thanks | Reader writer lock with preference to writers | 1.2 | 0 | 0 | 460 |
41,857,126 | 2017-01-25T16:49:00.000 | 0 | 0 | 0 | 1 | python,amazon-web-services,heroku,cron | 50,078,814 | 3 | false | 1 | 0 | Update: AWS does now support Python 3.6. Just select Python 3.6 from the runtime environments when configuring. | 1 | 0 | 0 | Say I have a file "main.py" and I just want it to run at 10 minute intervals, but not on my computer. The only external libraries the file uses are mysql.connector and pip requests.
Things I've tried:
PythonAnywhere - free tier is too limiting (need to connect to external DB)
AWS Lambda - Only supports up to Python 2.7, converted my code but still had issues
Google Cloud Platform + Heroku - can only find tutorials covering deploying applications, I think these could do what I'm looking for but I can't figure out how.
Thanks! | How can I run a simple python script hosted in the cloud on a specific schedule? | 0 | 0 | 0 | 549 |
41,857,884 | 2017-01-25T17:27:00.000 | 1 | 0 | 0 | 0 | python,python-2.7,pdf,base64,imaplib | 41,858,705 | 1 | true | 1 | 0 | email.header.decode_header() was exactly what I needed. Thanks so much!
Added the following lines:
from email.header import decode_header
filename = part.get_filename()
if decode_header(filename)[0][1] is not None:
    filename = str(decode_header(filename)[0][0]).decode(decode_header(filename)[0][1])
filename_zero, fileext = os.path.splitext(filename)
filename = str(var_seq) + "_" + filename_zero + fileext | 1 | 0 | 0 | I have a program that will pull file attachments into a network share. I have this working in both single part and multi-part email messages.
I have recently received a multipart message that is showing as:
Content-Type: application/pdf
Content-Disposition: attachment; filename="=?utf-8?B?SW52b2ljZShzKS5wZGY=?="
Content-Transfer-Encoding: base64
This is failing at the file write due to the filename.
Here is the code where I'm writing the PDF:
fp = open(os.path.join(path, filename), 'wb')
fp.write(part.get_payload(decode=True))
fp.close()
However I think that part is working properly and it's failing on the file write... I'm not sure how to translate that filename into readable text. Here is the code I'm using to determine the filename:
filename = part.get_filename()
filename_zero, fileext = os.path.splitext(filename)
filename = str(var_seq) + "_" + filename_zero + fileext
Any insight into what I'm missing is greatly appreciated. | PDF Attachment Downloader not working with base64 Encoding | 1.2 | 0 | 0 | 405 |
41,862,312 | 2017-01-25T21:51:00.000 | 2 | 0 | 0 | 0 | python,google-cloud-storage,boto,google-developer-tools | 41,862,332 | 2 | false | 0 | 0 | Look into gcs-fuse: Makes like a lot easier since you then can use the GCS as just a standard file system. | 1 | 0 | 0 | I'm trying to figure out how to download a file from google cloud storage bucket.
My use-case is to run a scheduled script which downloads a .csv file once a day and save it to a SQL DB.
I considered doing it using python and the google SDK but got lost with all the options and which one is the right for me.
Could someone explain the difference between the cloud storage client, boto, gsutil, and the google cloud SDK?
Thanks! | Programatically download file from google cloud storage bucket | 0.197375 | 1 | 0 | 736 |
41,863,814 | 2017-01-25T23:51:00.000 | 5 | 0 | 0 | 0 | python,statistics,tensorflow,entropy | 41,864,069 | 7 | false | 0 | 0 | I'm not sure why it's not implemented, but perhaps there is a workaround. The KL divergence is defined as:
KL(prob_a, prob_b) = Sum(prob_a * log(prob_a/prob_b))
The cross entropy H, on the other hand, is defined as:
H(prob_a, prob_b) = -Sum(prob_a * log(prob_b))
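As a quick sanity check of these two definitions in plain NumPy (hypothetical probability rows):

```python
import numpy as np

prob_a = np.array([0.7, 0.2, 0.1])
prob_b = np.array([0.5, 0.3, 0.2])

kl = np.sum(prob_a * np.log(prob_a / prob_b))   # KL(prob_a || prob_b)
h_ab = -np.sum(prob_a * np.log(prob_b))         # cross entropy H(prob_a, prob_b)
h_a = -np.sum(prob_a * np.log(prob_a))          # entropy H(prob_a)
print(kl, h_ab - h_a)                           # identical: KL = H(prob_a, prob_b) - H(prob_a)
```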
So, if you create a variable y = prob_a/prob_b, you could obtain the KL divergence by calling negative H(prob_a, y). In TensorFlow notation, something like:
KL = tf.reduce_mean(-tf.nn.softmax_cross_entropy_with_logits(prob_a, y)) | 1 | 15 | 1 | I have two tensors, prob_a and prob_b with shape [None, 1000], and I want to compute the KL divergence from prob_a to prob_b. Is there a built-in function for this in TensorFlow? I tried using tf.contrib.distributions.kl(prob_a, prob_b), but it gives:
NotImplementedError: No KL(dist_a || dist_b) registered for dist_a type Tensor and dist_b type Tensor
If there is no built-in function, what would be a good workaround? | Is there a built-in KL divergence loss function in TensorFlow? | 0.141893 | 0 | 0 | 17,205 |
41,867,109 | 2017-01-26T04:52:00.000 | 0 | 0 | 1 | 0 | python,coala,coala-bears | 45,793,120 | 1 | false | 0 | 0 | Hey the AnnotationBear yields, HiddenResult which are results meant to be used by other bears, and not be directly viewed by the user. If you are trying to test coala, you should check bears which actually give results, for eg: PyFlakesBear | 1 | 0 | 0 | when i tried to check the bear results for a python file by using
coala --bears AnnotationBear -f add.py --save
and when it asked for the language setting I gave "python"; then, on checking the .coafile, I didn't find any result that AnnotationBear was supposed to give.
so, how to check result? | Checking result by applying bear on file | 0 | 0 | 0 | 49 |
41,868,290 | 2017-01-26T07:01:00.000 | 0 | 0 | 1 | 0 | python,regex | 41,868,352 | 2 | false | 1 | 0 | Your + is at the wrong position; your regexp, as it stands, would demand /John /Adam /Will /Newman, with a trailing space.
r'((/)((\w)+(\s))+)' is a little better; it will accept /John Adam Will, with a trailing space; won't take Newman, because there is nothing to match \s.
r'((/)(\w+(\s\w+)*))' matches what you posted. Note that it is necessary to repeat one of the sequences that match a name, because we want N-1 spaces if there are N words.
(As Ondřej Grover says in comments, you likely have too many unneeded capturing brackets, but I left that alone as it hurts nothing but performance.) | 1 | 0 | 0 | I had a pdf in which names are written after a '/'
Eg: /John Adam Will Newman
I want to extract the names starting with '/',
the code which i wrote is :
names=re.compile(r'((/)((\w)+(\s)))+')
However, it produces just first name of the string "JOHN" and that too two times not the rest of the name. | Incorrect output due to regular expression | 0 | 0 | 0 | 43 |
41,869,435 | 2017-01-26T08:30:00.000 | 1 | 0 | 1 | 0 | python-2.7,contextmenu,python-idle | 41,891,633 | 2 | true | 0 | 0 | I got the solution...
At time of saving python code, there will be two extensions displayed .py and .pyw.I saved my file using .pyw extension i.e. instead of saving as my.py saved as my.pyw. After that again right click on file and there is an option for Edit with IDLE. | 2 | 1 | 0 | Recently I have installed Python 2.7.13 in my Windows 10.
When I try to open my python program by right clicking on it, but I can not find option Edit with IDLE.
I have tried solutions that are given in other stack overflow questions but still not working.
Answers that I have found on other Stack Overflow questions are....
Answer 1
Right Click on file and select default program for .py file **python.exe**
Answer 2
Change in Registry key
But they didn't worked for me. | "Edit with IDLE" not in context menu | 1.2 | 0 | 0 | 477 |
41,869,435 | 2017-01-26T08:30:00.000 | 0 | 0 | 1 | 0 | python-2.7,contextmenu,python-idle | 70,979,522 | 2 | false | 0 | 0 | BOYS. I DID IT. I really don't know anything about coding or whatnot, but I had the same problem. In my case, I had idle 3.10 downloaded, but "edit with idle" only appeared with Idle 2.7. I also had "edit with Idle - 3.5 (32-bit)" but this didn't open.
Anyways, so I went to the registry
Computer\HKEY_CLASSES_ROOT\Python.File\Shell\
...and I saw the 2 folders. So, I made a new folder, or "key" I guess, and titled it "Edit with IDLE 3.10". I'm pretty much following the keys and codes in the first "Edit with Idle" (2.7) key box. So, in this folder, I made the key "command", and located where "pythonw.exe" for 3.10 was. I found it in my appdata folder, for some reason; it wasn't where 2.7 was, as 2.7 was just in my C:drive. Anyways, I copied this entire path and pasted the value for Idle 2.7 into the newly made command folder for 3.10 in my case.
I replaced the location paths of 2.7 where I found the specific file the value put for python with my 3.10 version.
Thus; I changed two parts in the code, separated by its spaces.
"C:\Users...\Python310\pythonw.exe" "C:\Users...\Python310\Lib\idlelib\idle.pyw" -e "%1"
Then, the context menu I created in registry showed up in my folder, and it opened the 3.10 version of IDLE! I hope this helps anyone else who didn't figure it out yet. I'm surprised it worked for me, man. | 2 | 1 | 0 | Recently I have installed Python 2.7.13 in my Windows 10.
When I try to open my python program by right clicking on it, but I can not find option Edit with IDLE.
I have tried solutions that are given in other stack overflow questions but still not working.
Answers that I have found on other Stack Overflow questions are....
Answer 1
Right Click on file and select default program for .py file **python.exe**
Answer 2
Change in Registry key
But they didn't worked for me. | "Edit with IDLE" not in context menu | 0 | 0 | 0 | 477 |
41,870,827 | 2017-01-26T09:55:00.000 | 0 | 0 | 1 | 1 | python,windows,macos,compatibility | 41,870,988 | 2 | false | 0 | 0 | You should use #!/usr/bin/env python as your first line in the script.
It will be applied when you make the script executable and run it like ./script.py | 2 | 0 | 0 | Iv'e written simple python script in the windows version.
written in python 2.7, code compatible to 3.4
runs as script with #! /usr/bin/python
Will it run as is on mac?
Would like to know this before i distribute it to mac users and don't have a mac machine to test it. | Running python script written in windows on mac | 0 | 0 | 0 | 2,469 |
41,870,827 | 2017-01-26T09:55:00.000 | 0 | 0 | 1 | 1 | python,windows,macos,compatibility | 41,871,000 | 2 | false | 0 | 0 | Short answer: It might run.
Long answer: OS compatibility is a tricky issue. When writing code, make sure that it is portable as much as possible. Most of the basic operations in python are portable between OSes. When it comes to file reading, writing, enconding handling etc. stuff might go horribly wrong. Use the provided packages (e.g. import os) to do platform dependent stuff.
In general, there is no way around a test. In many cases, code that runs on one system might not on another depending on hardware configuration etc. p.p.
(I think of multithreading, pyopenCL and the like) | 2 | 0 | 0 | Iv'e written simple python script in the windows version.
written in python 2.7, code compatible to 3.4
runs as script with #! /usr/bin/python
Will it run as is on mac?
Would like to know this before i distribute it to mac users and don't have a mac machine to test it. | Running python script written in windows on mac | 0 | 0 | 0 | 2,469 |
41,870,983 | 2017-01-26T10:04:00.000 | 1 | 0 | 1 | 0 | python,linux,ubuntu | 41,871,906 | 1 | false | 0 | 0 | ls -la /usr/bin/python* should show you all of your python executables. | 1 | 0 | 0 | How to list installed python interpretators?
For example, which python gives me only the current Python interpreter, but I need all of them.
If it matters I using Ubuntu 12.04. | How to list installed python interpretators? | 0.197375 | 0 | 0 | 36 |
41,871,919 | 2017-01-26T10:54:00.000 | 1 | 0 | 1 | 0 | ipython,prompt-toolkit | 41,872,216 | 2 | true | 0 | 0 | On my own Belgian Mac keyboard, fnshift↑ does the job. But I cannot tell whether this also works for other locales. | 2 | 1 | 0 | IPython 5 is a big release. One of its features is real multi-line editing with prompt_toolkit. The up arrow key now moves to the previous input line instead of the previous input command (block of lines).
This is awesome, but when my previous command spans many lines, and I need to reach the command before that, I find myself wishing I could go up one command at a time. Is there a way to do that? The shortcut ctrlp has the exact same behaviour as the up arrow key, so it does not provide a solution. | Go up one command instead of one line in IPython 5 | 1.2 | 0 | 0 | 215 |
41,871,919 | 2017-01-26T10:54:00.000 | 1 | 0 | 1 | 0 | ipython,prompt-toolkit | 45,620,822 | 2 | false | 0 | 0 | The PageUp and PageDown keys do exactly what you want without any chorded hotkeys involved; they work on an entry-oriented basis, rather than the arrow keys' line oriented approach. | 2 | 1 | 0 | IPython 5 is a big release. One of its features is real multi-line editing with prompt_toolkit. The up arrow key now moves to the previous input line instead of the previous input command (block of lines).
This is awesome, but when my previous command spans many lines, and I need to reach the command before that, I find myself wishing I could go up one command at a time. Is there a way to do that? The shortcut ctrlp has the exact same behaviour as the up arrow key, so it does not provide a solution. | Go up one command instead of one line in IPython 5 | 0.099668 | 0 | 0 | 215 |
41,876,074 | 2017-01-26T14:46:00.000 | 0 | 0 | 0 | 0 | python,grid,tiles | 41,876,242 | 1 | false | 0 | 1 | You just divide the tile number by the number of tiles in a row w: row = t//w
To get the column do: col = t%w
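A quick check of these formulas for a grid that is 4 tiles wide, numbered from 0:

```python
w = 4  # tiles per row
for t in range(8):
    row = t // w
    col = t % w
    print(t, row, col)
```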
I assume both row and column start at zero. If not, just add 1 where you need. | 1 | 0 | 0 | Say I have a grid that looks something like this:
0 1 2 3
4 5 6 7
Imagine that we have a w by h grid, where the tiles are numbered starting at 1 in the top left corner. Imagine someone else has stored the values w (for width) and h (for height), that they have read in from a text file. You have access to these stored values, as long as you call them w and h. Write a program to return: the row number of a tile number given by the user. Use integer division, //, which returns only a whole number and truncates any remainder. Start counting rows at row 0.
sum = ((t - 1) // h) + 1 answers the question if the tiles start at 1, but I can't figure it out when the tiles start at 0. | Figuring out a Row of a Tile number in Python | 0 | 0 | 0 | 808 |
41,884,981 | 2017-01-26T23:42:00.000 | 1 | 0 | 1 | 0 | python,twitter,twitter-oauth | 41,887,118 | 4 | true | 0 | 0 | Hope below works
import pprint
users = api.friends()
pprint.pprint([u.name for u in users], width=1) | 1 | 1 | 0 | I am using the following code to print the list of friends in twitter
users = api.friends()
print([u.name for u in users])
I get the following output
['The New York Times', 'BBC Breaking News', 'Oprah Winfrey']
But I want something like this
['The New York Times',
'BBC Breaking News',
'Oprah Winfrey'
] | How to insert a line break within the for loop of a print statement using python | 1.2 | 0 | 0 | 2,997 |
41,886,791 | 2017-01-27T04:04:00.000 | 2 | 0 | 0 | 0 | python,excel,python-3.x,module,xlrd | 41,904,256 | 1 | true | 0 | 0 | xlrd only reads excel files. To write them, look up xlwt, xlutils, xlsxwriter, or openpyxl - all of these packages can write binary files excel can read. Excel can also read csv files, which the csv package (included with Python) can write (and read). | 1 | 1 | 0 | How can I write an excel file(xls/xlsx) using xlrd module alone?
I tried from xlrd import xlsx, but couldn't find anything that will really help me. | How to use xlrd for writing an excel file | 1.2 | 1 | 0 | 4,503 |
41,889,588 | 2017-01-27T08:24:00.000 | 1 | 0 | 1 | 0 | python,user-interface,model-view-controller,interface | 41,889,758 | 1 | true | 0 | 1 | Well maybe have the function in the Core module return some specifier that such a thing has happened (found multiple) along with the given names, then display the choice to the user and call a function in the Core module that returns relevant information about that file.
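A rough sketch of that split, with the core reporting every candidate and the GUI deciding what to ask the user (names are illustrative):

```python
def find_media_files(folder_listing):
    """Core logic: group file names by suffix and report every candidate."""
    groups = {".mp3": [], ".txt": [], ".mov": []}
    for name in folder_listing:
        for suffix in groups:
            if name.endswith(suffix):
                groups[suffix].append(name)
    return groups  # no GUI code here; the caller decides how to resolve ambiguity

# GUI side (illustrative): pop up a combobox whenever a group has several entries.
listing = ["a.mp3", "b.mp3", "notes.txt", "clip.mov"]
for suffix, names in find_media_files(listing).items():
    if len(names) > 1:
        print("ask the user to pick one of:", names)
```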
Bear in mind you do not have to be dogmatic regarding such restrictions, there are some situations where having code in the GUI is much less of a hassle than having to integrate some way of it to work in between modules.
This is where you make a decision how to go about writing the code, bearing in mind how important this feature is to you, how testable/maintainable you need it to be. | 1 | 0 | 0 | Hi I know this is a pretty basic design question. But I don't realy get it....
I write it in Python with PySide, but I think this is more a language unrelated question.
A simplified example what I want to do:
I Have a Gui with a button that opens a file dialog.
In this one I choose a folder.
The code scans the suffixes of the files in the folder and returns the 3 needed ones, let's say .mp3, .txt and .mov, and shows them in the gui.
To this point the seperation should be no problem I would have a Gui class that runs the code of the core class, gets the three files as return values and sets up the gui.
What I am wondering about is what happens when there are more then one files matching the .mp3 suffix. I would want to have a pop up with a combobox to select the one I want to use. But I don't realy get how to implement it without adding gui code to the core class. | clean divide Code and Gui | 1.2 | 0 | 0 | 48 |
41,895,868 | 2017-01-27T14:08:00.000 | 0 | 1 | 1 | 0 | python,python-3.x,ubuntu,importerror,pycrypto | 41,896,166 | 2 | false | 0 | 0 | Is python defined properly on your machine?
Make sure PATH environment variable has python's installation folder in it | 1 | 0 | 0 | I am getting ImportError: No module named 'Crypto' error when trying to run. I have installed pycrypto using pip install pycrypto and updated it also. Everything I have tried to far has been unsuccessful.
Tried:
reinstalling pycrypto,
updating both python and pycrypto
Any suggestions? | Getting ImportError: No module named 'Crypto' after installation | 0 | 0 | 0 | 922 |
41,898,424 | 2017-01-27T16:18:00.000 | 11 | 0 | 1 | 0 | python,python-wheel,software-packaging,python-packaging | 55,031,535 | 3 | false | 0 | 0 | Wheel provides the wheel command in addition to just setup.py bdist_wheel. Use wheel unpack [file.whl] to open the wheel, edit what you will, and then use wheel pack [directory] to put it back together again. | 2 | 6 | 0 | I have a python wheel package, when extracted I find some python code, I'd like to edit this code and re-generate the same .whl package again and test it to see the edits .. How do I do that? | How to edit a wheel package (.whl)? | 1 | 0 | 0 | 8,816 |
41,898,424 | 2017-01-27T16:18:00.000 | 0 | 0 | 1 | 0 | python,python-wheel,software-packaging,python-packaging | 54,548,346 | 3 | false | 0 | 0 | you can open the whl file using 7zip or something alike, track the file you wish to change, open in edit mode, save it, next 7zip will popup a message saying something was modified and if you want the change to be saved, press yes and youre good to go.
remember to backup your original whl before doing it.. | 2 | 6 | 0 | I have a python wheel package, when extracted I find some python code, I'd like to edit this code and re-generate the same .whl package again and test it to see the edits .. How do I do that? | How to edit a wheel package (.whl)? | 0 | 0 | 0 | 8,816 |
41,899,011 | 2017-01-27T16:49:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,scikit-learn,real-time | 41,911,859 | 2 | false | 0 | 0 | With most algorithms training is slow and predicting is fast. Therefore it is better to train offline using training data; and then use the trained model to predict each new case in real time.
Obviously you might decide to train again later if you acquire more/better data. However there is little benefit in retraining after every case. | 2 | 2 | 1 | I have a real time data feed of health patient data that I connect to with python. I want to run some sklearn algorithms over this data feed so that I can predict in real time if someone is going to get sick. Is there a standard way in which one connects real time data to sklearn? I have traditionally had static datasets and never an incoming stream so this is quite new to me. If anyone has sort of some general rules/processes/tools used that would be great. | Real time data using sklearn | 0.197375 | 0 | 0 | 2,016 |
41,899,011 | 2017-01-27T16:49:00.000 | 2 | 0 | 0 | 0 | python,machine-learning,scikit-learn,real-time | 43,380,457 | 2 | false | 0 | 0 | It is feasible to train the model from a static dataset and predict classifications for incoming data with the model. Retraining the model with each new set of patient data not so much. Also breaks the train/test mode of testing a ML model.
Trained models can be saved to file and imported in the code used for real time prediction.
In python scikit learn, this is via the pickle package.
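A minimal sketch of that save/load cycle with pickle and scikit-learn (toy data):

```python
import pickle
from sklearn.linear_model import LogisticRegression

X_train = [[0, 0], [1, 1], [2, 2], [3, 3]]
y_train = [0, 0, 1, 1]

# Train offline, persist the fitted model to disk.
model = LogisticRegression().fit(X_train, y_train)
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# ... later, in the process that scores the live feed ...
with open("model.pkl", "rb") as f:
    loaded = pickle.load(f)
print(loaded.predict([[2, 1]]))
```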
R programming saves to an rda object. saveRDS
yay... my first answering a ML question! | 2 | 2 | 1 | I have a real time data feed of health patient data that I connect to with python. I want to run some sklearn algorithms over this data feed so that I can predict in real time if someone is going to get sick. Is there a standard way in which one connects real time data to sklearn? I have traditionally had static datasets and never an incoming stream so this is quite new to me. If anyone has sort of some general rules/processes/tools used that would be great. | Real time data using sklearn | 0.197375 | 0 | 0 | 2,016 |
41,899,083 | 2017-01-27T16:52:00.000 | 4 | 0 | 0 | 0 | python,django,orm,model | 41,899,169 | 1 | true | 1 | 0 | This is unnecessarily complex.
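For reference, the simpler design implied here (keeping a single Human model with a self-referencing ManyToManyField) might look roughly like this; a sketch only, assuming a standard Django app, with illustrative field names:

```python
from django.db import models

class Human(models.Model):
    name = models.CharField(max_length=100)
    # A self-referencing M2M gives every Human a "knows" relation and an
    # "is_known_by" reverse accessor; join-table rows only exist for humans
    # that actually know someone, so unrelated humans cost nothing.
    knows = models.ManyToManyField(
        "self", symmetrical=False, related_name="is_known_by", blank=True
    )
```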
There is no performance overhead to having a many-to-many relationship. This is represented by an intermediary table in the database; there's no actual field in the humans table. If an item doesn't have any m2m members, then no data is stored. | 1 | 0 | 0 | I'm setting up my Models and I'm trying to avoid using ManyToMany Relationships.
I have this setup:
Model: Human
Some Humans (a small percentage) need to have M2M relationships with other Humans. Let's call this relationship "knows" (reverse relationship called "is_known_by").
To avoid setting a ManyToManyField in Humans, I made a Model FamousHumans.
FamousHumans are a special class of Human and have a OneToOneField(Human)
They also have a ManyToManyField(Humans) to represent the "knows" relationship
Here is my question:
Since Django creates reverse relationships, I assume that all Humans will have a reverse "is_known_by" relationship to FamousHumans, so there is still a M2M relationship. Is there any performance benefit to my setup?
The dataset will be rather large and only a few Humans will need the M2M relationship. My main concern is performance. | Performance impact of reverse relationships in Django | 1.2 | 0 | 0 | 200 |
41,899,930 | 2017-01-27T17:41:00.000 | 0 | 0 | 1 | 0 | python,algorithm | 41,908,820 | 2 | false | 0 | 0 | We can do it in the following manner: first get all the (x,y) tuples (indices) of the matrix A where A[x,y]=1. Let there be k such indices. Now roll a k-sided unbiased dice M times (we can simulate by using function randint(1,k) drawing sample from uniform distribution). If you want samples with replacements (same position of the matrix can be chosen multiple times) then it can be done with M invocations of the function. Otherwise for samples with replacements (without repetitions allowed) you need to keep track of positions already selected and delete those indices from the array before throwing the dive next time. | 1 | 1 | 1 | There is a 0-1 matrix, I need to sample M different entries of 1 value from this matrix. Are there any efficient Python implements for this kind of requirement?
A baseline approach is having M iterations, during each iteration, randomly sample 1, if it is of value 1, then keep it and save its position, otherwise, continue this iteration until find entry with value 1; and continue to next iteration. It seems not a good heuristic at all. | sample entries from a matrix while satisfying a given requirement | 0 | 0 | 0 | 49 |
41,902,587 | 2017-01-27T20:35:00.000 | 1 | 0 | 1 | 0 | python,shell,python-idle | 41,903,870 | 2 | false | 0 | 0 | No, not at present.
Shell currently uses tabs to indent, and tabs are fixed in tk as 8 'spaces'. I believe at least part of the reason is so that follow-up lines are visually indented in spite of the >>> prompt.
I don't like this either. In the future, I would like to move the prompt into a sidebar so that entered code starts flush left, as in the editor, and can use the same user-set indent as in the editor. | 1 | 0 | 0 | I'm not asking about Options > Configure IDLE > Fonts/Tabs > Indentation Width and setting that to 2. This only sets the indentation width within a file and not the indentation width for the interactive shell.
What Python IDLE file do I need to change to get 2-space spacing in the interactive shell?
I like to code with 2 spaces instead of 4 so not having the interactive shell also indent by the same spacing slows me down when transferring out of shell and into a file for example. | Is there any way to set Python IDLE's interactive shell indentation width to 2 spaces instead of its default? | 0.099668 | 0 | 0 | 445 |
41,909,158 | 2017-01-28T11:19:00.000 | 0 | 0 | 0 | 1 | python,hadoop | 41,911,541 | 1 | true | 0 | 0 | The only way you can do it is if files B and C are very small so that you can put them into the distcache and fetch them in all your Job. There is no partitioner Job in Hadoop. Partitioners run as part of map jobs, so it's the every mapper that has to read all 3 files A,B and C.
The same applies to the reducer part. If B and C files are very large then you have to examine you data-flow and combine A,B,C in separate jobs. Can't explain how do it unless you share more details about your processing | 1 | 0 | 0 | Say I have 3 input files A, B, C. I want that
the mapper only gets records from A
the partitioner gets input from both the mapper and files B and C
the reducer gets input from the mapper (which has been directed by the partitioner) and file C.
Is this possible to do in Hadoop?
P.S. - I am using Python and Hadoop Streaming | Give specific input files to mapper and reducer hadoop | 1.2 | 0 | 0 | 63 |
41,910,758 | 2017-01-28T14:13:00.000 | 0 | 0 | 0 | 0 | python,c,linux,gcc,shared-libraries | 42,325,888 | 1 | false | 0 | 1 | Your problem has nothing to do with exporting symbols, but with the dynamic linker locating PyType_GenericNew. If libpython3.5m.so is in your library path, and your dynamic linker doesn't find it, running strace ldd ./program will provide a hint as to where it's looking, and a well-placed symbolic link may sort you out. | 1 | 0 | 0 | I am working on embedding some python code into into a kdb database by creating c extensions. To do this I need to compile my code into a shared library and within my kdb q script load the shared library. This issue I am having is when I try to import the numpy module. I get an error saying the PyType_GenericNew is undefined. This occurs at runtime not compile time.
The shared library I am building is linked with libpython3.5m.so but I guess this does not export the symbols golbally. When I made a test executable which imports numpy in the main() it runs fine. I was able to fix this issue in the shared library by calling dlopen("libpython3.5m.so", RTLD_NOW | RTLD_GLOBAL). I don't however really like this solution since it is not very robust. Say for instance I changed my compile options to like against libpython3.4m. Then I would need to change the source code as well so dlopen opens libpython3.4m.
Is there a way to tell gcc when I link using the -lpython3.5m option to export all the symbols globally? This way I can skip the dlopen.
Otherwise is there something in the python c api which can tell me the path to the python shared library of which I am currently using? Ie something like dlopen(Py_GetLibraryPath(), RTLD_NOW | RTLD_GLOBAL) | Get full path to shared object with python c api | 0 | 0 | 0 | 220 |
41,912,691 | 2017-01-28T17:29:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pyqt,pycharm | 41,919,317 | 1 | true | 0 | 1 | Since you do seem to have PyQt installed my guess is that you have multiple Python versions installed (version 3.4 and version 3.6) and that PyQt is only installed under 3.6, but that PyCharm and the Designer are configured to use 3.4.
I don't know how to change the Python interpreter in the Qt Designer as I never use it. However in PyCharm open the settings and look for the "Project Interpeter" tab. There you can configure the default Python interpreter that is used for your project. It even shows the installed packages for that interpreter.
When you run a Python program from PyCharm, the first line in the output shows which Python interpreter was used. This way you can check if it is as expected. If it is still not correct, it can be that you have overridden it in your Run Configuration. Select "Edit Configuration" from the "Run" menu. This will open a dialog with Run Configuration settings for the Python script that you last executed. Check the "Python Interpreter" there and change it if needed. | 1 | 1 | 0 | I am a beginner and have 2 issues, which may be related to each other.
1. I am using PyCharm, and when I put
"from PyQt4 import QtCore, QtGui, uic"
I get a red line under each word (except from & import) saying "unresolved reference".
I have PyQ4/Designer installed (I know it is because I have made a GUI), but when I click 'view code' for the GUI, it says "unable to launch C:/Python34/Lib/site-packages/PyQt4\uic"
Maybe a path issue??? Like I said, I am very new to Python/Qt and really do not know how to check the path and/or change it if it is wrong. I downloaded Python 3.6.0, PyChamr2016.3.2, Qt4.8.7 | PyCharm not recognizing PyQT4 and PyQt4 not allowing me to 'view code' | 1.2 | 0 | 0 | 1,774 |
41,913,345 | 2017-01-28T18:34:00.000 | 1 | 0 | 0 | 0 | python,flask,python-import,file-not-found | 42,589,534 | 9 | false | 1 | 0 | Please follow these steps:
Make sure you have already done with [pip install --editable . ]. where '.' represent the location of directory where your app is installed. e.g(flask_app)
Run python
It will open command line python interpreter
Try to import the flask app
If its there error, you will get the detailed error.
Try to fix that error.
I do ran into the same problem and followed the steps above and found that there is error in running code. Interpreter is showing compile error. | 4 | 21 | 0 | I use export FLASK_APP=flask_app and then do flask run but I get the error:
Error: The file/path provided (flask_app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
However, the file does exist and is even in the present working directory. Using the complete path to the file does not work either. | Flask "Error: The file/path provided does not appear to exist" although the file does exist | 0.022219 | 0 | 0 | 25,530 |
41,913,345 | 2017-01-28T18:34:00.000 | 0 | 0 | 0 | 0 | python,flask,python-import,file-not-found | 47,385,908 | 9 | false | 1 | 0 | The werkzeug version is not suitable for flask. To address this problem, you need to upgrade the werkzeug, use:
$pip install werkzeug --upgrade | 4 | 21 | 0 | I use export FLASK_APP=flask_app and then do flask run but I get the error:
Error: The file/path provided (flask_app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
However, the file does exist and is even in the present working directory. Using the complete path to the file does not work either. | Flask "Error: The file/path provided does not appear to exist" although the file does exist | 0 | 0 | 0 | 25,530 |
41,913,345 | 2017-01-28T18:34:00.000 | 5 | 0 | 0 | 0 | python,flask,python-import,file-not-found | 54,899,607 | 9 | false | 1 | 0 | This could be many reasons.
python2 vs python3 issue,
pip2 install Flask vs pip3 install Flask issue,
and (venv) virtual environment vs local environment issue.
In my case, had to do the following to solve the problem:
python3 -m venv venv
. venv/bin/activate
pip3 install Flask
export FLASK_APP=flask_app
flask run | 4 | 21 | 0 | I use export FLASK_APP=flask_app and then do flask run but I get the error:
Error: The file/path provided (flask_app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
However, the file does exist and is even in the present working directory. Using the complete path to the file does not work either. | Flask "Error: The file/path provided does not appear to exist" although the file does exist | 0.110656 | 0 | 0 | 25,530 |
41,913,345 | 2017-01-28T18:34:00.000 | 7 | 0 | 0 | 0 | python,flask,python-import,file-not-found | 51,108,431 | 9 | false | 1 | 0 | This message will occur if you issue flask run on the command line. Instead use python -m flask run after setting export FLASK_APP and export FLASK_ENV variables. I ran into this issue while following the Flask Tutorial when creating The Application Factory. The instruction does not specify to preface flask run with python -m. | 4 | 21 | 0 | I use export FLASK_APP=flask_app and then do flask run but I get the error:
Error: The file/path provided (flask_app) does not appear to exist. Please verify the path is correct. If app is not on PYTHONPATH, ensure the extension is .py
However, the file does exist and is even in the present working directory. Using the complete path to the file does not work either. | Flask "Error: The file/path provided does not appear to exist" although the file does exist | 1 | 0 | 0 | 25,530 |
41,914,139 | 2017-01-28T19:51:00.000 | -1 | 0 | 1 | 0 | python,anaconda | 52,901,116 | 2 | false | 0 | 0 | One might also consider conda update anaconda. The anaconda meta-package links together certain versions of packages that Continuum Analytics has figured out all play nice together. | 1 | 112 | 0 | How do I reset the root environment of anaconda? There has to be a simple conda reset command that does this.
I don't want to reinstall anaconda all over again. I have other virtualenvs that I don't want to overwrite and that will happen if I install anaconda again. | How to reset anaconda root environment | -0.099668 | 0 | 0 | 130,558 |
41,916,404 | 2017-01-29T00:29:00.000 | 1 | 0 | 0 | 0 | python,postgresql,memory,redis,memcached | 41,916,446 | 1 | true | 0 | 0 | I've had tremendous luck with MySQL and SQLAlchemy. 50k writes per day is nothing. I write my logs to it, I log my threads (think about that, I write logs to it and log each thread) and I process 2.5 million records per day, each generating about 100 logs each. | 1 | 0 | 0 | Hi Stack Overflow community, making some architectural decisions & trying to figure out the best strategy to store locations of 50k users who are moving around, in an environment where we care about read & write speed a lot, but don't mind occasionally losing data.
Should one
use an in-memory datastore like Redis or Memcached, or
use Postgres, with an index on the user_id so that it's fast to insert &
remove, or
use the filesystem directly, have a file for each
user_id, and write to it or read from it to store new locations, or
just store the locations in memory, in a Python program which
maintains an ordered list of (user_id, location) tuples
What are the advantages/ disadvantages of each? | Best database to store locations of users as they move, priority given to read & write speed? | 1.2 | 1 | 0 | 180 |
41,916,497 | 2017-01-29T00:44:00.000 | -1 | 0 | 1 | 0 | python,vim,python-mode | 41,919,557 | 1 | false | 0 | 0 | Append # noqa to the def-line of your function. | 1 | 0 | 0 | Title pretty much says it, but just to be sure:
I'm not looking for a way of turning off the mccabe check for all callables (functions, methods), just for specific callables I've decided have a good reason to be complex (like a merge sort).
Thanks! | How can I turn off vim python-mode's mccabe completely check for just one function? | -0.197375 | 0 | 0 | 317 |
41,917,716 | 2017-01-29T04:52:00.000 | 2 | 0 | 1 | 0 | python,type-hinting,mypy | 41,925,579 | 1 | false | 0 | 0 | I was missing the file <my-package>/__init__.py, so technically wasn't actually a Python package. It did have an __main__.py file, which was why the command python -m <my-package> still worked. | 1 | 4 | 0 | I am in the directory containing my python package, and running mypy -p <package-name>, but it just errors out with "Can't find package".
What am I doing wrong? | Why isn't mypy finding my package? | 0.379949 | 0 | 0 | 2,644 |
41,918,099 | 2017-01-29T06:00:00.000 | 0 | 0 | 1 | 0 | python,influxdb,influxdb-python | 41,990,301 | 1 | false | 0 | 0 | So there didn't seem to be a way of doing it (and no answers here). But I solved the problem by performing a query first and, if no record is found, performing an insert.
Basically I had to make the scripts figure it out. | 1 | 1 | 0 | I know influx is for measurement type data. But I'm also using it for annotations on certain events.
I have scripts that run every minute, so it would be difficult for them to realise an event has already happened. Is there something I can do on the insert so that it only inserts if it is a new record, rather than every time? | Python to influxdb - insert if no record | 0 | 0 | 0 | 202
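A rough sketch of that query-first, insert-if-missing workaround using the influxdb-python client; the measurement, tag and field names are invented for illustration:

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="events")

def insert_if_new(event_name, timestamp):
    # Check whether this annotation already exists at this timestamp.
    q = "SELECT * FROM annotations WHERE \"name\" = '{}' AND time = '{}'".format(
        event_name, timestamp)
    if list(client.query(q).get_points()):
        return False                      # already recorded, skip the insert
    client.write_points([{
        "measurement": "annotations",
        "time": timestamp,
        "tags": {"name": event_name},
        "fields": {"value": 1},
    }])
    return True

insert_if_new("deploy-finished", "2017-01-29T06:00:00Z")
```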
41,918,526 | 2017-01-29T07:15:00.000 | 2 | 1 | 1 | 0 | python-2.7,localization,internationalization,translation,globalization | 41,918,548 | 2 | false | 0 | 0 | It simply depends on who the debug info is for.
If it is for devs only, any dev should know English, I think.
If it is for end users of a multi-language app, it might be worth the translation. But what are the end users going to do with the debug info anyway?
In case of multicultural large project, it might be useful, but my experience, we always agree on a common dev language and that includes debug info. | 2 | 0 | 0 | I know. Maybe this question isn't in its correspondig area, but as long it's linked with programming (In this case, python2.7), it seems quite logical to me to post it here...
In fact, That's the main question.
Should it be necessary to translate debug info to other languages?
It's quite a trivial question, but it's something I've faced recently, and I don't know if I should do it or not.
P.S: By "Debug Info" I refer to text like "[TimeStamp] Handshake completed!" or "[TimeStamp] Download progress: %64" | Does debug info deserves to be translated? | 0.197375 | 0 | 0 | 61 |
41,918,526 | 2017-01-29T07:15:00.000 | 2 | 1 | 1 | 0 | python-2.7,localization,internationalization,translation,globalization | 41,926,272 | 2 | true | 0 | 0 | I would base the answer on the following simple question: Is the information to be interpreted by the end user (→ translate) or by the developer (→ don't translate).
Normally, debug messages fall squarely into the second category (if they don't, it might be worth looking at the UI design) – but that's for you to decide.
Even if it is the end user who will be expected to relay debug messages to the developers, I would refrain from translating them as long as the user is not expected to interpret and act based on the content of the messages. This will simplify both the localisation (and localisation update) process and, perhaps more importantly, the interpretation of user-submitted logs. | 2 | 0 | 0 | I know. Maybe this question isn't in its correspondig area, but as long it's linked with programming (In this case, python2.7), it seems quite logical to me to post it here...
In fact, That's the main question.
Should it be necessary to translate debug info to other languages?
It's quite a trivial question, but it's something I've faced recently, and I don't know if I should do it or not.
P.S: By "Debug Info" I refer to text like "[TimeStamp] Handshake completed!" or "[TimeStamp] Download progress: %64" | Does debug info deserves to be translated? | 1.2 | 0 | 0 | 61 |
41,918,757 | 2017-01-29T07:53:00.000 | 4 | 0 | 1 | 0 | ios,python-3.x,pyqt5 | 41,990,062 | 1 | true | 0 | 1 | We'll, I'm answering my own question now that I've had some success with pyqtdeploy and iOS. I got to the stage of signing the app but don't have a developer certificate at the moment so it of course failed. I have not yet deployed to the iphone simulator because pyqtdeploy does not prepare the correct architecture when deploying to the iPhone simulator.
As a beginner with pyqtdeploy the tutorial was an inadequate starting point. I had to skip to the chapter about building the sysroot. So I would say this is mostly a documentation issue. | 1 | 5 | 0 | I have been using PyQt for years and would love to be able to use for an iOS app now that both apparently support it. However, I have never had any luck with pyqtdeploy. The tutorial is hard to follow and the build errors difficult to read.
Has anyone had any success with this? Or possibly with another PyQt5 deployment method for iOS?
Thank you!
EDIT
I installed everything with homebrew: [email protected], [email protected] [email protected]; and pyqtdeploy-1.3.1 installed with pip3. I also tried the pyqtdeploy-1.3.2.dev1612281206 snapshot installed from source. I ran into so many problems that I wasn't interested in getting into troubleshooting a specific problem. The wording of the tutorial is just difficult to follow, it's hard to tell which file he is talking about and in what directory, and where the SYSROOT variable should point, where the qmake symlinks go, etc. There are also lots of build errors for missing files which I was unable to track down, for example "/$SYSROOT/lib/python3.5/_bootlocale.py", or some arc file which I can't pull up right now. It also seems to top out at python3.5 and doesn't work with python3.6, which is all that homebrew offers right now. It just seems like such a mess that I would simply ask if anyone has actually had success with it and start from there. | Has anyone had success deploying pyqt to iOS with pyqtdeploy? | 1.2 | 0 | 0 | 5,869
41,918,828 | 2017-01-29T08:02:00.000 | 1 | 0 | 0 | 0 | java,python-3.x,selenium,memory-management | 44,546,508 | 3 | false | 1 | 0 | Don't forget driver.close() in your code; if you don't close your driver, you will have a lot of instances of Chrome. | 1 | 3 | 0 | I'm using selenium on python 3.5 with chrome webdriver on an ubuntu vps, and when I run a very basic script (navigate to site, enter login fields, click), memory usage goes up by ~400mb, and cpu usage goes up to 100%. Are there any things I can do to lower this, or if not, are there any alternatives?
I'm testing out selenium in python but I plan to do a project with it in java, where memory usage is a critical factor for me, so the same question applies for java as well. | Selenium using too much memory | 0.066568 | 0 | 1 | 7,993 |
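A small illustration of the clean-up advice: release the browser when the script finishes (driver.close() closes the current window, driver.quit() ends the whole session and the chromedriver process). The locators use the Selenium 3 style API current at the time, and the URL/field names are placeholders:

```python
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element_by_name("username").send_keys("user")
    driver.find_element_by_name("password").send_keys("secret")
    driver.find_element_by_name("submit").click()
finally:
    driver.quit()   # always executed, so no Chrome instances are left behind
```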
41,919,127 | 2017-01-29T08:50:00.000 | 0 | 0 | 1 | 0 | python-3.x,pygame | 41,920,172 | 2 | false | 0 | 1 | Installed pygame "pygame-1.9.3-cp35-cp35m-win32.whl" using 'pip' resolved this. Pip installer comes with python 3 and runs with windows command prompt(not in python shell). | 1 | 0 | 0 | I have installed Python 3.5.2 (32-bit) and Python 3.3 pygame -1.9.2a0 on Windows 10. When I run IDLE (python shell) and try to import pygame i get 'No module named pygame' error. I have found sys.path in the shell but pygame is not there. How can I set the variable for pygame OR how I can resolve this problem. | Pygame No module named pygame error in windows 10 | 0 | 0 | 0 | 1,603 |
41,920,673 | 2017-01-29T12:03:00.000 | 0 | 0 | 1 | 0 | python,string,python-3.x,input,line | 41,920,859 | 2 | false | 0 | 0 | To continue processing of the same command to the next line, add a backslash ('\') before a '\n' | 1 | 1 | 0 | When I am entering a string input that contains line breaks, Python thinks that I have pressed Enter and continues to the next commands.
How can I put \n when Enter is pressed, to send the rest to an extra line? | Python 3.5 changing line when there is an Enter in a string | 0 | 0 | 0 | 655 |
41,925,527 | 2017-01-29T20:21:00.000 | 0 | 0 | 0 | 0 | python,node.js,multithreading,sockets,zeromq | 42,099,793 | 1 | false | 0 | 0 | There's no obligation to use a single socket for the two way comms. Two is perfectly fine.
This means you can have PUB/SUB to broadcast from your NodeJS to your Python code. That's the easy part. Then have a separate PUSH/PULL socket back the other way - the Python does the pushing, the NodeJS does the pulling. One PUSH socket per Python thread, and just one PULL socket in the NodeJS (it will pull from any of the push sockets). Thus whenever one of the Python threads wants to send something to the NodeJS, it simply sends it through the PUSH socket.
AFAIK the NodeJS can 'bind' its PULL socket, and the Python can 'connect' its PUSH socket, which is something you can do if one wants to feel that the NodeJS is the 'server'. Though this is unnecessary - either end can bind, so long as the other end connects. Remember though that ZeroMQ is Actor model programming; there are no clients or servers, there are just actors. 'Bind' and 'connect' are only mentioned at all because it's all implemented on top of tcp (or similar), and it's the tcp transport that has to be told who's binding and connecting.
Have each thread responsible for its own socket. Though given that Python threads aren't real threads you're not going to get a speed-up through having them (unless they've gone and got rid of the global interpretter lock since I last looked). The ZeroMQ context itself sets up a thread(s) which marshalls all the actual message transfers in the background, so the IO is already significantly overlapped. | 1 | 0 | 0 | I'm learning about the ZeroMQ patterns, and I need to implement the following:
NodeJS will send messages to many python threads, but it doesn't need to wait for the answers synchronously, they can come in any order. I know that the publish/subscribe pattern solves it in one way: it can send to many, but how do the python workers send the reply back?
Also, in order for the python threads to receive the message, which is the better design: the python process receives the message and sends to the appropriate thread (don't know how to do it), or each thread is responsible to receive its own messages? | ZeroMQ: publish to many, receive replys in any order | 0 | 0 | 0 | 68 |
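A minimal pyzmq sketch of the Python side of the pattern described in the first answer - each worker thread subscribes to the broadcast and pushes its result back on its own PUSH socket. Ports and message contents are arbitrary examples:

```python
import threading
import zmq

def worker(worker_id):
    ctx = zmq.Context.instance()

    sub = ctx.socket(zmq.SUB)            # receives broadcasts from the NodeJS PUB socket
    sub.connect("tcp://localhost:5555")
    sub.setsockopt(zmq.SUBSCRIBE, b"")   # subscribe to everything

    push = ctx.socket(zmq.PUSH)          # sends replies to the NodeJS PULL socket
    push.connect("tcp://localhost:5556")

    while True:
        msg = sub.recv_json()
        push.send_json({"worker": worker_id, "answer": msg.get("x", 0) * 2})

for i in range(4):
    t = threading.Thread(target=worker, args=(i,))
    t.daemon = True
    t.start()

threading.Event().wait()                 # keep the main thread alive for the workers
```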
41,926,293 | 2017-01-29T21:43:00.000 | 0 | 1 | 1 | 1 | python,windows,raspberry-pi3 | 41,926,309 | 3 | false | 0 | 0 | Yes! Python code is mostly platform independent. Only some specific libs must be compiled on the machine. These should be installed using pip (if needed). More info on Google. | 2 | 0 | 0 | I would like to use my Raspberry Pi for some programming. (never done it before, I want to get into Python.) If I can transfer my programs to my Windows 8.1 computer and run them there also, that would be perfect. Can I do that? Thanks! | Will Python programs created on a Raspberry Pi running Raspbian work on a Windows 8.1 machine? | 0 | 0 | 0 | 34
41,926,293 | 2017-01-29T21:43:00.000 | 0 | 1 | 1 | 1 | python,windows,raspberry-pi3 | 41,927,152 | 3 | false | 0 | 0 | Short answer: mostly yes, but it depends.
Obviously, the Raspberry Pi specific libraries for controlling its peripherals won't work on ms-windows.
Your Pi is probably running a Linux distribution that has package management and comes with a functioning toolchain. That means that installing (python) packages and libraries will be a breeze. Tools like pip and setup.py scripts will mostly Just Work.
That is not necessarily the case on ms-windows.
Installing python libraries that contain extensions (compiled code) or require external shared libraries is a frustrating experience for technical reasons pertaining to the microsoft toolchain. On that OS it is generally easier to use a python distribution like Anaconda that has its own package manager, and comes with packages for most popular libraries.
Furthermore, if you look into the documentation for Python's standard library you will see that sometimes a function is only available on UNIX or only on ms-windows. And due to the nature of how ms-windows creates new processes, there are some gotchas when you are using the multiprocessing module.
It would be a good idea to use the same Python version on both platforms. Currently that would be preferably 3.6 or 3.5. | 2 | 0 | 0 | I would like to use my Raspberry Pi for some programming. (never done it before, I want to get into Python.) If I can transfer my programs yo my Windows 8.1 computer and run them there also, that would be perfect. Can I do that? Thanks! | Will Python programs created on a Raspberry Pi running Raspbian work on a Windows 8.1 machine? | 0 | 0 | 0 | 34 |
41,926,321 | 2017-01-29T21:46:00.000 | 0 | 0 | 1 | 0 | python,python-idle | 41,982,964 | 1 | true | 0 | 0 | I am not aware that IDLE ever didn't restart when running an editor file, so that would have to have been several years ago. I will think about it as a new feature though.
EDIT: Added in June 2019: On the editor Run menu, Run... Customized opens a dialog with [X] Restart. Uncheck that box and the restart is skipped.
END EDIT
In the meanwhile, you can do this for the specific scenario you gave. Load env1.py into an editor window and run it. When >>> appears, enter or paste the def statement for f7 and run it. (Paste after loading the file with f7 and copy.) Test by calling f7. To edit the definition of f7, recall it to the current >>> line. Either click on the previous definition and hit Enter or use the history keyboard shortcuts (for me on Windows, Alt-P for Previous, Alt-N for Next). In either case, edit and re-run. Do the same with test statements. I recall and edit statements routinely. | 1 | 1 | 0 | It appears that, in the past, IDLE did not restart (clean the environment) when you ran a script (module). Today, however, this is the case. But for prototyping I would like the environment (assigned variables, imported modules, functions, ...) to survive running different modules (files).
Example: I am working on a function, let's call it f7(), that requires a certain environment. The environment is built in another script (file), say, env1.py. After env1.py has been run, I can built on all imported modules, defined functions and assigned variables, when working at the command line of IDLE. But I cannot run another file, where my f7() resides! I would have to redefine f7() at the interpreter's command line. Which I of course do not do, because f7() is very lengthy. The only thing that remains is to include f7() in env1.py. And restart it after every change to f7(). As a consequence, I have to wait each time until env1.py has finished. Which is a waste of time, because every time it runs, it does the same. I only change f7()...
Can I tell IDLE not to restart (clean environment) each time I run a module (file) in IDLE? If not, what alternatives to IDLE are capable of something like this??
It seems IDLE behaves the same on Windows, Ubuntu, Raspbian. I am using Python 3.X on each of these systems. | Run a module in IDLE (Python 3.4) without Restart | 1.2 | 0 | 0 | 1,950 |
41,927,996 | 2017-01-30T01:45:00.000 | -1 | 0 | 0 | 0 | python,django,sqlite | 41,928,825 | 2 | false | 1 | 0 | Each Django model is a class which you import in your app to be able to work with them. To connect models together you can use foreign keys to define relationships, i.e. your Page class will have a foreign key from Book. To store lists in a field, one of the ways of doing it is to convert a list to string using json module and define the field as a text field. json.dumps converts the list to a string, json.loads converts the string back to a list. Or, if you are talking about other "lists" in your question, then maybe all you need is just django's basic queryset that you get with model.objects.get(). Queryset is a list of rows from a table. | 1 | 1 | 0 | I am currently trying to implement a book structure in Django in the model.
The structure is as follows
Book Class:
title
pages (this is an array of page objects)
bibliography (a dictionary of titles and links)
Page Class:
title
sections (an array of section objects)
images (array of image urls)
Section Class:
title:
text:
images (array of image urls)
videos (array of video urls)
I am pretty new to Django and SQL structuring. What my question specifically is, what would be the best method in order to make a db with books where each entry has the components listed above? I understand that the best method would be to have a table of books where each entry has a one to many relationship to pages which in turn has a one to many relationship with sections. But I am unclear on connecting Django models together and how I can enable lists of objects (importantly these lists have to be dynamic). | Book Structure in Django | -0.099668 | 0 | 0 | 296 |
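A minimal models.py sketch of the layout the answer suggests - ForeignKeys for the one-to-many links and text fields holding JSON-encoded lists for the URL collections. Field names and defaults are only illustrative:

```python
import json
from django.db import models

class Book(models.Model):
    title = models.CharField(max_length=200)
    bibliography = models.TextField(default="{}")   # JSON dict of title -> link

class Page(models.Model):
    book = models.ForeignKey(Book, related_name="pages", on_delete=models.CASCADE)
    title = models.CharField(max_length=200)
    images = models.TextField(default="[]")          # JSON list of image URLs

class Section(models.Model):
    page = models.ForeignKey(Page, related_name="sections", on_delete=models.CASCADE)
    title = models.CharField(max_length=200)
    text = models.TextField()
    videos = models.TextField(default="[]")          # JSON list of video URLs

    def video_list(self):
        return json.loads(self.videos)                # decode the stored list on demand
```

With this layout, book.pages.all() and page.sections.all() give the dynamic lists without any fixed size.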
41,933,520 | 2017-01-30T10:22:00.000 | -3 | 0 | 1 | 0 | python,bitcoin | 43,216,915 | 1 | false | 0 | 0 | had this same problem when I was trying initially using minerd with stratum+tcp://wemineltc.com:3333
changed to litcoinpool.org and the problem was solved. | 1 | 0 | 0 | cpuminer (version 2.4.5 win32) error "json-rpc call failed:[-1,"'L' format requires 0 <=number<= 4294967295",null]" how to solve it ?
other parameters are "--scrypt -o stratum+tcp://global.wemineltc.com:3333" | cpuminer (version 2.4.5 win32) error "json-rpc call failed:[-1,"'L' format requires 0 <=number<= 4294967295",null]" how to solve it? | -0.53705 | 0 | 0 | 1,361 |
41,934,355 | 2017-01-30T11:05:00.000 | 0 | 0 | 0 | 0 | python,html,json,django,rest | 41,934,786 | 1 | false | 1 | 0 | I don't see the point of using Django HTML Templates in your API endpoint since the whole point of using a REST API is to have the server side and the client side completely independent from one another. So yes, the FAQ items should be delivered as JSON and displayed as you want on the client side. | 1 | 0 | 0 | I'm working on a project that uses Django on the server side and I have a REST(ish) API going.
One thing I'm wondering about. Is it considered ok practice to deliver Django HTML templates via the API endpoints? For example, by going to www.rooturl.com, an API endpoint is called and the HTML delivered. Then, when user clicks on, say FAQ, a GET request is made to www.rooturl.com/faq and an HTML template delivered again? Or should the FAQ items be delivered as JSON? Or maybe give both alternatives through content negotiation? At which point is all the HTML content usually delivered?
I couldn't find a satisfying answer with my google-fu. | Django REST Framework and HTML pages | 0 | 0 | 0 | 214 |
41,935,280 | 2017-01-30T11:53:00.000 | 0 | 0 | 0 | 0 | python,mongodb,optimization,query-optimization,bigdata | 45,958,112 | 3 | false | 0 | 0 | You can still take advantage of RAM based lookup,and still having extra functionalities that specialized databases provide as compared to a plain hashmap/array in RAM.
Your objective with ram based lookups is faster lookups, and avoid network overhead. However both can be achieved by hosting the database locally, or network would not even be a overhead for small data payloads like names.
By the RAM array method, the apps' resilience decreases as you have a single point of failure, no easy snapshotting i.e. you would have to do some data warming everytime your app changes or restarts, and you will always be restricted to single querying pattern and may not be able to evolve in future.
Equally good alternatives with reasonably comparable throughput can be redis in a cluster or master-slave configuration, or aerospike on SSD machines. You get advantage of constant snapshots,high throughput,distribution and resilience through sharding/clustering i.e. 1/8 of data in 8 instances so that there is no single point of failure. | 2 | 16 | 0 | How beneficial will it be to use Python/PHP Nonpersistent array for storing 6GB+ data with 800+ million rows in RAM, rather than using MySQL/MongoDB/Cassandra/BigTable/BigData(Persistence Database) database when it comes to speed/latency in simple query execution?
For example, finding one name in 800+ million rows within 1 second: is it possible? Does anyone have experience of dealing with a dataset of more than 1-2 billion rows and getting the result within 1 second for a simple search query?
Is there a better, proven methodology to deal with billions of rows? | Persistence Database(MySQL/MongoDB/Cassandra/BigTable/BigData) Vs Non-Persistence Array (PHP/PYTHON) | 0 | 1 | 0 | 472 |
41,935,280 | 2017-01-30T11:53:00.000 | 4 | 0 | 0 | 0 | python,mongodb,optimization,query-optimization,bigdata | 41,935,572 | 3 | false | 0 | 0 | It should be a very big difference, around 4-5 orders of magnitude faster. The database stores records in 4KB blocks (usually), and bringing each such block into memory takes some milliseconds. Divide the size of your table by 4KB and you get the picture. In contrast, corresponding times for in-memory data are usually nanoseconds. There is no question that memory is faster; the real question is whether you have enough memory and how long you can keep your data there.
However, the above holds for a select * from table query. If you want a select * from table where name=something, you can create an index on the name, so that the database does not have to scan the whole file, and the results should be much, much better, probably very satisfying for practical use. | 2 | 16 | 0 | How beneficial will it be to use Python/PHP Nonpersistent array for storing 6GB+ data with 800+ million rows in RAM, rather than using MySQL/MongoDB/Cassandra/BigTable/BigData(Persistence Database) database when it comes to speed/latency in simple query execution?
For example, finding one name in 800+ million rows within 1 second: is it possible? Does anyone have experience of dealing with a dataset of more than 1-2 billion rows and getting the result within 1 second for a simple search query?
Is there a better, proven methodology to deal with billions of rows? | Persistence Database(MySQL/MongoDB/Cassandra/BigTable/BigData) Vs Non-Persistence Array (PHP/PYTHON) | 0.26052 | 1 | 0 | 472 |
41,935,567 | 2017-01-30T12:08:00.000 | 0 | 0 | 1 | 0 | python,msbuild,azure-cloud-services,msbuild-task,azure-pipelines | 44,496,191 | 2 | false | 0 | 0 | I faced with this error when I build new project which have some function have not implement yet. Such as: throw new NotImplementedException();. I just implement this function and the error is throw away. | 2 | 1 | 0 | I'm trying to create build definition for azure cloud services (Microsoft Azure Cloud Service Project) to automate my build process, but I'm getting the below error on the build step in TFS Online.
Error WAT070 The referenced assembly was not found D:\a\1\s\Python\WebRole1\WebRole1.exe". Please make sure to build the role project that produces this assembly before building this Microsoft Azure Cloud Service Project.
I am trying to host python API using azure Flask.
I did manually deployment from VS2015 (in local machine). It's working fine.
But I had checked the build order and it was also fine. It has the web role first and the cloud service next. Still I'm getting the same error.
Note: I have two cloud service projects in single solution. | Error : WAT070 : The referenced assembly was not found. Please make sure to build the role project that produces this assembly before building | 0 | 0 | 0 | 1,788 |
41,935,567 | 2017-01-30T12:08:00.000 | 2 | 0 | 1 | 0 | python,msbuild,azure-cloud-services,msbuild-task,azure-pipelines | 57,997,953 | 2 | true | 0 | 0 | I had build errors that for some reason did not appear in the errors list window. I checked the output window and there were two errors towards the end of the build. As soon as these were addressed, the build could complete. | 2 | 1 | 0 | I'm trying to create build definition for azure cloud services (Microsoft Azure Cloud Service Project) to automate my build process, but I'm getting the below error on the build step in TFS Online.
Error WAT070 The referenced assembly was not found D:\a\1\s\Python\WebRole1\WebRole1.exe". Please make sure to build the role project that produces this assembly before building this Microsoft Azure Cloud Service Project.
I am trying to host python API using azure Flask.
I did manually deployment from VS2015 (in local machine). It's working fine.
But I had checked the build order and it was also fine. It has the web role first and the cloud service next. Still I'm getting the same error.
Note: I have two cloud service projects in single solution. | Error : WAT070 : The referenced assembly was not found. Please make sure to build the role project that produces this assembly before building | 1.2 | 0 | 0 | 1,788 |
41,941,537 | 2017-01-30T17:08:00.000 | 0 | 0 | 0 | 0 | javascript,java,python,google-authenticator,authenticator | 41,942,437 | 1 | false | 1 | 0 | So I dug a little deeper. This, however, requires I disable and remove the current 2FA from my account.
Go disable/remove current 2FA
Go enable it again, but remember to grab the secret (it's listed somewhere in the request or on the page) and save it somewhere
Find any secret -> One time password "generator"
Now I have the secrets synced on my PC and on my phone. Pretty neat. Requires a lot of work, as I need to disable all my authenticators, but it does work actually. | 1 | 0 | 0 | I'm trying to know if this is possible at all. So far it doesn't look that great. Let's imagine I wanted to list all my current Google Authenticator passwords somewhere. That list would update once there's a new set. Is this possible at all?
I remember back when Blizzard made their authenticator. You would basically have to enter the recovery key/password from their app into a program, which could then show your authenticator on the screen and on your phone or physical device (yeah they sold those). I imagine they used TOTP just like Google Authenticator does.
So my real question is: I have my x amount of Google Authenticator passwords, which refreshes every 30 seconds. Can I pull these out and show them in another program? Java? Python? Anything? I assume "reverse engineering the algorithm" and brute forcing the keys (like grab 100 keys and work out the next key) would be impossible, as these are server-client based.. right? | Google Authenticator passwords duplicated somewhere else? | 0 | 0 | 1 | 75 |
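For reference, once the secret from step 2 above has been saved, reproducing the same one-time codes on a PC is straightforward with a TOTP library such as pyotp (the secret below is a made-up example):

```python
import pyotp

secret = "JBSWY3DPEHPK3PXP"          # base32 secret captured when enabling 2FA
totp = pyotp.TOTP(secret)

print(totp.now())                    # current 6-digit code, changes every 30 seconds
print(totp.verify(totp.now()))       # True - matches what the phone app shows
```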
41,942,522 | 2017-01-30T18:03:00.000 | 0 | 0 | 0 | 0 | python-2.7,traveling-salesman,ant-colony | 53,916,150 | 1 | false | 0 | 0 | you can start from different nodes and will update the pheromone every time. | 1 | 1 | 0 | If we have 5 cities and 5 ants. Does all ants have to start from the same city? What is the difference if they start from different cities.
I am placing the ants at different cities as starting points randomly.
I tried using both cases but my results are same. I want to know if it's correct or there is a problem with my code. | Ant colony algorithm | 0 | 0 | 0 | 947 |
41,942,799 | 2017-01-30T18:19:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,copy,paste | 70,075,958 | 2 | false | 0 | 0 | This is a frustrating issue. The workaround I settled on was to paste into a text editor, then used sed to convert tabs into \t characters. Then copy and paste that into the python interactive shell.
For example:
Copy and paste 111.222.3.44[tab]80 into a text file that preserves the tabs, and save that file as temp.
Run a sed command to convert tabs into \t.
sed 's/\t/\\t/g' temp
Copy and paste the result into the python interactive shell:
111.222.3.44\t80 | 1 | 4 | 0 | Today, I tried to paste "ip[tab]port" in a interpreter, the result is "ipport".
Example: Copy 111.222.3.44 80(using spaces, here, in lieu of tab) from another source, e.g. Notepad, and paste it into the interactive shell. Unfortunately, when I try this, the [tab] doesn't
paste, and the result is:111.222.3.4480
I would like to be able to paste the IP & Port with the [tab] so that they are properly separated when pasted.
Python 3.6, Windows OS.
Does anyone know a way to do this? | How to paste Tab character into Python interactive shell | 0 | 0 | 1 | 603 |
41,943,823 | 2017-01-30T19:21:00.000 | 1 | 0 | 0 | 0 | python,tkinter,progress-bar | 41,944,019 | 3 | false | 0 | 1 | Tkinter does not have any support for circular progress bars. You will have to draw your own using a series of images, or a drawing on a canvas. | 1 | 4 | 0 | I want to add a Circular progress bar to my Python GUI using Tkinter, but I didn't find any documentation for Circular progress bars with Tkinter.
How can I create a Circular progress bar in Tkinter or is this not possible? | Circular progress bar using Tkinter? | 0.066568 | 0 | 0 | 6,856 |
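A bare-bones example of the canvas-drawing approach from the answer: an arc whose extent is updated over time to fake a circular progress indicator. Sizes, colours and timings are arbitrary:

```python
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=120, height=120)
canvas.pack()

# An arc inside a bounding box; 'extent' is the swept angle in degrees.
arc = canvas.create_arc(10, 10, 110, 110, start=90, extent=0,
                        width=8, style=tk.ARC, outline="green")

def step(progress=0):
    # Negative extent sweeps clockwise; 359.9 avoids Tk's full-circle wrap-around.
    canvas.itemconfig(arc, extent=-359.9 * progress / 100)
    if progress < 100:
        root.after(50, step, progress + 1)

step()
root.mainloop()
```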
41,946,758 | 2017-01-30T22:45:00.000 | 1 | 0 | 0 | 0 | python,pandas,matplotlib,ipython,jupyter-notebook | 41,947,311 | 3 | false | 0 | 0 | To plot only the portion of df1 whose index lies within the index range of df2, you could do something like this:
ax = df1.loc[df2.index.min():df2.index.max()].plot()
There may be other ways to do it, but that's the one that occurs to me first.
Good luck! | 1 | 0 | 1 | Found on S.O. the following solution to plot multiple data frames:
ax = df1.plot()
df2.plot(ax=ax)
But what if I only want to plot where they overlap?
Say that df1's index consists of timestamps spanning 24 hours and df2's index also consists of timestamps, spanning 12 hours within the 24 hours of df1 (but not exactly the same as df1).
If I only want to plot the 12 hours that both data frames cover, what's the easiest way to do this? | Pandas plot ONLY overlap between multiple data frames | 0.066568 | 0 | 0 | 3,052
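Putting the accepted answer's idea into a small self-contained example (the random data and exact timestamps are just for illustration):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

idx1 = pd.date_range("2017-01-01", periods=24, freq="H")          # 24 hours
idx2 = pd.date_range("2017-01-01 06:20", periods=12, freq="H")    # 12 hours inside

df1 = pd.DataFrame({"a": np.random.randn(24)}, index=idx1)
df2 = pd.DataFrame({"b": np.random.randn(12)}, index=idx2)

# Plot only the part of df1 that falls inside df2's index range.
ax = df1.loc[df2.index.min():df2.index.max()].plot()
df2.plot(ax=ax)
plt.show()
```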
41,951,160 | 2017-01-31T06:38:00.000 | 0 | 0 | 1 | 0 | python,python-3.x,pandas | 63,090,168 | 2 | false | 0 | 0 | You can also use
SeriesName.map('{:,}'.format) | 1 | 1 | 1 | I have a Series with Name as the index and a number in scientific notation such as 3.176154e+08. How can I convert this number to 317,615,384.61538464 with a thousands separator? I tried:
format(s, ',')
But it returns TypeError: non-empty format string passed to object.format
There are no NaNs in the data.
Thanks for your help! | Converting scientific notation in Series to commas and thousands separator | 0 | 0 | 0 | 6,217 |
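Both answers boil down to applying Python's format spec element-wise; a tiny runnable example with invented numbers:

```python
import pandas as pd

s = pd.Series([3.176154e+08, 1.23e+06], index=["Alice", "Bob"])

formatted = s.map("{:,.2f}".format)   # s.apply(...) works the same way
print(formatted)
# e.g. Alice    317,615,400.00
#      Bob        1,230,000.00
```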
41,951,447 | 2017-01-31T06:59:00.000 | 3 | 0 | 0 | 0 | python,django,django-models,django-forms | 41,951,490 | 1 | true | 1 | 0 | Are only the model fields explicitly declared visible in the
ModelForm?
Yes, generally you don't want to mess with this field, if the user inputs a value for the id field it's very likely to be duplicated so this is something you want django to take care of for you. | 1 | 3 | 0 | I have a model where I didn't specify a primary key and Django generated one for me. Now I create a ModelForm for the model and I have specified id in the fields section of ModelForm. However, in my ModelForm object, the id field is not present.
Are only the model fields explicitly declared visible in the ModelForm? | Include auto generated primary key in Django ModelForm fields | 1.2 | 0 | 0 | 356 |
41,951,904 | 2017-01-31T07:26:00.000 | 2 | 0 | 1 | 0 | python,visual-studio-2017 | 41,992,828 | 4 | false | 0 | 0 | I'm afraid it is not possible as for now. Microsoft removed python tools from VS 2017 a week ago or so. They have written that Python Tools should be available as an add-on "soon". | 2 | 6 | 0 | Does anybody know if it's possible to open old(VS2015) Python solutions and projects in Visual Studio 2017 RC ? VS 2017 cannot load my project, just saying it's incompatible. I was looking for some Python Tools in Modify Visual Studio option but couldn't find anything about it there. | Python Tools Visual Studio 2017 RC | 0.099668 | 0 | 0 | 5,258 |
41,951,904 | 2017-01-31T07:26:00.000 | 3 | 0 | 1 | 0 | python,visual-studio-2017 | 42,065,559 | 4 | true | 0 | 0 | Microsoft said on January 27, 2017 (build 26127.00)
Release Date: January 27, 2017 (build 26127.00)
Summary of Updates in this Release
Removed the Data Science and Python Development workloads as some of
the components weren’t meeting the release requirements, such as
translation to non-English languages. They will be available soon as
separate downloads. F# is still available in the .NET Desktop and .NET
Web development workloads. Upgrading to current version will remove
any previously installed Python and Data Science workloads/components. | 2 | 6 | 0 | Does anybody know if it's possible to open old(VS2015) Python solutions and projects in Visual Studio 2017 RC ? VS 2017 cannot load my project, just saying it's incompatible. I was looking for some Python Tools in Modify Visual Studio option but couldn't find anything about it there. | Python Tools Visual Studio 2017 RC | 1.2 | 0 | 0 | 5,258 |
41,958,566 | 2017-01-31T13:14:00.000 | 0 | 0 | 0 | 0 | python-3.x,neural-network,keras,pruning | 56,069,261 | 4 | false | 0 | 0 | If you set an individual weight to zero won't that prevent it from being updated during back propagation? Shouldn't thatv weight remain zero from one epoch to the next? That's why you set the initial weights to nonzero values before training. If you want to "remove" an entire node, just set all of the weights on that node's output to zero and that will prevent that nodes from having any affect on the output throughout training. | 1 | 21 | 1 | I'm trying to design a neural network using Keras with priority on prediction performance, and I cannot get sufficiently high accuracy by further reducing the number of layers and nodes per layer. I have noticed that very large portion of my weights are effectively zero (>95%). Is there a way to prune dense layers in hope of reducing prediction time? | Pruning in Keras | 0 | 0 | 0 | 9,448 |
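The mechanical part of the question - zeroing out near-zero weights - can be done with get_weights/set_weights; a rough magnitude-pruning sketch with an arbitrarily chosen threshold (note that continued training can make the zeros non-zero again unless they are masked):

```python
import numpy as np

def zero_small_weights(model, threshold=1e-3):
    """Set every parameter whose magnitude is below `threshold` to exactly 0."""
    pruned = [np.where(np.abs(w) < threshold, 0.0, w) for w in model.get_weights()]
    model.set_weights(pruned)
    return model

# Usage, after training a Keras model:
#   zero_small_weights(model)
```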
41,958,927 | 2017-01-31T13:32:00.000 | 1 | 1 | 0 | 0 | python,python-2.7 | 41,958,983 | 1 | true | 0 | 0 | How about creating a parent class to all Analysis that will have common attributes (maybe static) and methods?
This way when you implement a new AnalysisType you inherit all the parameters and you can change them in a single place. | 1 | 1 | 0 | I am building a program to run several different analyses on a dataset. The different kinds of analysis are each represented by a different kind of analysis tool object (e.g. "AnalysisType1" and "AnalysisType2"). The analysis tools share many of the same parameters. The program is operated from a GUI, in which all the parameters are set by the user. What I'm trying to figure out, is what is the most elegant/best way to share the parameters between all the components of the program. Options I can think of include:
Keep all the parameters in the GUI, and pass to each analysis tool when it is executed.
Keep parameters in each of the tools, and update the parameters in all the tools every time they are changed in the GUI. Then they are ready to go whenever an analysis is executed.
Create a ParameterSet object that holds all the parameters for all the components. Give a reference to this ParameterSet object to every component that needs it, and update its parameters whenever they are changed in the GUI.
I've already tried #1, followed by #2, and as the complexity is growing, I'm considering moving to #3. Are there any reasons not to take this approach? | Sharing Parameters between Objects | 1.2 | 0 | 0 | 52 |
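A small sketch of the shared-parameters idea (option #3 combined with the answer's parent class): one parameter object is created by the GUI and handed, by reference, to every analysis tool, so a change in one place is visible everywhere. All names here are illustrative:

```python
class ParameterSet(object):
    def __init__(self, **params):
        self.params = dict(params)

class AnalysisTool(object):
    def __init__(self, parameter_set):
        self.parameter_set = parameter_set      # shared reference, not a copy

    def run(self):
        threshold = self.parameter_set.params["threshold"]
        print("%s running with threshold=%s" % (type(self).__name__, threshold))

class AnalysisType1(AnalysisTool):
    pass

class AnalysisType2(AnalysisTool):
    pass

shared = ParameterSet(threshold=0.5, window=128)
tools = [AnalysisType1(shared), AnalysisType2(shared)]

shared.params["threshold"] = 0.9    # the GUI updates the one object...
for tool in tools:
    tool.run()                      # ...and every tool sees the new value
```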
41,961,680 | 2017-01-31T15:48:00.000 | 2 | 0 | 0 | 0 | python-3.x,bokeh | 41,967,371 | 1 | false | 0 | 0 | Markers (e.g. Triangle) are really meant for use as "scatter" plot markers. With the exception of Circle, they only accept screen dimensions (pixles) for size. If you need triangular regions that scale with data space range changes, your options are to use patch or patches to draw the triangles as polygons (either one at a time, or "vectorized", respectively) | 1 | 1 | 1 | I am plotting both wedges and triangles on the same figure. The wedges scale up as I zoom in (I like this), but the triangles do not (I wish they did), presumably because wedges are sized in data units (via radius property) and traingles are in screen units (via size property).
Is it possible to switch the triangles to data units, so everything scales up during zoom in?
I am using bokeh version 0.12.4 and python 3.5.2 (both installed via Anaconda). | scaling glyphs in data units (not screen units) | 0.379949 | 0 | 0 | 49 |
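A short sketch of the patches workaround from the answer: each triangle is passed as its own list of x and y vertices, so it lives in data space and rescales on zoom. Coordinates and styling are arbitrary:

```python
from bokeh.plotting import figure, show

p = figure()

# Two triangles given as lists-of-lists of vertex coordinates (data units).
xs = [[0, 1, 0.5], [2, 3, 2.5]]
ys = [[0, 0, 1.0], [1, 1, 2.0]]
p.patches(xs, ys, fill_color="navy", fill_alpha=0.5, line_color="black")

# A wedge for comparison; its radius is also in data units.
p.wedge(x=1.5, y=0.5, radius=0.4, start_angle=0.5, end_angle=2.5, color="firebrick")

show(p)
```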
41,964,500 | 2017-01-31T18:05:00.000 | 0 | 0 | 1 | 0 | python,pip,python-3.5,pymysql,pyc | 41,965,456 | 2 | false | 0 | 0 | Use cx_freeze, pyinstaller or virtualenv.
Or copy code and put in your. Read python import | 1 | 1 | 0 | I'm making a program that uses PyMySql and I'd like people to be able to run my program without going through the manual installation of PyMySql, is there a way I can achieve that?
I've already tried compiling to .pyc but that doesn't seem to work, in fact when I uninstall PyMySql it doesn't work anymore.
PS: There probably are better languages to do that but it's a homework assignment for school and can't use anything but python, also sorry for my bad english | If I install modules with pip, how can I make sure other people can run my program without having that module installed? | 0 | 0 | 0 | 75 |
41,964,509 | 2017-01-31T18:06:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-multiprocessing | 41,964,789 | 1 | false | 0 | 0 | You can do this as follows:
Use an instance of the threading.Lock class. Call method acquire to claim exclusive access to your queue from a certain thread and call release to grant other threads access.
Since you want to keep gathering your input, copying the whole queue would probably be too expensive. Probably the fastest way is to first collect data in one queue, then swap it for another and use the old one to read data from into your application in a different thread. Protect the swapping with a Lock instance, so you can be sure that whenever the writer acquires the lock, the current 'listener' queue is ready to receive data.
If only recent data is important, use two circular buffer instead of queues, allowing old data to be overwritten. | 1 | 1 | 0 | I want to use the Python 2.7 multiprocessing package to operate on an endless stream of data. A subprocess will constantly receive data via TCP/IP or UDP packets and immediately place the data in a multiprocessing.Queue. However, at certain intervals, say, every 500ms, I only want to operate on a user specified slice of this data. Let's say, the last 200 data packets.
I know I can put() and get() on the Queue, but how can I create that slice of data without a) Backing up the queue and b) Keeping things threadsafe?
I'm thinking I have to constantly get() from the Queue with another subprocess to prevent the Queue from getting full. Then I have to store the data in another data structure (such as a list) to build the user specified slice. But the data structure would probably not be thread safe, so it does not sound like a good solution.
Is there some programming paradigm that achieves what I am trying to do easily? I looked at the multiprocessing.Manager class, but wasn't sure it would work. | Handling endless data stream with multiprocessing and Queues | 0 | 0 | 0 | 767 |
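A condensed sketch of the double-buffer idea from the answer, shown with threads in a single process for brevity (the question itself uses multiprocessing): the receiver always appends to the 'current' buffer, and every 500 ms the consumer swaps buffers under the Lock and processes the retired one. The deque size of 200 matches the example in the question:

```python
import collections
import threading
import time

lock = threading.Lock()
current = collections.deque(maxlen=200)        # receiver writes here

def receiver():
    n = 0
    while True:
        with lock:
            current.append(("packet", n))      # stand-in for a real UDP/TCP packet
        n += 1
        time.sleep(0.001)

def process(packets):
    print("processing %d packets" % len(packets))

def consumer():
    global current
    while True:
        time.sleep(0.5)
        with lock:                             # swap buffers while holding the lock
            snapshot, current = current, collections.deque(maxlen=200)
        process(list(snapshot))                # work on the retired buffer, lock-free

t = threading.Thread(target=receiver)
t.daemon = True
t.start()
consumer()
```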
41,965,253 | 2017-01-31T18:48:00.000 | 3 | 0 | 0 | 1 | python,dask | 41,965,766 | 1 | true | 0 | 0 | Correct, if a task is allocated to one worker and another worker becomes free it may choose to steal excess tasks from its peers. There is a chance that it will steal a task that has just started to run, in which case the task will run twice.
The clean way to handle this problem is to ensure that your tasks are idempotent, that they return the same result even if run twice. This might mean handling your database error within your task.
This is one of those policies that are great for data intensive computing workloads but terrible for data engineering workloads. It's tricky to design a system that satisfies both needs simultaneously. | 1 | 5 | 1 | I'm using the Dask distributed scheduler, running a scheduler and 5 workers locally. I submit a list of delayed() tasks to compute().
When the number of tasks is say 20 (a number >> than the number of workers) and each task takes say at least 15 secs, the scheduler starts rerunning some of the tasks (or executes them in parallel more than once).
This is a problem since the tasks modify a SQL db and if they run again they end up raising an Exception (due to DB uniqueness constraints). I'm not setting pure=True anywhere (and I believe the default is False). Other than that, the Dask graph is trivial (no dependencies between the tasks).
Still not sure if this is a feature or a bug in Dask. I have a gut feeling that this might be related to worker stealing... | Repeated task execution using the distributed Dask scheduler | 1.2 | 0 | 0 | 864 |
41,967,226 | 2017-01-31T20:49:00.000 | 0 | 0 | 0 | 0 | python,data-modeling | 41,968,970 | 2 | true | 0 | 0 | I think that the hard part of the problem is that you'll probably want the stimulus (tune) data formatted differently for different queries. What I would think about doing is making a relatively simple data structure for your stimuli (tunes) and add a unique identifier to each unique tune. You could probably get away with using your dictionary structures here if your structure can fit into memory.
Then I would put your trials into a relational database with the corresponding stimulus IDs. Each trial entry in the database would have complete subject and session information.
Then for each analysis permutation you will do two steps to get the relevant data:
Filter the stimuli using the stimulus data structure and get a list of their corresponding IDs.
Perform a query on your trials database to get the trials with this list of IDs. You can add other parameters to your query, obviously, to filter based on subject, session, etc.
I hope that helps | 1 | 0 | 1 | I'm a researcher studying animal behavior, and I'm trying to figure out the best way to structure my data. I present short musical tunes to animals and record their responses.
The Data
Each tune consists of 1-10 notes randomly chosen from major + minor scales spanning several octaves. Each note is played for a fixed duration but played randomly within some short time window.
I then record the animal's binary response to the tune (like / dislike).
I play >500 tunes to the animal each day, for >300 days. I also combine data from >10 animals.
I also need to store variables such as trial number on each day (was it the 1st tune presented? last? etc.), and date so that I know what data points to exclude due to external issues (e.g. animal stopped responding after 100 trials or for the entire day).
The Analysis
I'm trying to uncover what sorts of musical structure in these randomly generated tunes will lead to likes/dislikes from the animal. I do this in a mostly hypothesis-driven manner, based on previous research. The queries I need to perform on my dataset are of the form: "does having more notes from the same octave increase likeability of the tune?"
I'm also performing analysis on the dataset throughout the year as data is being accumulated.
What I've tried
I combine data from all animals into a single gigantic list containing dicts. Each dict represents a single trial and its associated:
animal ID#
session ID#
trial ID#
binary response (like/dislike)
tune, which is defined by a dict. The keys are simply the notes played, and the values denote when the note is played. E.g. {'1A#':[30,100]} means a tune with just a single note, A# from 1st octave, played from 30ms to 100ms.
I save this to a single pickle file. Every day after all the animals are done, I update the pickle file. I run my data analysis roughly once per week by loading the updated pickle file.
I've been looking to re-structure my data into a database or Pandas DataFrame format because of speed of 1) serializing data and 2) querying, and 3) possible cleaner code instead of dealing with nested dicts. I initially thought that my data would naturally lend itself well to some table structure because of the trial-by-trial structure of my experiment. Unfortunately, the definition of tunes within the table seems tricky, as the tunes don't really have some fixed structure.
What would be possible alternatives in structuring my data? | Database design for complex music analytics | 1.2 | 0 | 0 | 65 |
41,967,742 | 2017-01-31T21:24:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,google-cloud-logging | 45,255,813 | 1 | true | 1 | 0 | Google has in the mean time update the cloud console and debugger, which now does contain full stack traces for Python. | 1 | 2 | 0 | Previously when an error occurred in my application I could find a trace of the entire code to where it happened ( file, line number ). In the Google Cloud console.
Right now I only receive a request ID and a timestamp, with no indication of a trace or line number in the code when in the 'logging' window in the Google Cloud Console. Selecting a 'log event' only shows some sort of JSON structure of a request, but not anything about the code or any helpful information what went wrong with the application.
What option should be selected in the google cloud console to show a stack trace for Python App Engine applications? | Google Stackdriver does not show trace | 1.2 | 0 | 0 | 264 |
41,969,597 | 2017-01-31T23:47:00.000 | 0 | 0 | 0 | 1 | python,django,nginx,redis,wsgi | 42,014,392 | 2 | false | 1 | 0 | Figured this out after a few days. We were using a django app called django-health-check. It has a component called health_check_celery3 that was in the installed apps. This was having trouble loading while celery was running, and thus causing the whole app to stall. After removing it, celery runs as it should. | 1 | 0 | 0 | I have a django app configured to run behind nginx using uWSGI. On a separate machine I am running celery, and pushing long running tasks from the webserver to the task machine. The majority of the task I/O is outbound http requests, which go on for an hour or more. The task broker is redis.
When the tasks run for more than a minute or two, the webserver becomes unresponsive (503 errors).
There are no errors raised anywhere within the python app. The tasks complete normally, after which the webserver continues handling requests.
Has anyone experienced this before, and if so, how did you deal with it? Thanks | Nginx non-responsive while celery is running | 0 | 0 | 0 | 233 |
41,970,230 | 2017-02-01T01:01:00.000 | 0 | 0 | 0 | 0 | python,scikit-learn,random-forest,countvectorizer | 47,134,942 | 1 | false | 0 | 0 | Well, a name is a unique thing, kind of like an id. Use sklearn.preprocessing.LabelEncoder after storing the originals in a separate list. It will automatically convert the names to serial numbers.
Also, note if it's a unique thing you should remove names during predicting. | 1 | 1 | 1 | I have a dataframe containing 13 columns. Among 13 three columns are string. One string column is simple male and female which I converted to 1 and 0 using
pd.get_dummies()
2nd column contains three different types of string so, easily converted to array using
from sklearn.feature_extraction.text import CountVectorizer
No issue at all. Problem is my third and last column contains large number of names. if I try to convert using Countvectorizer it converts the names into long unreadable strings.
df['name']=Countvectorizer.fit_transform(df.name)
if I try to convert back it to dataframe as shown in other examples on stackoverflow page in this case I get this
245376 (0, 14297)\t1\n (1, 5843)\t1\n (1, 13365)...
245377 (0, 14297)\t1\n (1, 5843)\t1\n (1, 13365)...
Name: supplier_name, dtype: object
and this next code results Memory Error
df['name'] =pd.DataFrame(CV.fit_transform(df.name).toarray(),columns=CV.get_feature_names())
I have looked that issue as well.
Question: is there a better way to use this name column in numeric form than those mentioned above? Or any other idea how to improve this so that the data fits well in a RandomForest classifier? The DataFrame is quite large, containing 123790 rows. Thank you in advance for help or suggestions. | How to deal with name column in Scikitlearn randomforest classifier. python 3 | 0 | 0 | 0 | 498
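A tiny illustration of the LabelEncoder suggestion, with a made-up supplier_name column; the original strings are kept aside and can also be recovered with inverse_transform:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"supplier_name": ["Acme", "Globex", "Acme", "Initech"]})

le = LabelEncoder()
original_names = df["supplier_name"].tolist()                 # keep a copy of the strings
df["supplier_name"] = le.fit_transform(df["supplier_name"])   # e.g. [0, 1, 0, 2]

print(df["supplier_name"].values)                 # integer codes for the classifier
print(le.inverse_transform(df["supplier_name"]))  # back to the original names
```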
41,970,630 | 2017-02-01T01:46:00.000 | 2 | 0 | 1 | 1 | python-3.x,anaconda,jupyter-notebook | 42,047,557 | 1 | true | 0 | 0 | Looks like this was fixed in the newest build of anaconda (4.3.0 .1). Unfortunately looks like it requires uninstall and reinstall as the locations seems to have changed drastically (from some subsubsub folder off of AppData to something higher up, under user directory).
(But that might be the effect of testing 4.3.0.1 on a different machine.)
For example, ipython is now:
C:\Users\user_name\Anaconda3\python.exe C:\Users\user_name\Anaconda3\cwp.py C:\Users\user_name\Anaconda3 "C:/Users/user_name/Anaconda3/python.exe" "C:/Users/user_name/Anaconda3/Scripts/ipython-script.py"
Here is changelog for 4.3.0.1:
In this “micro” patch release, we fixed a problem with the Windows installers which was causing problems with Qt applications when the install prefix exceeds 30 characters. No new Anaconda meta-packages correspond to this release (only new Windows installers). | 1 | 3 | 0 | After installing anaconda 4.3 64-bit (python 3.6) on windows, and choosing "install for current user only" and "add to path":
I noticed that the anaconda program shortcuts don't work on my start menu--they are cut off at the end. Does anyone know how the correct entries should read? (or instead, how to repair the links?) thanks.
UPDATE: I reproduced the problem on two other machines, Windows 10 (x64) and windows 8.1 (x64), that were "clean" (neither one had a prior installation of python).
This is what they are after a fresh install (under "Target" in "Properties" in the "Shortcut" tab for each shortcut item):
JUPYTER NOTEBOOK:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc
JUPYTER QTCONSOLE:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L
SPYDER:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L
RESET SPYDER:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc
NAVIGATOR:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\pythonw.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/pythonw.exe" "C:/Users/user_name/AppData/L
IPYTHON:
C:\Users\user_name\AppData\Local\Continuum\Anaconda3\python.exe C:\Users\user_name\AppData\Local\Continuum\Anaconda3\cwp.py C:\Users\user_name\AppData\Local\Continuum\Anaconda3 "C:/Users/user_name/AppData/Local/Continuum/Anaconda3/python.exe" "C:/Users/user_name/AppData/Loc | Anaconda 4.3, 64-bit (python 3.6), leaves incorrect truncated paths in windows Start menu | 1.2 | 0 | 0 | 1,261 |
41,970,795 | 2017-02-01T02:07:00.000 | 0 | 0 | 1 | 0 | python | 41,970,835 | 13 | false | 0 | 0 | I think you're right, a for loop would get the job done but might not be the most elegant solution. I've never programmed in Python so I don't know the exact syntax but I could give you a psuedo code rundown of a class that would get the job done. | 1 | 8 | 0 | I was thinking about making a deck of cards for a card game. I could make a list of all of the cards (I don't really care about the suits), but I was wondering if there was a much easier way to do this.
cards = ['1','1','1','1'....]
I'm positive you could make a for loop to create 4 cards of the same value and add it to a list, but I was wondering if that was the best solution. I am not advanced enough to know about or create a Class which I have seen to be offered as other solutions, but I am open to explanations.
I have already made a dictionary defining the card values. | What is the best way to create a deck of cards? | 0 | 0 | 0 | 56,842 |
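Since suits don't matter here, the deck can be built without a class at all, with a single list comprehension (the value strings are just one possible ranking):

```python
import random

values = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']
deck = [v for v in values for _ in range(4)]    # four copies of each value

random.shuffle(deck)
hand = [deck.pop() for _ in range(5)]           # deal five cards
print(hand, len(deck))                          # 47 cards remain in the deck
```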
41,973,955 | 2017-02-01T07:14:00.000 | 1 | 1 | 0 | 0 | python-2.7,amazon-web-services,amazon-dynamodb,amazon-dynamodb-streams | 42,009,940 | 2 | false | 1 | 0 | Option #1 and #2 are almost the same- both do a Scan operation on the DynamoDB table, thereby consuming maximum no. of RCUs.
Option #3 will save RCUs, but restoring becomes a challenge. If a record is updated more than once, you'll have multiple copies of it in the S3 backup because the record update will appear twice in the DynamoDB stream. So, while restoring you need to pick the latest record. You also need to handle deleted records correctly.
You should choose option #3 if the frequency of restoring is less, in which case you can run an EMR job over the incremental backups when needed. Otherwise, you should choose #1 or #2. | 1 | 3 | 0 | We are looking for a solution which uses minimum read/write units of DynamoDB table for performing full backup, incremental backup and restore operations. Backup should store in AWS S3 (open to other alternatives). We have thought of few options such as:
1) Using python multiprocessing and boto modules we were able to perform Full backup and Restore operations, it is performing well, but is taking more DynamoDB read/write Units.
2) Using AWS Data Pipeline service, we were able to perform Full backup and Restore operations.
3) Using Dynamo Streams and kinesis Adapter/ Dynamo Streams and Lambda function, we were able to perform Incremental backup.
Are there other alternatives for Full backup, Incremental backup and Restore operations. The main limitation/need is to have a scalable solution by utilizing minimal read/write units of DynamoDb table. | How to perform AWS DynamoDB backup and restore operations by utilizing minimal read/write units? | 0.099668 | 1 | 0 | 1,135 |
41,974,959 | 2017-02-01T08:21:00.000 | 1 | 0 | 0 | 0 | python,django,apache,ssh | 41,976,000 | 1 | true | 1 | 0 | Your question is confusing. If you deployed it with Apache, it's running through Apache and not through runserver. You might have additionally started runserver, but that is not what is serving your site. | 1 | 0 | 0 | I recently deployed a Django site on a DigitalOcean droplet through Apache. I did python manage.py runserver through ssh and now the Django site is running. However, it stayed on even after the ssh session expired (understandable because it's still running on the remote server) but how do I shut it down if I need to?
Also, due to this, I don't get error messages on the terminal if something goes wrong like I do when I develop locally. What would be a fix for this? | Is it normal that the Django site I recently deployed on Apache is always on? | 1.2 | 0 | 0 | 48 |
41,975,993 | 2017-02-01T09:20:00.000 | 0 | 0 | 1 | 1 | windows,pycharm,python-idle | 41,976,142 | 1 | false | 0 | 0 | Check os.environ['PATH'] and os.system("echo $PATH"), they should be the same. | 1 | 0 | 0 | In my python script, there is os.system('cmd.exe').
The same script opens a new cmd console when executed with Python IDLE, but not when executed in PyCharm.
Any help on this? | why os.system('cmd.exe') in pycharm does not open a new console | 0 | 0 | 0 | 658 |
41,977,176 | 2017-02-01T10:20:00.000 | 0 | 0 | 0 | 0 | python,http,header,wsgi | 42,309,756 | 1 | true | 1 | 0 | Yeah, so the problem is, the header is called "If-None-Match", which is not plural. | 1 | 0 | 0 | I'm using Python & WSGI to create a web application.
Currently I'm loading the server with wsgiref.simple_server.make_server .
However, I'm running into the problem that not all request headers are given to my application. Specifically the header "If-None_matches".
The browser is sending it, but I don't get an environment variable like "HTTP_IF_NONE_MATCHES" for some reason. Anyone knows what is going on?
Thanks you guys. | Python WSGI missing request header 'If-None-Matches' | 1.2 | 0 | 0 | 103 |
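To make the naming explicit: WSGI upper-cases the header, swaps dashes for underscores and prefixes HTTP_, so the key to read is HTTP_IF_NONE_MATCH (no trailing S). A minimal app demonstrating the lookup:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    etag = environ.get('HTTP_IF_NONE_MATCH')      # the "If-None-Match" request header
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('If-None-Match was: %r' % etag).encode('utf-8')]

make_server('', 8000, app).serve_forever()
```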
41,987,133 | 2017-02-01T18:34:00.000 | 7 | 0 | 0 | 0 | python,amazon-web-services,encryption,amazon-s3 | 41,987,427 | 1 | true | 1 | 0 | The "server-side" encryption you have enabled turns on encryption at rest. Which means the file is encrypted while it's sitting on S3. But S3 will decrypt the file before it sends you the data when you download the file.
So there is no change to how you handle the file when downloading it if the file is encrypted or not.
This type of encryption does not protect the file if the file is downloaded via valid means, such as when using the API. It only protects the file from reading if someone were to circumvent the S3 data center or something like that.
If you need to protect the file, such that it must be decrypted when downloaded, then you need to encrypt it client-side, before uploading it to S3.
You can use any client-side encryption scheme you deem worthy: AES256, etc. But S3 won't do it for you. | 1 | 1 | 0 | I'm using S3 instead of KMS to store essentially a credentials file, and Python to read the file's contents.
I manually set the file encrypted by clicking on it in S3, going to Properties - Details - Server Side Encryption:AES-256
And in my Python script, I read the key without making changes from when I read the file when it was unencrypted. And I was also able to download the file and open it without having to do anything like decrypting it. I was expecting to have to decrypt it, so I'm a little confused.
I'm just unable to understand what server-side encryption protects against. Would anyone already with access to S3 or the S3 bucket with the key/file be able to read the file? Who wouldn't be able to open the file? | Do I ever have to decrypt S3-encrypted files? | 1.2 | 1 | 0 | 3,317 |
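If the object should stay unreadable even when fetched through the normal API, the client-side route from the answer could look roughly like this, using the third-party cryptography and boto3 packages (bucket, key and file names are placeholders):

```python
import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # keep this somewhere safe - S3 never sees it
fernet = Fernet(key)

with open("credentials.ini", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="credentials.ini.enc", Body=ciphertext)

# Downloading now requires an explicit decrypt step:
obj = s3.get_object(Bucket="my-bucket", Key="credentials.ini.enc")
restored = fernet.decrypt(obj["Body"].read())
```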
41,987,619 | 2017-02-01T19:03:00.000 | 1 | 0 | 1 | 0 | python,django,python-2.7,python-3.x | 41,987,877 | 1 | true | 1 | 0 | In a word, no. An app built for Django 1.4 will almost certainly not work on Django 1.9.
Django does usually offer backwards compatibility, but only on revision numbers of the minor version. That is, you might expect 1.4.22 to run code written for any 1.4.x without any change necessary, but a 1.5 release would introduce backwards-incompatible changes. | 1 | 1 | 0 | Currently i am working on django project with python 3.5 and Django 1.9.2. I want to integrated one app(Module) which was build with python 2.7 and Django 1.4 from different django project in my latest project.Can i run two different app with different python and Django in single Django project. | How to run two different python version in single Django project? | 1.2 | 0 | 0 | 82 |
41,988,762 | 2017-02-01T20:08:00.000 | 0 | 1 | 0 | 0 | python,selenium,automated-tests | 71,752,003 | 2 | false | 0 | 0 | Anybody using this guide (as of April 2022) will need to update to the following:
https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url={YOUR_SITE_URL}/&filter_third_party_resources=true&locale=en_US&screenshot=false&strategy=desktop&key={YOUR_API_KEY}
The difference is the "/v2/" needs to be replaced with "/v5/" | 1 | 1 | 0 | Is there a way to automate checking Google Page Speed scores? | How to automate Google PageSpeed Insights tests using Python | 0 | 0 | 1 | 2,435 |
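A minimal way to hit that v5 endpoint from Python and pull out the score, using requests; the JSON path shown is an assumption about the v5 response shape, so adjust it if the API changes:

```python
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {
    "url": "https://example.com",
    "strategy": "desktop",
    "key": "YOUR_API_KEY",           # placeholder
}

data = requests.get(API, params=params).json()
score = data["lighthouseResult"]["categories"]["performance"]["score"]
print("Performance score:", score * 100)
```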