Column summary (name · dtype · min · max):
Q_Id · int64 · 337 · 49.3M
CreationDate · stringlengths · 23 · 23
Users Score · int64 · -42 · 1.15k
Other · int64 · 0 · 1
Python Basics and Environment · int64 · 0 · 1
System Administration and DevOps · int64 · 0 · 1
Tags · stringlengths · 6 · 105
A_Id · int64 · 518 · 72.5M
AnswerCount · int64 · 1 · 64
is_accepted · bool · 2 classes
Web Development · int64 · 0 · 1
GUI and Desktop Applications · int64 · 0 · 1
Answer · stringlengths · 6 · 11.6k
Available Count · int64 · 1 · 31
Q_Score · int64 · 0 · 6.79k
Data Science and Machine Learning · int64 · 0 · 1
Question · stringlengths · 15 · 29k
Title · stringlengths · 11 · 150
Score · float64 · -1 · 1.2
Database and SQL · int64 · 0 · 1
Networking and APIs · int64 · 0 · 1
ViewCount · int64 · 8 · 6.81M
47,478,918
2017-11-24T19:13:00.000
2
0
1
0
python,kivy,cython,pypy
47,487,341
1
false
0
1
When it comes to responsiveness, make sure your python code is optimised. That means doing things like not loading Screens or other widgets until you need them (or even doing it in the background as much as possible). For speeding up python itself, the main supported method is cython. python-for-android does not support pypy. kivy can be a little slower and bigger than most android applications because it includes python and the interpreter. A basic Kivy-using APK is about 7MB. And the delay from starting the interpreter manifests largely during the startup of the application, which can take a few seconds, especially on older devices.
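The "don't load Screens or other widgets until you need them" advice above can be sketched without Kivy itself; the class and names below are hypothetical stand-ins, not Kivy APIs:

```python
# Lazy construction: an expensive widget is built only on first access,
# which moves its cost off the application's startup path.
class LazyScreen(object):
    def __init__(self, factory):
        self._factory = factory   # zero-argument callable that builds the widget
        self._screen = None

    def get(self):
        if self._screen is None:          # built at most once
            self._screen = self._factory()
        return self._screen

calls = []
def build_settings_screen():
    calls.append(1)                       # stand-in for expensive widget creation
    return "settings-screen"

screen = LazyScreen(build_settings_screen)
print(len(calls))        # 0 -- nothing built at "startup"
screen.get()
screen.get()
print(len(calls))        # 1 -- built exactly once, on first use
```

The same pattern applies to any costly resource, not just screens.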
1
1
0
I am wondering what options I have to speed up kivy after I package it. I was looking into cython and pypy because those should speed up python. Also, I was reading that kivy can be a little slower and bigger than most android applications because it includes python and the interpreter. I usually just search around forever until I find an answer, but it can be hard to find things about kivy. Can anyone with experience recommend something to get better speeds out of this framework? I'm dealing with a lot of code so it could be cumbersome testing a lot of this stuff out. edit: I have a lot of this application packaged now. I wouldn't so much worry about cython until you are packaging it. I would also try to package the application incrementally to make sure everything works. More than anything, I was just really worried about how things would go when I started packaging it. I should have started earlier. The size hasn't been too much of an issue. I would try and write it on ubuntu or a linux distribution (buildozer doesn't work with windows), and not everything will run the same across all platforms (I had some issues with some of the modules I was working with). I love kivy; this is like an eli5 thing I wish I'd known at the time. After messing around with it some, I got it down to 16 MB. So I'm really happy with the framework. I guess I didn't need to include the buildozer folder in the build. I'm new to programming but I'm pretty happy with how everything turned out.
What are my options to speed up and reduce the size of my kivy android application

0.379949
0
0
1,947
47,479,541
2017-11-24T20:19:00.000
0
0
0
1
python,django,docker,kubernetes
47,479,929
2
false
1
0
You should disable CSRF for every instance, and manage the CSRF security from the API Gateway
2
1
0
Can someone explain to me how CSRF works in a cluster setup? I have a kubernetes cluster hosting a django website, and I'm having some occasional issues with 403 errors. I have multiple instances of the site load balanced in kubernetes. How does CSRF work when a POST is sent from one instance and handled by another? Does CSRF still work if the docker images are updated during the time the form is being filled out? Thanks!
Django CSRF in a cluster
0
0
0
723
47,479,541
2017-11-24T20:19:00.000
1
0
0
1
python,django,docker,kubernetes
47,479,741
2
true
1
0
Can someone example to me how CSRF works in the cluster setup? Exactly the same way it usually ought not to (CSRF is Cross Site Request Forgery, i.e. the attack). To protect against it, you hand out secret tokens to your clients which they must include with subsequent requests. Your backend must validate that the tokens are valid, applicable and were, in fact, issued by a trusted source. There's a few ways to do that bit: You can use MACs for that (in which case you have something pretty close to JSON WebTokens). You can save your tokens to some trusted store and query that store on subsequent requests. That is pretty much all there is to it. Since your CSRF protection emerges from the combination of choices you made above, how to make it work in a distributed setup also depends on the specific implementation of the CSRF protection scheme. Going by the Django docs, the default way to do it uses a 'secret' which is reset every time a user logs in. That means if hitting a different server for two subsequent requests triggers a new log in, all old CSRF tokens are effectively invalidated. So based on that: You need to adapt your Django project to make sure different instances can resume working with the same session, and a re-login is not triggered All your Django instances need to be able to access the same per log-in secret, so that any one of them can validate a CSRF token issued by any other.
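The MAC-based option mentioned in that answer can be sketched with the standard library alone; the secret and session id below are hypothetical placeholders for values every instance would load from shared configuration:

```python
import hashlib
import hmac

# Hypothetical shared secret: every instance must load the same value
# (e.g. from a mounted Kubernetes Secret), or a token issued by one
# instance will fail validation on another.
SHARED_SECRET = b"same-on-every-instance"

def issue_token(session_id):
    # The token is an HMAC over the session id, so no server-side
    # token store is needed -- any instance can recompute it.
    return hmac.new(SHARED_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def validate_token(session_id, token):
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(issue_token(session_id), token)

token = issue_token("session-abc")
print(validate_token("session-abc", token))   # True on any instance with the secret
print(validate_token("session-xyz", token))   # False: token is bound to the session
```

This is the same idea that makes validation work across load-balanced instances: the trust lives in the shared secret, not in any one server's memory.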
2
1
0
Can someone explain to me how CSRF works in a cluster setup? I have a kubernetes cluster hosting a django website, and I'm having some occasional issues with 403 errors. I have multiple instances of the site load balanced in kubernetes. How does CSRF work when a POST is sent from one instance and handled by another? Does CSRF still work if the docker images are updated during the time the form is being filled out? Thanks!
Django CSRF in a cluster
1.2
0
0
723
47,483,933
2017-11-25T08:17:00.000
-1
0
1
0
python,visual-studio-code,pylint
47,483,956
2
false
0
0
To solve this issue, you can set the path to your python installation in preferences-> settings python.pythonPath.
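As a sketch, that setting lives in the workspace settings.json; the interpreter path below is hypothetical (use the output of `which python3` on your machine):

```json
{
    "python.pythonPath": "/usr/local/bin/python3"
}
```

Note that later releases of the VS Code Python extension replaced `python.pythonPath` with an interpreter picker and `python.defaultInterpreterPath`, so this applies to the extension as it was at the time of the question.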
1
2
0
I downloaded Visual Studio Code the other day and I decided to use it with pylint. For some odd reason I couldn't set the python interpreter to python 3 from the palette (Shift + Ctrl + P) but I set the path to it from the settings and it seems to have done the job. However, pylint underlines almost everything. All of my imports are underlined with the error message: [pylint] E0401:Unable to import 'my.import' I read a couple of threads on this topic and the main suggestions are to set the correct python path and path to pylint, which I have done with 0 success. I tried removing it pip3 remove pylint and reinstalling it, however, it still did not fix my issue.
Using pylint with Visual Studio Code
-0.099668
0
0
2,424
47,487,324
2017-11-25T14:58:00.000
0
0
1
0
python,python-3.x,python-idle
47,489,094
1
false
0
0
Your file is probably corrupted, which is why it appears to have lost its contents.
1
0
0
So I've been working on a python file in Idle 3.5 (64 bit) and this morning I went to open it and it is completely blank in Idle. When I opened it with Notepad++ it was just a long line that said NULL. In the directory it still says the file is 19KB. What gives? I have a backed up version but I'm not sure why this happened.
Python file is now completely empty
0
0
0
108
47,487,749
2017-11-25T15:48:00.000
1
1
1
0
python,python-2.7
47,487,805
2
false
0
0
Calling os.system("python second.py") or using subprocess.Popen from your first script should work for you.
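A minimal sketch of both approaches; the demo writes its own throwaway second.py so it is self-contained, whereas in the real project the file already exists:

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway "second.py" just for this demo.
folder = tempfile.mkdtemp()
path = os.path.join(folder, "second.py")
with open(path, "w") as f:
    f.write("def main():\n    return 'hello from second.py'\n")

# Option 1: run it as a separate process. Passing the interpreter explicitly
# is more portable than os.system("second.py"), which relies on the file
# being executable and having a shebang line.
subprocess.check_call([sys.executable, path])

# Option 2: import it and call its functions directly -- usually the
# cleaner approach, as the accepted answer below also notes.
sys.path.insert(0, folder)
import second
print(second.main())   # hello from second.py
```

Option 2 keeps everything in one process, so data can be passed back and forth as ordinary Python objects instead of through stdout.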
2
1
0
I have a .py file and I want to make it so I can type its name in another .py file and have it run all the code from the first file. Remember, this is in Python 2.7 on a Raspberry Pi 3. Thank you!
Python 2.7 Running External .py Files in Program
0.099668
0
0
156
47,487,749
2017-11-25T15:48:00.000
2
1
1
0
python,python-2.7
47,487,886
2
true
0
0
Well, you can use execfile() or os.system() to solve your problem. But I think the correct way to tackle your problem is to import the file in your current script and call the imported file's functions or main function directly from your script.
2
1
0
I have a .py file and I want to make it so I can type its name in another .py file and have it run all the code from the first file. Remember, this is in Python 2.7 on a Raspberry Pi 3. Thank you!
Python 2.7 Running External .py Files in Program
1.2
0
0
156
47,487,870
2017-11-25T16:00:00.000
1
0
0
1
python-3.x
47,487,950
1
false
0
0
See python --help. It mentions an environment variable called PYTHONSTARTUP which looks like it could help you get where you want.
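A sketch of how that fits together; the startup file path is hypothetical, and PYTHONSTARTUP is only honoured in interactive mode, which is exactly the asker's situation:

```shell
# Write a startup script that pre-imports what you want available.
cat > /tmp/startup.py <<'EOF'
import math
import os
print("startup script loaded")
EOF

# Combine it with the interactive terminal launch, e.g.:
#   xfce4-terminal -e 'env PYTHONSTARTUP=/tmp/startup.py python3'
# The -i flag forces interactive mode, so the same mechanism can be
# demonstrated without a terminal:
PYTHONSTARTUP=/tmp/startup.py python3 -i </dev/null 2>/dev/null
```

Because the session stays interactive after the startup file runs, Python does not exit at the end of the script, which is the behaviour the asker wanted.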
1
0
0
I have a shortcut which runs python3 in a terminal window. I would like to add some import commands to a python script which is to be run when python starts. How can I do this? e.g. I have xfce4-terminal -e python3 which starts a graphical terminal session with python3 running. I want to add something to this to make python3 execute a script; however, I do not want python to exit at the end of the script, which is the default behaviour if a filename is given immediately following the python3 command.
python3: How can I run a python script when python starts?
0.197375
0
0
45
47,491,757
2017-11-26T00:07:00.000
0
0
0
0
wxpython
47,568,957
1
false
0
1
You probably mean TextCtrl, don't you? You can use a validator; there is an example in the wxPython demo, which is part of the Docs and Demo package.
1
0
0
I have an issue with StaticText, and I do not know how to make it accept only the following data = '0123456789.' . Of course, when you type any letter, the letter is automatically deleted. Can you help me please
wxpython make StaticText limit
0
0
0
35
47,494,142
2017-11-26T08:04:00.000
7
0
1
1
python,node.js,npm,gulp
47,526,432
2
true
0
0
For those who encounter this problem in future, you can save yourself some time and let npm install all necessary programs by running: npm install --global --production windows-build-tools Note: you need to run this command as administrator
2
1
0
I am getting the following error when I attempt to install gulp-converter-tjs using npm install -g gulp-converter-tjs It looks like I am missing python but the path is correct for python.exe . I can even run python command from my cmd and also enviroment variables are set. Any suggestions? C:\Users\themhz\AppData\Roaming\npm\node_modules\gulp-converter-tjs\node_modules\iconv>if not defined npm_config_node_gy p (node "C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\bin\node-gyp-bin....\node_modules\node-gyp\bin\node-gyp .js" rebuild ) else (node "" rebuild ) gyp ERR! configure error gyp ERR! stack Error: Can't find Python executable "C:\Users\themhz\AppData\Local\Programs\Python\Python36-32\python.exe ", you can set the PYTHON env variable. gyp ERR! stack at PythonFinder.failNoPython (C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\node_modules\node- gyp\lib\configure.js:483:19) gyp ERR! stack at PythonFinder. (C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\node_modules\node-g yp\lib\configure.js:509:16) gyp ERR! stack at C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\node_modules\graceful-fs\polyfills.js:284:29 gyp ERR! stack at FSReqWrap.oncomplete (fs.js:152:21) gyp ERR! System Windows_NT 10.0.15063 gyp ERR! command "C:\Program Files\nodejs\node.exe" "C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\nod e_modules\node-gyp\bin\node-gyp.js" "rebuild" gyp ERR! cwd C:\Users\themhz\AppData\Roaming\npm\node_modules\gulp-converter-tjs\node_modules\iconv gyp ERR! node -v v8.9.1 gyp ERR! node-gyp -v v3.6.2 gyp ERR! not ok npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] install: node-gyp rebuild npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] install script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! 
C:\Users\themhz\AppData\Roaming\npm-cache_logs\2017-11-25T15_20_09_146Z-debug.log My machine is running windows 10 npm -v 5.5.1 node -v v8.9.1 Python3
Can't find Python executable when installing gulp-converter-tjs
1.2
0
0
3,675
47,494,142
2017-11-26T08:04:00.000
-1
0
1
1
python,node.js,npm,gulp
47,513,927
2
false
0
0
I installed the Python 2.7 version and then I installed the Visual Studio 2017 Community edition in order to get the C++ and the .NET Framework libraries.
2
1
0
I am getting the following error when I attempt to install gulp-converter-tjs using npm install -g gulp-converter-tjs It looks like I am missing python but the path is correct for python.exe . I can even run python command from my cmd and also enviroment variables are set. Any suggestions? C:\Users\themhz\AppData\Roaming\npm\node_modules\gulp-converter-tjs\node_modules\iconv>if not defined npm_config_node_gy p (node "C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\bin\node-gyp-bin....\node_modules\node-gyp\bin\node-gyp .js" rebuild ) else (node "" rebuild ) gyp ERR! configure error gyp ERR! stack Error: Can't find Python executable "C:\Users\themhz\AppData\Local\Programs\Python\Python36-32\python.exe ", you can set the PYTHON env variable. gyp ERR! stack at PythonFinder.failNoPython (C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\node_modules\node- gyp\lib\configure.js:483:19) gyp ERR! stack at PythonFinder. (C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\node_modules\node-g yp\lib\configure.js:509:16) gyp ERR! stack at C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\node_modules\graceful-fs\polyfills.js:284:29 gyp ERR! stack at FSReqWrap.oncomplete (fs.js:152:21) gyp ERR! System Windows_NT 10.0.15063 gyp ERR! command "C:\Program Files\nodejs\node.exe" "C:\Users\themhz\AppData\Roaming\npm\node_modules\npm\nod e_modules\node-gyp\bin\node-gyp.js" "rebuild" gyp ERR! cwd C:\Users\themhz\AppData\Roaming\npm\node_modules\gulp-converter-tjs\node_modules\iconv gyp ERR! node -v v8.9.1 gyp ERR! node-gyp -v v3.6.2 gyp ERR! not ok npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] install: node-gyp rebuild npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] install script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. npm ERR! A complete log of this run can be found in: npm ERR! 
C:\Users\themhz\AppData\Roaming\npm-cache_logs\2017-11-25T15_20_09_146Z-debug.log My machine is running windows 10 npm -v 5.5.1 node -v v8.9.1 Python3
Can't find Python executable when installing gulp-converter-tjs
-0.099668
0
0
3,675
47,495,285
2017-11-26T10:48:00.000
9
0
0
0
python,tkinter,size,pixels
47,622,396
2
false
0
1
Ok, I figured it out. We must call widget.update() first before calling widget.winfo_height().
1
1
0
If I define, for example, a tkinter.Button widget with parameters (width=10, height=1) (in characters) and then I want to retrieve its size in pixels, how do I do it? EDIT: I tried widget.winfo_height() and widget.geometry(), but all these functions return height defined in number of characters. I think it would be possible to create the same widget in a frame and then write frame.winfo_height(), which would return size in pixels, but this is not an elegant solution.
Get tkinter widget size in pixels
1
0
0
5,427
47,495,484
2017-11-26T11:12:00.000
0
0
1
1
python,python-2.7,gimp,pythonpath
47,496,643
1
false
0
0
See /usr/lib/gimp/2.0/interpreters/pygimp.interp (or wherever this is in your Gimp installation directory). You can tweak this to use another Python instance (but it still has to be 2.7.x). On the other hand Gimp's own Python interpreter is the standard cpython, you can use it directly in the command line and can likely tweak pip to install modules for it (or add the modules installed on the other instance to it).
1
0
0
I am developing a custom GIMP plugin on Mac OS X. For programming reasons, I need to use some python modules that I have installed in system's default python environment (such as OpenCV, MoviePy, and some others), but I am unable to import them on the Python-Fu console because it uses the GIMP's app built-in python environment. My question is. Is there a way to tell GIMP to use another python interpreter? I've searched on GIMP preferences but I haven't found anything such as Python-Fu path or something similar. GIMP's Python interpreter is found here on my system: /Applications/GIMP.app/Contents/MacOS/python The interpreter that I want GIMP to use is found here on my system: /usr/bin/python Thanks!
Change Gimp python interpreter path
0
0
0
839
47,497,097
2017-11-26T14:17:00.000
4
0
1
0
python-3.x,matplotlib,anaconda
48,661,572
2
false
0
0
The notebook needs to be restarted for the new installations to take effect.
2
3
1
I am running an Anaconda installation of Python3 64bit on Windows. I have no idea how to put those words in a proper sentence, but I hope it gives enough information. I am taking an Udacity course which wants me to run %matplotlib inline. This gives the following error: AttributeError: module 'matplotlib' has no attribute 'colors' I get the same error when I run from matplotlib import pylab, but I get no error from import matplotlib. I installed matplotlib as follows: conda install -n tensorflow -c conda-forge matplotlib. How do I solve this error? Kind regards. Per request: conda list gives matplotlib 2.1.0 py36_1 conda-forge and a list of other modules.
Module 'matplotlib' has no attribute 'colors'
0.379949
0
0
15,407
47,497,097
2017-11-26T14:17:00.000
1
0
1
0
python-3.x,matplotlib,anaconda
53,349,221
2
false
0
0
You just need to upgrade matplotlib. pip3 install -U matplotlib
2
3
1
I am running an Anaconda installation of Python3 64bit on Windows. I have no idea how to put those words in a proper sentence, but I hope it gives enough information. I am taking an Udacity course which wants me to run %matplotlib inline. This gives the following error: AttributeError: module 'matplotlib' has no attribute 'colors' I get the same error when I run from matplotlib import pylab, but I get no error from import matplotlib. I installed matplotlib as follows: conda install -n tensorflow -c conda-forge matplotlib. How do I solve this error? Kind regards. Per request: conda list gives matplotlib 2.1.0 py36_1 conda-forge and a list of other modules.
Module 'matplotlib' has no attribute 'colors'
0.099668
0
0
15,407
47,497,482
2017-11-26T15:01:00.000
0
0
0
1
python,docker
47,497,796
1
false
0
0
When it says "Python runtime," it just means the Python interpreter in a binary format and its local configuration of package dependencies. Your interpretation is correct.
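Concretely, that portable runtime is the base image named in the first line of a Dockerfile; a minimal sketch (the app file name is hypothetical, and python:2.7-slim is just one example tag):

```dockerfile
# The base image bundles the Python interpreter and standard library,
# so nothing needs to be installed on the host.
FROM python:2.7-slim

WORKDIR /app
COPY app.py .

CMD ["python", "app.py"]
```

The `FROM` line is what the tutorial means by "grab a portable Python runtime as an image": the interpreter travels inside the image alongside your code.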
1
0
0
I am starting the Docker tutorial with the Python app and would like to know what is the meaning of "Python runtime" in this context: In the past, if you were to start writing a Python app, your first order of business was to install a Python runtime onto your machine. But, that creates a situation where the environment on your machine has to be just so in order for your app to run as expected; ditto for the server that runs your app. With Docker, you can just grab a portable Python runtime as an image, no installation necessary. Then, your build can include the base Python image right alongside your app code, ensuring that your app, its dependencies, and the runtime, all travel together. So I guess what it mean is the "Python runtime" is like the configuration of your local Python.
What does the Docker tutorial mean by "grab a portable Python runtime as an image"?
0
0
0
202
47,497,552
2017-11-26T15:09:00.000
1
0
0
0
python,optimization,nonlinear-optimization
47,608,446
3
false
0
0
Surprisingly, I found a relatively ok solution using an optimizer from a deep learning framework, Tensorflow, using basic gradient descent (actually RMSProp, gradient descent with momentum) after I changed the cost function to include the inequality constraint and the bounding constraints as penalties (I suppose this is the same as the Lagrange method). It trains super fast and converges quickly with proper lambda parameters on the constraint penalties. I didn't even have to rewrite the jacobians, as TF takes care of that without much speed impact, apparently. Before that, I managed to get NLOPT to work and it is much faster than scipy/SLSQP but still slow on higher dimensions. Also, NLOPT/AUGLANG is super fast but converges poorly. That said, at 20k variables it is still slow, partly due to memory swapping and the cost function being at least O(n^2) from the pair-wise Euclidean distance (I use (x-x.t)^2+(y-y.t)^2 with broadcasting). So still not optimal.
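The penalty idea described in that answer (fold the inequality constraint into the cost, then minimize unconstrained with a gradient method) can be sketched without Tensorflow; the toy problem and penalty weight below are made up for illustration:

```python
# Toy problem: minimize (x - 3)^2 subject to x <= 1.
# The inequality constraint becomes a quadratic penalty term, so a
# plain unconstrained gradient method applies directly.
LAMBDA = 100.0   # penalty weight (the "lambda parameter" the answer mentions)

def cost(x):
    violation = max(0.0, x - 1.0)            # zero inside the feasible region
    return (x - 3.0) ** 2 + LAMBDA * violation ** 2

x, lr, eps = 0.0, 0.005, 1e-6
for _ in range(5000):
    grad = (cost(x + eps) - cost(x - eps)) / (2 * eps)   # numeric gradient
    x -= lr * grad

# The penalised optimum sits just outside the boundary and approaches
# x = 1 as LAMBDA grows: here the analytic fixed point is 206/202 ~ 1.0198.
print(round(x, 3))   # 1.02
```

Larger LAMBDA pushes the solution closer to feasibility but makes the problem stiffer, which is the usual penalty-method trade-off; a framework like Tensorflow mainly adds automatic gradients and better step-size adaptation on top of this.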
1
3
1
I have a non-linear optimization problem with a constraint and upper/lower bounds, so with scipy I have to use SLSQP. The problem is clearly not convex. I got the jacobian for both the objective and constraint functions to work correctly (results are good/fast up to a 300-element input vector). All functions are vectorized and tuned to run very fast. The problem is that using a 1000+ input vector takes ages, though I can see the minimizer is not calling my functions a lot (objective/constraint/gradients) and seems to spend most of its processing time internally. I read somewhere that the performance of SLSQP is O(n^3). Is there a better/faster SLSQP implementation or another method for this type of problem for python? I tried nlopt and it somehow returns wrong results given the exact same functions I use in scipy (with a wrapper to adapt to its method signature). I also failed to use ipopt with the pyipopt package; I cannot get working ipopt binaries to work with the python wrapper. UPDATE: if it helps, my input variable is basically a vector of (x,y) tuples or points on a 2D surface representing coordinates. With 1000 points, I end up with a 2000-dim input vector. The function I want to optimize calculates the optimum position of the points between each other, taking into consideration their relationships and other constraints. So the problem is not sparse. Thanks...
scipy.optimize.minimize('SLSQP') too slow when given 2000 dim variable
0.066568
0
0
8,691
47,498,390
2017-11-26T16:30:00.000
4
1
0
0
python,unit-testing,pytest
61,614,421
5
false
0
0
Adding __init__.py to the package of the tests worked for me. All test are executed afterwards.
1
17
0
I'm using python pytest to run my unit tests. My project folders are: Main - contains data file: A.txt Main\Tests - the folder from which I run pytest Main\Tests\A_test - folder that contains a test file The test in the A_test folder uses the file A.txt (that is in the Main folder). My problem is that when I run py.test, the test fails because it can't find A.txt. I found out that it is because pytest uses the path Main\Tests when running the test instead of changing the path to Main\Tests\A_test (I'm using a relative path when opening A.txt inside the test file). My question: is there a way to make pytest change directory to the folder of the test it executes for each test, so that relative paths inside the tests will still work? Is there some other generic way to solve it? (I don't want to change everything to absolute paths or something like this; also, this is an example, and in real life I have several hundred tests.) Thank you, Noam
Using pytest where test in subfolder
0.158649
0
0
21,673
47,501,275
2017-11-26T21:37:00.000
0
0
1
0
python,main,functools
55,925,583
1
true
0
0
I ended up going with my proposed answer: define those functions in utils.py with the data as explicit parameters, and then use functools.partial before generating the equations to make them implicit and with that codebase having been in production for over a year, it seems reasonable enough to me.
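A minimal sketch of that pattern; the helper, its name, and the data are made up for illustration:

```python
import functools

# utils.py-style helper: the loaded data is an explicit parameter.
def demand_at(data, t):
    """Return demand at time index t; `data` stands in for the loaded dataframe."""
    return data[t] * 2

# main()-style setup: bind the loaded data once, so the equation-building
# code only ever sees a function of t.
data = {0: 10, 1: 20, 2: 30}
demand = functools.partial(demand_at, data)

print(demand(1))   # 40 -- call sites are parameterised by t alone
```

The partial object carries the data implicitly, so the equations read as `demand(t)` while the helpers stay testable with explicit arguments.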
1
0
1
My main function loads a largish dataframe from a csv supplied by the user (along with several other data objects), and then instantiates an object that forms a bunch of equations as part of a mathematical programming problem. Many of the equations' components are returned by calls to about 5 helper functions that I define in a utils file (most importantly, outside of the class that stores the optimization problem). These helper functions make reference to the data loaded in main, but I want their calls to show up in the equations as parameterized only by a time index t (not by the dataframe), for readability. Is the best way to accomplish this to define those functions in utils.py with the data as explicit parameters, and then use functools.partial before generating the equations to make them implicit? This seems like a verbose approach to me, but the other options seem worse: to define the helper functions inside of main, or to give up on the idea of a main function loading the data, which basically means giving up on having a main function. And possibly having confusing circular imports.
Best way to make data loaded in main() an implicit argument of a function in Python
1.2
0
0
36
47,510,030
2017-11-27T11:34:00.000
1
0
0
0
android,python,kivy,android-permissions,pyjnius
63,417,403
4
false
0
1
I know this answer is a little late, but to get permissions you have to specify them before the build. E.g. buildozer uses a buildozer.spec. In this file you can specify the permissions you need.
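For install-time (not runtime) permissions, the relevant buildozer.spec fragment looks like this; the permission names below are examples:

```ini
# buildozer.spec -- fragment of the [app] section
android.permissions = INTERNET,CAMERA,WRITE_EXTERNAL_STORAGE
```

These end up in the generated AndroidManifest.xml; prompting the user at runtime (Android 6+) is a separate mechanism, as the other answer notes.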
2
4
0
I found that kivy is a very nice framework to build cross-platform applications and I am very interested in kivy just to do android applications, as I think it is easy and comfortable in kivy. After trying a few examples, I am interested to know how I should handle android runtime permissions for the kivy app. Actually, I had searched on google, but there is no single working example out there. Should I go back to android / java, or is it possible with kivy and some other python libs?
How to handle android runtime permission with kivy
0.049958
0
0
7,102
47,510,030
2017-11-27T11:34:00.000
1
0
0
0
android,python,kivy,android-permissions,pyjnius
47,522,452
4
false
0
1
python-for-android doesn't have any code for handling runtime permissions. I expect to look at it sooner rather than later, but there's no ETA for it. You can probably add the code for it yourself if you're interested and know how. If you'd like to try it, such contributions would be very welcome.
2
4
0
I found that kivy is a very nice framework to build cross-platform applications and I am very interested in kivy just to do android applications, as I think it is easy and comfortable in kivy. After trying a few examples, I am interested to know how I should handle android runtime permissions for the kivy app. Actually, I had searched on google, but there is no single working example out there. Should I go back to android / java, or is it possible with kivy and some other python libs?
How to handle android runtime permission with kivy
0.049958
0
0
7,102
47,510,845
2017-11-27T12:18:00.000
0
0
1
0
python,macos,github,jupyter-notebook,github-for-mac
47,515,414
2
false
0
0
For this very case I simply deleted the folder locally, committed the changes, and then added back the files as I wanted them to be. By the way, I would love to know a more general solution, to use when the number of files or folders is too high to handle them individually.
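The more general solution is `git rm -r --cached`, which untracks a path without deleting it from disk; a self-contained sketch in a throwaway repository (all paths hypothetical):

```shell
# Demo in a throwaway repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email demo@example.com && git config user.name demo

mkdir .ipynb_checkpoints
touch .ipynb_checkpoints/nb-checkpoint.ipynb notebook.ipynb
git add -A && git commit -qm "initial"

# Untrack the folder (it stays on disk), then ignore it going forward.
git rm -r -q --cached .ipynb_checkpoints
echo ".ipynb_checkpoints/" >> .gitignore
git add .gitignore && git commit -qm "stop tracking checkpoint folder"

git ls-files   # .gitignore and notebook.ipynb remain; the checkpoints do not
```

After the commit, a `git push` removes the folder from the online repo while your local copy keeps it.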
1
0
0
I uploaded a Jupyter notebook on github, since then I worked on it locally and then committed the changes. Now I have a .ipynb_checkpoints folder which does not appear on the local repo, and I would like to remove it from the online repo. If I check with git status, or if I try to commit, it says that everything is up-to-date. How can it be done?
Remove a folder from github which does not show locally
0
0
0
783
47,512,599
2017-11-27T13:53:00.000
1
0
1
0
python,python-3.x,python-2.7,pycharm
47,514,912
2
false
0
0
Go to File > Settings > Project > Project Interpreter (it should take you there automatically as soon as you open Settings) and select the version you want to use from the drop-menu. If it is not there, try restarting PyCharm (if it was active whilst the Python 3 installation) - else, it probably means you didn't install Python 3 properly.
1
2
0
I'm a complete newbie trying to use pycharm with python, but my interpreter shows a version of 2.7 when I have installed 3.6. Totally confused and need help! On pycharm I do the following steps: Preferences > Python Console > Python Interpreter. I only see Python 2.7.8 (/Library/Framework/....) and beneath this I see options beginning with (/Library/Framework.... ) <-- some of these end in bin/python3.6. I am not sure how to configure Pycharm to use the new version of Python. Being a complete newbie I am really confused as to what to do, and whether changing this makes any difference. Any help would be much appreciated. Thanks!
Pycharm shows interpreter version of 2.7 but I have downloaded 3.6?
0.099668
0
0
3,512
47,512,770
2017-11-27T14:02:00.000
2
0
1
0
python,ide,spyder
47,516,219
1
false
0
0
(Spyder maintainer here) About your questions: Can I set these colors? No. Can I disable which ones to show? Yes, you can do that by going to Preferences > Editor > Code Introspection/Analysis > Analysis. Can I "smart rename" a variable like in other IDEs? Not right now, but we're trying to implement this feature for Spyder 4 (our next major release). Can I list all occurrences of a variable like in other IDEs? No, but it's also planned for Spyder 4.
1
1
0
I have some questions about the spyder 3.2.4 IDE for python: The "sidebar" (to the right of the main code window) marks all lines with warnings, all lines with todos, etc. with a small colored marker. Can I set these colors? Can I disable which ones to show? Can I "smart rename" a variable like in other IDEs? I mean not just text replace, but actually make sure I just rename the selected variable (all occurrences and nothing but it) and not just text matching a string I type in, like I would in "word". Can I list all occurrences of a variable like in other IDEs? Let's say I have a list called "combinedAreas" and want to list all the uses of that variable, preferably also being able to click a line and jump to that line. Thanks!
spyder IDE sidebar, rename, find all occurrences?
0.379949
0
0
2,413
47,515,644
2017-11-27T16:32:00.000
0
0
0
0
python,pandas,csv,dataframe,indexing
47,516,794
3
true
0
0
You may just reset the index at the end, or define a local variable and use it in the `arange` function. Update the variable with the number of rows for each file you read.
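The running-offset version of that advice can be sketched as follows; tiny dicts stand in for the monthly csv files (yellow_jan, yellow_feb):

```python
import pandas as pd

# Stand-ins for the monthly csv files.
monthly_data = [{"v": [10, 20, 30]}, {"v": [40, 50]}]

offset = 0
frames = []
for raw in monthly_data:
    df = pd.DataFrame(raw)
    # Continue numbering where the previous file stopped.
    df["index"] = range(offset + 1, offset + 1 + len(df))
    offset += len(df)
    frames.append(df)

print(frames[0]["index"].tolist())   # [1, 2, 3]
print(frames[1]["index"].tolist())   # [4, 5]
```

For the asker's sizes, the second file would simply start at 1048575 because the offset after the first file is 1048574.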
1
0
1
Is it possible to start the index from n in a pandas dataframe? I have some datasets saved as csv files, and would like to add the column index with the row number starting from where the last row number ended in the previous file. For example, for the first file I'm using the following code which works fine, so I got an output csv file with rows starting at 1 to 1048574, as expected: yellow_jan['index'] = range(1, len(yellow_jan) + 1) I would like to do same for the yellow_feb file, but starting the row index at 1048575 and so on. Appreciate any help!
How to index a pandas data frame starting at n?
1.2
0
0
1,626
47,516,712
2017-11-27T17:33:00.000
0
0
1
0
python,python-3.x,installation,windows-xp,python-2.x
58,687,877
5
false
0
0
I tried 3.3.3 but I came up with an error message. Use 3.4.3/2.7.9; sadly, they are the only versions that work now.
2
14
0
I would like the most advanced version of Python that still works on Windows XP. I need both Python 2 and Python 3. What versions of Python will work on Windows XP?
What versions of Python will work in Windows XP?
0
0
0
13,892
47,516,712
2017-11-27T17:33:00.000
-2
0
1
0
python,python-3.x,installation,windows-xp,python-2.x
47,516,729
5
false
0
0
Any of them, python is very platform independent. Some features might not work, but that would best be found in the documentation.
2
14
0
I would like the most advanced version of Python that still works on Windows XP. I need both Python 2 and Python 3. What versions of Python will work on Windows XP?
What versions of Python will work in Windows XP?
-0.07983
0
0
13,892
47,517,484
2017-11-27T18:26:00.000
0
0
1
0
python,binding,instance,vlc,libvlc
57,161,579
1
true
0
0
Generally it is checking for libvlc.dll and a couple of other dll files, which were not included in my installed version of VLC (maybe due to some issue during installation or some other reason, I don't know). So copying the dlls to the VLC installation folder or the working directory of the project solved the problem.
1
0
0
I want to make a simple media player in python using the libvlc python bindings. I have downloaded the vlc.py and tested it. It works perfectly. So I started using vlc.py as a module in my code, here is my code: import dev_vlc as vlc import time import os vlcinstance = vlc.Instance() myplayer = vlcinstance.media_player_new() media = vlcinstance.media_new('test.mp3') myplayer.set_media(media) myplayer.play() time.sleep(10) When I run the above code instead of playing the audio file it throws the following error: [034a2cb4] core libvlc error: No plugins found! Check your VLC installation. Traceback (most recent call last): File "C:/Users/krush/Documents/MyMediaPlayer/MyMediaPlayer.py", line 7, in <module> myplayer = vlcinstance.media_player_new() AttributeError: 'NoneType' object has no attribute 'media_player_new' Can anyone please help me to fix and tell me where I went wrong.
core libvlc error while creating an instance for vlc
1.2
0
0
784
47,517,930
2017-11-27T18:55:00.000
0
0
0
0
python,api,firebase,react-native
49,160,990
2
false
0
0
What I know (and I really don't know much) is to keep your frontend separate from your backend. You create the frontend components in React. You also create API routes to your backend using e.g. Python and Flask (Firebase is a good option too). Then you make HTTP requests from your frontend to your backend API to connect the two.
1
3
0
I would like to use python as part of a react-native project which handles heavy algorithms. However, as many have suggested, it is not recommended to do that, but simply create an API for the python. Now firstly, I'm confused by the term "create an API", what does that mean? Is it to use something like firebase that deals with backend, and then simply use firebase api?
How to connect react native app to python?
0
0
0
3,179
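The "create an API" idea in the answer above can be sketched without any framework at all. Here is a minimal, hypothetical endpoint using only the Python standard library (in practice you would likely reach for Flask or Django REST Framework; the route name, the `heavy_algorithm` function, and the JSON shape are all made up for illustration):

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def heavy_algorithm(numbers):
    # Stand-in for the expensive Python work the frontend cannot do itself.
    return {"sum": sum(numbers), "max": max(numbers)}

class ApiHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The React Native app would POST JSON to this route.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        result = heavy_algorithm(payload.get("numbers", [0]))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve for real:
#   ThreadingHTTPServer(("0.0.0.0", 8000), ApiHandler).serve_forever()
```

From React Native, the frontend would then call something like `fetch("http://host:8000/api/heavy", {method: "POST", body: JSON.stringify({numbers: [1, 2, 3]})})`.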
47,517,981
2017-11-27T18:58:00.000
8
0
0
0
python,django,postgresql,django-rest-framework
47,517,982
3
false
1
0
This error turns out (usually) to be caused, ultimately, by failing to create the initial migration for a new app. The error was resolved by running $ ./manage.py makemigrations <my new app module name> && ./manage.py migrate NOTE: makemigrations DOES NOT AUTOMATICALLY CREATE THE INITIAL MIGRATION FOR NEW APPS unless you name the app explicitly.
2
11
0
I'm seeing an error when running my tests, i.e. $ ./manage.py test --settings=my.test.settings django.db.utils.ProgrammingError: relation "<relation name>" does not exist This is after running ./manage.py makemigrations && migrate.
Django test runner failing with "relation does not exist" error
1
0
0
2,724
47,517,981
2017-11-27T18:58:00.000
0
0
0
0
python,django,postgresql,django-rest-framework
69,928,458
3
false
1
0
In case you face this issue while using the pytest-django package, check that you haven't created new migrations with the --reuse-db option in your pytest.ini.
2
11
0
I'm seeing an error when running my tests, i.e. $ ./manage.py test --settings=my.test.settings django.db.utils.ProgrammingError: relation "<relation name>" does not exist This is after running ./manage.py makemigrations && migrate.
Django test runner failing with "relation does not exist" error
0
0
0
2,724
47,522,534
2017-11-28T01:29:00.000
0
0
0
0
python,tkinter,scrollbar
47,523,573
2
false
0
1
You need to put a binding on the inner frame's <Configure> event to also reset the scrollregion.
1
2
0
I currently have a scrollbar and a canvas on the same hierarchical level. In the canvas, there is a frame created using the canvas' create_window method. I have a binding that is called when the canvas is configured that will resize the scrollregion to fit bbox("all"). It works, but ONLY when the entire window is resized (e.g. If I add more widgets to the canvas that are now not in its visible region, I have to resize the window to be able to change the canvas' scrollregion). Ideally, the scrollregion should change as soon as the new widget is added to a nonvisible location of the canvas (e.g. it's off the screen). What am I currently doing incorrectly? Any advice is appreciated!
Tkinter: Resizing scrollbar without adjusting window size
0
0
0
423
47,525,351
2017-11-28T06:36:00.000
2
0
0
0
python,session,flask,flask-login
47,525,555
1
false
1
0
One way to ensure this would be to generate a session id on the server. You would need to generate a unique session id every time a user logs in and store it in some database against that user. Apart from this, you would need to authenticate the user on every endpoint call that requires the user to be logged in, and of course discard the session id on logout. This way, whenever a user logs in, the old session id is discarded and the previous session is no longer valid.
1
0
0
how to make ensure one active session per one user using python flask? Description: if one user logged in two different machines with same credentials I want mechanism to force logout earlier active sessions of that user with flask and python. Please help me out in this. I am currently using flask-login, load_user() and login_manager libraries for login mechanism.
how to make ensure one active session per one user using python flask?
0.379949
0
0
821
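The answer above is framework-agnostic; a minimal sketch of the core logic might look like the following (the in-memory dict stands in for the database table, and all names are hypothetical, not Flask-Login API):

```python
import secrets

# In-memory stand-in for the database column mapping user -> current session id.
_active_sessions = {}

def login(username):
    """Issue a fresh session id; any previously issued id becomes invalid."""
    token = secrets.token_hex(16)
    _active_sessions[username] = token
    return token

def is_session_valid(username, token):
    """Check on every endpoint call that requires login."""
    return _active_sessions.get(username) == token

def logout(username):
    """Discard the session id so neither machine stays logged in."""
    _active_sessions.pop(username, None)
```

The key property is that a second `login` with the same credentials silently invalidates the token held by the first machine.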
47,526,059
2017-11-28T07:23:00.000
0
0
0
0
python,pandas,dataframe
47,526,928
2
false
0
0
Another way could be to time it yourself: a = time.time(); result = value in set(dataframe.index); b = time.time(); timetaken = b - a
1
1
1
When I update the dataframe, I should check if the value exists in the dataframe index, but I want to know which way is faster, thanks! 1. if value in set(dataframe.index) 2. if value in dataframe.index
How to quickly check if a value exists in pandas DataFrame index?
0
0
0
2,481
47,530,337
2017-11-28T11:22:00.000
1
1
0
0
python-pptx
47,537,303
1
false
0
0
The hierarchy governing the inheritance of font style is knowledge that belongs to the ill-documented black arts of PowerPoint. I don't know of a place where it's clearly described. If I needed to learn it, I would start with a Google search on "powerpoint style hierarchy" to gather candidate participants and then settle in for a long period of experimentation. The candidates I can think of are, roughly in order of precedence: formatting directly applied at the run level default run formatting applied at the paragraph level (this doesn't always take effect) formatting inherited from a placeholder, if the shape was originally a placeholder. A theme related to the slide, its slide layout, or its slide master. A table style Presentation-default formatting. I would devote a generous period to getting anything I could from Google, form a set of hypotheses, then set up experiments to prove or disprove those hypotheses. Note the challenge is made more complex by the conditions involved, such as "is in a table" and "is a placeholder", etc.
1
3
0
[text element].font.size returns None if the element has inherited its size from a parent text style. The documentation refers to a style hierarchy but doesn't appear to include documentation about it. Does anyone know how you traverse this hierarchy to determine the actual size of a font element if it has inherited its size from somewhere else?
python-pptx font size from hierarchy / template / master
0.197375
0
0
322
47,532,330
2017-11-28T13:04:00.000
1
0
1
0
python,python-2.7,out-of-memory
47,533,092
2
false
0
0
Python 3 is known to require more memory than Python 2.7 in some domains: strings are Unicode, so they can use more memory than Python 2 byte strings, and the Python 3 int type is the equivalent of the Python 2 long type, so again it can require more memory. Some improvements may have occurred in other domains, but I would not hope that moving from Python 2 to Python 3 could solve any out-of-memory issue. Increasing physical memory would be a much more reliable way.
1
0
0
I got MemoryError from creating 4 matrices size: (115005L, 6005L) (9738L, 6005L) (115005L, 9738L) and (115005L, 6005L) in the same function. Now I am on Python 2.7.13 (Anaconda 64-bit)in Windows. Is updating python to 3.x the best way to solve the problem? Or how to solve MemoryError without modifying hardware? I have to use this PC but I have no authority to buy or add anything.
What are the programming ways to solve MemoryError (raising from creating large matrices)?
0.099668
0
0
48
47,533,930
2017-11-28T14:25:00.000
5
0
0
0
python,cluster-analysis,dbscan
47,536,805
1
false
0
0
The DBSCAN paper suggests choosing minPts based on the dimensionality, and eps based on the elbow in the k-distance graph. In the more recent publication Schubert, E., Sander, J., Ester, M., Kriegel, H. P., & Xu, X. (2017). DBSCAN Revisited, Revisited: Why and How You Should (Still) Use DBSCAN. ACM Transactions on Database Systems (TODS), 42(3), 19, the authors suggest using a larger minPts for large and noisy data sets, and adjusting epsilon depending on whether you get clusters that are too large (decrease epsilon) or too much noise (increase epsilon). Clustering requires iteration. That paper is an interesting read, because it shows what can go wrong if you don't look at your data: people are too obsessed with performance metrics and forget to look at the actual data.
1
3
1
What routine or algorithm should I use to provide eps and minPts parameters to DBSCAN algorithm for efficient results?
How can I choose eps and minPts (two parameters for DBSCAN algorithm) for efficient results?
0.761594
0
0
4,405
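The k-distance graph mentioned in the answer above can be computed with a few lines of plain Python (in practice you would use sklearn's NearestNeighbors on real data; this brute-force sketch is just to show what gets plotted when looking for the elbow):

```python
import math

def k_distances(points, k):
    """Distance from each point to its k-th nearest neighbour, sorted descending.

    Plotting these values and looking for the "elbow" is the heuristic the
    DBSCAN paper suggests for choosing eps (with minPts around k + 1).
    """
    out = []
    for i, p in enumerate(points):
        # brute-force distances to every other point
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        out.append(dists[k - 1])
    return sorted(out, reverse=True)
```

On a toy set, outliers show up as a sharp jump at the left of the sorted curve, which is where you would read off eps.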
47,535,356
2017-11-28T15:36:00.000
0
0
1
0
python,html,json,node.js,firebase-realtime-database
47,537,413
1
true
1
0
You have to create some sort of communication channel between the JavaScript and Python code. This could be anything: SOAP, HTTP, RPC, any number and flavor of message queue. If nothing like that is in place, it's quite the long way around. A complex application might warrant doing this; think microservices communicating across some sort of service bus. It's a sound strategy, and perhaps that's why your client is asking for it. You already have Firebase, though! Firebase is a real-time database that already has many of the characteristics of a queue. The simplest and most idiomatic thing to do would be to let the Python code be notified of changes by Firebase. Firebase as a service bus is a nice strategy too!
1
0
0
I have JS running and essentially getting user entries from my HTML session storage and pushing these to a DB. I also need to use a HTTP request to pass a json object containing the entries to a python file hosted somewhere else. Does anyone have any idea of documentation I could look at, or perhaps how to get JSON objects from JS to Python. My client does not want me to grab the variables directly from the DB.
How do you pass variables using a HTTP post from a JS file/function to a separate python file/function hosted somewhere else?
1.2
0
1
37
47,536,066
2017-11-28T16:10:00.000
0
0
0
0
python,neural-network,keras,layer
47,536,983
1
true
0
0
Since you don't need it to be trainable, a lambda function will also do. Or you can keep the custom layer as you have it and set trainable to False. The weights will never be updated for this layer, and whatever you do here will forward-propagate to the next layer in the model; as mentioned in the comments, backprop will still affect the other layers that have weights, so your model will definitely learn something. I personally recommend using a custom layer, in case you decide later to add some learning to this layer and check your results; you cannot do that in a Lambda function. If you do add a weight (kernel), you will have to use it in the 'call' method, otherwise your model will throw an error during training.
1
0
1
I would like to implement a custom layer. The 2 inputs of my custom layer are 2 tensors, which come from 2 seperate 2D convolution layers, is there an example?
Define a custom layer with 2 tensors inputs in Keras
1.2
0
0
406
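A structural sketch of the two-input layer discussed above, in plain Python so the shape of the code is visible (this is not runnable Keras; in real code the class would subclass `tf.keras.layers.Layer` and receive actual tensors, and the merge operation here is an arbitrary stand-in):

```python
class MergeTwoInputs:
    """Sketch of a Keras-style custom layer taking two inputs.

    The real Keras version would be roughly:

        class MergeTwoInputs(tf.keras.layers.Layer):
            def call(self, inputs):
                a, b = inputs          # the two conv outputs
                return a + b           # or any merge op

    and would be used as  merged = MergeTwoInputs()([conv_out1, conv_out2]).
    """

    def __init__(self, trainable=False):
        # No weights here, so there is nothing to train anyway.
        self.trainable = trainable

    def __call__(self, inputs):
        return self.call(inputs)

    def call(self, inputs):
        a, b = inputs  # unpack the two incoming "tensors" (plain lists here)
        # element-wise sum as a stand-in merge operation
        return [x + y for x, y in zip(a, b)]
```

The point being illustrated is only the convention: a layer with several inputs receives them as a single list argument to `call` and unpacks it there.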
47,539,225
2017-11-28T19:14:00.000
1
0
0
0
python
69,762,672
1
false
1
0
Use Selenium's find element by name, then use the pynput library to right-click and download. I used a similar approach for fetching market data and it works pretty well; refer to the official Selenium docs for specifics.
1
0
0
Can you help me with a script for autoclicking and download a photo from a photo site (ex. Flickr, Photobucket)?
Python script for autoclicking and downloading
0.197375
0
1
38
47,539,882
2017-11-28T19:55:00.000
3
0
0
0
python,django
47,540,567
1
false
1
0
There are other threads on this, but basically these are the rules I use: you should definitely track migration files with Git. Never run makemigrations in the production environment, always in development. Now, say you made a change to one of your models (in development, I hope): run makemigrations as normal, then run migrate (still in dev) in order to test everything. When you're ready, commit and push the created files, pull in prod, and then run migrate there to update the database schema. This ensures good versioning of your migration files. It will also greatly help you in the long run, because running makemigrations in production and in dev simultaneously will just cause more conflicts in migration files, which can be a pain.
1
2
0
While developing a Django project tracking it with git and GitHub, how should I manage migrations? Sometimes when I deploy a release to production some migrations crash due to files that I delete after this migration. How can I avoid this? Thanks.
Django project: track migrations
0.53705
0
0
289
47,541,915
2017-11-28T22:21:00.000
1
0
1
0
python,memory-management
47,541,942
1
true
0
0
array=[1,2,3] is a list, not an array. It is dynamically allocated (it resizes automatically), and you do not have to free up memory. The same applies to arrays from the array module in the standard library and arrays from the numpy library. As a rule, Python handles memory allocation and memory freeing for all its objects, with perhaps the exception of some objects created using Cython or by directly calling C modules.
1
0
0
When using large arrays, does python allocate memory as default, unlike C for example? More specifically, when using the command array=[1,2,3], should I worry about freeing this and every other array I create? Looking for answers on the web just confused me more.
Python: Does array creation automatically allocate memory?
1.2
0
0
443
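The dynamic resizing described in the answer above can be observed directly with `sys.getsizeof`: CPython over-allocates a list on growth, so its reported size jumps occasionally rather than on every append. A small sketch:

```python
import sys

def size_growth(n):
    """Record sys.getsizeof(lst) after each of n appends.

    Plateaus in the result are appends that did NOT trigger a resize,
    i.e. the interpreter had already over-allocated room for them.
    """
    lst, sizes = [], []
    for i in range(n):
        lst.append(i)
        sizes.append(sys.getsizeof(lst))
    return sizes
```

The exact byte counts vary by Python version and platform, but the plateau pattern is always there.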
47,543,734
2017-11-29T01:45:00.000
0
0
1
0
python,windows,windows-10,python-3.6
54,420,025
1
false
0
0
In Windows: Run ve\Scripts\activate.bat In Linux: source /path/to/ENV/bin/activate
1
0
0
To create a virtual environment I used: virtualenv -p C:\Users\UserName\AppData\Local\Programs\Python\Python36-32\python.exe ve However I'm not able to activate it. I tried using: C:\Users\UserName\AppData\Local\Programs\Python\Python36-32\python.exe\ve\bin\activate and source ve/bin/activate And neither worked. I looked around the site and couldn't get any of the suggestions to work either (most of the questions I saw were on different OS/Python versions so that might be part of the issue). Is the actual set up for creating a virtual environment correct? If so, how can I activate it?
Activating a virtual environment in Python 3 on Windows 10
0
0
0
1,122
47,545,711
2017-11-29T05:31:00.000
0
0
1
0
python,zero
47,545,768
1
false
0
0
Python drops leading zeros, as they do not add to the value of the number, which in this case is 0: print(1,000,000) is parsed as a call with three arguments, 1, 000 and 000, and the literal 000 is simply the integer 0.
1
0
0
I understand that the comma (,) makes Python think that print(1,000,000) is a list of three items to be printed. However, why is only 1 zero (0) of the 3 printed? Surely 1 000 000 should be printed instead of 1 0 0? Why have the other 2 zeroes disappeared? Thank you all for your help!! Alas I must appeal to the masters^^.
Why does print(1,000,000) in Python give 1 0 0 instead of 1 000 000?
0
0
0
996
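What the parser sees in the question above can be demonstrated by capturing the output of the exact expression:

```python
import io
from contextlib import redirect_stdout

# 1,000,000 is parsed as three separate arguments: 1, 000 and 000.
# A literal made entirely of zeros, like 000, is just the integer 0
# (Python 3 rejects other leading-zero literals such as 010).
buf = io.StringIO()
with redirect_stdout(buf):
    print(1,000,000)   # exactly the statement from the question
captured = buf.getvalue()
```

To actually print one million, write `print(1000000)` or, since Python 3.6, `print(1_000_000)`.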
47,545,740
2017-11-29T05:33:00.000
2
0
1
0
python,beautifulsoup,importerror
47,547,367
2
false
1
0
As colorspeed pointed out above, BeautifulSoup is a class, therefore import bs4.BeautifulSoup will result in an error. Instead, use the syntax from bs4 import BeautifulSoup.
2
1
0
To my understanding "import package.module" is same as "from package import module". But this is not behaving as expected in case of BeautifulSoup. from bs4 import BeautifulSoup: This command works fine. But, import bs4.BeautifulSoup throws the following error ModuleNotFoundError: No module named 'bs4.BeautifulSoup' Any thoughts/help on this?
Import BeautifulSoup
0.197375
0
0
5,300
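The module-versus-class distinction behind this error can be reproduced with the standard library alone: `datetime.datetime` is a class inside the `datetime` module, just as `bs4.BeautifulSoup` is a class inside the `bs4` package, so the same two import forms succeed and fail in the same way:

```python
import importlib

# "import package.module" only works when the dotted name is itself a
# module (a file) on disk.  A class has to be pulled in with
# "from ... import ...".
from datetime import datetime          # works: importing a class by name

try:
    # equivalent of writing "import datetime.datetime"
    importlib.import_module("datetime.datetime")
    import_failed = False
except ModuleNotFoundError:
    import_failed = True               # no module named 'datetime.datetime'
```

Swap in `bs4` / `BeautifulSoup` and the behaviour is identical.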
47,545,740
2017-11-29T05:33:00.000
0
0
1
0
python,beautifulsoup,importerror
66,036,064
2
false
1
0
import bs4.BeautifulSoup would only work if BeautifulSoup were itself a module (a file) inside the bs4 package; since BeautifulSoup is a class in that package, it cannot be imported that way. For example, if I have a package named Twitter containing a sub-package twitter_api, I can import it as import Twitter.twitter_api. But if Twitter instead contains a class named TwitterApi, I have to import it as from Twitter import TwitterApi.
2
1
0
To my understanding "import package.module" is same as "from package import module". But this is not behaving as expected in case of BeautifulSoup. from bs4 import BeautifulSoup: This command works fine. But, import bs4.BeautifulSoup throws the following error ModuleNotFoundError: No module named 'bs4.BeautifulSoup' Any thoughts/help on this?
Import BeautifulSoup
0
0
0
5,300
47,547,731
2017-11-29T07:59:00.000
1
0
0
1
python,multithreading,asynchronous,ipc,interprocess
47,558,211
1
true
0
0
The simplest way would be to use an async TCP/Unix socket server in consumer.py; using HTTP would be overhead in this case. A producer (a TCP/Unix socket client) sends data to the consumer, and the consumer responds right away, before writing the data to the disk drive. File I/O in the consumer is blocking, but as stated above it will not block the producers.
1
0
0
I have multiple write-heavy Python applications (producer1.py, producer2.py, ...) and I'd like to implement an asynchronous, non-blocking writer (consumer.py) as a separate process, so that the producers are not blocked by disk access or contention. To make this more easily optimizable, assume I just need to expose a logging call that passes a fixed length string from a producer to the writer, and the written file does not need to be sorted by call time. And the target platform can be Linux-only. How should I implement this with minimal latency penalty on the calling thread? This seems like an ideal setup for multiple lock-free SPSC queues but I couldn't find any Python implementations. Edit 1 I could implement a circular buffer as a memory-mapped file on /dev/shm, but I'm not sure if I'll have atomic CAS in Python?
Interprocess communication with SPSC queue in python
1.2
0
0
173
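The ack-before-write pattern from the answer above can be sketched with asyncio streams (newline-delimited records and the `sink` list standing in for the real file are assumptions for the sketch; a Unix socket would work the same via `asyncio.start_unix_server`):

```python
import asyncio

async def handle_producer(reader, writer, sink):
    """One producer connection: acknowledge each record, then persist it.

    Because the ack is sent and drained before the write to 'sink'
    (which stands in for the blocking file the consumer owns), the
    producer's calling thread is never held up by disk I/O.
    """
    while True:
        line = await reader.readline()
        if not line:
            break                      # producer disconnected
        writer.write(b"OK\n")          # acknowledge right away
        await writer.drain()
        sink.append(line)              # persist afterwards (file write IRL)
    writer.close()

async def run_consumer(sink, host="127.0.0.1", port=0):
    """Start the consumer server; port 0 picks a free port."""
    return await asyncio.start_server(
        lambda r, w: handle_producer(r, w, sink), host, port)
```

This gives SPSC-like decoupling without atomic CAS in Python: the event loop serializes access to the sink, so no lock is needed.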
47,560,349
2017-11-29T19:28:00.000
0
1
0
0
python,api,line,thrift
47,595,331
1
true
0
0
There is no tool needed. Since you did not elaborate on your actual use case too much, I can only give a generic answer. Case 1: you control both RPC server and client, and we are NOT talking about stored data. In that case you only need to replace the transports/protocols on both ends and you're pretty much done. All other cases: you will need two pieces, one that deserializes old data stored with the compact protocol, and one that re-serializes those data using the binary protocol. Neither case is really hard to implement technically.
1
0
0
I have a api made based on thrift TCompactProtocol. Is there a quick way to convert it into TBinaryProtocolTransport? Is there a tool for conversion? FYI. My api is Line Api bases api Python.
how to convert thrift TCompactProtocol to TBinaryProtocolTransport
1.2
0
0
159
47,562,817
2017-11-29T22:21:00.000
0
0
1
0
python,tensorflow
47,562,880
3
true
0
0
If it's just that simple, do: sum([A[i]*B[i] for i in range(3)]). This sums the element-wise products of the first three values. Hope this helps!
1
0
1
How can I compute the product sum of the first three elements of two vectors A = [a1, a2, a3, a4, a5, a6]andB = [b1, b2, b3, b4, b5, b6] (i.e. [a1b1 + a2b2 + a3b3])in python and tensorflow.
How to compute the product some of parts of two vectors
1.2
0
0
52
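The one-liner in the answer above generalizes cleanly to any prefix length (the function name is my own; in TensorFlow the equivalent would presumably be something like `tf.reduce_sum(A[:3] * B[:3])` on tensors):

```python
from itertools import islice

def partial_dot(a, b, n):
    """Product sum of the first n elements of two sequences.

    zip stops at the shorter sequence, and islice caps the pairs at n,
    so this also works when n exceeds the sequence lengths.
    """
    return sum(x * y for x, y in islice(zip(a, b), n))
```

For the question's vectors, `partial_dot(A, B, 3)` computes a1*b1 + a2*b2 + a3*b3.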
47,563,506
2017-11-29T23:22:00.000
0
0
1
0
python,input,output,addition,codio
62,247,414
3
false
0
0
Your solution is as below: print(N+12)
2
0
0
I am using codio and ran into a challenge that doesn't make sense to me. It says "Your code should expect one input. All you need to do is add 12 to it and output the result". I am new to Python and was wondering if someone could explain what this means. I can't seem to find any information anywhere else.
Addition with One input
0
0
0
2,033
47,563,506
2017-11-29T23:22:00.000
0
0
1
0
python,input,output,addition,codio
47,581,966
3
false
0
0
It's a task definition. You have to write a function which accepts one input parameter, adds 12 to it, and returns the result back to the caller.
2
0
0
I am using codio and ran into a challenge that doesn't make sense to me. It says "Your code should expect one input. All you need to do is add 12 to it and output the result". I am new to Python and was wondering if someone could explain what this means. I can't seem to find any information anywhere else.
Addition with One input
0
0
0
2,033
47,564,580
2017-11-30T01:42:00.000
1
0
0
0
python,django,theory
47,564,916
1
true
1
0
The file is only generated for your convenience. If you don't need it, there's no reason to keep it. No part of Django relies on it being present.
1
0
0
In a project, I'm implementing using Django framework, I have two applications: Application responsible for REST API, containing production models.py file. Application responsible for Web client, that uses REST API's models. Both of them contain vast static files and hierarchy of additional source code, that is why it came to my mind to split this responsibilities into two apps, rather than different views.py and urls.py files inside of one application. Because application responsible for Web relies entirely on REST API's models is it a good practice to delete models.py file from this application entirely?
Deleting 'models.py' from Django app
1.2
0
0
24
47,564,648
2017-11-30T01:51:00.000
0
0
1
0
python
47,564,727
1
false
0
0
from ../target_directory.library import module is not valid Python: import paths use dotted package names, not filesystem paths. If code is a package (each folder containing an __init__.py), you can use a relative import such as from ..lib import module from within src; otherwise, add the lib directory to sys.path before importing. I hope it helps.
1
0
0
I have a directory called "code". Within that directory I have to sub directories: lib: Holds a python module I've created src: Holds the python source code calling the module. When I try to run the code, the import statement is not seeing the module in a separate folder. How do I make the two files interact without changing directories?
working with a library in a different folder python
0
0
0
44
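A minimal sketch of the sys.path approach for this layout (the directory and module names here are invented stand-ins for the asker's lib/ and src/ folders, and the sketch builds a throwaway copy of the layout so it is self-contained):

```python
import os
import sys
import tempfile

# Build a throwaway copy of the layout described in the question:
#   code/lib/mymodule.py   (the module)
#   code/src/              (would hold the calling script)
root = tempfile.mkdtemp()
lib_dir = os.path.join(root, "lib")
os.makedirs(lib_dir)
with open(os.path.join(lib_dir, "mymodule.py"), "w") as f:
    f.write("def greet():\n    return 'hello from lib'\n")

# In src/main.py you would compute the sibling lib/ directory, e.g.
#   lib_dir = os.path.join(os.path.dirname(__file__), "..", "lib")
# then put it on sys.path -- no chdir needed:
sys.path.insert(0, lib_dir)
import mymodule
```

Once `lib_dir` is on `sys.path`, the module imports as if it sat next to the caller.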
47,566,009
2017-11-30T04:41:00.000
0
0
1
0
python,matplotlib,plot
47,566,035
1
true
0
0
You may use tick_params function like plt.tick_params(axis='y', which='major', labelsize=10)
1
2
1
This question already has an answer here: How to change the font size on a matplotlib plot
How to change a font size within a plot figure
1.2
0
0
445
47,566,623
2017-11-30T05:44:00.000
0
0
0
0
python,python-3.x,pypdf2
47,566,749
2
false
1
0
Get a word processor or page-layout software. Convert the PDF to a format your software can read. Edit the document. Write a new PDF. Anything else sounds nefarious; nobody should help you do it any other way.
1
0
0
I am using PyPDF2 to highlight text on a particular page in PDF files. Hence, I get only a single page with highlighted text as output. Now, I want to replace this page in the original PDF file. I have also tried the "search=" parameter from Adobe to highlight in the same file itself, but it is not working. I am new to working with PDFs. Sorry if the question sounded a bit naive.
Replace a Specific page in a PDF with a page from another PDF in python 3
0
0
0
609
47,567,227
2017-11-30T06:32:00.000
2
0
1
0
python,libraries
47,568,451
3
false
0
0
In the command line: pip list. You can also get the same output in requirements format with pip freeze. Just typing pip in a command line will show you all the very handy pip commands and flags.
1
1
0
I would like to have a list with my installed libraries in my python console. How can be that possible ?
how can I see my installed libraries in python ?
0.132549
0
0
3,875
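Since the question asks for the list from inside the Python console itself, the standard library can do it directly (Python 3.8+; on older versions the same API lives in the `importlib_metadata` backport):

```python
from importlib.metadata import distributions

def installed_packages():
    """Name/version pairs of every installed distribution, like `pip list`."""
    return sorted(
        (dist.metadata["Name"], dist.version)
        for dist in distributions()
        if dist.metadata["Name"]   # skip broken metadata entries
    )
```

Calling `installed_packages()` in the console prints the same inventory `pip list` would, without shelling out.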
47,567,982
2017-11-30T07:24:00.000
0
0
1
0
python,pip,package
47,568,291
1
false
0
0
Any version string is okay; in my case 1.0 is the latest version and I use 1.1.dev for the development version. pip skips pre-release versions by default, so pip install packagename will only install version 1.0, and the dev version can only be installed explicitly with pip install packagename==1.1.dev.
1
0
0
I need to publish a development version of python package without affect current cases where latest version is used. after testing(for some reason can't test locally), will need to publish the development version as latest version. for example: current latest version is : 1.0 how should the dev version of package can be named. 1.0.dev or 1.1.dev or something else. (pip install) I want my dev version can only be install by "pip install packagename==version", not "pip install packagename"
Publish python dev version of package
0
0
0
331
47,568,567
2017-11-30T08:03:00.000
1
0
1
0
python,anaconda
48,060,386
1
false
0
0
Step 1: Click on Start. Step 2: Search for and open Anaconda Navigator. Step 3: In the Anaconda Navigator, go to Environments. Step 4: There is a drop-down menu with the options: 1) Installed 2) Not Installed 3) Upgradable 4) Selected 5) All. Step 5: Click on "Not Installed". Step 6: In the "Search Packages" box, enter the name of the package you want to install (in your case, tweepy). Step 7: Tick tweepy in the search results and click Apply. Installation will start. Congratulations! You have successfully installed the package.
1
0
0
When i'm trying to install tweepy in windows 10 for python 3.6 using conda by this command: conda install -c conda-forge tweepy after downloading all the required packages .It shows an error like this : ERROR conda.core.link:_execute_actions(337): An error occurred while installing package 'conda-forge::pyjwt-1.5.3-py_0'. CondaError: Cannot link a source that does not exist.
Unable to install tweepy in anaconda
0.197375
0
0
756
47,572,918
2017-11-30T11:54:00.000
1
0
0
0
python,python-3.x,python-requests,zip
47,573,727
1
false
1
0
Is there a way to avoid downloading the zips and directly accessing the files inside? Generally speaking: no. A web server serves files in a file system, not inside a zip archive. If I do need to download them, how can I track where the zips have been downloaded to? If not specified, the location is the current directory, the one the script was launched in.
1
0
0
I have an HTML web page with many download links in a table. I have isolated the path to my desired zips. They all contain an .xlsx file but sometimes other files. Is there a way to avoid downloading the zips and directly accessing the files inside? If I do need to download them, how can I track where the zips have been downloaded to? (So I can extract the .xlsx) I am currently looking into zipfile and requests for solutions. zipfile.extract needs the path of the zip file, but I don't know exactly where the script will download to. requests gives a response object, but how do I prompt it to download?
unzipping an html link with python
0.197375
0
1
35
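The answer's point about "the location is wherever you choose" can be made concrete: pass an explicit path to the download call, then hand that same path to zipfile. A sketch (the download line is shown only in a comment, since the URL would be the one scraped from the table; the extraction part is what the asker was missing):

```python
import os
import zipfile

def extract_xlsx(zip_path, dest_dir):
    """Pull only the .xlsx members out of an already-downloaded zip.

    zip_path is wherever YOU chose to save the download, e.g.
        urllib.request.urlretrieve(url, zip_path)
    so there is never any doubt about where the file went.
    """
    extracted = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if name.lower().endswith(".xlsx"):
                zf.extract(name, dest_dir)
                extracted.append(os.path.join(dest_dir, name))
    return extracted
```

With requests instead of urllib, you would write `open(zip_path, "wb").write(response.content)` and then call `extract_xlsx(zip_path, dest_dir)` the same way.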
47,578,485
2017-11-30T16:47:00.000
0
0
1
0
python,sql,db2
47,580,157
1
false
0
0
Use a SEQUENCE object and NEXT VALUE FOR to control the auto-increment sequence. Use a SELECT ... FROM FINAL TABLE (INSERT ...) statement to return the newly inserted values, including the auto-increment value, atomically in the same statement.
1
0
0
I have a program written in python which takes user input and updates a db table. Within that I have a separate function that will be executed as below: Insert some values with a auto increment sequence number. Get the sequence number (which has been generated last) and give it back to user. I have multiple instances of this process in different hosts. How will I make sure that this function is executed mutually exclusive for all instances, so that the Step 2 always gives me the last generated sequence number?. Any other suggestion is also welcome.
Mutex for multiple instances of a service
0
0
0
72
47,578,774
2017-11-30T17:01:00.000
0
0
0
0
python,pandas,dataframe
47,578,933
2
false
0
0
If the date or DaysToReception values are unique, you can use a map/hash map where the key is the date or DaysToReception and the value is the other information, stored in a list or any other appropriate data structure. This will definitely improve efficiency. Since, as you pointed out, "the number of rows I search below depends on the value DaysToReception", I believe DaysToReception will not be unique; in that case, the key to your map should be the date.
1
0
1
My goal is that by given a value on a row (let's say 3), look for the value of a given column 3 rows below. Currently I am perfoming this using for loops but it is tremendously inefficient. I have read that vectorizing can help to solve this problem but I am not sure how. My data is like this: Date DaysToReception Quantity QuantityAtTheEnd 20/03 3 102 21/03 - 88 22/03 - 57 23/03 5 178 24/03 And I want to obtain: Date DaysToReception Quantity QuantityAtReception 20/03 3 102 178 21/03 - 88 22/03 - 57 23/03 5 178 24/03 ... Thanks for your help!
Python: How to efficiently do operations using different rows of the same column?
0
0
0
19
47,579,464
2017-11-30T17:39:00.000
1
0
1
1
python,fmi,jmodelica
47,597,565
1
false
0
0
Meanwhile I solved the above problem: pyfmi was built from source using the Intel compiler instead of GCC. So after module switch intel gcc this error didn't occur anymore.
1
0
0
I am tying to use pyfmi on our universities Linux HPC cluster. Building the FMILibrary and also installing pyfmi does not throw any error. However I get the below error message when trying to import pyfmi in python: File "/home/user/.local/lib/python2.7/site-packages/pyfmi/init.py", line 24, in from .fmi import FMUModel, load_fmu, FMUModelME1, FMUModelME2 ImportError: /home/user/.local/lib/python2.7/site-packages/pyfmi/fmi.so: undefined symbol: __intel_sse2_strcpy Does anyone have an idea what the reason might be? Thanks in advance!
ImportError with pyfmi
0.197375
0
0
520
47,580,081
2017-11-30T18:17:00.000
0
0
1
0
python,methods
47,580,149
4
false
0
0
date is a method of datetime objects. Whenever you want the date, you need to invoke (or call) that method, and the way to do that is with parentheses. When you don't use parentheses, what you get is the method itself as an object, not its result.
1
0
0
This question is just an attempt at understanding something; I know what to do, just want to know what is the difference between: datetime.datetime.now().date and datetime.datetime.now().date()
Why do you need parenthesis in Python?
0
0
0
430
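The difference described in the answer above is easy to see by inspecting the two expressions side by side:

```python
import datetime

now = datetime.datetime.now()

unbound = now.date    # the bound method object itself -- not a date!
called = now.date()   # calling it actually produces the date
```

`now.date` is something you can still call later, while `now.date()` is the `datetime.date` value you usually wanted; printing the former shows `<built-in method date of datetime.datetime object ...>` rather than a date.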
47,584,413
2017-11-30T23:36:00.000
1
0
0
0
python,django,authentication,url
47,584,549
1
false
1
0
You should try using Django REST Framework; it will make it easy to retrieve data via URLs using a unique identifier.
1
0
0
I'm currently working with some people to develop an application that will display a "sound library" when the user selects an option on their voip phone. The idea is that the phone system will pass a url with a device id in it, and that will open the django app to the users' library. I was told to remove login/user authentication in order to make the process easier for the user. My question is, is there a way to create a user field and save the model for future retrieval via the url request alone? Do I need to pass the device id to some hidden form first and redirect to the main page, and query the users' objects via the device id? I know there are security concerns but was wondering if it's even possible, any help is appreciated!
Django create user account via URL
0.197375
0
0
77
47,585,666
2017-12-01T02:13:00.000
0
0
0
0
python,opencv,heroku,cascade-classifier
47,585,710
2
false
0
0
I figured it out. All I needed to do was use the os.path.abspath() method to convert the relative path to an absolute path
1
0
1
I am loading a cascade classifier from a file in OpenCV using python. Since the CascadeClassifier() method requires the filename to be the absolute filename, I have to load the absolute filename. However, I am deploying and using it on Heroku and I can't seem to get the absolute path of the file. I've tried using os.getcwd() + '\cascader_file.xml', but that still does not work. I would like to know how to load the classifier on the Heroku deployment
Load classifier on Heroku Python
0
0
0
174
47,586,587
2017-12-01T04:19:00.000
0
0
0
0
python,opencv,image-processing,bounding-box
47,586,784
2
false
0
0
I have worked on an almost similar problem. The easiest way is to train a Haar cascade on vehicles of similar size; you will have to train multiple cascades, one per category. Data for the cascades can be downloaded from any used-car selling site using some browser plugin. The negative sets pretty much depend on the context in which this solution will be used. This also raises the issue that, if you plan to do this on a busy street, there are going to be many unforeseen scenarios, for example a pedestrian walking into the FoV. Also, the FoV needs to be fixed, especially the distance from which objects are observed. Trial and error is the only way to find the sweet spot for the thresholds, if any exists. Now I am going to suggest something outside the scope of the question you asked. Though this is a purely image-processing-based approach, you can turn the problem on its head and ask "why" classification is needed at all. Depending on the use case, more often than not, it will be possible to train a deep reinforcement learning agent instead, which solves the problem without a lot of manual work. Let me know in case of specific issues.
1
0
1
I am currently working on vehicle platooning for which I need to design a code in python opencv for counting the number of vehicles based on the classification.The input is a real time traffic video. The aim is finding an average size "x" for the bounding box and say that for cars its "x", for buses its "3x" and so on.Based on size of "x" or multiples of "x", determine the classification.Is there any possible way I can approach this problem statement?
Vehicle counting based on classification (cars,buses,trucks)
0
0
0
1,211
47,588,910
2017-12-01T07:55:00.000
0
0
0
0
python,mysql,ubuntu,sublimetext3,mysql-connector
52,473,839
2
false
0
0
I am now using Python 3.6; mysql.connector works best for me. OS: Ubuntu 18.04
1
2
0
OS: Ubuntu 17.10 Python: 2.7 SUBLIME TEXT 3: I am trying to import mysql.connector, ImportError: No module named connector Although, when i try import mysql.connector in python shell, it works. Earlier it was working fine, I just upgraded Ubuntu and somehow mysql connector is not working. I have tried reinstalling mysql connector using pip and git both. Still no luck. Please help!
MySQL Connector not Working: NO module named Connector
0
1
0
722
47,590,504
2017-12-01T09:40:00.000
1
1
1
0
dns,resolver,dig,dnspython
47,598,958
1
true
0
0
An ANY query is a perfectly ordinary query that asks for the record type with number 255, which is usually referred to as the ANY type, for fairly obvious reasons. It doesn't matter which tool sends the query (the program dig, or code you write, or something else), it's the same query anyway. There is no guarantee that an ANY query will give the same results as multiple queries for many different types, it's entirely up to the server that generates the responses. Other than for debugging and diagnostics, there is hardly ever a reason to send an ANY query. There are loads of DNS libs for Python. I'm sure someone else can tell you which one is the preferred one these days.
1
1
0
My general question is how does "dig any" work? In particular, I would like to compare the use of dig to naive sending of multiple equivalent requests (a, txt, mx, ...). Does a single DNS query is sent? Is the use of dig more efficient? Is it guaranteed to get the same results as sending multiple equivalent requests (a, txt, mx, ...)? If they are not equivalent, when should I use each of the methods? And finally, if somebody has Python (prefered Python3) implementation of dig (not by running it using subprocess etc.) - I will be glad to get a reference.
How does dig any work?
1.2
0
0
1,027
47,590,564
2017-12-01T09:43:00.000
0
1
1
0
python,debugging,pycharm,pytest
47,594,318
1
false
0
0
A few things that might help you with debugging speed are: using Python 3.5/3.6. If you're running in linux/macos install the cython extension (you get prompted) Use the latest Pycharm, 2017.3 at this moment. Try to simplify tests. If not possible, just run those you need for the debugging process by creating a special runner in Pycharm and using the -k 'pattern' argument for pytest. Moreover, I don't understand why you're invoking pdb if you're using the Pycharm debugger.
1
0
0
I have a series of tests that are quite complicated. Unfortunately, the builtin Pycharm Debugger is waaay too slow to handle them. I tried making it faster, but any attempts failed, so I have to resort to using pdb. My problem is that the command line that appears if I run my tests with pycharm and come across a pdb breakpoint is quite annoying: It does not support code completion (of course I googled it, but my attempts failed). I can't even press 'up' to get the last command again Most annoyingly: When I already wrote some code in the command line and jump to the beginning of the line to edit it, the cursor automatically jumps to the end of the line. I noticed that it is not the iPython console which I get when I go into pdb debug mode when I don't use pytest. Do you have any idea on how to solve any of these issues? Ideally on how to speed up the Pycharm debugger, or how to get the iPython console also in pytest? Help is much appreciated. Thanks in advance.
Debugging pytests in Pycharm with pdb shows annoying console
0
0
0
560
47,590,717
2017-12-01T09:52:00.000
0
0
0
0
python,kivy
47,592,299
1
true
0
1
You can install them manually into your project directory. So if you need Requests for example, you can package it with the application. You can download the source, copy the requests directory into your application’s codebase, and import requests in your python files as you normally would.
1
0
0
I have created an app in Kivy. It uses two modules: requests and geocoder. How can I run this app by using Kivy Launcher?
How to run apps in Kivy Launcher if the application uses additional modules?
1.2
0
0
446
47,591,513
2017-12-01T10:36:00.000
1
0
1
0
python,anaconda,commit
47,591,883
1
false
0
0
If you already have the package downloaded in that folder, why do you need to run pip install commit? That would download and install it again, right? Just run pip install . in the folder where you downloaded the package.
1
0
1
I am going to install commit framework (Convex Optimization Modeling for Microstructure Informed Tractography) on anaconda 3. Here I explained the process that I did: 1st, I downloaded commit and then opened an anaconda prompt and went to the location that I downloaded the folder but when I run pip install commit I faced with this error: Could not find a version that satisfies the requirement commit. No matching distribution found for commit. I am grateful for any suggestion to solve this error.
error for installing COMMIT package in anaconda 3
0.197375
0
0
30
47,592,516
2017-12-01T11:35:00.000
0
0
1
0
python,import,installation,scikit-learn
47,598,201
2
false
0
0
Have you tried to import sklearn directly from a Python interpreter? Also, try to check in your Project Settings that sklearn is recognized as a package of the associated Python interpreter (Settings --> Project --> Project Interpreter).
1
0
1
When I try to import sklearn, I get the following error message: ImportError: DLL load failed: Das angegebene Modul wurde nicht gefunden. (In English: The specified module was not found) I'm working on Windows 10, 64-Bit. I'm using Python 3.6.1. (and no other version), Anaconda and PyCharm. I installed scikit-learn using conda install scikit-learn and I can find it in conda list as well as in File | Settings | Project Interpreter with version 0.19.1. I also have numpy 1.13.3 and scipy 1.0.0. I know this error message has been discussed multiple, but none of these discussions could help me ... I tried uninstalling and re-installing numpy, scipy and scikit-learn. I also tried installing scikit-learn using pip. If I randomly try to load other packages, that are in my conda list, they all work perfectly fine, but not scikit-learn. I don't even know where my error is. Can anyone give me a hint in the right direction, or a suggestion what I could try? Thanks!
Cannot load scikit-learn
0
0
0
606
47,592,850
2017-12-01T11:54:00.000
16
0
1
0
python,ipython,anaconda,spyder
47,603,175
1
true
0
0
(Spyder maintainer here) This is done to avoid blocking Spyder when too much output is going to be printed in the console. You can increase the current limit by going to Tools > Preferences > IPython console > Display > Buffer However, if you want to read the help associated to an object, you can press Ctrl + I (Cmd + I in macOS) in front of it and the get its help rendered in another Spyder pane called Help.
1
8
0
The IPython console in Spyder(Anaconda) is truncating the upper part of the output when the output is large. Eg I was trying to see what all is in the os module. I wrote the command help(os) and the output was very big, so it truncated some of the top entries. What should I do to see the full output?
IPython Console in Spyder(Anaconda) is truncating output
1.2
0
0
12,526
47,597,320
2017-12-01T16:16:00.000
0
1
1
0
python
50,000,236
1
false
0
0
from dash_table_experiments import DataTable
1
0
0
I tried to run "from dash_table_component import Table" and it said it had no module named dash_table_component. I tried to pip install dash-table-experiments but it still showed no module for dash_table_component. I would like to ask that what Python package I should install to get this module? Thank you!
Package for dash_table_component
0
0
0
953
47,599,936
2017-12-01T19:06:00.000
2
0
1
1
python,windows,shutdown,sleep-mode
47,599,973
2
false
0
0
shutdown doesn't have an integrated option for sleep; however, you can use the following: rundll32.exe powrprof.dll,SetSuspendState 0,1,0
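A sketch of how that might look from Python — the command is only printed here, since uncommenting the call would actually suspend a Windows machine:

```python
import subprocess

# shutdown.exe has no sleep switch, so call SetSuspendState in
# powrprof.dll via rundll32 (arguments: hibernate, force, disable wake events).
cmd = ["rundll32.exe", "powrprof.dll,SetSuspendState", "0,1,0"]
print(" ".join(cmd))
# subprocess.call(cmd)  # uncomment on Windows to actually suspend
```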
1
2
0
I'm in the middle of making a Python script to shutdown, restart, hibernate or put to sleep after a few seconds. I know its subprocess.call(["shutdown", "/s"]) to shutdown, "/r" to restart and "/h" to hibernate. How can you put the computer to sleep using call() on Windows 10?
How to put computer to sleep using subprocess.call?
0.197375
0
0
909
47,602,151
2017-12-01T22:07:00.000
2
0
1
0
python,visual-studio,terminal
49,505,028
4
false
0
0
At the bottom of the MS Code screen is an info bar that lets you know what line, col, text encoding, etc... It also shows the python interpreter you are accessing. If you click on the text for the version of python that is running, it will open a list of available interpreters on your system. If 2.7 is in your path, you can select it.
1
6
0
I have installed on my system several Python interpreters, 2.x and 3.x versions. I am trying to prepare my work environment to allow easily switch between code written in both Python version. It is really important to have as much flexible setting in Visual Studio Code (VSC). The problem is that I have no idea how to set VSC terminal to run code in Python 2.x. Terminal output is needed because it allows to provide user input easily. I've tried instructions provided on VSC page, like manual interpreter's path indication in folder or workspace setting. I reinstalled Python 2.x to ensure PATH variable has been updated. When I run code with CodeRunner extension, it always run it in Python 3.x. Does anyone have similar issue and found how to change Python environment used by this integrated terminal?
How to change interpreter in Visual Studio Code?
0.099668
0
0
18,249
47,604,678
2017-12-02T04:44:00.000
1
0
0
0
python,pygame
47,604,704
1
true
0
1
In order to have the box removed you need to call a destructor-like function on it, which would remove the image of the box and so on, correct? Take advantage of that and create a function that chooses which item to spawn (it could be random, up to you) in the position where the box used to be. Then call this function at the end of the destructor. That's how I imagine it working.
1
0
0
I'm making a game using Pygame where there are random obstacles on the screen (like boxes). The boxes can be removed by the player when they place a bomb next to it. I have some items that I would like to have randomly appear in place of the box if it is removed. I'm not sure how to go about this logically. Does anyone have any tips? No code is necessary (but helpful), I just want some steps logic-wise to get me started.
Making items appear from obstacles in game?
1.2
0
0
30
47,606,161
2017-12-02T08:53:00.000
0
0
0
0
python-3.x,tensorflow,darkflow
47,685,678
2
false
0
0
Problem solved. Changing the batch size and image size in the config file didn't seem to help, as they didn't load correctly. I had to go to the defaults.py file and change them to lower values there, to make it possible for my GPU to calculate the steps.
2
1
1
I've used YOLO detection with trained model using my GPU - Nvidia 1060 3Gb, and everything worked fine. Now I am trying to generate my own model, with param --gpu 1.0. Tensorflow can see my gpu, as I can read at start those communicates: "name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705" "totalMemory: 3.00GiB freeMemory: 2.43GiB" Anyway, later on, when program loads data, and is trying to start learning i got following error: "failed to allocate 832.51M (872952320 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY" I've checked if it tries to use my other gpu (Intel 630) , but it doesn't. As i run the train process without "--gpu" mode option, it works fine, but slowly. ( I've tried also --gpu 0.8, 0.4 ect.) Any idea how to fix it?
YOLO - tensorflow works on cpu but not on gpu
0
0
0
1,053
47,606,161
2017-12-02T08:53:00.000
0
0
0
0
python-3.x,tensorflow,darkflow
50,273,011
2
false
0
0
It looks like your custom model uses too much memory and the graphics card cannot support it. You only need to use the --batch option to control the amount of memory used.
2
1
1
I've used YOLO detection with trained model using my GPU - Nvidia 1060 3Gb, and everything worked fine. Now I am trying to generate my own model, with param --gpu 1.0. Tensorflow can see my gpu, as I can read at start those communicates: "name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.6705" "totalMemory: 3.00GiB freeMemory: 2.43GiB" Anyway, later on, when program loads data, and is trying to start learning i got following error: "failed to allocate 832.51M (872952320 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY" I've checked if it tries to use my other gpu (Intel 630) , but it doesn't. As i run the train process without "--gpu" mode option, it works fine, but slowly. ( I've tried also --gpu 0.8, 0.4 ect.) Any idea how to fix it?
YOLO - tensorflow works on cpu but not on gpu
0
0
0
1,053
47,606,219
2017-12-02T09:01:00.000
1
0
1
1
python,virtualenv,fedora-26,pip3
47,606,470
1
true
0
0
Check if you have a file named string.py in your current working directory. If so, rename, move, or delete it. This should solve your problem.
1
0
0
I installed virtualenv on fedora 26 using pip3 install --user virtualenv. System has both python 2.7 and python 3.6. When I am creating a "virtualenv venv", I am getting this output and error. New python executable in /home/asraisingh/venv/bin/python2 Also creating executable in /home/asraisingh/venv/bin/python Installing setuptools, pip, wheel... Complete output from command /home/asraisingh/venv/bin/python2 - setuptools pip wheel: Traceback (most recent call last): File "", line 7, in File "/home/asraisingh/.local/lib/python2.7/site-packages/virtualenv_support/pip-9.0.1-py2.py3-none-any.whl/pip/init.py", line 7, in File "/usr/lib64/python2.7/optparse.py", line 77, in import textwrap File "/usr/lib64/python2.7/textwrap.py", line 10, in import string, re File "string.py", line 1 KDE: 9 ^ SyntaxError: invalid syntax ---------------------------------------- ...Installing setuptools, pip, wheel...done. Traceback (most recent call last): File "/home/asraisingh/.local/bin/virtualenv", line 11, in sys.exit(main()) File "/home/asraisingh/.local/lib/python2.7/site-packages/virtualenv.py", line 713, in main symlink=options.symlink) File "/home/asraisingh/.local/lib/python2.7/site-packages/virtualenv.py", line 945, in create_environment download=download, File "/home/asraisingh/.local/lib/python2.7/site-packages/virtualenv.py", line 901, in install_wheel call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=SCRIPT) File "/home/asraisingh/.local/lib/python2.7/site-packages/virtualenv.py", line 797, in call_subprocess % (cmd_desc, proc.returncode)) OSError: Command /home/asraisingh/venv/bin/python2 - setuptools pip wheel failed with error code 1
Getting error while using virtualenv venv on fedora 26
1.2
0
0
713
47,606,914
2017-12-02T10:30:00.000
0
1
0
0
python,angular
47,606,964
1
false
1
0
Put them in the root directory if it is only one or two files - like configs etc. If you actually need a server-side app, maybe it's better to make that the root project and have Angular in a view folder.
1
0
0
The ng serve command runs a server instance on localhost. I would like to use some server-side scripts on the same localhost and communicate between the Angular app and the HTTP server. My issue is not knowing in which directory I should place the Python scripts so that I can extend the functionality of the Angular app to gather data from the localhost server. EDIT: I am visiting http://localhost:4200/data.php in a browser, but I am always getting a 304 redirect. I figure this could be due to incorrect placement of the php files. I've put them in the src, ..src and app folders, but nothing seems to work.
Angular project and server files on localhost
0
0
0
629
47,608,506
2017-12-02T13:41:00.000
0
0
0
0
python,excel,win32com
61,532,508
6
false
0
0
Deletion of the folder as mentioned previously did not work for me. I solved this problem by installing a new version of pywin32 using conda. conda install -c anaconda pywin32
3
14
0
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-000000000046x0x1x6' has no attribute 'MinorVersion' Has anyone faced a similar situation and, if yes, what can a possible remedy for this? (I've had a look at the source code for win32com on GitHub, but haven't been able to make much sense from it.)
Issue in using win32com to access Excel file
0
1
0
17,855
47,608,506
2017-12-02T13:41:00.000
5
0
0
0
python,excel,win32com
61,842,925
6
false
0
0
A solution is to locate the gen_py folder (C:\Users\\AppData\Local\Temp\gen_py) and delete its content. It works for me when using the COM with another program.
3
14
0
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-000000000046x0x1x6' has no attribute 'MinorVersion' Has anyone faced a similar situation and, if yes, what can a possible remedy for this? (I've had a look at the source code for win32com on GitHub, but haven't been able to make much sense from it.)
Issue in using win32com to access Excel file
0.16514
1
0
17,855
47,608,506
2017-12-02T13:41:00.000
6
0
0
0
python,excel,win32com
55,256,887
6
false
0
0
Renaming the GenPy folder should work. It's present at: C:\Users\ _insert_username_ \AppData\Local\Temp\gen_py Renaming it will create a new Gen_py folder and will let you dispatch Excel properly.
3
14
0
everyone! I have been using the win32com.client module in Python to access cells of an Excel file containing VBA Macros. A statement in the code xl = win32com.client.gencache.EnsureDispatch("Excel.Application") has been throwing an error: AttributeError: module 'win32com.gen_py.00020813-0000-0000-C000-000000000046x0x1x6' has no attribute 'MinorVersion' Has anyone faced a similar situation and, if yes, what can a possible remedy for this? (I've had a look at the source code for win32com on GitHub, but haven't been able to make much sense from it.)
Issue in using win32com to access Excel file
1
1
0
17,855
47,610,826
2017-12-02T17:56:00.000
1
0
0
0
android,python-3.x,kivy,kivy-language
47,613,212
3
false
0
1
I found the answer in the source code of TextInput. Simply set the TextInput use_bubble to False to disable the selection paste popup.
2
0
0
On Android, when you do a long touch on a TextInput widget, a 'Select All Paste' popup shows up. Is there a way to disable this feature ?
Is there a way to get rid of the TextInput 'Select all' popup?
0.066568
0
0
273
47,610,826
2017-12-02T17:56:00.000
0
0
0
0
android,python-3.x,kivy,kivy-language
47,611,045
3
false
0
1
Yes, the Kivy TextInput has a property use_bubble; change it to False if you don't want this behaviour.
2
0
0
On Android, when you do a long touch on a TextInput widget, a 'Select All Paste' popup shows up. Is there a way to disable this feature ?
Is there a way to get rid of the TextInput 'Select all' popup?
0
0
0
273
47,610,890
2017-12-02T18:02:00.000
3
0
0
0
python,django
47,611,173
1
false
1
0
You could lowercase or uppercase the username when someone registers or logs in. First get the data from the form: username = login_form.cleaned_data.get('user_username') password = login_form.cleaned_data.get('user_password') After that, lowercase the username: username = username.lower() Then authenticate: user = authenticate(username=username, password=password) Remember to lowercase the username in the registration form too.
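A minimal standalone sketch of that normalisation step (plain function only; the Django form and authenticate() call are assumed, not shown — the function name is made up for illustration):

```python
# Normalise the submitted username so "JohnDoe" and "johndoe"
# resolve to the same stored account before calling authenticate().
def normalise_username(raw_username):
    return raw_username.lower()

print(normalise_username("JohnDoe"))
```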
1
2
0
I have created login model in Django and I am using username for authentication(using default authenticate()), but at present the username is case-sensitive. Is there a way in django that while authenticating the case-sensitivity of username is not considered?
Authentication with case insensitive username
0.53705
0
0
1,395
47,611,139
2017-12-02T18:29:00.000
1
1
0
0
python,websocket,cloud,aiohttp,python-aiofiles
47,617,568
1
false
0
0
To answer your actual question, threads and coroutines will be equally reliable but coroutines are much easier to reason with and much of the modern existing code you'll find to read or copy will use them. If you want to benefit from multiple cores, much better to use multiprocessing than threads to avoid the trickiness of the GIL.
1
2
0
I have a more open-ended design oriented question for you here. I have background in Python but not in web nor async programming. I am writing an app to save data collect from websockets 24/24, 7/7 with the aim to minmise data loss. My initial thoughts is to use Python 3.6 with asyncio, aiohttp and aiofiles. I don't know whether to use one co-routine per websocket connection or one thread per websocket connection. Performance may not an issue as much as good connection error handling.
websocket data collection design
0.197375
0
1
574
47,614,316
2017-12-03T01:13:00.000
2
0
1
0
python,string,python-2.7,list,python-2.x
47,614,365
2
false
0
0
For this, you would use the str.split() builtin method. Set x = "[1.0, 2.0, 3.0, 4.0, 5.0]" First of all, get rid of the square brackets around the string: x = x[1:-1] (x is now "1.0, 2.0, 3.0, 4.0, 5.0", a long string) Then, split the string to form a list of strings: x = x.split(',') (x is now ["1.0", "2.0", "3.0", "4.0", "5.0"], a list of strings) Then, convert all of these strings into floats (as I assume you want that): x = [float(i) for i in x] (x is now [1.0, 2.0, 3.0, 4.0, 5.0], a list of floats)
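Putting those three steps together:

```python
x = "[1.0, 2.0, 3.0, 4.0, 5.0]"   # the string read from the XML file

x = x[1:-1]                        # 1) drop the surrounding brackets
x = x.split(',')                   # 2) split into a list of strings
x = [float(i) for i in x]          # 3) convert each piece to a float

print(x)  # [1.0, 2.0, 3.0, 4.0, 5.0]
```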
1
1
0
So I'm having an issue where when I read in info from an xml file. The data it reads in is supposed to be a list of numbers but when I read it in, it comes as a string. Is there a way to read xml data as a list or a way to convert a string to a list. Eg I get the data from the xml as say [1.0, 2.0, 3.0, 4.0, 5.0] and if I check the type it says its a string, ie the whole thing is a string including the brackets and the comma. Can't wrap my head around how to convert this back to a list of numbers
Convert string to list in python or read xml data as list
0.197375
0
1
606
47,615,323
2017-12-03T04:43:00.000
0
0
0
0
python,django
47,616,024
1
false
1
0
Now what you can do is: 1) Take a backup of your code. 2) Create another project with another name or the same name; it will have its own sqlite db. 3) Place your code in that project and then run the migrations; it will recreate your db schema. And it should work.
1
0
0
I had some problems with my database, so I deleted db.sqlite3 and the migrations manually. Then I recreated the database by using manage.py makemigrations <appname> and manage.py migrate <appname>. Everything looks normal, but when I go to localhost:8000 it is a blank page without anything (even though I never changed the templates). To my confusion, I can get into the admin page. Are there any mistakes in this process? What happened to my Django app?
After deleting the database, my Django app doesn't work
0
0
0
558
47,618,217
2017-12-03T12:11:00.000
1
0
0
0
python,fasttext,sentence-similarity
47,619,440
1
false
0
0
Cosine similarity might be useful if you average both word vectors in a bi-gram first. So you want to take the vector for 'his' and 'name', average them into one vector. Then take the vector for 'I' and 'am' and average them into one vector. Finally, calculate cosine similarity for both resulting vectors and it should give you a rough semantic similarity.
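A pure-Python sketch of that averaging idea. The 3-d vectors below are made up for illustration; real fastText vectors would typically have 300 dimensions:

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def average(u, v):
    # Element-wise mean of two word vectors -> one bi-gram vector.
    return [(a + b) / 2 for a, b in zip(u, v)]

# Toy embeddings standing in for the fastText vectors of each word.
bigram_his_name = average([1.0, 0.0, 1.0], [0.0, 1.0, 1.0])  # "his" + "name"
bigram_i_am = average([1.0, 1.0, 0.0], [1.0, 0.0, 1.0])      # "I" + "am"

print(round(cosine(bigram_his_name, bigram_i_am), 3))
```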
1
1
1
I'm trying to calculate the semantic similarity between two bi-grams and I need to use fasttext's pre-trained word vectors to accomplish this task. For ex : The b-grams are python lists of two elements: [his, name] and [I, am] They are two tuples and I need to calculate the similarity between these two tuples by any means necessary. I'm hoping there's a score which can give me a good approximation of similarity. For ex - If there are methods which can tell me that [His, name] is more similar to [I, am] than [An, apple]. Right now I only made use of cosine similarity which does include any semantic similarity.
How do I calculate the semantic similarity between two n-grams?
0.197375
0
0
567
47,618,328
2017-12-03T12:26:00.000
1
0
0
0
python-3.x,websocket,python-asyncio,aiohttp
47,633,790
1
false
0
0
aiohttp is a relatively low-level library; auto-reconnection should be built on top of it. A websocket connection is a non-blocking operation in aiohttp. Reliable websocket reconnection is not a trivial task. Maybe you need to know what data was received by the peer, or maybe not -- it depends. In the first case you need some high-level protocol on top of plain websockets to send acknowledgements etc.
1
2
0
I am writing an app listening to websocket connections and making infrequent REST requests. aiohttp seems like a natural choice for this, but I'm flexible. The app is simple but needs to be reliable (gigabytes of data to collect daily while minimising data loss). What is the best way to handle connection loss with aiohttp? I notice that other some other Python libraries have auto reconnect options available. With aiohttp, I could always manually implement this with a loop (start over again as soon as the connection is lost) but I wouldn't know what's the best practice (is it acceptable to keep making reconnection attempts without delay in a loop?).
autoreconnect for aiohttp websocket
0.197375
0
1
1,106
47,621,228
2017-12-03T17:40:00.000
0
0
0
0
python,django,django-models
47,622,247
1
true
1
0
There's a simple difference. Model methods act on a single instance. Manager methods create queries to act on multiple instances.
1
0
0
When should I write methods in the model itself and when in the model manager? Is it like all methods to get something should be written in the manager and others in the model?
when should I write custom methods in model vs model manager/ queryset
1.2
0
0
120
47,622,391
2017-12-03T19:31:00.000
1
1
1
0
python,inheritance
47,622,552
1
true
0
0
It is the nature of a non-compiled language like Python that it is impossible to do anything like this without importing. There is simply no way for Python to know what subclasses any class has without executing the files they are defined in.
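To illustrate the point with everything in one module (where importing is not an issue), __subclasses__ behaves as expected:

```python
class Project:
    pass

class SomeProjectChild(Project):
    pass

# __subclasses__ only sees subclasses whose defining modules have
# actually been executed -- here both classes live in this one module.
print(Project.__subclasses__())
```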
1
0
0
The primary problem I'm trying to solve is how to detect all the subclasses of a particular class. The reason I'm unable to use __subclasses__ is that the child classes aren't yet accessible in the context from which I'm attempting to access them. The folder structure I'm working with looks like this: main.py projects/ __init__.py project.py some_project_child.py What I'd like to do is get a list of all subclasses of Project (defined in project.py) from main.py. I'm able to do this by doing: from projects.project import Project from projects.some_project_child import SomeProjectChild Project.__subclasses__ The aspect of this approach I'd like to avoid is having to add an import line every time I add a new project file/class. I realize I can do this by iterating over the files in the directory and import each one's contents but is there a cleaner more pythonic manner of handling this?
Fetching subclasses that are not present in current context
1.2
0
0
19
47,624,488
2017-12-03T23:27:00.000
-9
0
0
0
c#,python,.net,tensorflow,keras
47,719,572
3
false
0
0
It does not work like that, since you won't even be able to install TensorFlow in a C# project. Abandon the C# stack and learn a framework in the Python stack instead; e.g. if you need to consume the prediction result in a web app, learn Flask or Django.
1
6
1
As title states I'm trying to use my Keras (tf backend) pretrained model for predicitions in c#. What's the best approach? I've tried IronPython but it gave me errors, after search I found it isn't supported. Simply calling python script won't work since target Windows devices won't have python interpreters installed.
Getting Keras trained model predictions in c#
-1
0
0
11,563
47,626,377
2017-12-04T04:11:00.000
0
0
1
1
python,linux,visual-studio-code
47,643,346
1
true
0
0
In general, Ctrl-` opens the integrated terminal in VS Code. Otherwise look under the View menu for the Integrated Terminal option. If you're looking for Python-specific options, there are Run Python File in Terminal and Run Python Selection/Line in Terminal commands from the command palette.
1
0
0
I'm on Linux Mint running VSCode and I was somehow able to run a terminal not as a separate window but right below an open Python file. Seems to be easy on Win/OSX (Ctrl/Cmd+J and select Terminal tab) but not specifically a feature that I can choose when I'm on a Linux machine. Any special keys to bring it back?
Run terminal inside VS Code for python script?
1.2
0
0
81
47,626,431
2017-12-04T04:18:00.000
0
0
1
0
python,tensorflow,pycharm,macos-sierra
47,655,652
1
false
0
0
Click on the python 2.7 that you see. Then, after you select your python, you will see all the packages that come with it. One of the packages will be Tensorflow.
1
0
1
I am new to tensorflow, OS: Mac 10.13.1 Python: 2.7 tensorflow: 1.4.0(install with pip) I want to use tensorflow from Pycharm for a project, and when I open: "Pycharm" - "Preferences" - "Project Interpreter", There are only two local: 2.7.13/(Library/Frameworks/Python.framework.Versions.2.7/bin/python2.7) /System/Library/Frameworks.Python.framwork.Versions.2,7.bin.python2.7 I can't find tensorflow, what should I do?
can't find tensorflow local in Pycharm project Interpreters
0
0
0
184
47,631,328
2017-12-04T10:32:00.000
0
0
0
0
python,view,dns,bind,dnspython
47,667,697
1
false
1
0
Finally, this problem has been solved by changing the match-clients dynamically through ssh and then rndc reload. This makes the remote server access only the specific view in which you want to update the zone using dnspython.
1
0
0
Just like the title says. I was struggling with this, but I couldn't find any way to make it work. Does anyone know how to do that? Please help, thanks!
Is there any way to update zone in specific views using dnspython?
0
0
0
83
47,632,891
2017-12-04T11:59:00.000
1
0
1
0
python,python-3.x,pip
62,507,390
17
false
0
0
The OS is not recognizing the 'python' command, so try 'py'. Use 'py -m pip'.
7
19
0
I've just installed python 3.6 which comes with pip However, in Windows command prompt, when I do: 'pip install bs4' it returns 'SyntaxError: invalid syntax' under the install word. Typing 'python' returns the version, which means it is installed correctly. What could be the problem?
pip install returning invalid syntax
0.011764
0
0
173,878
47,632,891
2017-12-04T11:59:00.000
0
0
1
0
python,python-3.x,pip
50,752,448
17
false
0
0
I use Enthought Canopy for my Python. At first I used "pip install --upgrade pip" and it showed a syntax error like yours; then I added a "!" in front of pip and it finally worked.
7
19
0
I've just installed python 3.6 which comes with pip However, in Windows command prompt, when I do: 'pip install bs4' it returns 'SyntaxError: invalid syntax' under the install word. Typing 'python' returns the version, which means it is installed correctly. What could be the problem?
pip install returning invalid syntax
0
0
0
173,878