Dataset columns (each record below lists its fields in this order):

Column                              Type             Min    Max
Q_Id                                int64            337    49.3M
CreationDate                        string (length)  23     23
Users Score                         int64            -42    1.15k
Other                               int64            0      1
Python Basics and Environment       int64            0      1
System Administration and DevOps    int64            0      1
Tags                                string (length)  6      105
A_Id                                int64            518    72.5M
AnswerCount                         int64            1      64
is_accepted                         bool             2 classes
Web Development                     int64            0      1
GUI and Desktop Applications        int64            0      1
Answer                              string (length)  6      11.6k
Available Count                     int64            1      31
Q_Score                             int64            0      6.79k
Data Science and Machine Learning   int64            0      1
Question                            string (length)  15     29k
Title                               string (length)  11     150
Score                               float64          -1     1.2
Database and SQL                    int64            0      1
Networking and APIs                 int64            0      1
ViewCount                           int64            8      6.81M
41,364,326
2016-12-28T14:44:00.000
1
0
1
0
python,initialization,delay,jupyter-notebook
41,364,412
1
false
0
0
Visual Studio Code + the Python extension works fine (both Windows and Mac, not sure about Linux). It is very fast and lightweight, with Git integration, debugging, refactorings, etc. There is also an IDE called Spyder that is more Python-specific. It also works fine, but it is more heavyweight.
1
0
0
Usually the main reason I'm using a Jupyter notebook with Python is the possibility to initialize once (and only once) objects (or generally "data") that tend to have long (let's say more than 30 seconds) loading times. When my work is iterative, i.e. I run minimally changed versions of some algorithm multiple times, the accumulated cost of repeated initialization can get large by the end of a day. I'm seeking an alternative approach (one that avoids the cost of repeated initialization without using a notebook) for the following reasons: No "out of the box" version control when using a notebook. Occasional problems of "I forgot to rename the variable in a single place"; everything keeps working OK until the notebook is restarted. Usually I want to have a usable Python module at the end anyway; somehow when using a notebook I tend to get code that is far from "clean" (I guess this is more a self-discipline problem...). The ideal workflow should allow the whole development to be performed inside an IDE (e.g. PyCharm; BTW Linux is the only option). Any ideas? I'm thinking of implementing a simple (local) execution server that keeps the problematic objects pre-initialized as global variables and runs the code on demand (code that uses those globals instead of performing initialization) by spawning a new process each time (this way those objects are protected from modification; at the same time, thanks to those variables being global, there is no pickle/unpickle penalty when spawning a new process). But before I start implementing this - maybe there is some working solution or workflow already known?
Alternative workflow to using jupyter notebook (aka how to avoid repetitive initialization delay)?
0.197375
0
0
849
41,364,998
2016-12-28T15:25:00.000
0
1
0
0
python,whatsapp
41,365,717
2
false
1
0
The error you get clearly points out the nature of the problem: it has to do with your version of yowsup-cli, which is too old. Your project requires a newer version of yowsup-cli than the one you currently have in order to work as required. To resolve it, update your yowsup-cli installation to a more recent version.
2
0
0
I want to send bulk SMS on WhatsApp without creating broadcast list. For that reason, I found pywhatsapp package in python but it requires WhatsApp client registration through yowsup-cli. So I've run yowsup-cli registration -r sms -C 00 -p 000000000000 which resulted in the error below: INFO:yowsup.common.http.warequest:{"status":"fail","reason":"old_version"} status: fail reason: old_version What did I do wrong and how can I resolve this?
How to send bulk sms on whatsapp
0
0
1
1,399
41,364,998
2016-12-28T15:25:00.000
0
1
0
0
python,whatsapp
41,424,293
2
false
1
0
The problem is with the HTTP headers that are sent to the WhatsApp servers; these are found in env/env.py. The header values are hard-coded, and after recent updates the WhatsApp servers only serve and authenticate devices that are identified as up to date by their HTTP/HTTPS headers. In this case you need to update some constants in that file (env/env.py) in your yowsup folder.
2
0
0
I want to send bulk SMS on WhatsApp without creating broadcast list. For that reason, I found pywhatsapp package in python but it requires WhatsApp client registration through yowsup-cli. So I've run yowsup-cli registration -r sms -C 00 -p 000000000000 which resulted in the error below: INFO:yowsup.common.http.warequest:{"status":"fail","reason":"old_version"} status: fail reason: old_version What did I do wrong and how can I resolve this?
How to send bulk sms on whatsapp
0
0
1
1,399
41,365,358
2016-12-28T15:47:00.000
0
1
0
0
python,api
41,819,998
1
false
0
0
You will need to copy the scripts to your /cgi-bin/ directory. You can find further reference in your iPage Control Panel, under "Additional Resources/CGI and Scripted Language Support". Then look for "Server Side Includes and CGI", and you will find the supported Python version and other relevant directory paths for your setup.
1
1
0
I'm interested in using the Python Yahoo Finance API on my website. I'm using iPage as my web host; how can I install APIs there? I only found out today how I can code the website using Python.
Using Python APIs on ipage?
0
0
1
604
41,365,828
2016-12-28T16:14:00.000
0
0
1
0
python,windows,pip,anaconda
41,365,912
2
false
0
0
Instead of just writing pip install ... in the command line, which apparently points to your Anaconda installation, you can navigate (using the cd command) to your Python installation and invoke the pip.exe file located somewhere in there. You could also try renaming one of the pip.exe files (the one in Anaconda or the one in Python) to something else (e.g. pipanaconda.exe), and then you will be able to call them separately from the command line.
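For example - a minimal sketch, assuming a hypothetical Python installation at C:\Python35 (adjust the paths to your machine):

    :: call a specific pip.exe directly instead of whichever one is first on PATH
    C:\Python35\Scripts\pip.exe install numpy

    :: or, equivalently, let a specific interpreter run its own pip module
    C:\Python35\python.exe -m pip install numpy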
2
1
0
Whenever I try to install a package with pip (using a wheel or just a regular install, e.g. pip install numpy), pip installs the new package to the location where Anaconda holds its site-packages. How do I change that? It started happening when I installed Anaconda, which I use as the Python interpreter for some tasks, but now I need my regular Python installation.
Pip installs to anaconda directory instead python's directory (Windows)
0
0
0
2,437
41,365,828
2016-12-28T16:14:00.000
1
0
1
0
python,windows,pip,anaconda
55,282,609
2
false
0
0
If you have Python 3 installed but you see that which pip points to your Anaconda installation, try using pip3 instead - if it is available, then you will see that which pip3 points to your Python installation path instead of your Anaconda path. Same with which python3.
2
1
0
Whenever I try to install a package with pip (using a wheel or just a regular install, e.g. pip install numpy), pip installs the new package to the location where Anaconda holds its site-packages. How do I change that? It started happening when I installed Anaconda, which I use as the Python interpreter for some tasks, but now I need my regular Python installation.
Pip installs to anaconda directory instead python's directory (Windows)
0.099668
0
0
2,437
41,367,705
2016-12-28T18:27:00.000
0
0
1
0
python,class,module
41,368,223
1
false
0
0
I believe the best way to answer this question is to look at what the leaders in this field are doing. There is a very healthy eco-system of modules available on pypi whose authors have wrestled with this question. Take a look at some of the modules you use frequently and thus are already installed on your system. Or better yet, many of those modules have their development versions hosted on GitHub (The pypi page usually has a pointer). Go there and look around.
1
0
0
Context I write my own library for data analysis purpose. It includes classes to import data from the server, procedures to clean, analyze and display results. It also includes functions to compare results. Concerns I put all these in a single module and import it when I do a new project. I put any newly-developed classes/functions in the same file. I have concerns that my module becomes longer and harder to browse and explain. Questions I started Python six months ago and want to know common practices: How do you group your function/classes and put them into separated files? By Purpose? By project? By class/function? Or you are not doing it at all? In general how many lines of code in a single module? What's the way to track the dependency among your own libraries? Feel free to suggest any thoughts.
rule of thumb to group/split your own functions/classes into modules
0
0
0
38
41,370,987
2016-12-28T23:09:00.000
7
0
0
0
python,machine-learning,tensorflow
41,371,381
1
true
0
0
I kicked this around with my local TF expert, and the brief answer is "no"; TF doesn't have a built-in facility for this. However, you could write custom endpoint layers (input and output) with sync operations from Python's process management, so that they'd maintain parallel processing of each input and concatenate the outputs. Rationale: I like the way this could be used to get greater accuracy with multiple features, where the features have little or no correlation. For instance, you could train two character recognition models: one to identify the digit, the other to discriminate between left- and right-handed writers. This would also allow you to examine the internal kernels that evolved for each individual feature, without interdependence with other features: the double loop of an '8' vs the general slant of right-handed writing. I also expect that the models for individual features will converge measurably faster than one over-arching training session. Finally, it's quite possible that the individual models could be used in mix-and-match feature sets: for instance, if you train another model to differentiate letters, your previously-trained left/right flagger would still have a pretty good guess at the writer's handedness.
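A minimal sketch of the process-management idea (the per-model evaluation is stubbed out, since loading the exported graphs is model-specific; the synchronisation pattern is what matters here):

    import multiprocessing as mp

    def evaluate(model_id, x):
        # hypothetical worker: load the exported graph for model_id,
        # run it on x, and return the single output; stubbed for the sketch
        return model_id, x * 2

    if __name__ == "__main__":
        pool = mp.Pool(2)
        # run both models on the same input in parallel...
        results = dict(pool.starmap(evaluate, [("export1", 10), ("export2", 10)]))
        pool.close()
        # ...then concatenate the outputs in a fixed order
        print(results["export1"], results["export2"])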
1
9
1
I have two models trained with Tensorflow Python, exported to binary files named export1.meta and export2.meta. Both files will generate only one output when feeding with input, say output1 and output2. My question is if it is possible to merge two graphs into one big graph so that it will generate output1 and output2 together in one execution. Any comment will be helpful. Thanks in advance!
Is it possible to merge multiple TensorFlow graphs into one?
1.2
0
0
2,821
41,372,369
2016-12-29T02:30:00.000
0
0
1
0
python,jupyter,qtconsole,jupyter-console
42,592,006
1
false
0
0
If you're using numpy then take a look at np.set_printoptions. Try adjusting the linewidth argument; the default is 75. So maybe run np.set_printoptions(linewidth=150) and see if that helps.
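A quick demonstration (150 is just an example width):

    import numpy as np

    m = np.random.rand(4, 20)
    np.set_printoptions(linewidth=150)  # allow wider rows before wrapping
    print(m)  # rows now wrap at ~150 characters instead of the default 75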
1
0
0
I have a QtConsole running. Whenever I output a matrix (for example) that has many columns, QtConsole wraps the matrix to the next line. However, the break point is only halfway through my window, leaving lots of wasted blank space. How can I make QtConsole use more columns in its output?
increase number of columns in window using jupyter qtconsole in python
0
0
0
105
41,372,955
2016-12-29T04:00:00.000
2
0
0
0
python,django
41,373,629
1
false
1
0
If you're looking for the source of module foo.bar, it can be one of the two: foo/bar.py foo/bar/__init__.py Also note that often upper-level modules re-export selected names imported from deeper-down modules: a name may be merely imported, not otherwise defined; e.g. django.db does a lot of this.
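For illustration, a hypothetical package showing the directory layout and a re-export (foo, bar, and impl are made-up names):

    # foo/bar/__init__.py -- makes the directory foo/bar importable as foo.bar
    from .impl import url, include  # merely imported here; defined in foo/bar/impl.py

Client code can then write from foo.bar import url even though url is not defined in __init__.py itself - which is why searching for the obvious .py file on GitHub comes up empty.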
1
2
0
As I work through the Django tutorials, I like to see with my own eyes the module and class/attribute that I am inheriting via import by going to the source code at Github. However, and I have attached pics to illustrate that I (think) I went to the right place, but the files seem to be missing. For example, in Django tutorial Part 1: from django.conf.urls import include, url So I go to Github django code and I find: django/django/conf/urls What I find is that urls is a directory with only files : __init__.py, i18n.py and static.py. There is no urls.py file which might have url() or include() methods. Same with models.Models. from django.db import models On django Github site I follow the directories... django/django/db/models  models is a directory, not a file with a class Model() So, what am I missing here? Looking forward to a few bread crumbs :)
Python Package Paths :: Finding Directories that Should Be .py Files
0.379949
0
0
45
41,375,247
2016-12-29T07:42:00.000
-1
0
0
1
python,windows,python-2.7,python-3.x,background
41,376,619
3
false
0
0
Try to spin up an AWS instance and run the script on a more reliable server. Or you can look into Hadoop to process the code across multiple fail-safe servers.
1
2
0
I have a python script running on Windows Server 2008 from the command line. It doesn't need any interaction during the run. The script runs for about a week, so if the server disconnects my session for some reason, the script stops and I have to start over and over again. This is a huge problem for me and I don't know how to solve it. So here is my question: how do I run a python script in the background on a Windows server, even after the user disconnects from the server? Thanks in advance for your help.
How to run python script in windows background?
-0.066568
0
0
4,219
41,377,059
2016-12-29T09:46:00.000
0
0
0
1
python,django,uwsgi
44,000,160
2
false
1
0
You can increase the 'listen' value in the uWSGI configuration file. The default value is 100, which is too small.
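The relevant ini setting would look like this (1024 is an example value, matching the net.core.somaxconn shown in the question; the kernel limit must be at least as large as the uWSGI one):

    [uwsgi]
    # raise the listen backlog from the default of 100
    listen = 1024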
1
0
0
I am using uwsgi with this configuration: net.core.somaxconn = 1024 and net.core.netdev_max_backlog = 1000. I am getting a "resource temporarily unavailable" error. How can I resolve this issue? df -h shows:

    Filesystem      Size  Used  Avail  Use%  Mounted on
    /dev/xvda1      7.8G  2.1G   5.6G   28%  /
    devtmpfs        1.9G   12K   1.9G    1%  /dev
    tmpfs           1.9G   16K   1.9G    1%  /dev/shm
How to solve [Errno 11] Resource temporarily unavailable using uwsgi + nginx
0
0
0
1,657
41,380,710
2016-12-29T13:25:00.000
0
0
0
0
python,apache,flask,scikit-learn
41,383,430
1
false
1
0
Use anaconda. It will save you so much time with these annoying dependency issues.
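A sketch of that route, assuming the pinned version is available in your conda channels:

    conda install scikit-learn=0.18.1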
1
0
1
I am trying to install the latest version (0.18.1) of sklearn for use in a web app I am hosting my webapp with apache web server and flask I have tried sudo apt-get -y install python3-sklearn and this works but installs an older version of sklearn (0.17) I have also tried pip3 and easy_install and these complete the install but are not picked up by flask or apache. I get the following error log on my apache server [Thu Dec 29 13:07:45.505294 2016] [wsgi:error] [pid 31371:tid 140414290982656] [remote 90.201.35.82:25030] from sklearn.gaussian_process import GaussianProcessRegressor [Thu Dec 29 13:07:45.505315 2016] [wsgi:error] [pid 31371:tid 140414290982656] [remote 90.201.35.82:25030] ImportError: cannot import name 'GaussianProcessRegressor' This is because I am trying to access some features of sklearn which are not present in 0.17 but are there in 0.18.1 Any ideas?
installing sklearn version 0.18.1 in Apache web server
0
0
0
362
41,381,705
2016-12-29T14:32:00.000
0
0
0
1
python,flask,undefined,global,mod-wsgi
41,394,185
1
false
1
0
Not calling a main function with mod_wsgi was the right answer. I no longer import my required modules in the wsgi file, but at the top of the flask app.
1
0
0
I have changed my application, running with flask and python2.7, from a standalone solution to flask with apache and mod_wsgi. My Flask app (app.py) includes some classes which are in the directory below my app dir (../). Here is my app.wsgi:

    #!/usr/bin/python
    import sys
    import logging
    logging.basicConfig(stream=sys.stderr)
    sys.stdout = sys.stderr

    project_home = '/opt/appdir/Application/myapp'
    project_web = '/opt/appdir/Application/myapp/web'
    if project_home not in sys.path:
        sys.path = [project_home] + sys.path
    if project_web not in sys.path:
        sys.path = [project_web] + sys.path

    from app import app
    application = app

Before my migration to mod_wsgi, the main call in app.py looked like this:

    # Main
    if __name__ == '__main__':
        from os import sys, path
        sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
        from logger import Logger
        from main import Main
        from configReader import ConfigReader
        print "Calling flask"
        from threadhandler import ThreadHandler
        ca = ConfigReader()
        app.run(host="0.0.0.0", threaded=True)

I was perfectly able to load my classes from the directory below. After running the app with mod_wsgi I get the following error:

    global name 'Main' is not defined

So how do I have to change my app so that this here would work:

    @app.route("/")
    def test():
        main = Main("test")
        return main.responseMessage()
Flask with mod_wsgi - Cannot call my modules
0
0
0
1,008
41,381,825
2016-12-29T14:39:00.000
0
0
1
0
python,pygame,vpython
42,990,373
1
false
0
1
I don't know anything about pygame, but I'm guessing that the interaction processing of pygame and the interaction processing of VPython would be incompatible with each other.
1
0
0
I've made a simulation using Vpython and I want to create a GUI using PYGAME. I was wondering if it's possible to embed that simulation made using Vpython into my GUI.
Is it possible to overlap different modules in python?
0
0
0
44
41,382,736
2016-12-29T15:34:00.000
2
0
0
0
python,numpy
41,382,785
3
true
0
0
There are two shape-changing operations here: np.reshape(x, shape) and np.transpose(x, axes). Note that reshape only reinterprets the shape without moving any data, so for reordering axes you want np.transpose, which for these pictures can be applied as X_train = np.transpose(X_train, (3, 0, 1, 2)).
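A quick check of the axis reordering, with random data standing in for the real images:

    import numpy as np

    X_train = np.random.rand(32, 32, 3, 73)        # small stand-in for (32, 32, 3, 73257)
    X_train = np.transpose(X_train, (3, 0, 1, 2))  # move the sample axis to the front
    print(X_train.shape)                           # -> (73, 32, 32, 3)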
1
0
1
I have a numpy array of 32 x 32 x 3 pictures with X_train.shape: (32, 32, 3, 73257). However, I would like to have the following array shape: (73257, 32, 32, 3). How can I accomplish this?
Numpy Array Change indices
1.2
0
0
3,420
41,386,463
2016-12-29T20:04:00.000
0
0
1
0
python-2.7,opencv
41,387,325
1
false
0
0
Not sure what the issue is. Normally, the x and y coordinates (of the dart) will be given relative to the top-left corner of the image, so you will need to subtract the coordinates of the board centre (its radius, if the board fills the image) from each to get coordinates relative to the centre of the dartboard. There are 20 segments on a dartboard, so each segment subtends 360/20 = 18 degrees around the centre. You can get the angle from the vertical using the inverse tangent of x/y and test which segment, and therefore which number, that corresponds to. The distance from the centre will be sqrt(x^2 + y^2), and you can test whether that falls within the rings corresponding to a treble or a double.
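A sketch of that calculation; the clockwise number ring and the ring radii (in mm, from the conventional board layout) are assumptions to verify against your board image:

    import math

    # numbers clockwise from the top (20 sits at 12 o'clock)
    NUMBERS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

    def zone(x, y):
        """x, y in mm relative to the board centre, y pointing up."""
        r = math.hypot(x, y)
        angle = math.degrees(math.atan2(x, y)) % 360   # 0 degrees = straight up
        number = NUMBERS[int((angle + 9) // 18) % 20]  # each segment spans 18 degrees
        if r <= 6.35:
            return "bull (50)"
        if r <= 15.9:
            return "outer bull (25)"
        if r > 170:
            return "miss"
        if 99 <= r <= 107:
            return "triple %d" % number
        if 162 <= r <= 170:
            return "double %d" % number
        return "single %d" % number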
1
0
1
I'm looking for a way to split a dartboard image into polygons so that given an x,y coordinate I can find out which zone the dart fell within. I have found a working python script to detect if the coordinate falls within a polygon (stored as a list of x,y pairs), but I am lost as to how to generate the polygons as a list of points. I'm open to creating the "shape map" if that's the correct term, in whatever way necessary to get it done, I just don't know the correct technology or method to do this. Any advice is welcome!
Split Dartboard into Polygons
0
0
0
109
41,387,433
2016-12-29T21:26:00.000
2
0
0
0
python,kivy
41,388,282
1
false
0
1
Found the answer. The default widget size is (100, 100). By the time I add the initial ball, the World widget has not been rendered yet and therefore still has the default size. But it is possible to pass the window size in the widget constructor, so changing the World instantiation to world = World(size=Window.size) solved the problem.
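A minimal sketch of the fix in context (World is the widget class from the question's code):

    from kivy.app import App
    from kivy.clock import Clock
    from kivy.core.window import Window

    class WorldApp(App):
        def build(self):
            world = World(size=Window.size)  # real window size, not the 100x100 default
            world.add()                      # the initial ball now starts from the centre
            Clock.schedule_interval(world.update, 1.0 / 60.0)
            return world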
1
1
0
I am learning to use Kivy, so I walked through the Pong tutorial and started messing around with the code. So, I removed everything but the bouncing ball and decided to generate multiple balls on demand. The problem I am having is that while I can place balls where I want them when the application is already running (for example, adding a ball on touch works fine), when I add balls in the app build() they don't get placed right. Here is the code I have. The balls placed on touch correctly start from the center. But the ball added in build() starts from the lower left corner. Why? I wanted to add more moving widgets with different properties, but I cannot seem to figure out how to place them on application start.

    #:kivy 1.0.9
    <World>:
        canvas:
            Ellipse:
                pos: self.center
                size: 10, 10

    <Agent>:
        size: 50, 50
        canvas:
            Ellipse:
                pos: self.pos
                size: self.size

    from random import randint
    from kivy.app import App
    from kivy.uix.widget import Widget
    from kivy.properties import NumericProperty, ReferenceListProperty, ListProperty
    from kivy.vector import Vector
    from kivy.clock import Clock

    class World(Widget):
        agents = ListProperty()

        def add(self):
            agent = Agent()
            agent.center = self.center
            agent.velocity = Vector(4, 0).rotate(randint(0, 360))
            self.agents.append(agent)
            self.add_widget(agent)

        def on_touch_down(self, touch):
            self.add()

        def update(self, dt):
            for agent in self.agents:
                agent.move()
                if agent.y < 0 or agent.top > self.height:
                    agent.velocity_y *= -1
                if agent.x < 0 or agent.right > self.width:
                    agent.velocity_x *= -1

    class Agent(Widget):
        velocity_x = NumericProperty(0)
        velocity_y = NumericProperty(0)
        velocity = ReferenceListProperty(velocity_x, velocity_y)

        def move(self):
            self.pos = Vector(*self.velocity) + self.pos

    class WorldApp(App):
        def build(self):
            world = World()
            # add one ball by default
            world.add()
            Clock.schedule_interval(world.update, 1.0/60.0)
            return world

    if __name__ == '__main__':
        WorldApp().run()
Center widgets in Kivy
0.379949
0
0
1,337
41,388,006
2016-12-29T22:22:00.000
0
0
0
0
python,html,css,pycharm
41,388,131
2
false
1
0
Let me try a lucky guess, since I don't know what exactly renders differently: it could be the encoding of the file. You can try selecting a different encoding when saving the file in Sublime, to match the file saved in PyCharm: File > Save with Encoding > [select]. If both files are completely identical, that is the only thing I can imagine.
2
0
0
I've got exactly the same files (HTML + CSS) in both PyCharm and Sublime Text, and the results of rendering them in Google Chrome are completely different. Editing the CSS doesn't have any effect on the rendered HTML. I have to build the project using Python Flask, but I want to start with the HTML and CSS. Does anybody know why I get different results from the same files?
Differences between rendering HTML in PyCharm and a text editor (Sublime Text)
0
0
0
114
41,388,006
2016-12-29T22:22:00.000
0
0
0
0
python,html,css,pycharm
41,388,349
2
true
1
0
When we run a PyCharm project it gives us the same link every time, and we have to clear the cache or cookies each time we open this link.
2
0
0
I've got exactly the same files (HTML + CSS) in both PyCharm and Sublime Text, and the results of rendering them in Google Chrome are completely different. Editing the CSS doesn't have any effect on the rendered HTML. I have to build the project using Python Flask, but I want to start with the HTML and CSS. Does anybody know why I get different results from the same files?
Differences between rendering HTML in PyCharm and a text editor (Sublime Text)
1.2
0
0
114
41,388,846
2016-12-29T23:56:00.000
1
1
0
0
python,rabbitmq
41,388,885
2
false
0
0
I have been using Celery recently to do what you are trying to achieve. With Celery you can create tasks, which are essentially functions that are distributed to a task queue. You can also make Celery tasks run periodically, whether that means every x seconds or a more crontab-style approach. Look for periodic tasks in the Celery documentation to see if it suits what you are trying to do. Celery uses RabbitMQ or Redis (primarily), and each task runs in its own thread, separate from the main program.
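A minimal sketch of a periodic Celery task (Celery 4-style configuration; the broker URL, module name, and task name are placeholders):

    from celery import Celery

    app = Celery("checks", broker="amqp://localhost")  # placeholder broker URL

    @app.task
    def status_check_a():
        pass  # the actual status check goes here

    app.conf.beat_schedule = {
        "task-a-every-10s": {
            "task": "checks.status_check_a",  # module path is an assumption
            "schedule": 10.0,                 # seconds between runs
        },
    }

A celery beat process then dispatches the task on schedule, and workers execute it.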
1
0
0
I have a service that I'm writing in Python that allows users to schedule a task to happen at different intervals. Examples of tasks would be: Task A: do a status check every 10 seconds; Task B: do a status check every 3 seconds; Task C: do a status check every 15 seconds. The tasks should run independently of each other. I also want to make sure that Task A can't run again until its previous attempt is complete. Remember that the number of tasks is dynamic, and so is the interval at which they run. I've looked at RabbitMQ, but am having a hard time deciding if it's capable of this sort of thing.
Best tool for scheduling tasks at intervals?
0.099668
0
0
60
41,395,396
2016-12-30T11:18:00.000
0
1
1
0
python,aws-lambda,python-module
41,395,518
1
false
0
0
Keep only one copy of the library and append its path in the other modules. Note that a hyphen is not allowed in a Python module name, so name the file awesome_lib.py; say it is placed at /home/user/awesome_lib.py. Then add the following code to every other module that needs it:

    import sys
    sys.path.append('/home/user')  # the directory that contains awesome_lib.py
    import awesome_lib

Note: the path to awesome_lib may differ depending on where you choose to put it.
1
0
0
I have made a custom python module (say awesome-lib.py), which is to be imported and used by the multiple other python modules(module1.py, module2.py etc). The problem is that all the modules need to be in separate folders and each of them should have a copy of awesome-lib.py for them to import it. I thought of two options for doing this: Each module folder will have a copy of awesome-lib.py in it. That way I can import awesome-lib and used it in each module. But issue is when I have to make any changes in awesome-lib.py. I would have to copy the file in each module folder separately, so this might not be a good approach. I can package the awesome-lib.py using distutils. Whenever I make the change in the module, I will update awesome-lib.py in each module using some script. But still I want the awesome-lib distribution package to be individually included in each module folder. Can anyone please tell me an efficient way to achieve this? So that I can easily change one file and the changes are reflected in all the modules separately. P.S: I want the awesome-lib.py in each module folder separately because I need to zip the contents of it and upload each module on AWS Lambda as a Lambda zip package.
Include a custom python module in multiple modules separately
0
0
0
770
41,398,166
2016-12-30T14:53:00.000
0
0
1
1
python,python-2.7,shell,python-3.x,ide
41,398,603
2
false
0
0
Usually python2 interpreter is opened with the python command, and the python3 interpreter is opened with the python3 command. On linux, you may want to put #!/usr/bin/env python at the top of your code.
1
1
0
Hey I'm just starting out with Eric6. Is it possible to change the shell to use python 2.* instead of python3? can't find anything related to that in the preferences? thanks
Eric IDE: How do I change the shell from python3 to python2?
0
0
0
1,575
41,404,053
2016-12-30T23:39:00.000
1
0
0
0
python,opencv,image-processing,raspberry-pi
41,408,698
1
false
0
0
Well, I can suggest a way of doing this. Basically, you can use object detection coupled with a machine learning algorithm. The way this might work is that you first train your program to recognize the closed box: take, say, 10 pictures of the closed box and train on them, so the program will be able to detect when the box is closed. When the box is not closed (i.e. open, missing, or something else), you can code your program to fire off a signal or do whatever it is you are trying to do. So the first step is to write code for object detection. There are numerous ways of doing this alone, like Haar classification or support vector machines. Once you have trained your program to look for the closed box, you can run it to predict what's happening in every frame of the camera feed. Hope this answered your question! Cheers!
1
1
1
I have a small project that I am tinkering with. I have a small box and I have attached my camera on top of it. I want to get a notification if anything is added to it or removed from it. My original logic was to constantly take images and compare them to see the difference, but that process is not good: even comparing images of the same scene gives out a difference, and I do not know why. Can anyone suggest any other way to achieve this?
Detect object in an image using openCV python on a raspberry pi
0.197375
0
0
619
41,404,716
2016-12-31T01:43:00.000
1
0
1
0
python,redirect,text,input,option
41,405,029
1
true
0
0
The simplest way, as "furas" already said, is the ">" and "<" symbols. I'm using Linux, but as far as I know they work on Windows too. The syntax you want: python myfile.py < input.txt > output.txt
1
1
0
I wish that what I type at the python/ipython command line could also be redirected to a file. I also wish to capture the output text to a file. Is there any option, or internal function, that could help with this?
Can I redirect my python/ipython input/output to a text file?
1.2
0
0
871
41,404,817
2016-12-31T02:08:00.000
12
0
0
0
python,linear-regression,statsmodels
41,404,825
2
true
0
0
It doesn't add a constant to your values; it adds a constant term to the linear equation it is fitting. In the single-predictor case, it's the difference between fitting a line y = mx to your data vs fitting y = mx + b.
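A small example of the difference, using made-up data with a nonzero intercept:

    import numpy as np
    import statsmodels.api as sm

    x = np.arange(10, dtype=float)
    y = 3.0 * x + 5.0                                 # true slope 3, true intercept 5

    no_const = sm.OLS(y, x).fit()                     # fits y = m*x, forced through the origin
    with_const = sm.OLS(y, sm.add_constant(x)).fit()  # fits y = b + m*x

    print(no_const.params)    # a single biased slope, because b was forced to 0
    print(with_const.params)  # ~[5., 3.]: the intercept and the slope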
2
12
1
Reviewing linear regressions via statsmodels OLS fit I see you have to use add_constant to add a constant '1' to all your points in the independent variable(s) before fitting. However my only understanding of intercepts in this context would be the value of y for our line when our x equals 0, so I'm not clear what purpose always just injecting a '1' here serves. What is this constant actually telling the OLS fit?
statsmodels add_constant for OLS intercept, what is this actually doing?
1.2
0
0
9,060
41,404,817
2016-12-31T02:08:00.000
7
0
0
0
python,linear-regression,statsmodels
43,397,319
2
false
0
0
sm.add_constant in statsmodels is the counterpart of sklearn's fit_intercept parameter in LinearRegression(). If you don't call sm.add_constant, or you use LinearRegression(fit_intercept=False), then both statsmodels and sklearn assume that b=0 in y = mx + b, and they'll fit the model using b=0 instead of calculating what b is supposed to be based on your data.
2
12
1
Reviewing linear regressions via statsmodels OLS fit I see you have to use add_constant to add a constant '1' to all your points in the independent variable(s) before fitting. However my only understanding of intercepts in this context would be the value of y for our line when our x equals 0, so I'm not clear what purpose always just injecting a '1' here serves. What is this constant actually telling the OLS fit?
statsmodels add_constant for OLS intercept, what is this actually doing?
1
0
0
9,060
41,404,967
2016-12-31T02:39:00.000
0
0
0
0
javascript,jquery,python,jqwidget
41,450,857
1
true
1
0
I had to resort to this: the application's user interface posts back to methods on the site's controllers, and code executing on the site's server (Python) provides the user interface with lists of the folders and files on the file server's shares. With some effort I will be able to provide the user with a rich and comprehensive UX that is effectively the same as originally desired, or at least close enough. What I will not be able to provide is a folder/file list of the user's local hard drive. And any drives mapped between the user's workstation and the file server will not be represented from the web server to the file server, meaning the user will have to learn to live without the mapped drive letters they have become accustomed to.
1
1
0
I'm building a browser application in web2py (a Python-based CMS). One requirement of this application is to enable the user to browse to a folder within the local network or on a local drive. The user selects a folder, that selection becomes a string, and I record that string in the application's database. File selection is entirely off the table; I don't care at all about file selection. I only need to select one, and only one, folder, and get that folder's full path/UNC as a string (or a collection of strings, an object, or whatever from which I can assemble the full path as a string). How can I build a browser user-interface screen object of some sort that enables the user to browse to and select a folder (c:\folder\folder or \\server\share\folder ...), and then capture that string in a variable I can write to a database? I'm finding there are a lot of impediments to just such use of a browser application (it didn't used to be that way). I get the security concerns, but I also can't believe all similar enterprise uses of a browser are being torn down and made impossible (again, because it didn't used to be that way). I don't want to dictate implementation, so spitball ideas if you like, and get out of the box of this tech stack if you like - but browser-based is HIGHLY compelling (if I were to do this as a desktop app or something else I wouldn't even need to post this question). The current tech stack of the application is: browser (open to suggestions but Chrome is the preference), JavaScript, jQuery, jQWidgets, Python, MSSQL (server-hosted, not CE/local). But none of these elements are hard limitations - except IE/Edge; we'll never use that. If you can point me to fiddle or GitHub examples, that would be greatly appreciated. Is there a particular JavaScript library, browser add-in, Python import, ... I should research? Would .NET be better suited to champion this challenge? Is there a better forum where I should post this question? Thanks
Web Dialog box used to capture a full file path / UNC
1.2
0
0
135
41,405,062
2016-12-31T03:04:00.000
1
0
0
1
python,file,python-3.x,io
41,494,452
1
true
0
0
The best solution I've found is to make a copy of the file and then open the copy if you wish to view the contents while the original is being written to. It is easy to make a copy of a file programmatically if you wish to automate the process. If you want the user to see the file as it is updated in real time, it is better to communicate the data to the receiver through a separate channel, possibly over sockets or simply via stdout.
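A sketch of the copy approach (the file names are just examples):

    import shutil

    # snapshot the live file; open the snapshot instead of the original
    shutil.copy("test.csv", "test_snapshot.csv")
    with open("test_snapshot.csv") as f:
        print(f.read())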
1
3
0
I am writing a utility in Python that may require a user to open a file while the Python program is writing to it. The file is opened like so in the Python program: CSV = open("test.csv", "a"). When the Python program is running and the user double-clicks test.csv in the OS, the Python program halts with the following error: PermissionError: [Errno 13] Permission denied: 'test.csv'. I understand why this happens. I am wondering if there is something I can do so that the Python program can still write to the file while the file (a read-only copy, perhaps) is open. I've noticed in the past that Perl programs, for example, can still write to a file while it is open, so I know that this is programmatically possible. Can this be done in Python?
Python: How to write to a file while it is open in the OS
1.2
0
0
1,049
41,405,113
2016-12-31T03:16:00.000
1
0
1
0
python,shell,editor,enthought,canopy
41,410,239
2
false
0
0
EDITED, see bottom of answer The key point to understand is that when IPython prompts you with ..., it is because you are in the middle of typing a multi-line statement (whether that was your intention or not). Typically this is because on some previous line, you typed a left parenthesis (or bracket), or a triple-quote-mark, etc and IPython is waiting for you to complete your statement with a right parenthesis or matching triple-quote, etc. So what you probably want to do is simply to erase your partially entered statement. The easiest way to do this, assuming that your cursor is already at the end of the last line in your multi-line statement, is just to press and hold the backspace key until your statement is all erased. Slightly quicker is to do the same with Ctrl+Backspace, which erases a word at a time instead of a character at a time. After you've erased all the garbage, press Enter, not actually needed but it will make you feel better, to convince yourself that everything is back to normal. (BTW, the fact that you were actually in the middle of typing a single long statement also explains why typing "quit" does nothing; you are not really typing a "quit" command, but just typing the additional letters "quit" into the middle of your already too-long and erroneous command, whatever that might be, which makes it even longer and more erroneous! As a further side note -- quit is actually not very useful in Canopy's IPython panel, because it just closes the panel but doesn't really close down IPython; if you reopen the panel from the View menu, it is still just as you left it. If you really want to restart IPython (clear all your variables and imports), do it with the "Restart kernel" command in Canopy's Run menu.) EDIT: OP's screen shots, sent privately, showed that Autodebug mode was on (this is the bulls-eye-like icon on the toolbar.) The solution was to toggle off Autodebug. Background: Autodebug hooks into the channel between Canopy's IPython (QtConsole) front end, and the IPython kernel back end. If autodebug is left on, some problems can break this channel. This should be improved in Canopy 2.0, currently in alpha internally.
2
1
0
Context: using Enthought's Canopy Version: 1.7.4.3348 (64 bit) on Windows 10. Typing into the python shell, errors produce a "...:" prompt, which I can then not break out of. Hitting enter and trying other ideas sadly leads to a repeat of the same prompt. How to break out of this mode, and get on with debugging?
breaking out of python shell prompt "...:" in enthought canopy
0.099668
0
0
165
41,405,113
2016-12-31T03:16:00.000
0
0
1
0
python,shell,editor,enthought,canopy
41,405,895
2
false
0
0
Try pressing Ctrl + D; that helps in getting out of the console panel.
2
1
0
Context: using Enthought's Canopy Version: 1.7.4.3348 (64 bit) on Windows 10. Typing into the python shell, errors produce a "...:" prompt, which I can then not break out of. Hitting enter and trying other ideas sadly leads to a repeat of the same prompt. How to break out of this mode, and get on with debugging?
breaking out of python shell prompt "...:" in enthought canopy
0
0
0
165
41,406,339
2016-12-31T07:17:00.000
1
0
0
0
python,pandas,rolling-computation
62,550,444
2
false
0
0
Just use a rolling correlation with a very large window and min_periods=1.
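For example, with two pandas Series (note the parameter is spelled min_periods):

    import numpy as np
    import pandas as pd

    s1 = pd.Series(np.random.randn(100))
    s2 = pd.Series(np.random.randn(100))

    # the window spans the whole series; min_periods=1 lets early points use
    # whatever data is available so far, i.e. a cumulative correlation
    cum_corr = s1.rolling(window=len(s1), min_periods=1).corr(s2)

pandas also has s1.expanding().corr(s2), which expresses the same anchored-left window directly.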
1
1
1
Is there any built-in pandas method to find the cumulative correlation between two pandas series? What it should do is effectively fix the left side of the window in pandas.rolling_corr(data, window), so that the width of the window increases and eventually the window includes all data points.
pandas "cumulative" rolling_corr
0.099668
0
0
506
41,407,241
2016-12-31T09:53:00.000
0
0
0
0
python,machine-learning,tensorflow
45,947,287
2
false
0
0
Since the Python code of TF only sets up the graph, which is actually executed by the native implementations of all the ops, your variables need to be initialized in that underlying environment. This happens by executing two ops, for global and for local variable initialization: session.run([tf.global_variables_initializer(), tf.local_variables_initializer()]). On the original question - as far as I know, session.run() prunes the graph and executes only the subgraph needed to produce the requested fetches, so you only have to feed the placeholders that those fetches actually depend on.
1
0
1
For example, when we compute a variable c as result = sess.run(c), does TF only compute the inputs required for computing c or updates all the variables of the complete computational graph? Also, I don't seem to be able to do this: c = c*a*b as I am stuck with uninitialized variable error even after initializing c as tf.Variable(tf.constant(1)). Any suggestions?
Does TensorFlow execute entire computation graph with sess.run()?
0
0
0
1,412
41,409,731
2016-12-31T15:43:00.000
2
0
0
0
python,plugins,gimp,python-fu
41,410,753
1
true
0
1
Either you build a full GUI with PyGTK (or perhaps tkinter) or you find another way. Typically for this, if you stick to the auto-generated dialogs, you have the choice between: a somewhat clumsy dialog that asks for both parameters and ignores one or the other depending on the image format, or two menu entries for two different dialogs, one for PNG and one for JPG. On the other hand, I have always used compression level 9 in my PNGs (AFAIK the only benefit of other levels is CPU time, but this is moot on modern machines), so your dialog could ask only for the JPEG quality, which would make it less clumsy. However... JPEG quality isn't all there is to it, and there are actually many options (chroma sub-sampling being IMHO at least as important as quality), so to satisfy all needs you could end up with a rather complex dialog. So you could either: just save with the current user's default settings (gimp_file_save()); get these settings from some .ini file (they are less likely to change than other parameters of your script); or not save the image and let the user Save/Export to his/her liking (if this isn't a batch-processing script).
1
2
0
Situation: my GIMP python plug-in shows the user a drop-down box with two options: [".jpg", ".png"]. Question: how do I show a second input window with conditional content based on the first input? .jpg --> a "Quality" range slider [0 - 100]; .png --> a "Compression" range slider [0 - 9]. In different words: how do I trigger a (registered) plug-in WITH a user-input window from within the main function of a plug-in?
gimp python plug in: how to trigger another user input
1.2
0
0
404
41,413,303
2017-01-01T04:12:00.000
0
1
0
0
file,python-3.x,path
41,413,400
1
false
0
0
The first path, /data/data/org.qpython.qpy3, is where the actual QPython app is stored on your device. I don't believe you can access this path without having root access. The second path, /storage/sdcard0/qpython, is where QPython saves files by default. It uses this location because it can be easily accessed with normal user privileges.
1
0
0
I am learning the Python programming language. Currently I am experimenting with file I/O. I imported the sys module, and in the sys.path list I saw two kinds of paths: /data/data/org.qpython.qpy3... and /storage/sdcard0/qpython... The former path does not exist physically on my device (a tablet), although I can create/read files using this path through Python. I want to know about these paths. What are they called? What are they for? etc.
Directory path which does not physically exist on my device
0
0
0
43
41,414,283
2017-01-01T08:32:00.000
0
0
1
0
perforce,p4python
56,035,100
2
false
0
0
If you are getting this error while unshelving, then select the Overwrite option, or manually delete the file from Explorer and proceed with the unshelve.
2
0
0
I am syncing my workspace file to a previous revision by using the sync command as: p4_object.run("sync", "-f", "--parallel=0", "c:\Users\agrahari\Desktop\give\first\test_2.txt#2"). It is throwing the error: rename: failed to rename c:\Users\agrahari\Desktop\give\first\test_2.txt after 10 attempts: Cannot create a file when that file already exists. The file is there in the workspace, but with revision #3 synced. Please suggest what to do to get it synced with revision #2.
p4python: perforce: giving sync command throwing error-: rename: failed to rename after 10 attempts
0
0
0
2,128
41,414,283
2017-01-01T08:32:00.000
3
0
1
0
perforce,p4python
41,425,478
2
false
0
0
Got the solution. The issue was that I was not closing the file handle before issuing the sync command. Thanks.
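A sketch of the fix; f stands for whatever file object the script had left open (the question doesn't show it), and p4_object is the question's P4 handle:

    f.close()  # release the handle so Windows allows the rename
    p4_object.run("sync", "-f",
                  r"c:\Users\agrahari\Desktop\give\first\test_2.txt#2")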
2
0
0
I am syncing my workspace file to a previous revision by using the sync command as: p4_object.run("sync", "-f", "--parallel=0", "c:\Users\agrahari\Desktop\give\first\test_2.txt#2"). It is throwing the error: rename: failed to rename c:\Users\agrahari\Desktop\give\first\test_2.txt after 10 attempts: Cannot create a file when that file already exists. The file is there in the workspace, but with revision #3 synced. Please suggest what to do to get it synced with revision #2.
p4python: perforce: giving sync command throwing error-: rename: failed to rename after 10 attempts
0.291313
0
0
2,128
41,414,660
2017-01-01T10:04:00.000
3
0
0
0
python-3.x,cx-oracle
41,429,347
1
false
0
0
The reason there is a dependency is because cx_Oracle is a C extension, which means that it must be compiled every time the Python C API changes. That generally happens each time a minor version is released. As to when cx_Oracle will be released for Python 3.6 -- that is unknown but hopefully will be soon! In the meantime you can compile it for yourself and use it before any official release is made.
1
2
0
I know that Python 3.6 has only been available for a few days. When do you think cx_Oracle for Python 3.6 will become available? I'm not a Python expert. May I also ask why there is a dependency between the Python minor version and the Oracle library? Thanks a lot, and have a great new year. Juergen
cx_Oracle for Python 3.6
0.53705
0
0
3,321
41,416,652
2017-01-01T15:44:00.000
0
0
0
0
python,machine-learning,scikit-learn,cluster-analysis
41,417,067
1
false
0
0
You can use the scikit-learn AffinityPropagation or MeanShift implementations for clustering. Those algorithms will output a number of clusters and their centers. Using the Y seems to be a different question, because you can't plot a multi-dimensional point on a 2D plot unless you import some other libraries.
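A sketch with scikit-learn, assuming X is one of the sparse feature matrices from the question (densified first, which is fine at this size):

    from sklearn.cluster import AffinityPropagation

    ap = AffinityPropagation().fit(X.toarray())
    print(len(ap.cluster_centers_indices_))  # number of clusters found
    print(ap.labels_[:10])                   # cluster assignment per row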
1
0
1
I have two sparse scipy matrices, title and paragraph, whose dimensions are (284, 183) and (284, 4195) respectively. Each row of both matrices holds the features of one instance of my dataset. I wish to cluster these without a predefined number of clusters and then plot them. I also have an array Y of shape (284, 1) that relates to each row: one class is represented by 0, the other by 1. I would like to color the points using this. How can I do this using Python?
Cluster two features in Python
0
0
0
156
41,419,145
2017-01-01T21:22:00.000
1
0
1
0
python,python-2.7,dst
41,419,219
3
false
0
0
Most computers track time in terms of seconds, or milliseconds, since an "epoch," which is usually January 1, 1970 UTC. Setting the time zone to something other than UTC doesn't actually change the system clock. The system clock continues to run in UTC. Instead the computer simply remembers that you, the user, would like to see the time in some particular time zone, and converts the UTC system time to your preferred time zone whenever it's needed. So the reason the sleep function won't be affected by a daylight savings time change is that the computer calculates the wake-up time relative to the system clock, which is always UTC.
1
0
0
If it's 5 seconds before setting the time forward an hour, will time.sleep(10) sleep for 10 seconds, or five seconds? (after five seconds, the time is more than ten seconds from the start). If it's 5 seconds before setting the time backwards an hour, time.sleep(10) sleep for ten seconds, or for an hour and ten seconds? (it takes an extra hour for the current time to be more than ten seconds from the start time, at least by the clock) And for completeness, what does sleep do for leap seconds, like we just had? Are there any times that time.sleep would not be expected to do what is requested? I'm still on python 2.7.
What does time.sleep(10) in python do during daylight savings time?
0.066568
0
0
769
41,421,114
2017-01-02T03:44:00.000
5
0
1
0
python
41,421,202
1
true
0
0
What you're probably referring to are the S32 and U32 data types of Numpy arrays. They are specific to Numpy and are not built into Python. To answer your question, the difference between the two is that a dtype of S32 means the array contains byte strings, while a dtype of U32 means the array contains unicode strings; the number is the maximum item length.
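A quick way to see both dtypes (Python 3 semantics shown in the comments):

    import numpy as np

    a = np.array([b"spam", b"eggs"])  # byte strings -> dtype('S4')
    u = np.array(["spam", "eggs"])    # str (unicode) -> dtype('<U4')
    print(a.dtype, u.dtype)           # the number is the maximum item length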
1
0
0
In Python, what's the difference between the S and U datatypes? I can't find the documentation. Both are string types, right?
What's the difference between S32 VS U32?
1.2
0
0
6,881
41,422,606
2017-01-02T07:03:00.000
1
0
0
1
windows,python-3.x,python-3.5,mbcs
61,595,144
2
false
0
0
Just change the encoding to 'latin-1' (encoding='latin-1'). Using pure Python: open(..., encoding='latin-1'). Using Pandas: pd.read_csv(..., encoding='latin-1').
1
0
0
I am trying to create a duplicate file finder for Windows. My program works well in Linux. But it writes NUL characters to the log file in Windows. This is due to the MBCS default file system encoding of Windows, while the file system encoding in Linux is UTF-8. How can I convert MBCS to UTF-8 to avoid this error?
MBCS to UTF-8: How to encode in Python
0.099668
0
0
6,389
41,426,805
2017-01-02T12:25:00.000
0
0
0
1
python,file,jenkins,path
41,439,647
1
false
0
0
Maybe you can use the $WORKSPACE variable to build a full path for the file the fopen call is failing on.
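A sketch of the idea; Jenkins exposes the job's workspace directory as the WORKSPACE environment variable (the file name here is an example):

    import os

    # build an absolute path instead of relying on the process's working directory
    path = os.path.join(os.environ["WORKSPACE"], "input.txt")
    with open(path) as f:
        data = f.read()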
1
0
0
I'm trying to open a file from a python script which I am running from Jenkins. Both files (the one I am trying to open and the python script) are in the same location, but when I run the script I get the error fopen: No such file or directory. I run export PATH="/file path:$PATH" in Jenkins before running my script, but I am still getting the fopen error. I am able to run the script from a terminal.
Jenkins showing fopen: No such file or directory, but file exists
0
0
0
697
41,427,307
2017-01-02T12:58:00.000
1
0
1
0
python,visual-studio,visual-studio-2015,intellisense
41,427,308
1
true
0
0
It turns out that I had added new Search Paths to my project that were inside installed Python libraries. (In my case, the library in question was SymPy.) For whatever reason, Visual Studio's Intellisense chokes when you do this. Removing these paths and then closing and re-opening the solution fixed the issue.
1
0
0
For some reason, Visual Studio 2015's Python Intellisense completely broke in my project. Even something as simple as import sys; sys. doesn't pop up the members of sys. Sometimes it freezes or even crashes the IDE completely, making it run out of memory or otherwise behaving strangely. I even tried clearing the database and refreshing it, but nothing helped. What might be causing this?
Python Intellisense suddenly broke; refreshing doesn't fix it
1.2
0
0
76
41,427,500
2017-01-02T13:10:00.000
40
0
1
0
python,virtualenv
41,799,834
2
true
0
0
Typically the steps you always take are: git clone <repo>; cd <repo>; pip install virtualenv (if you don't already have virtualenv installed); virtualenv venv to create your new environment (called 'venv' here); source venv/bin/activate to enter the virtual environment; and pip install -r requirements.txt to install the requirements in the current environment.
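The same steps as a shell session, ending with the question's one-liner (no activation needed when you call the venv's own pip directly):

    git clone <repo>
    cd <repo>
    pip install virtualenv            # only if virtualenv isn't installed yet
    virtualenv venv                   # create the environment
    source venv/bin/activate          # enter it
    pip install -r requirements.txt   # install the requirements

    # or, as a single command without activating:
    virtualenv venv && venv/bin/pip install -r requirements.txt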
1
21
0
Creating a virtualenv will create a virtual python environment with preinstalled pip, setuptools and wheels. Is there a way to specify what packages to pre-install in that virtualenv apart from those 3 default ones? Either with CLI arguments, a file, or environment variables of some sort. I.e. is there something along the lines of virtualenv venv && venv/bin/pip install -r requirements.txt which can be run in one command?
Creating a virtualenv with preinstalled packages as in requirements.txt
1.2
0
0
26,082
41,428,357
2017-01-02T14:07:00.000
2
0
1
1
pyinstaller,cx-freeze,python-3.6
41,429,495
1
true
0
0
The bytecode format changed for Python 3.6 but I just pushed a change to cx_Freeze that adds support for it. You can compile it yourself or wait for the next release -- which should be sometime this week.
1
1
0
To utilize the inherent UTF-8 support for the Windows console, I wanted to freeze my script under Python 3.6, but I'm unable to find a freezing tool that supports it. Am I missing something, or have none of the freezing modules been updated for 3.6 yet? Otherwise I'll just keep a 3.5.2 frozen version and a 3.6 script version for computers with English consoles. Thanks.
Freezing Python 3.6
1.2
0
0
636
41,428,534
2017-01-02T14:19:00.000
0
0
1
0
python,nltk
41,428,604
2
false
0
0
I don't think it makes any difference, since the corpora are independent and you'll have to load each one separately to use it. You can download all of them if you wish. And of course, this assumes that you're not going to do a wildcard import of all of them.
1
0
0
Assuming server space is not a constraint, is it still advised to download selective corpora and not all ? I am aware, it would add to the time of certain operations .e.g creation of virtualenv. But will there be some performance difference of nltk if selective corpora are downloaded, or all are downloaded ?
Is there any disadvantage of downloading all corpora in nltk?
0
0
0
787
41,431,766
2017-01-02T18:35:00.000
1
0
1
0
python,multithreading,file-io
41,433,003
1
false
0
0
Files don't work that way. You can read a file or write to it, but you cannot easily delete the first line from a file: you'd have to go to the start of the file, overwrite it with the data from the second and further lines, and then truncate it at the end. A better alternative would be to keep a list of lines in a module. In that module, define two functions: one to add lines to the end of the list (using the append method), and a second to remove lines from the beginning of the list and return them (using the pop method with 0 as the argument). You should add a third function to write the list to a file (using the file's writelines method); if desired, you can call this function from both of the other functions so that the list is written to file whenever it changes. Last but not least, you should have a function to load the list of lines from a file. Since you want to use this from a multi-threaded program, this module should use a threading.Lock to make sure that only one thread at a time can modify the list.
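A minimal sketch of such a module (the file name and function names are made up):

    import threading

    _lock = threading.Lock()
    _lines = []
    _PATH = "queue.txt"  # backing file

    def _save():
        with open(_PATH, "w") as f:
            f.writelines(_lines)

    def load():
        global _lines
        with _lock:
            with open(_PATH) as f:
                _lines = f.readlines()

    def append(line):
        with _lock:
            _lines.append(line)
            _save()

    def pop():
        with _lock:
            line = _lines.pop(0)  # remove and return the oldest line
            _save()
            return line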
1
0
0
I have one thread writing a line to the end of a file every so often and I was hoping to have another thread read from the beginning of the file and then delete what it had read. I need it to be in a file so that when the program ends it can pick up where it left off. The problem is that I'm not sure how to delete the first line and then refresh the file.
Python one thread writing to end of file and another thread reading from beginning of file
0.197375
0
0
509
41,432,445
2017-01-02T19:40:00.000
1
0
1
0
django,python-2.7
41,433,294
1
false
1
0
Create a new virtualenv using Python 2.7: use the -p flag to point at the Python installation you want for that virtual environment, and then pip install django within that virtual environment.
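Concretely, using the interpreter path from the question:

    virtualenv -p ~/.opt/bin/python2.7 venv   # pin the env to the local 2.7.9
    source venv/bin/activate
    pip install django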
1
0
0
The Django app, running on localhost in a virtualenv, uses the default Python version 2.7.3 that is under /usr/bin/, but I installed Python 2.7.9 under ~/.opt/bin/python2.7. I updated $PATH, but I want the Django app to use the locally installed Python version by default. Please help me understand how to make that happen. Thank you.
Updating which python my django app uses
0.197375
0
0
96
41,436,068
2017-01-03T03:36:00.000
0
0
0
0
python,theano
41,482,177
2
false
0
0
THEANO_FLAGS='device=gpu1' python myscript.py is working for me. Are you sure you have no space character after THEANO_FLAGS, like this: THEANO_FLAGS ='device=gpu1' python myscript.py? That would raise the THEANO_FLAGS: Command not found error.
1
0
0
My settings of .theanorc file is device = gpu0, but I want to know if I can run one program with gpu0, and run another with gpu1, I tried THEANO_FLAGS='device=gpu1' python myscript.py but it raised THEANO_FLAGS: Command not found.
THEANO_FLAGS: Command not found
0
0
0
280
41,437,381
2017-01-03T06:05:00.000
0
0
1
0
python,apache-spark,pyspark,apache-spark-sql,pyspark-sql
41,437,546
2
false
0
0
When you do sqlContext.read.json this is translated behind the scenes to an expression which is evaluated by scala code. This means the json parsing would be done by the JVM.
1
0
0
sqlContext.read.json("...path.to.file...") I'm writing a Spark script in Python using pyspark. Does the JSON parsing happen in Python or on the JVM? If Python, does it use the C simplejson extension, or is it native Python? I'm doing a lot of JSON parsing so performance here matters.
Does PySpark JSON parsing happen in Python or JVM?
0
0
0
308
41,441,638
2017-01-03T10:41:00.000
3
0
0
0
python,tensorflow,theano,keras,keras-layer
41,454,359
1
false
0
0
It is much more than model reuse: the functional API allows you to easily define models where layers connect to more than just the previous and next layers. You can connect layers to any other layers as you wish, so siamese networks, densely connected networks, and the like become possible. The old Graph API allowed the same level of connectivity, but it was a PITA due to its use of layer node names to define connectivity. The Sequential model is just a sequential stack of layers, and new neural network architectures are moving away from that pattern.
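A minimal sketch of something Sequential cannot express - two inputs merging into one output (Keras 2-style names; the layer sizes are arbitrary):

    from keras.layers import Input, Dense, concatenate
    from keras.models import Model

    a = Input(shape=(32,))
    b = Input(shape=(64,))
    merged = concatenate([Dense(16)(a), Dense(16)(b)])  # two branches joined
    out = Dense(1, activation="sigmoid")(merged)

    model = Model(inputs=[a, b], outputs=out)  # a graph, not a linear stack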
1
3
0
What extra can be done using the Keras functional API that could not be done using Keras sequential models, apart from the fact that a simple model can be reused for time-based data using the "TimeDistributed" layer wrapper?
What is extra with Keras functional API?
0.53705
0
0
908
41,442,012
2017-01-03T11:02:00.000
0
0
1
0
python,gps,gis
41,442,131
1
false
0
0
Check out pyproj, geopandas, and rtree.
1
0
0
I have a series of GPS points which collectively form a polyline. Each of these GPS points has a time stamp and I can therefore compute things like journey time and average speed along the poly line. I now wish to map the resulting polyline onto a road network. However, for obvious reasons the GPS points don't line up with the actual infrastructure and I must attempt to match them across. Is there a python library for doing this?
Segment to polyline matching python or GIS
0
0
0
275
41,442,259
2017-01-03T11:16:00.000
2
0
0
0
python,python-2.7,wxpython
48,085,369
2
false
0
1
wx.DatePickerCtrl is not included with the current download of wxPython. Just add an import wx.adv and you will be fine.
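A minimal sketch of the fix (the frame setup is illustrative):

import wx
import wx.adv  # the adv submodule must be imported explicitly

app = wx.App(False)
frame = wx.Frame(None, title="Date picker demo")
picker = wx.adv.DatePickerCtrl(frame)
frame.Show()
app.MainLoop()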
1
4
0
TLDR first: when using "wx.adv.DatePickerCtrl(self)", I get "AttributeError: 'module' object has no attribute 'adv'". Longer story: I'm just learning wxPython and trying to write a date picker using DatePickerCtrl. I found an example with 'wx.DatePickerCtrl'; apparently it is only valid for version 2.8 (which I could not find anywhere). A quick search shows it has been replaced by wx.adv.DatePickerCtrl(self) in version 3. Now I get the above message (AttributeError: 'module' object has no attribute 'adv'). (System: Windows 10, Python 2.7.10 32-bit, wx 3.0.2.0 msw.) Can anyone help?
wxpython does not have 'adv'
0.197375
0
0
2,732
41,447,048
2017-01-03T15:38:00.000
0
0
0
0
python,optimization,scipy,integer,minimum
41,447,225
1
false
0
0
Speaking mathematically, that is actually a much harder problem, and the same algorithm will not be capable of solving it: this problem is NP-hard. Maybe check out pyglpk, and look into mixed-integer programming.
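If the bounds on each variable are small, a brute-force sweep over the integer grid is a serviceable fallback before reaching for a MIP solver (a minimal sketch; the objective and bounds are made up):

import itertools

def f(x1, x2, x3):
    # hypothetical objective
    return (x1 - 1) ** 2 + (x2 - 3) ** 2 + (x3 - 2) ** 2

# enumerate every integer design vector within the assumed bounds 0..4
candidates = itertools.product(range(5), repeat=3)
best = min(candidates, key=lambda x: f(*x))
print("f is minimum for x =", list(best))  # [1, 3, 2]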
1
4
1
I'd like to minimize some objective function f(x1,x2,x3) in Python. It's quite a simple function, but the problem is that the design vector x=[x1,x2,x3] contains integers only. So for example I'd like to get the result: "f is minimum for x=[1, 3, 2]" and not: "f is minimum for x=[1.12, 3.36, 2.24]" since this would not make any sense for my problem. Is there any way to rig scipy.minimize to solve this kind of problem? Or is there any other Python library capable of doing this?
Scipy.optimize.minimize using a design vector x that contains integers only
0
0
0
267
41,447,383
2017-01-03T15:56:00.000
2
0
1
0
python,pandas,number-formatting,separator
59,636,728
6
false
0
0
If you want "." as thousand separator and "," as decimal separator this will works: Data = pd.read_Excel(path) Data[my_numbers] = Data[my_numbers].map('{:,.2f}'.format).str.replace(",", "~").str.replace(".", ",").str.replace("~", ".") If you want three decimals instead of two you change "2f" --> "3f" Data[my_numbers] = Data[my_numbers].map('{:,.3f}'.format).str.replace(",", "~").str.replace(".", ",").str.replace("~", ".")
1
10
1
Assuming that I have a pandas dataframe and I want to add thousand separators to all the numbers (integer and float), what is an easy and quick way to do it?
Easy way to add thousand separator to numbers in Python pandas DataFrame
0.066568
0
0
20,270
41,451,632
2017-01-03T20:16:00.000
0
0
1
0
python,windows,pyinstaller
45,683,663
2
false
0
0
pyInstaller works on Mac. I use it for a project of mine. For Mac you have to have an Apple developer account to sign an application. 3rd party certificate authorities like Certum are no longer accepted. For Windows apparently they use some sort of reputation type system. So even after you sign an exe it has to be downloaded and run enough times before it stops throwing the error. As for Windows Store... you have to develop an app using the Universal Windows Platform API for them to allow it on the store, so I don't think a python script would qualify even if it was packaged as an exe.
1
2
0
I have a Python script which I ran pyInstaller on to create a portable Windows exe which runs on Windows 7, 8, and 10 devices. I signed the exe with a Certum Open Source code signature. The app works great but I am finding: Windows SmartScreen warns users that I am an unknown developer and makes it too scary for people to run my app. Norton quarantines my app. These are not good hoops for my users to have to jump through. I'm wondering what I can do to immediately address these. As a secondary goal I am hoping there might be a way to submit the script to the Windows Store. I am guessing I might need to have an installer for it instead of having it run as a portable exe? Do I need to create an appx? If so, what would the entry point be for a Python script frozen by pyInstaller? In case it isn't obvious, I'm not a Windows programmer, so I'm a bit lost.
Python script as a safe exe and maybe even a Windows Store App?
0
0
0
1,036
41,454,355
2017-01-04T00:17:00.000
0
0
0
0
python,mysql
41,454,481
2
false
0
0
You can use mysql.connector on the server. However, you will have to install it first. Do you have root (admin) access? If no, you might need help from the server admin.
1
0
0
I have been using the mysql.connector module with Python 2.7 and testing locally using XAMPP. Whenever I upload my script to the server, I am getting an import error for the mysql.connector module. I am assuming this is because, unlike my local machine, I have not installed the mysql.connector module on the server. My question is: can I somehow use the mysql.connector module on the server or is this something only for local development? I have looked into it, and apparently do not have SSH access for my server, only for the database. As well, if I cannot use the mysql.connector module, how do I connect to my MySQL database from my Python script on the server?
Can you use Python mysql.connector on actual Server?
0
1
0
95
41,459,860
2017-01-04T09:02:00.000
0
0
0
0
python,python-2.7,neural-network,pybrain
41,730,024
2
false
0
0
I am struggling with a similar problem. So far I am using the net._setParameters command to fix the weights after each training step, but there should be a better answer. It might help in the meantime; I am waiting for the better answer as well :-)
1
1
1
I am quite new to neural networks and trying to use pybrain to build and train a network. I am building my network manually with full connections between all layers (input, two hidden layers, output) and then set some weights to zero using _setParameters, as I don't want connections between some specific nodes. My problem is that the weights that are zero at the beginning are adapted in the same way as all other weights and are therefore no longer zero after training the network via backprop. How can I force the "zero weights" to stay zero through the whole process? Thanks a lot for your answers. Fiona
Python/Pybrain: How can I fix weights of a neural network during training?
0
0
0
562
41,460,501
2017-01-04T09:37:00.000
0
0
1
0
python,tkinter,python-multithreading
41,462,518
1
true
0
0
Make SETTINGS a module-level global variable. Since it is only updated by the user via the main thread, there won't be a thread-safety problem when it's modified. Then just READ (and only read) the variable anywhere else you need it.
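A minimal sketch of the pattern (the names are illustrative):

import threading
import time

# Written only by the main (GUI) thread; everyone else just reads it
SETTINGS = {"interval": 0.1, "units": "metric"}

def worker():
    # read-only access; single-key dict reads are atomic under the GIL
    for _ in range(3):
        print("worker sees interval =", SETTINGS["interval"])
        time.sleep(SETTINGS["interval"])

t = threading.Thread(target=worker)
t.daemon = True
t.start()
SETTINGS["interval"] = 0.2  # main thread updates; the worker simply sees it
t.join()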
1
1
0
I'm building a tkinter app which runs a daemon thread alongside its mainloop(), where most of the data shared between the threads goes through queues. I've created the other thread's class in another .py file, and I'm importing it in the main file. In the main file, I have a SETTINGS dict which the user updates using the GUI, and I need the other thread to be able to read it while not modifying it. I thought about RLock, but from my understanding it is usually used by both of the threads and might be a little confusing to understand in the future. I'm looking for something simple; it must be a Python builtin solution. EDIT: I'll add that the other thread will access the dict all the time, and shouldn't need to be aware of when the main thread updates it.
Python threads: How to represent data that a thread should read-only?
1.2
0
0
63
41,461,708
2017-01-04T10:35:00.000
0
0
0
0
python,mongodb,heroku,flask,python-rq
41,469,303
2
false
1
0
It turns out that the solution that worked for me is to save the data to Amazon S3 storage, and then pass the URI to the function in the background task.
2
5
0
I am running a Flask server which loads data into a MongoDB database. Since there is a large amount of data, and this takes a long time, I want to do this via a background job. I am using Redis as the message broker and Python-rq to implement the job queues. All the code runs on Heroku. As I understand, python-rq uses pickle to serialise the function to be executed, including the parameters, and adds this along with other values to a Redis hash value. Since the parameters contain the information to be saved to the database, they are quite large (~50MB), and when this is serialised and saved to Redis, not only does it take a noticeable amount of time but it also consumes a large amount of memory. Redis plans on Heroku cost $30 p/m for 100MB only. In fact I very often get OOM errors like: OOM command not allowed when used memory > 'maxmemory'. I have two questions: Is python-rq well suited to this task or would Celery's JSON serialisation be more appropriate? Is there a way to not serialise the parameter but rather pass a reference to it? Your thoughts on the best solution are much appreciated!
Large memory Python background jobs
0
0
0
1,222
41,461,708
2017-01-04T10:35:00.000
7
0
0
0
python,mongodb,heroku,flask,python-rq
41,469,731
2
true
1
0
Since you mentioned in your comment that your task input is a large list of key value pairs, I'm going to recommend the following: Load up your list of key/value pairs in a file. Upload the file to Amazon S3. Get the resulting file URL, and pass that into your RQ task. In your worker task, download the file. Parse the file line-by-line, inserting the documents into Mongo. Using the method above, you'll be able to: Quickly break up your tasks into manageable chunks. Upload these small, compressed files to S3 quickly (use gzip). Greatly reduce your redis usage by requiring much less data to be passed over the wires. Configure S3 to automatically delete your files after a certain amount of time (there are S3 settings for this: you can have it delete automatically after 1 day, for instance). Greatly reduce memory consumption on your worker by processing the file one line at-a-time. For use cases like what you're doing, this will be MUCH faster and require much less overhead than sending these items through your queueing system. Hope this helps!
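A rough sketch of steps 2-3 (assuming boto3 and python-rq; the bucket name, key, and worker function path are placeholders):

import boto3
from redis import Redis
from rq import Queue

def enqueue_import(local_path):
    # upload the (ideally gzipped) key/value file to S3
    s3 = boto3.client('s3')
    s3.upload_file(local_path, 'my-import-bucket', 'imports/batch.gz')

    # enqueue only the small S3 reference, not the ~50MB payload
    q = Queue(connection=Redis())
    q.enqueue('tasks.load_into_mongo', 's3://my-import-bucket/imports/batch.gz')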
2
5
0
I am running a Flask server which loads data into a MongoDB database. Since there is a large amount of data, and this takes a long time, I want to do this via a background job. I am using Redis as the message broker and Python-rq to implement the job queues. All the code runs on Heroku. As I understand, python-rq uses pickle to serialise the function to be executed, including the parameters, and adds this along with other values to a Redis hash value. Since the parameters contain the information to be saved to the database, they are quite large (~50MB), and when this is serialised and saved to Redis, not only does it take a noticeable amount of time but it also consumes a large amount of memory. Redis plans on Heroku cost $30 p/m for 100MB only. In fact I very often get OOM errors like: OOM command not allowed when used memory > 'maxmemory'. I have two questions: Is python-rq well suited to this task or would Celery's JSON serialisation be more appropriate? Is there a way to not serialise the parameter but rather pass a reference to it? Your thoughts on the best solution are much appreciated!
Large memory Python background jobs
1.2
0
0
1,222
41,465,836
2017-01-04T14:10:00.000
1
0
1
0
json,mongodb,python-2.7
41,470,959
2
false
0
0
The issue is that "_id" is actually an ObjectId and is not natively serializable to JSON. Replacing the _id with a string, as in mydocument['_id'] = '123', fixed the issue.
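A minimal sketch (the database and collection names are illustrative); note that pymongo also ships bson.json_util, which can serialize ObjectId natively if you prefer not to stringify it:

import json
from pymongo import MongoClient

doc = MongoClient().mydb.mycollection.find_one()
doc['_id'] = str(doc['_id'])  # ObjectId -> plain string

with open('text.txt', 'w') as f:
    json.dump(doc, f)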
1
5
0
I am using MongoDB 3.4 and Python 2.7. I have retrieved a document from the database and I can print it and the structure indicates it is a Python dictionary. I would like to write out the content of this document as a JSON file. When I create a simple dictionary like d = {"one": 1, "two": 2} I can then write it to a file using json.dump(d, open("text.txt", 'w')) However, if I replace d in the above code with the the document I retrieve from MongoDB I get the error ObjectId is not JSON serializable Suggestions?
Create JSON file from MongoDB document using Python
0.099668
1
0
5,356
41,466,768
2017-01-04T14:54:00.000
0
0
0
0
python,html,nginx,web-crawler
41,653,258
1
false
1
0
You can't crawl webpages if you don't know how to get to them. If I understood what you meant, you want to access pages that are accessible in a directory whose index page is not (because you get a 403). Before you give up, you can try the following: check if the main search engines link to the pages inside the directory that you seem to know about (because if you know you have access to those .html you probably know at least one of them). The page that includes that link may include other links to files inside that directory as well. For instance, in google, use the link: operator: link:www.abc.com/a/b/the_file_you_know_exists check if the website is indexed in the main search engines. For instance, in google, use the site: operator: site:www.abc.com/a/b/ check if the website is archived in archive.org: http://web.archive.org/web/*/www.abc.com/a/b/ check if you can find it in other web archives using memento: http://timetravel.mementoweb.org/reconstruct/*/www.abc.com/a/b/ try to find other possible filenames such as index1.html, index_old.html, index.html_old, contact.html and so on. You could create a long list of the possible filenames to try but this also depends on what you know about the website. This may give you possible pages from that website that still exist or existed in the past.
1
1
0
I have a web-page here that I need to crawl. It looks like this: www.abc.com/a/b/, and I know that under the /b directory, there are some files with .html extensions I need. I know that I have access to those .html files, but I have no access to www.abc.com/a/b/. So, without knowing the .html file name, how can I crawl those .html pages?
How do I use crawler if I know the target web-page and file extension but not knowing the file name?
0
0
0
50
41,469,446
2017-01-04T17:04:00.000
1
0
0
0
python,django,python-requests
41,469,658
1
false
1
0
I was trying to pass the company id as company_uuid. I changed it to just company_id, and it worked perfectly.
1
0
0
I'm getting the following error making a POST request to ServiceM8: {'Content-Type': 'text/html;charset=UTF-8', 'Content-Length': '343', 'Connection': 'keep-alive', 'Date': 'Wed, 04 Jan 2017 16:57:31 GMT', 'Server': 'Apache', 'X-Frame-Options': 'SAMEORIGIN', 'X-Cache': 'Error from cloudfront', 'Via': '1.1 49ccc390fa499ab821b632cf67d38720.cloudfront.net (CloudFront)', 'X-Amz-Cf-Id': 'DCkbBH5hfQ-ZyeyefPZJAyaVhKar_oD3n_VDZ8TYS97CyLpG4r5YGQ=='} I'm currently using: Django==1.10.4 requests==2.12.4
Error Making POST to ServiceM8
0.197375
0
0
41
41,471,776
2017-01-04T19:26:00.000
3
0
0
0
python,sqlite,task,scheduler,exit
41,575,621
1
false
0
0
Turns out it was my script breaking. This is the error code (oddly enough there's not much documentation) you get when your python program ends with code -1 (exits without finishing properly or has some unhandled exception). It was intermittent because I was checking a web page and sometimes that web server just didn't respond for any number of reason. Leaving this here for posterity. If you get this error code in task scheduler, write some logging and error handling into your script because it may be a weird problem you didn't think of.
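A minimal sketch of that kind of defensive wrapper (the file names are illustrative):

import logging
import sys

logging.basicConfig(filename='task.log', level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')

def main():
    pass  # the real work: fetch the web page, write to SQLite, ...

if __name__ == '__main__':
    try:
        main()
        logging.info('run completed')
    except Exception:
        logging.exception('run failed')  # records the full traceback
        sys.exit(1)  # deliberate, inspectable exit code for the scheduler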
1
2
0
Having some odd trouble scheduling a task for a python script. Specifically this script and the problem is intermittent, which made me hesitant to pose the question because I'm very confused. I have other scheduled scripts that run fine. This one is the only one modifying a SQLite database though. I call the script daily, I've done this several ways with the same result. I finally settled on Action "start a program", Program/script: "python" (it is in my path, but i've also directly called py.exe and pyw.exe, with the same result). Add arguments: "scriptname.py". Start in "location of script and database file" which the account I'm using in the scheduler has full read/write/execute access to. And I've instructed this to work whether or not the user is logged in. I use this same operation for several other scripts and they are fine, this one just doesn't work sometimes. It always runs, but every few days it exits with code 2147942401 instead of 0. On these days the database is not updated, so I suppose it had trouble writing? I'm not sure. It seems this error code in windows is associated with invalid function, but I can manually run the script and everything is fine. And half the days (not exactly half, seemingly randomly), it doesn't work. This never happened until about 3 weeks ago. Nothing changed that I'm aware of, everything has been running fine for months and then bam, exit code 2147942401. It did it several days in a row, and then no problems for a few days. Never a problem running task (or script) manually. It is set to run with highest privileges. Anyone seen anything like this?
Windows Task Scheduler, python script, code 2147942401
0.53705
1
0
7,724
41,471,887
2017-01-04T19:34:00.000
5
0
0
0
python,excel,matplotlib,python-docx
41,472,883
2
false
0
0
The general approach that's currently supported is to export the chart from matplotlib or wherever as an image, and then add the image to the Word document. While Word allows "MS Office-native" charts to be created and embedded, that functionality is not in python-docx yet.
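A minimal sketch of that export-then-embed round trip (assuming matplotlib and python-docx; the data is made up):

from io import BytesIO
import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt
from docx import Document
from docx.shared import Inches

# render a bar chart into an in-memory PNG
fig, ax = plt.subplots()
ax.bar(range(3), [3, 7, 5])
buf = BytesIO()
fig.savefig(buf, format='png')
buf.seek(0)

# embed the image in the Word document
doc = Document()
doc.add_picture(buf, width=Inches(5))
doc.save('report.docx')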
1
7
1
python beginner here with a simple question. Been using Python-Docx to generate some reports in word from Python data (generated from excel sheets). So far so good, but would like to add a couple of charts to the word document based on the data in question. I've looked at pandas and matplotlib and all seem like they would work great for what I need (just a few bar charts, nothing crazy). But can anyone tell me if it is possible to create the chart in python and have it output to the word document via docx?
How Can I Write Charts to Python DocX Document
0.462117
0
0
10,764
41,472,689
2017-01-04T20:29:00.000
1
0
1
1
python,python-2.7,python-3.x
41,472,744
1
false
0
0
You'll have to specify the Python 3 version of easy_install. The easiest way to do this is to give its full path on the command line. It should be in the executable directory of the Python 3 installation you did (i.e. the same directory as the Python 3 interpreter itself). You should not remove the system-installed Python 2 in an attempt to get easy_install to refer to Python 3, because the operating system relies on that version of Python being installed.
1
0
0
I have installed python3 on Mac and I am trying to install pip. While installing pip with command sudo easy_install pip it installs the pip for python 2.x which by default comes with Mac. Is there any way I can install pip for python3? Also, is it necessary to keep the older version of python installed as well?
Install pip on Mac for Python3 with Python2 already installed
0.197375
0
0
984
41,474,862
2017-01-04T23:18:00.000
0
0
1
0
python
41,474,905
1
false
0
0
The and operator returns its first falsy operand, or the last operand if all are truthy; or returns its first truthy operand, or the last operand if all are falsy. So in n = n and int(n), if n is falsy, n keeps its original value; if n is truthy, it is cast to an int.
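For example:

n = ''
n = n and int(n)   # '' is falsy, so int(n) is never evaluated
print(repr(n))     # ''

n = '42'
n = n and int(n)   # '42' is truthy, so the result is int('42')
print(repr(n))     # 42

n = 0
n = n or 7         # 0 is falsy, so `or` falls through to 7
print(repr(n))     # 7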
1
0
0
This looks like a short-circuit way of writing code, but I just can't understand it. Is there a specific way to read this kind of short circuit? e.g.: n = n and int(n) n = n or int(n)
How short circuit code works in python
0
0
0
56
41,476,490
2017-01-05T02:34:00.000
0
0
1
0
python,string
41,476,538
3
false
0
0
And if you want to use join only, you can do it like this: test = "test string".split() followed by "_".join(test). This will give you "test_string" as the output.
1
1
0
I want to transform the string 'one two three' into one_two_three. I've tried "_".join('one two three'), but that gives me o_n_e_ _t_w_o_ _t_h_r_e_e_... how do I insert the "_" only at spaces between words in a string?
How to join() words from a string?
0
0
0
10,301
41,476,663
2017-01-05T02:58:00.000
0
0
1
0
python,fuzzy-search,fuzzy-logic
41,477,598
1
true
0
0
To get you started, here is an answer which can provide matches on either the full name or the university - you could extend it to include fuzzy search using a library like fuzzywuzzy: For both lists, split each string into a [full name, university] list (if some of the strings don't contain the '|' character, you might need to wrap this in a try, except or an if statement): new_list = [item.split('|') for item in old_list] Run the following command to match on either element (assuming that one list is called list1 and the other list is called list2): matches = [val for val in list1 for item in list2 if val[0] == item[0] or val[1] == item[1]]
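To extend it with fuzzy matching as suggested, a minimal sketch assuming the fuzzywuzzy package (the records are made up):

from fuzzywuzzy import process

list2 = ['Robert Smith|MIT', 'Jane Doe|Harvard University']
list1 = ['Bob Smith|M.I.T.']

# for each list-1 record, find the closest list-2 record and its match %
for record in list1:
    best_match, score = process.extractOne(record, list2)
    print(record, '->', best_match, score)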
1
0
0
I've seen lots of Q&A on this topic, but none contain the type of output I'm looking for. Any words of wisdom on this would be very much appreciated! I have 2 lists... both lists contain 1 column, consisting of Full Name|University (i.e., name and university, concatenated, and separated by a pipe) There's not always an exact match, due to nicknames and university abbreviations. I want to compare each record in list 1 with each record in list 2, and find the closest match. I then want to produce an output file with 3 columns: Every item from list 1, The closest match from list 2, and the match %. Does anyone have sample code they could share? Thanks!
Python: Comparing 2 sets of data, yield best match and match %
1.2
0
0
815
41,482,733
2017-01-05T10:35:00.000
0
0
1
0
python,python-2.7,nltk
41,486,968
2
true
0
0
Neither modifies its logic or computation in any iterative loop. In NLTK, tokenization is rule-based by default, using regular expressions to split a sentence into tokens. POS tagging by default uses a trained model for English, and will therefore give the same POS tag per token for a given trained model. If that model is trained again, the output will change. Therefore the basic answer to your question is no.
2
0
1
Does Python's NLTK toolkit return different results for each iteration of: 1) tokenization 2) POS tagging? I am using NLTK to tag a large text file. The tokenized list of tuples has a different size every time. Why is this?
Does NLTK return different results on each run?
1.2
0
0
110
41,482,733
2017-01-05T10:35:00.000
0
0
1
0
python,python-2.7,nltk
41,487,255
2
false
0
0
Both the tagger and the tokenizer are deterministic. While it's possible that iterating over a Python dictionary would return results in a different order in each execution of the program, this will not affect tokenization -- and hence the number of tokens (tagged or not) should not vary. Something else is wrong with your code.
2
0
1
Does Python's NLTK toolkit return different results for each iteration of: 1) tokenization 2) POS tagging? I am using NLTK to tag a large text file. The tokenized list of tuples has a different size every time. Why is this?
Does NLTK return different results on each run?
0
0
0
110
41,485,130
2017-01-05T12:28:00.000
1
0
1
0
python,django,concurrency
41,549,703
2
true
1
0
As e4c5 mentioned, settings.py is conventionally pretty light on logic. The loading mechanism for settings is pretty obscure and, personally, I like to stay away from things that are difficult to understand and interact with :) You absolutely have to care about concurrency. How are you running your application? It's tricky, because in the dev environment you have a simple server that usually handles only a handful of requests at the same time (and a couple of years ago the dev server was single-threaded). If you're running your application using a forking server, how will you share data between processes? One process won't even see another process's changes to settings.py. I'm not even sure how it would look with a threading server, but it would probably at least require a source-code audit of your web server to understand the specifics of how requests are handled and how memory is shared. Using a DB is by far the easiest solution (you should be able to use an in-memory DB as an option too: memcache/redis/etc.). DBs provide concurrency support out of the box, are a lot easier to reason about, and provide primitives for concurrent access to data. And in the case of redis, which is single-threaded, you won't even have to worry about concurrent accesses to your shared IP addresses.
2
0
0
I am not sure whether I have to care about concurrency, but I didn't find any documentation about it. I have some data stored at my settings.py like ip addresses and each user can take one or give one back. So I have read and write operations and I want that only one user read the file at the same moment. How could I handle this? And yes, I want to store the data at the settings.py. I found also the module django-concurrency. But I couldn't find anything at the documentation.
Django: Concurrent access to settings.py
1.2
0
0
143
41,485,130
2017-01-05T12:28:00.000
1
0
1
0
python,django,concurrency
41,485,205
2
false
1
0
And yes, I want to store the data at the settings.py. No, you definitely don't want to do that. The settings.py file configures Django and any pluggable apps that you may use with it; it's not intended to be used as a place for dumping data. Data goes into a database. And don't forget that the settings.py file is usually read only once.
2
0
0
I am not sure whether I have to care about concurrency, but I didn't find any documentation about it. I have some data stored at my settings.py like ip addresses and each user can take one or give one back. So I have read and write operations and I want that only one user read the file at the same moment. How could I handle this? And yes, I want to store the data at the settings.py. I found also the module django-concurrency. But I couldn't find anything at the documentation.
Django: Concurrent access to settings.py
0.099668
0
0
143
41,485,251
2017-01-05T12:33:00.000
0
0
0
1
python,django,docker,containers,celery
41,668,121
1
false
1
0
You can shell into the running container and check things out. Is the celery process still running, etc... docker exec -ti my-container-name /bin/bash If you are using django, for example, you could go to your django directory and do manage.py shell and start poking around there. I have a similar setup where I run multiple web services using django/celery/celerybeat/nginx/... However, as a rule I run one process per container (kind of exception is django and gunicorn run in same container). I then share things by using --volumes-from. For example, the gunicorn app writes to a .sock file, and the container has its own nginx config; the nginx container does a --volumes-from the django container to get this info. That way, I can use a stock nginx container for all of my web services. Another handy thing for debugging is to log to stdout and use docker's log driver (splunk, logstash, etc.) for production, but have it log to the container when debugging. That way you can get a lot of information from 'docker logs' when you've got it under test. One of the great things about docker is you can take the exact code that is failing in production and run it under the microscope to debug it.
1
1
0
I have a micro-services architecture of, let's say, 9 services, each one running in its own container. The services use a mix of technologies, but mainly Django, Celery (with a Redis queue), a shared PostgreSQL database (in its own container), and some more specific services/libraries. The micro-services talk to each other through REST APIs. The problem is that, sometimes, at random, some containers' APIs stop responding and get stuck. When I issue a curl request on their interface I get a timeout. At that moment, all the other containers answer well. There are two containers that get stuck. What I noticed is that both of the blocking containers use: Django, django-rest-framework, Celery, django-celery, an embedded Redis as a Celery broker, and access to a PostgreSQL DB that sits in another container. I can't figure out how to troubleshoot the problem since no relevant information is visible in the service or Docker logs. The problem is that these APIs get stuck only at random moments. To make them work again, I need to stop the blocking container and start it again. I was wondering if it could be a Python GIL problem, but I don't know how to check this hypothesis... Any idea about how to troubleshoot this?
Troubleshooting API timeout from Django+Celery in Docker Container
0
0
0
472
41,485,507
2017-01-05T12:46:00.000
1
0
1
0
python,python-3.x,asynchronous,concurrency,python-asyncio
41,502,152
1
true
0
0
var1, var2 = loop.run_until_complete(asyncio.gather(task1, task2)) According to the docs, gather retains the order of the sequence it was passed
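Spelled out (a minimal sketch; the call coroutine is a stand-in for the real aiohttp GET):

import asyncio

async def call(url):
    await asyncio.sleep(0.1)  # stand-in for an aiohttp GET
    return 'response from %s' % url

loop = asyncio.get_event_loop()
a, b = loop.run_until_complete(
    asyncio.gather(call('http://url1'), call('http://url2')))
print(a, b)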
1
2
0
What I want to achieve is: tasks = [call(url) for url in urls] where call is an async method / coroutine in Python 3.5 that performs GET requests, let's say with aiohttp. So basically all calls to call are async. Now I can run asyncio.wait(tasks) and later access the results in the futures one by one. BUT, what I want is, let's assume there are only 2 urls, then: a, b = call(url1), call(url2) Something like how you do it in Koa by yielding an array. Any help on how to do this, if it can be done?
Set result of 2 or more Async HTTP calls into named variables
1.2
0
1
73
41,487,708
2017-01-05T14:38:00.000
0
0
0
0
python,apache-spark,cloudera
41,497,154
2
false
0
0
You are installing spark-python 1.6, which depends on Python 2.6. I think the current stable version is 2.x and the package for that is pyspark; try installing that. It might require Python 3.0, but that's easy enough to install. You'll probably need to reinstall the other Spark packages as well to make sure they are the right versions.
1
0
1
I have a problem installing spark-python on CentOS. When I installed it using yum install spark-python, I get the following error message. Error: Package: spark-python-1.6.0+cdh5.9.0+229-1.cdh5.9.0.p0.30.el5.noarch (cloudera-cdh5) Requires: python26 You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I already installed other spark packages (spark-master, spark-worker...) but it only occurred installing spark-python. Can anyone help me?
What is the matter when I installing spark-python on CentOS
0
0
0
141
41,489,105
2017-01-05T15:47:00.000
2
0
0
0
python,django
41,490,025
1
false
1
0
If all you wanted to do was create a static page (something simple that queries a database and displays results with no user input) then PHP would definitely be much easier. It's much, much, MUCH easier to deploy than web apps-- just drop your .php files in /var/www/html and you're set. Apache runs on anything. Once you're in the realm of needing user accounts, management, etc. or possibly even expansion of scope in the future then Django becomes a more likely candidate. I love Django but deployment of Python/Ruby apps can be a major pain (if not impossible) depending on many factors in your target environment. It looks deceptively easy to get something running when you're using the built-in webserver where everything just works, but in production you may find your host doesn't offer (or you can't compile) mod_wsgi, only has mod_python (incompatible with modern Django), won't run nginx, can't use gunicorn, etc. Every cheap web host offers PHP support out of the box. If you want support for apps written in Python/Ruby, sometimes you need to pay for additional services (if not running a VPS or EC2 instance). Basically, there are more factors involved than just the number of pages you intend to write. You really need to evaluate what you're buying into with either avenue.
1
1
0
I have a small app I would like to build as a project to learn more about web development. It's an app where people can register etc., and add information about themselves (database needed) and their location on a web map (using the leaflet library). The app will be a single-page application that the user navigates to via a link on the site that is already live today. I got a comment from someone I know not to use PHP. Since I am learning Python I was thinking maybe to use that for the server-side bits. Is it a good idea to use Django or another Python framework when it will only be used on a single page of the site? Is it even possible (very green on this)? Or should I just stick to PHP for a project this size? Any input would be appreciated.
Use django for one page on a website?
0.379949
0
0
2,061
41,498,803
2017-01-06T04:11:00.000
1
0
0
0
python,django,excel
41,498,821
2
false
1
0
I think a combination of Pandas and openpyxl will do the trick!
1
0
0
Looking for an Excel library for Django and Python with specific requirements. There looks to be a number of libraries for Django and Python that enable the user to upload an Excel document into the database. What I am wondering is if there is a library that allows you to create an Excel document and export with conditional formatting, live formulas, creating tabs, and VLOOKUPS? The company I work for produces Excel reports for our analysts to review that requires these types of things. Researching as we are exploring other solutions than using Access, which is it pretty easy to control Excel from.
Django/Python Library for importing and producing Excel documents?
0.099668
1
0
199
41,500,455
2017-01-06T06:49:00.000
0
0
0
1
python,linux,api,wget
41,500,665
2
false
1
0
This is tricky without knowing which options you are calling wget with, and with no log output, but since it seems to be a DNS issue I would explicitly pass --dns-servers=your.most.reliable.server to wget. If the problem persists, I would also pass --append-output=logfile and examine logfile for further clues.
2
1
0
I'm running a Python code solution (automation) in Linux. As part of the test I'm calling different APIs (REST) and connecting to my SQL db. I'm running the solution 24/7. The solution does: call an API with wget; every 1 min sample the db with a query, for 60 min max; call the API again with wget; every 1 min sample the db, for 10 mins max. This scenario runs 24/7. The problem is that after 1 hr / 2 hrs (inconsistently - it can happen after 45 mins, for instance) the solution exits with the error "Temporary failure in name resolution". It can happen even after 2 perfect cycles as I described above. After this failure I try to call with wget tens of times and it ends with the same error. After some time it recovers by itself. I want to mention that when it fails with wget on Linux, I'm able to call the API via Postman on Windows with no problem. The API calls are to our system (located in AWS) and I'm using the DNS of our ELB. What could be the problem behind this inconsistency? Thanks
Temporary failure in name resolution -wget in linux
0
0
1
5,012
41,500,455
2017-01-06T06:49:00.000
0
0
0
1
python,linux,api,wget
62,781,058
2
false
1
0
You can ignore the failure: wget http://host/download 2>/dev/null
2
1
0
I'm running a Python code solution (automation) in Linux. As part of the test I'm calling different APIs (REST) and connecting to my SQL db. I'm running the solution 24/7. The solution does: call an API with wget; every 1 min sample the db with a query, for 60 min max; call the API again with wget; every 1 min sample the db, for 10 mins max. This scenario runs 24/7. The problem is that after 1 hr / 2 hrs (inconsistently - it can happen after 45 mins, for instance) the solution exits with the error "Temporary failure in name resolution". It can happen even after 2 perfect cycles as I described above. After this failure I try to call with wget tens of times and it ends with the same error. After some time it recovers by itself. I want to mention that when it fails with wget on Linux, I'm able to call the API via Postman on Windows with no problem. The API calls are to our system (located in AWS) and I'm using the DNS of our ELB. What could be the problem behind this inconsistency? Thanks
Temporary failure in name resolution -wget in linux
0
0
1
5,012
41,506,392
2017-01-06T13:05:00.000
2
0
1
0
python-2.7,oop
41,512,408
1
true
0
0
Perhaps someone else has a better answer, but this is how I understand it. The metaphor of a blueprint for building a house works well here. Say you have a housing development with many different houses that look essentially the same with slight variations. Building a house requires that you do essentially the same thing each time, then adding customizations. Your class declarations are like blueprints, telling your Python program everything it needs to know about a house. However, your __init__ method provides the instructions for the absolute basic requirements for that object. Just like you can't have a house without a door, you can't have a Student object or a Pet object without a few basic properties like name, age. Your __init__ method will tell Python what it needs to do whenever you create a new Student or Pet, just like a blueprint will tell a general contractor that every house needs a door. The __init__ method also establishes the object's self variable. self allows you to be specific about variable assignment for a single copy of a class. Hope this helps!
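In code, a minimal sketch (Python 2.7 style, inheriting from object):

class Student(object):
    def __init__(self, name, age):
        # runs automatically whenever a Student is created,
        # guaranteeing every instance has these basics
        self.name = name
        self.age = age

s = Student('Ada', 21)
print(s.name)  # Ada
print(s.age)   # 21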
1
1
0
I just started learning Python from Learn Python The Hard Way by Zed A. Shaw. However, I am confused about when one should use the __init__ method. Is it mandatory to use it? What happens if I don't?
When to use and when not to use __init__ in Python 2.7
1.2
0
0
38
41,506,824
2017-01-06T13:30:00.000
0
0
1
0
python,tkinter,exe,python-3.5,pyinstaller
41,507,288
1
false
0
1
I've had some success with pyinstaller and a program using enaml (Qt backend) for the GUI. For pyinstaller it helps to use the --debug option or make a .spec file and set debug=True. Also, pyinstaller has the option to make a single folder with an exe in the folder instead of a single exe, this might be easier to debug. You can distribute your single-folder programme by making a single-exe installer with software like InnoSetup.
1
0
0
How do I convert the following files into one executable? A main.py file that imports five other python scripts Five python scripts that each have their own GUI A background image is used in the main.py file I am using Python 3.5.2. I have tried py2exe, cx_Freeze and pyinstaller but none seem to work, or I am doing something very wrong. Please could you help with clear steps. It seems I have to downgrade to Python 3.4 in order to convert successfully but I don't really want to downgrade. I am using tkinter for GUI and the Python math module for rounding-off numbers.
How to covert multiple Python 3.5.2 scripts and an image into one executable?
0
0
0
119
41,509,856
2017-01-06T16:25:00.000
1
0
0
0
python,amazon-web-services,aws-lambda
41,510,034
2
false
1
0
No, this breaks the lambda paradigm of having a fully built container ready to go. Also, anything you'd do with xvfb is probably going to be slow. As a general rule lambdas should execute in under a second, otherwise you should just have a server. I would recommend creating a docker container and making an auto-scaling group.
1
2
0
I would like to offload some code to AWS Lambda that grabs a part of a screenshot of a URL and stores that in S3. It uses chromium-browser which in turn needs to run in xvfb on Ubuntu. I believe I can just download the Linux 64-bit version of chromium-browser and zip that up with my app. I'm not sure if I can do that with xvfb. Currently I use apt-get install xvfb, but I don't think you can do this in AWS Lambda? Is there any way to use or install xvfb on AWS Lambda?
Can I use xvfb with AWS Lambda?
0.099668
0
1
2,074
41,510,454
2017-01-06T16:55:00.000
2
0
1
0
pip,ipython
66,058,100
2
false
0
0
The exclamation mark tells IPython to run the line as a system-level (shell) command instead of Python code. Example: !python '/content/test.py' runs the Python file named test.py from within a Colab notebook.
1
17
0
Just a quick example, typing pip list doesn't work but !pip list does. Is there some syntax regarding the exclamation point and using modules in the ipython shell?
Why does pip need an exclamation point to use in iPython?
0.197375
0
0
13,557
41,511,597
2017-01-06T18:03:00.000
2
0
0
0
python,bokeh
41,511,835
2
false
0
0
As of Bokeh 0.12.4 it is only possible to remove it, not change it, directly from the python library. This can be done by setting the property logo=None on a plot.toolbar.
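A minimal sketch:

from bokeh.plotting import figure, show

p = figure(title="logo removed")
p.line([1, 2, 3], [4, 6, 5])
p.toolbar.logo = None  # removes the Bokeh icon from the toolbar
show(p)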
1
2
1
Bokeh plots include a Bokeh favicon in the upper right of most plots. Is it possible to replace this icon with another icon? If so, how?
How to change Bokeh favicon to another image
0.197375
0
0
716
41,516,828
2017-01-07T01:20:00.000
1
0
0
0
python,sockets,wireless,ethernet
41,569,946
1
false
0
0
Yes you can. So you get a timeout when you try to connect to a wireless device. There are several steps you can take in order to troubleshoot this. Make sure your device has a program running that is listening to the port you want to connect to. Identify whether the device can answer ICMP packets in general and can be pinged in particular. Try to ping the device. If ping succeeds, it means that basic connectivity is established and the problem is somewhere higher in the OSI stack. - I can ping the device - great, it means that the problem is somewhere in the TCP or Application Layer of the TCP/IP stack. Make sure the computer, the device, and intermediate networking equipment allow TCP connections to the particular host and port. Then proceed to your application and the device software. Add some code to the question, post the stack trace you get, or ask another question on SO. - I can't ping the device - great. There's no connectivity between the devices and you need to identify the reason. I) Draw a network diagram. How many intermediate network devices are placed between the computer and the device? What are they, routers, switches? (Just in case, a home-grade wifi modem is a router.) Get an idea of how IP datagrams should travel across the net. II) You said that the device can be used to configure an IP network. At least for troubleshooting purposes I would ignore this option and rely on a static IP or your router's DHCP server. Using an existing DHCP server will ensure there are no IP misconfigurations. III) Review the routing tables of all the devices you have. Do they have an appropriate default gateway? Does the router know how to pass the packets to the device? You're probably in trouble if the computer and the device are in the same subnet but attached to different network interfaces. Split the network into two subnets if needed and set up static routes between them on the router. You can also use wireshark to see if the data you send leaves the computer or is dropped right there by some nasty firewall. There are a lot of caveats in getting a LAN working. You may want to ask questions on networking.stackexchange if these simple steps don't help you or if you have major trouble following them. Or just leave a comment here, I'd be happy to help.
1
0
0
My goal is to have remote control of a device on a WLAN. This device has software that enables me to configure this wireless network (IP, mask, gateway, dns). I can successfully connect this device, and my computer to a common network. Since both machines share the same network, I made the assumption that I would be able to open up a socket between them. Knowing the IP and port of the device that I am attempting to control remotely I used the following code, only to receive a timeout: import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(('192.168.xxx.xxx', XXXX)) (I am using python 2.7 on mac OS 10.11.6) The network that I am connected to is on a different subnet that the IP that I assigned to my device. I also tried this having an IP on the same subnet as my network. There could be a number of things keeping me from opening a socket. That's not really what I'm after. The heart of my question is whether or not I can use python's 'socket' module to connect to a device wirelessly.
can I use python's 'socket' module to connect to a wireless ethernet host?
0.197375
0
1
2,085
41,517,033
2017-01-07T01:58:00.000
0
0
0
0
python,image,pyglet
41,517,667
3
true
0
1
So, the only 'awful' solution I currently have is: vertically flip the image on the drive; load the image as a texture and flip it back with get_texture(); put it into an ImageGrid(); reverse() the ImageGrid sequence.
1
0
0
Hi there. Using pyglet.image.ImageGrid(), is there any way to start off the grid from the top left, instead of the bottom left?
Pyglet.image.ImageGrid() - indexing from top left
1.2
0
0
248
41,518,093
2017-01-07T05:21:00.000
1
0
1
0
python,ipython,jupyter-notebook,jupyter
54,119,040
2
false
0
0
Try installing nb_conda in your environment: from your command line, conda activate your environment, then conda install nb_conda. Make sure you also have ipykernel installed in the environment, then deactivate and reactivate the environment and try again.
1
7
0
I currently use a Mac. I recently created a new python virtual environment and installed jupyter. When I activate jupyter notebook within the virtual environment, it says it cannot find any python kernels. I have another virtual environment that also has jupyter installed and it works perfectly fine. Can anyone help? Also, I'm not sure where the Kernels are even located on my machine. Library/Jupyter only has a runtime folder.
Can't Find Jupyter Notebook Kernel
0.099668
0
0
6,341
41,518,351
2017-01-07T05:58:00.000
7
0
0
0
python,numpy
61,167,164
5
false
0
0
reshape() is able to change only the shape (i.e. the meta info), not the number of elements. If the array has five elements, we may use e.g. reshape(5, ), reshape(1, 5), or reshape(1, 5, 1), but not reshape(2, 3). reshape() in general doesn't modify the data itself, only the meta info about it; the .reshape() method (of ndarray) returns the reshaped array, keeping the original array untouched. resize() is able to change both the shape and the number of elements. So for an array with five elements we may use resize(5, 1), but also resize(2, 2) or resize(7, 9). The .resize() method (of ndarray) returns None, changing only the original array (so it is an in-place change).
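For example:

import numpy as np

a = np.arange(5)
b = a.reshape(1, 5)  # returns a reshaped array; a itself is untouched
print(b.shape)       # (1, 5)
print(a.shape)       # (5,)

c = np.arange(5)
c.resize(2, 3)       # in place; the element count grows, zero-padded
print(c)             # [[0 1 2] [3 4 0]]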
1
27
1
I have just started using NumPy. What is the difference between resize and reshape for arrays?
What is the difference between resize and reshape when using arrays in NumPy?
1
0
0
29,555
41,519,202
2017-01-07T07:56:00.000
-1
0
1
0
python,python-3.4
49,356,344
2
false
0
0
Press Ctrl+F6; this restarts the shell. Just like 'clear' in a terminal, it also clears all the variables you've assigned values to.
1
0
0
I used some commands such as clear, cls, and clc, but none of them gave me the desired result. Is there any command that can clear the screen of the IDLE shell?
how to clear the screen of the idle3(python3 shell)?
-0.099668
0
0
560
41,520,206
2017-01-07T10:05:00.000
2
0
1
0
python,python-3.x,python-imaging-library,pillow
41,520,265
1
false
0
1
jpg and png are just compression techniques for saving an image to a file. An image, as an object, is just an array of RGB values (or whatever colorspace/format is used by the library you read the file with) for all the pixels. So technically, you can use the image object as the common format for working with other tools. But you need to keep in mind the colorspace used by each library. For example, OpenCV considers an image object to be in BGR format, so you need to convert the image object to this format before you use it in OpenCV.
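One way to re-encode without touching disk is an in-memory buffer (a minimal sketch; the input filename is illustrative):

from io import BytesIO
from PIL import Image

img = Image.open('user_upload.jpg')

# re-encode as PNG entirely in memory
buf = BytesIO()
img.save(buf, format='PNG')
buf.seek(0)

png_img = Image.open(buf)  # a PNG-backed Image object; nothing written to disk
print(png_img.format)      # PNG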
1
1
0
I know that Pillow can convert an image from, say, jpg to png using the save method, but is there a way to convert the image to another format and just keep it as an Image object, without actually saving it as another file? I want to convert a user-supplied image to a common format to work with in the program, because certain tools I am using only support png.
How to convert an image with PILLOW temporarily?
0.379949
0
0
300
41,520,282
2017-01-07T10:15:00.000
2
0
1
0
python,anaconda,jupyter
55,470,963
7
false
0
0
There are multiple options to fix this; I am still investigating the root cause. However, you can try the solutions given below. If the Jupyter notebook version is 5.1.0 or above, you can uninstall it using conda uninstall notebook (this also removes Jupyter-related packages) and then install notebook 5.0.0 from the Anaconda command prompt with conda install notebook=5.0.0. This will let you launch the Anaconda Navigator from the base environment itself. Second option: create another environment in conda with conda env create -f {name of yml file}.yml. After creating it, open the Anaconda Navigator UI, switch to the newly created environment, and launch Jupyter (this works even with the latest notebook version 5.3.7); it will work. I am still investigating why the latest version is not opening with the base environment; however, you can use solution 1 or 2 based on your preference.
3
13
0
I just installed Anaconda, in my Surface Pro 3, with Windows 10, using the provided installer for 64-bit. When I try to launch "jupyter notebook" I always get the following message: Microsoft Windows [Version 10.0.14393] (c) 2016 Microsoft Corporation. All rights reserved. C:\Users\Carlos>jupyter notebook Traceback (most recent call last): File "C:\Program Files\Anaconda3\Scripts\jupyter-notebook-script.py", line 3, in <module> import notebook.notebookapp File "C:\Program Files\Anaconda3\lib\site-packages\notebook\notebookapp.py", line 32, in <module> from zmq.eventloop import ioloop File "C:\Program Files\Anaconda3\lib\site-packages\zmq\__init__.py", line 34, in <module> from zmq import backend File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\__init__.py", line 40, in <module> reraise(*exc_info) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\utils\sixcerpt.py", line 34, in reraise raise value File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\__init__.py", line 27, in <module> _ns = select_backend(first) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\select.py", line 26, in select_backend mod = __import__(name, fromlist=public_api) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\cython\__init__.py", line 6, in <module> from . import (constants, error, message, context, ImportError: DLL load failed: The specified module could not be found. I tried to uninstall/install again several times, I tried to install it just for me or for all the users in the computer, I tried to update anaconda first...with no success. Any clue? Thanks!
Can't open Jupyter notebook with Anaconda
0.057081
0
0
77,056
41,520,282
2017-01-07T10:15:00.000
19
0
1
0
python,anaconda,jupyter
41,521,017
7
true
0
0
It seems to be a problem with the default installation of Anaconda. So, I removed the pyzmq package, which seems to be the problematic one. This is what I have done: conda uninstall pyzmq (This also removes jupyter related packages!) conda install pyzmq (to reinstall it) conda install jupyter (to reinstall jupyter related packages) Now I can open Jupyter Notebook!
3
13
0
I just installed Anaconda, in my Surface Pro 3, with Windows 10, using the provided installer for 64-bit. When I try to launch "jupyter notebook" I always get the following message: Microsoft Windows [Version 10.0.14393] (c) 2016 Microsoft Corporation. All rights reserved. C:\Users\Carlos>jupyter notebook Traceback (most recent call last): File "C:\Program Files\Anaconda3\Scripts\jupyter-notebook-script.py", line 3, in <module> import notebook.notebookapp File "C:\Program Files\Anaconda3\lib\site-packages\notebook\notebookapp.py", line 32, in <module> from zmq.eventloop import ioloop File "C:\Program Files\Anaconda3\lib\site-packages\zmq\__init__.py", line 34, in <module> from zmq import backend File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\__init__.py", line 40, in <module> reraise(*exc_info) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\utils\sixcerpt.py", line 34, in reraise raise value File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\__init__.py", line 27, in <module> _ns = select_backend(first) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\select.py", line 26, in select_backend mod = __import__(name, fromlist=public_api) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\cython\__init__.py", line 6, in <module> from . import (constants, error, message, context, ImportError: DLL load failed: The specified module could not be found. I tried to uninstall/install again several times, I tried to install it just for me or for all the users in the computer, I tried to update anaconda first...with no success. Any clue? Thanks!
Can't open Jupyter notebook with Anaconda
1.2
0
0
77,056
41,520,282
2017-01-07T10:15:00.000
1
0
1
0
python,anaconda,jupyter
67,532,109
7
false
0
0
I just wasn't able to start Jupyter notebook directly after the installation. Using a new terminal window was the solution for me.
3
13
0
I just installed Anaconda, in my Surface Pro 3, with Windows 10, using the provided installer for 64-bit. When I try to launch "jupyter notebook" I always get the following message: Microsoft Windows [Version 10.0.14393] (c) 2016 Microsoft Corporation. All rights reserved. C:\Users\Carlos>jupyter notebook Traceback (most recent call last): File "C:\Program Files\Anaconda3\Scripts\jupyter-notebook-script.py", line 3, in <module> import notebook.notebookapp File "C:\Program Files\Anaconda3\lib\site-packages\notebook\notebookapp.py", line 32, in <module> from zmq.eventloop import ioloop File "C:\Program Files\Anaconda3\lib\site-packages\zmq\__init__.py", line 34, in <module> from zmq import backend File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\__init__.py", line 40, in <module> reraise(*exc_info) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\utils\sixcerpt.py", line 34, in reraise raise value File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\__init__.py", line 27, in <module> _ns = select_backend(first) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\select.py", line 26, in select_backend mod = __import__(name, fromlist=public_api) File "C:\Program Files\Anaconda3\lib\site-packages\zmq\backend\cython\__init__.py", line 6, in <module> from . import (constants, error, message, context, ImportError: DLL load failed: The specified module could not be found. I tried to uninstall/install again several times, I tried to install it just for me or for all the users in the computer, I tried to update anaconda first...with no success. Any clue? Thanks!
Can't open Jupyter notebook with Anaconda
0.028564
0
0
77,056
41,521,392
2017-01-07T12:22:00.000
5
0
1
0
python,flask
41,521,516
2
true
1
0
Fixed by running python3 -m flask.
1
3
0
I have just installed a fresh copy of Ubuntu 16.04 LTS, and installed flask using pip3 install flask. When I run pip3 list Flask 0.12 appears in the list. However, when I attempt to run flask, I get the error flask: command not found. I have also installed using pip and not pip3 but to no avail. Any suggestions?
flask: command not found - flask 0.12
1.2
0
0
9,753
41,522,781
2017-01-07T14:58:00.000
1
0
0
0
android,python,linux,windows,kivy
41,527,028
2
false
0
1
Currently, no; for now, Linux is the only option we have.
1
1
0
How can I make an APK file from Kivy and Python? I know it's possible to use buildozer and python-for-android, but that's only possible on Linux. So, is there any way to do it on Windows? I use Python 3.4.4 and Kivy 1.9.1.
How to make APK standalone from Kivy and python on WINDOWS?
0.099668
0
0
371
41,524,320
2017-01-07T17:25:00.000
0
0
1
1
python,linux,virtualenv,virtualenvwrapper
57,632,537
6
false
0
0
I'm currently having the same problem. The virtualenv was created in Windows, and now I'm trying to run it from WSL. In the virtualenv I renamed python.exe to python3.exe (as I only have the python3 command in WSL). In $PATH my virtualenv folder is first, and there is no alias for python. which python3 still returns /usr/bin/python3, where there is a symlink python3 -> python3.6. I suppose it doesn't matter for order resolution.
2
30
0
I had a problem where python was not finding modules installed by pip while in the virtualenv. I have narrowed it down, and found that when I call python when my virtualenv in activated, it still reaches out to /usr/bin/python instead of /home/liam/dev/.virtualenvs/noots/bin/python. When I use which python in the virtualenv I get: /home/liam/dev/.virtualenvs/noots/bin/python When I look up my $PATH variable in the virtualenv I get: bash: /home/liam/dev/.virtualenvs/noots/bin:/home/liam/bin:/home/liam/.local/bin:/home/liam/bin:/home/liam/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory and yet when I actually run python it goes to /usr/bin/python To make things more confusing to me, if I run python3.5 it grabs python3.5 from the correct directory (i.e. /home/liam/dev/.virtualenvs/noots/bin/python3.5) I have not touched /home/liam/dev/.virtualenvs/noots/bin/ in anyway. python and python3.5 are still both linked to python3 in that directory. Traversing to /home/liam/dev/.virtualenvs/noots/bin/ and running ./python, ./python3 or ./python3.5 all work normally. I am using virtualenvwrapper if that makes a difference, however the problem seemed to occur recently, long after install virtualenv and virtualenvwrapper
Virtualenv uses wrong python, even though it is first in $PATH
0
0
0
18,833
41,524,320
2017-01-07T17:25:00.000
1
0
1
1
python,linux,virtualenv,virtualenvwrapper
54,101,050
6
false
0
0
On Cygwin, I still had a problem even after I created a symlink to point /usr/bin/python to F:\Python27\python.exe. Here, after source env/Scripts/activate, which python was still /usr/bin/python. After a long time, I figured out a solution: instead of using virtualenv env, you have to use virtualenv -p F:\Python27\python.exe env, even though you have created a symlink.
2
30
0
I had a problem where python was not finding modules installed by pip while in the virtualenv. I have narrowed it down, and found that when I call python when my virtualenv in activated, it still reaches out to /usr/bin/python instead of /home/liam/dev/.virtualenvs/noots/bin/python. When I use which python in the virtualenv I get: /home/liam/dev/.virtualenvs/noots/bin/python When I look up my $PATH variable in the virtualenv I get: bash: /home/liam/dev/.virtualenvs/noots/bin:/home/liam/bin:/home/liam/.local/bin:/home/liam/bin:/home/liam/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory and yet when I actually run python it goes to /usr/bin/python To make things more confusing to me, if I run python3.5 it grabs python3.5 from the correct directory (i.e. /home/liam/dev/.virtualenvs/noots/bin/python3.5) I have not touched /home/liam/dev/.virtualenvs/noots/bin/ in anyway. python and python3.5 are still both linked to python3 in that directory. Traversing to /home/liam/dev/.virtualenvs/noots/bin/ and running ./python, ./python3 or ./python3.5 all work normally. I am using virtualenvwrapper if that makes a difference, however the problem seemed to occur recently, long after install virtualenv and virtualenvwrapper
Virtualenv uses wrong python, even though it is first in $PATH
0.033321
0
0
18,833