Q_Id | CreationDate | Users Score | Other | Python Basics and Environment | System Administration and DevOps | Tags | A_Id | AnswerCount | is_accepted | Web Development | GUI and Desktop Applications | Answer | Available Count | Q_Score | Data Science and Machine Learning | Question | Title | Score | Database and SQL | Networking and APIs | ViewCount
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
34,835,172 | 2016-01-17T04:40:00.000 | 2 | 0 | 1 | 0 | python,oop,simulation | 34,835,543 | 2 | true | 0 | 0 | If you're running a simulation, it is certainly reasonable design to have a single "simulation engine", with various components. As long as you don't implement these as application-wide singletons, you will be fine. This is actually a great example of what the advice to avoid singletons is actually all about! Not having these as singletons will allow, for example, running several simulations at once within the same process.
One of the common designs for a system such as yours is an event-based design. With such a design, you'll have a single event manager component for the simulation. It will support registering functions to be called given certain conditions, e.g. a given amount of simulation time has passed. You can then register your update_age() events to be fired off at intervals for each of the Actors in your simulation.
If you go this route, remember that you will need to be able to remove registered event handlers for Actors that are no longer relevant, e.g. if they die in the simulation. This can be done by creating a unique ID for each registered event, which can be used to remove it later. | 1 | 2 | 0 | I'm making a program that simulates governments and families in the medieval ages. People (represented by objects of the class Actor) are born, grow old, have kids, and die.
This means I need to track quite a few objects, and figure out some way to, e.g., call update_age() for every tracked person every year/month/week.
This brings up several problems. I need to find some way to iterate over the set of all tracked Actors. I also need to be able to dynamically add to that set, to account for births.
My first idea was to make an object Timekeeper with a method that calls update_age() for every object in the set of tracked objects. Then, in the main program loop, I would call the Timekeeper's method. However, this makes Timekeeper a singleton, a concept which is not always a good idea. Since I am only a novice programmer, I'd like to learn good design patterns now, rather than learn them wrong.
It still leaves me with the problem of how to get a set/list/dictionary of all the tracked people to update. | How to track and update a great many values efficiently? | 1.2 | 0 | 0 | 57 |
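A minimal sketch of the event-manager design the accepted answer describes. The class name, the tick-based clock, and the interval scheme are illustrative assumptions, not part of the original answer:

```python
import itertools

class EventManager:
    """Per-simulation event manager: register/remove handlers by unique ID."""
    def __init__(self):
        self._ids = itertools.count()
        self._handlers = {}                      # id -> (interval, callback)

    def register(self, interval, callback):
        handler_id = next(self._ids)
        self._handlers[handler_id] = (interval, callback)
        return handler_id                        # keep this to deregister later

    def remove(self, handler_id):                # e.g. when an Actor dies
        self._handlers.pop(handler_id, None)

    def tick(self, t):
        for interval, callback in list(self._handlers.values()):
            if t % interval == 0:
                callback(t)

# One manager per simulation, so several simulations can coexist in a process.
sim = EventManager()
event_id = sim.register(12, lambda t: print("update_age at month", t))
for month in range(1, 25):
    sim.tick(month)
sim.remove(event_id)                             # actor died; stop its updates
```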
34,836,049 | 2016-01-17T07:12:00.000 | 1 | 0 | 0 | 0 | python,django,python-3.x,django-forms,pythonanywhere | 34,837,989 | 2 | true | 1 | 0 | As it says in my comment above, it turns out that the problem with the database resulted from running an upgrade of Django from 1.8 to 1.9. I had forgotten about this. After rolling my website back to Django 1.8, the database migrations ran correctly.
The reason I could not access the website turned out to be that I had to edit the wsgi.py file, but I was editing the wrong version. The nginx localhost web server I was using keeps it in a different location than PythonAnywhere's implementation does. I uploaded the file from my localhost copy and edited it according to the instructions on PythonAnywhere's help system without realizing it was not being read by PythonAnywhere's server. What I really needed to do was edit the correct file by accessing it through the Web tab on their control panel. Once I edited this file, the website front end began to work as expected. | 1 | 1 | 0 | I have been working on a localhost copy of my Django website for a little while now, but finally decided it was time to upload it to PythonAnywhere. The site works perfectly on my localhost, but I am getting strange errors when I do the initial migrations for the new site. For example, I get this:
mysql.connector.errors.DatabaseError: 1264: Out of range value for column 'applied' at row 1
'applied' is not a field in my model, so this error has to be generated by Django making tables for its own use. I have just checked in the MySQL manager for my localhost and the field 'applied' appears to be from the table django_migrations.
Why is Django mishandling setting up tables for its own use? I have dropped and remade the database a number of times, but the errors persist. If anyone has any idea what would cause this I would appreciate your advice very much.
My website front end is still showing the Hello World page and the Admin link comes up with a page does not exist error. At this stage I am going to assume this is related to the database errors.
EDIT: Additional information about why I cannot access the front-end of the site:
It turns out that when I import a pre-built site into PythonAnywhere, I have to edit my wsgi.py file to point to the application. The trouble now is that I don't know exactly what to put there. When I follow the standard instructions in the PythonAnywhere help files, nothing seems to change. Their website also seems to be very short on detailed error messages to help sort it out. Is there perhaps a way to turn off their standard hello world placeholder pages and see server error messages instead? | Strange error during initial database migration of a Django site | 1.2 | 1 | 0 | 463
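A sketch of the kind of wsgi.py the asker ended up editing. On PythonAnywhere the live copy sits under /var/www/ and is edited through the Web tab, as the answer notes; the username and project paths below are placeholders, not taken from the original post:

```python
# /var/www/yourusername_pythonanywhere_com_wsgi.py (placeholder path)
import os
import sys

sys.path.insert(0, '/home/yourusername/yourproject')          # project root
os.environ['DJANGO_SETTINGS_MODULE'] = 'yourproject.settings'

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```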
34,841,822 | 2016-01-17T18:11:00.000 | 0 | 1 | 0 | 0 | python,python-2.7,tweepy | 47,680,085 | 2 | false | 0 | 0 | You can add a systemd .service file (see the unit-file sketch after this answer), which can have the added benefits of:
logging (compressed logs at a central place, or over network to a log server)
disallowing access to /tmp and /home-directories
restarting the service if it fails
starting the service at boot
setting capabilities (ref setcap/getcap), disallowing file access if the process only needs network access, for instance | 1 | 2 | 0 | I have coded a Python script for Twitter automation using Tweepy. When I run it on my own Linux machine as python file.py, the file runs successfully, and it keeps running because I have specified repeated tasks inside the script and I don't want to stop it either. But as it is on my local machine, the script might get stopped when my internet connection is off or at night. So I can't keep the script running all day on my PC.
So is there any way, website, or method where I could deploy my script and make it execute forever? I have heard about cron jobs in cPanel, which can help with repeated tasks, but in my case I want to keep my script running on the machine until I close it myself.
Are there any such solutions? Most Twitter bots I see run forever, meaning their script is being executed somewhere 24x7. This is what I want to know: how is that possible? | Is it Possible to Run a Python Code Forever? | 0 | 0 | 1 | 1,461
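A sketch of the systemd unit the answer above lists the benefits of. The file name, paths, and user are assumptions; adjust them for the actual script:

```ini
# /etc/systemd/system/twitterbot.service (hypothetical name)
[Unit]
Description=Tweepy Twitter bot
After=network-online.target

[Service]
ExecStart=/usr/bin/python /home/bot/file.py
Restart=on-failure        # restart the service if it fails
User=bot
PrivateTmp=yes            # disallow access to a shared /tmp
ProtectHome=yes           # disallow access to /home directories

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now twitterbot; it then starts at boot and keeps running after you log out.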
34,845,704 | 2016-01-18T00:51:00.000 | 0 | 0 | 1 | 1 | python,pandas,module | 34,845,928 | 1 | false | 0 | 0 | Maybe you are using different Python versions in IDLE and the command line. If this is the case, you should install pandas for the Python version that you are using in IDLE. | 1 | 0 | 0 | This is a beginner question. I am using "import pandas as pd" in IDLE,
but got the following error message: "ImportError: No module named 'pandas'".
I don't know how to install pandas for IDLE. I ran the same code in the Mac terminal and it worked. Not sure why it's not working in IDLE.
Thanks for the help! | import pandas using IDLE error | 0 | 0 | 0 | 716 |
34,846,316 | 2016-01-18T02:19:00.000 | 0 | 0 | 1 | 0 | python,shell,interpreter | 34,846,378 | 2 | false | 0 | 0 | python launches the interpreter. You can easily test scripts in it, and afterwards create a *.py file that you can run from cron, for example.
When you type python and then import Django, it does the following:
Opens the Python interpreter
Imports the Django library (so you can use it)
If an error is raised, it seems Django wasn't installed on the computer. | 2 | 1 | 0 | I have a very basic question: if we want to run a script called script.py, we go to the shell and type "python script.py". However, if we want to check, for example, whether Django is installed, we first go into the Python interpreter by typing "python" in the shell, and once we get the >>> prompt we type import Django. What is the conceptual difference? For example, in the second case, why does directly running "python import Django" in the shell not work? | When to use python interpreter vs shell | 0 | 0 | 0 | 1,068
34,846,316 | 2016-01-18T02:19:00.000 | 2 | 0 | 1 | 0 | python,shell,interpreter | 34,846,334 | 2 | false | 0 | 0 | python import Django tries to run a Python script named import with an argument Django.
python -c 'import Django' would attempt to execute the Python statement import Django as if you had typed it from the Python interpreter directly. | 2 | 1 | 0 | I have a very basic question: if we want to run a script called script.py, we go to the shell and type "python script.py". However, if we want to check, for example, whether Django is installed, we first go into the Python interpreter by typing "python" in the shell, and once we get the >>> prompt we type import Django. What is the conceptual difference? For example, in the second case, why does directly running "python import Django" in the shell not work? | When to use python interpreter vs shell | 0.197375 | 0 | 0 | 1,068
34,856,882 | 2016-01-18T14:17:00.000 | 1 | 0 | 0 | 0 | python,algorithm,maps,shapes,polygons | 34,888,265 | 1 | true | 0 | 0 | The simplest approach to this problem would be to cluster the polygons by nearest neighbours. This step is optional and only used to make the search for intersecting polygons more efficient. Instead the clustering can as well be omitted, which would require an exhaustive search for intersecting polygons.
In the next step you can replace two intersecting polygons A and B by three polygons as follows: a polygon that consists of the area of A without the intersection-area with the weight of A, an equivalent polygon for B, and a third polygon that covers the intersection-area of A and B with the added weights of A and B as weight. Replace A and B by the three generated polygons and update the cluster. Repeat this step until no intersecting polygons can be found, and you're done.
I need to find the area on the map where this weight is the highest. This means that where polygons overlap the weight will be the sum of the two polygons for the intersection area. I would like to make the calculation as efficient as possible. Any help would be greatly appreciated. | Finding overlapping weighted polygons 'highest' area | 1.2 | 0 | 0 | 509 |
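A sketch of the splitting step from the answer, assuming the shapely library is available and each polygon is paired with its weight (the function name is invented):

```python
from shapely.geometry import Polygon

def split_pair(poly_a, w_a, poly_b, w_b):
    """Replace two intersecting weighted polygons with three, as described."""
    inter = poly_a.intersection(poly_b)
    if inter.is_empty:
        return [(poly_a, w_a), (poly_b, w_b)]   # nothing to split
    return [
        (poly_a.difference(poly_b), w_a),       # A without the overlap
        (poly_b.difference(poly_a), w_b),       # B without the overlap
        (inter, w_a + w_b),                     # overlap carries summed weight
    ]

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])
pieces = split_pair(a, 1.0, b, 2.0)
best = max(pieces, key=lambda pw: pw[1])        # region with the highest weight
```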
34,857,074 | 2016-01-18T14:25:00.000 | 0 | 0 | 0 | 0 | python,pyspark,spark-streaming,rdd | 34,874,093 | 1 | false | 0 | 0 | So what I did was to define a function that checks whether I have seen that name in the past, and then use .filter(myfunc) to only work with the names I want...
The problem now is that in each new streaming window the function is applied from the beginning, so if I have seen the name John seven times in the first window, I keep it only once; but then if I see the name John five times in the second window, I keep it once again...
I want to keep the name John only once for the whole streaming application...
Any thoughts on that? | 1 | 1 | 1 | I am receiving data from Kafka into a Spark Streaming application. It comes in the format of transformed DStreams. I then keep only the features I want.
features=data.map(featurize)
which gives me the "name","age","whatever".
I then want to keep only the name of all the data
features=data.map(featurize).map(lambda Names: Names["name"])
Now, when I print this command, I get all the names coming from the streaming application, but I want to work on each one separately.
More specifically, I want to check each name, and if I have already come across it in the past I want to apply a function to it. Otherwise I will just continue with my application. So I want each name to be a string, so that I can pass it into my function that checks whether one string has been seen in the past.
I know that foreach will give me each RDD, but I still want to work on each name of the RDDs separately.
Is there any way in pyspark to do so? | work on distinct elements of RDD-pyspark | 0 | 0 | 0 | 184 |
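One way to keep "first sighting" state across windows is Spark Streaming's updateStateByKey, which, unlike the per-window filter the asker tried, persists state between batches. A sketch, assuming ssc is the StreamingContext, names is the DStream produced by the featurize step, and my_function stands in for the asker's per-name check:

```python
def first_sighting(new_values, state):
    # state is None the first time a key ever appears in the stream
    if state is None and new_values:
        return "new"          # first window this name was seen in
    return "seen"

ssc.checkpoint("/tmp/names-checkpoint")          # required for stateful ops

flags = names.map(lambda n: (n, 1)).updateStateByKey(first_sighting)
fresh = flags.filter(lambda kv: kv[1] == "new").map(lambda kv: kv[0])
fresh.foreachRDD(lambda rdd: rdd.foreach(my_function))   # runs once per new name
```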
34,859,041 | 2016-01-18T16:02:00.000 | 0 | 0 | 1 | 0 | python,virtualenv,pythonpath,virtualenvwrapper | 34,860,555 | 1 | false | 0 | 0 | For any command that needed the PYTHONPATH for the current directory I made a makefile that had $(shell export PYTHONPATH=$PYTHONPATH:$(pwd)) before I ran the actual command. That solved my problem. | 1 | 0 | 0 | I'm using virtualenvwrapper for my virtual environments. When I developed a package for PyPI it worked fine in Python 3. However, when I tried testing it with a Python 2 environment that I created with virtualenvwrapper, I can no longer import the modules I want to.
When checking the python path for my python3 environment it contains /Users/jonathan/projects/myproject whereas the python2 environment contains no such mention.
Do I need to set the path explicitly when working with a python2 environment somewhere? | Missing path in Pythonpath for python 2, but it exists for python 3 | 0 | 0 | 0 | 389 |
34,860,281 | 2016-01-18T17:07:00.000 | 3 | 0 | 0 | 0 | python,gpu,tensorflow | 34,868,531 | 1 | true | 0 | 0 | The problem disappeared after I installed an older version (352.55) of the nvidia driver. | 1 | 3 | 1 | I am running the cifar10 multi-GPU example from the tensorflow repository. I am able to utilize more than one GPU. My Ubuntu PC has two Titan X's, and I see that memory is fully occupied by the process on both GPUs. However, only one GPU is actually computing. I obtain no speedup. I have tried the tensorflow 0.5.0 and 0.6.0 pip binaries. I have also tried compiling from source.
EDIT:
The problem disappeared after I installed an older version of nvidia driver. | unable to run tensorflow on multiple GPUs | 1.2 | 0 | 0 | 749 |
34,860,722 | 2016-01-18T17:33:00.000 | 5 | 0 | 1 | 0 | python,regex | 34,867,080 | 2 | false | 0 | 0 | Regexes search through strings one character at a time. If a match is found at a character position the regex advances to the next part of the pattern. If a match is not found, the regex tries alternation (different variations) if available. If all alternatives fail, it backtracks and tries the alternatives for the previous part, and so on, until either an entire match is found or all alternatives fail. This is why some seemingly simple regexes will match a string quickly, but fail to match in exponential time. In your example you only have one part to your pattern.
You are searching for [\w]?. The ? means "one or zero of prior part" and is equivalent to {0,1}. Each of 'h', 'e', 'l', 'l' & 'o' matches [\w]{1}, so the pattern advances and completes for each letter, restarting the regex at the beginning because you asked for all the matches, not just the first. At the end of the string the regex is still trying to find a match. [\w]{1} no longer matches but the alternative [\w]{0} does, so it matches ''. Modern regex engines have a rule to stop zero-length matches from repeating at the same position. The regex tries again, but this time fails because it can't find a match for [\w]{1} and it has already found a match for [\w]{0}. It can't advance through the string because it is at the end, so it exits. It has run the pattern 7 times and found 6 matches, the last one of which was empty.
As pointed out in a comment, if your regex was \w?? (I've removed [ and ] because they aren't necessary in your original regex), it means find zero or one (note the order has changed from before). It will return '', 'h', '', 'e', '', 'l', '', 'l', '', 'o' & ''. This is because it now prefers to find zero but it can't find two zero-length matches in a row without advancing. | 2 | 25 | 0 | What causes the '' in ['h', 'e', 'l', 'l', 'o', ''] when you do re.findall('[\w]?', 'hello'). I thought the result would be ['h', 'e', 'l', 'l', 'o'], without the last empty string. | What causes the '' in ['h', 'e', 'l', 'l', 'o', ''] when you do re.findall('[\w]?', 'hello') | 0.462117 | 0 | 0 | 1,453 |
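A short demo of the behaviour both answers describe. The last line shows Python 3.7+ output; older regex engines skip a character after each zero-length match and print only empty strings there:

```python
import re

print(re.findall(r'\w?', 'hello'))    # ['h', 'e', 'l', 'l', 'o', '']
print(re.findall(r'\w', 'hello'))     # ['h', 'e', 'l', 'l', 'o']  <- the fix
print(re.findall(r'\w??', 'hello'))   # ['', 'h', '', 'e', '', 'l', '', 'l', '', 'o', '']
```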
34,860,722 | 2016-01-18T17:33:00.000 | 40 | 0 | 1 | 0 | python,regex | 34,860,788 | 2 | true | 0 | 0 | The question mark in your regex ('[\w]?') is responsible for the empty string being one of the returned results.
A question mark is a quantifier meaning "zero or one matches." You are asking for all occurrences of either zero or one word characters. The letters satisfy the "one word character" case; the empty string at the end satisfies the "zero word characters" case.
Change your regex to '\w' (remove the question mark and superfluous character class brackets) and the output will be as you expect. | 2 | 25 | 0 | What causes the '' in ['h', 'e', 'l', 'l', 'o', ''] when you do re.findall('[\w]?', 'hello'). I thought the result would be ['h', 'e', 'l', 'l', 'o'], without the last empty string. | What causes the '' in ['h', 'e', 'l', 'l', 'o', ''] when you do re.findall('[\w]?', 'hello') | 1.2 | 0 | 0 | 1,453 |
34,861,431 | 2016-01-18T18:17:00.000 | 0 | 0 | 0 | 0 | python,django | 34,861,485 | 2 | false | 1 | 0 | Nothing. As long as your apps live in the different folders, they are completely independent apps for Django. Just make sure they both are loaded in your settings.INSTALLED_APPS.
* Catch #1: If you have identical template tag files, rename them so they become polls_tags.py and polls2_tags.py.
* Catch #2: Don't forget to rename your templates so that 'templates/polls/index.html' becomes 'templates/polls2/index.html'. | 1 | 2 | 0 | I have followed the guidelines for starting to learn Django, but I have a question. If I want to add a new app alongside the polls app they instructed, called poll2, can I just copy and paste the polls folder? (This is, for example, if I want to make an identical app with the same functionality.) Is there anything else special I need to do, other than make admin.py load poll2 along with polls? | Adding new apps to django | 0 | 0 | 0 | 405
34,863,164 | 2016-01-18T20:06:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,seaborn | 48,646,471 | 3 | false | 0 | 0 | This works as well:
sudo pip3 install seaborn | 2 | 2 | 0 | I tried to use pip install seaborn but it says Requirement already satisfied......
I installed both Python 2.7 and 3.5; in Python 2.7 I have installed seaborn, and I tried to install it in 3.5 but it gives me this error.
If I don't install it in Python 3.5, seaborn does not work in Python 3.5.
Anyone help? | Error when installing Seaborn with Python 3.5 | 0.066568 | 0 | 0 | 7,280 |
34,863,164 | 2016-01-18T20:06:00.000 | 1 | 0 | 1 | 0 | python,python-3.x,seaborn | 38,082,652 | 3 | false | 0 | 0 | I just experienced the same problem (I'm using Python 3.5 and Ubuntu 15.04).
First I would advise you to uninstall whatever Seaborn installation you might have and then run 'sudo apt-get install python3-seaborn'.
Using 'sudo apt-get install seaborn' will do the trick. Please note that you will need to update seaborn afterwards (I got version 0.6).
Running pip3 install (pip3 for Python 3) with the -U command will update Seaborn.
'pip3 install seaborn -U'
Hope it clears the problem for you :) | 2 | 2 | 0 | I tried to use pip install seaborn but it says Requirement already satisfied......
I installed both Python 2.7 and 3.5; in Python 2.7 I have installed seaborn, and I tried to install it in 3.5 but it gives me this error.
If I don't install it in Python 3.5, seaborn does not work in Python 3.5.
Anyone help? | Error when installing Seaborn with Python 3.5 | 0.066568 | 0 | 0 | 7,280 |
34,864,038 | 2016-01-18T21:03:00.000 | 1 | 0 | 1 | 0 | python,django,python-2.7,python-3.x | 34,864,106 | 1 | true | 0 | 0 | I'd imagine your environment variables are set up so that python resolves to the Python 2.7 install while pip resolves to the Python 3.3 one; you either need to adjust those or use the full paths when running the tools. | 1 | 0 | 0 | I have installations of Python 2.7 and 3.3 side by side (C:\Python27 and C:\Python33). I am now trying to install virtualenv.
Python2.7 is my default interpreter. Whenever I open a command prompt and type 'python' it brings up "Python 2.7.10 (default, May 23 2015, 09:40:32) [MSC v.1500 32 bit (Intel)] on win32" for me. But when I am firing "pip install virtualenv", it is installing virtualenv inside python3.3 folder.
I am quite surprised that my active interpreter is Python 2.7, but the virtualenv installation is somehow going into the python3.3 folder instead of the expected python2.7 folder. Can anyone please explain this anomaly and suggest how to install virtualenv for Python 2.7? | Installing virtualenv for Python2.7 | 1.2 | 0 | 0 | 1,818
34,864,672 | 2016-01-18T21:48:00.000 | -1 | 0 | 1 | 0 | python,ipython,ipython-notebook,jupyter,jupyter-notebook | 34,864,812 | 2 | true | 0 | 0 | The best thing to do for repeated code you want all your notebook to access is to add it to the profile directory. The notebook will load all scripts from that directory in order, so it's recommended you name files 01-<projname>.py if you want them to load in a certain order. All files in that directory will be loaded via exec which executes the file as though it were in your context, it's not a module load so globals will squash each other and all of the model context will be in your local namespace afterwards (similar to an import * effect).
To find your profile directory the docs recommend you use ipython locate profile <my_profile_name>. This will tell you where you can place the script. | 1 | 3 | 0 | I'd like to write a program using Python in Jupiter. To make things easy, it'd be better off writing a few subroutines (functions) and probably some user-defined classes first before writing the main script. How do I arrange them in Jupiter? Just each sub function/class for a new line and write sequentially and then write main script below to call subroutines? I just wonder if this is the right way to use Jupyter.
I am new to Jupyter and Python, but in Matlab, for instance, I would create a folder which contains all sub functions to be used. And I will also write a script inside the same folder to call these functions to accomplish the task. However, how do I achieve this in Python using Jupyter? | In Jupyter notebook, how do I arrange subroutines in order to write a project efficiently? | 1.2 | 0 | 0 | 4,032 |
34,869,018 | 2016-01-19T05:21:00.000 | 0 | 0 | 0 | 0 | python,macos,opencv | 34,869,483 | 1 | true | 0 | 0 | I've found the reason: there is a file named time.py in the same folder. I'm sure that's the reason I failed to import numpy.
Plus, if I put the file time.py in the same folder and run python test.py, I get the message "TypeError: 'module' object is not callable".
Next, without closing the console, I delete time.py and run "import numpy"; I get the message "ImportError: cannot import name multiarray".
Close and reopen the console, and it works.
But this time, I didn't see "ImportError: numpy.core.multiarray failed to import", but it does work! | 1 | 0 | 1 | When I put my python code in the "~/Downloads/" folder, it works.
However, it failed and gave me the message "ImportError: numpy.core.multiarray failed to import" when I put the test.py file in a deep location like "/Git/Pyehon/....." Why?
I run this on Mac | Python/OpenCV/Mac ImportError: numpy.core.multiarray failed to import | 1.2 | 0 | 0 | 858 |
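A quick diagnostic for the shadowing problem the asker found: if a stray time.py (or numpy.py) in the working directory shadows the real module, its __file__ points into your project.

```python
import time

# The real stdlib time module is compiled into CPython and has no __file__;
# a shadowing time.py in the current folder will report its own path instead.
print(getattr(time, '__file__', 'built-in (good: not shadowed)'))
```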
34,869,690 | 2016-01-18T14:01:00.000 | 0 | 0 | 1 | 0 | batch-file,python | 34,869,721 | 1 | false | 0 | 0 | No, there is no goto or label or other equivalent in vanilla python.
Your best bet is to stick with regular control flow statements (e.g. if, elif, else, for, while, break, continue, ...) and functions (which add return to the list of control flow statements as well ...) | 1 | 0 | 0 | We can use goto and :label in batch.
Is there anything like it in Python? | Is there a label/goto in python? | 0 | 0 | 0 | 1,062 |
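A sketch of how a batch-style goto/:label block maps onto a plain loop; the menu logic here is invented purely for illustration:

```python
while True:                    # :start
    choice = input("> ")
    if choice == "retry":
        continue               # acts like "goto start"
    if choice == "quit":
        break                  # acts like "goto end"
    print("you typed", choice)
print("done")                  # :end
```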
34,871,128 | 2016-01-19T07:45:00.000 | 1 | 0 | 0 | 0 | python,pandas | 39,321,804 | 2 | false | 0 | 0 | For me this actually worked :
df1=df1[pd.notnull(df1['Column Name'])] | 1 | 3 | 1 | I have a Pandas DataFrame with a MultiIndex. The index consists of a date and a text string. Some of the values are NaN, and when I use dropna(), the row disappears as expected. However, when I look at the index using df.index, the dropped dates are still there. This is problematic because when I use the to_panel function, the dropped dates reappear.
Am I using dropna incorrectly or how can I resolve this? | Pandas dropna does not work as expected on a MultiIndex | 0.099668 | 0 | 0 | 1,798 |
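The behaviour in the question comes from dropna() removing rows but not the unused MultiIndex levels. A sketch of both the symptom and a fix; remove_unused_levels requires pandas 0.20+, and on older versions the index can be rebuilt with pd.MultiIndex.from_tuples(df.index):

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [pd.to_datetime(['2016-01-01', '2016-01-02']), ['a', 'b']])
df = pd.DataFrame({'v': [1.0, 2.0, None, None]}, index=idx)

df = df.dropna()
print(df.index.levels[0])                     # both dates still listed!

df.index = df.index.remove_unused_levels()    # pandas >= 0.20
print(df.index.levels[0])                     # only 2016-01-01 remains
```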
34,871,994 | 2016-01-19T08:38:00.000 | 2 | 0 | 0 | 1 | python,permissions,pip,sudo | 41,135,807 | 6 | false | 1 | 0 | If you have two versions of pip, for example /usr/lib/pip and /usr/local/lib/pip belonging to Python 2.6 and 2.7, you can delete /usr/lib/pip and make a link pip => /usr/local/lib/pip.
You can see that the pip commands called by "pip" and "sudo pip" are different; making them consistent can fix it. | 5 | 13 | 0 | Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1 | failed in "sudo pip" | 0.066568 | 0 | 0 | 12,302 |
34,871,994 | 2016-01-19T08:38:00.000 | 0 | 0 | 0 | 1 | python,permissions,pip,sudo | 47,222,853 | 6 | false | 1 | 0 | Assume two pip versions are present, at /usr/bin/pip and /usr/local/bin/pip, where the first is used by the sudo user and the second by the normal user.
As the sudo user you can run the command below, so it will use the higher version of pip for the installation.
/usr/local/bin/pip install jupyter | 5 | 13 | 0 | Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1 | failed in "sudo pip" | 0 | 0 | 0 | 12,302 |
34,871,994 | 2016-01-19T08:38:00.000 | 0 | 0 | 0 | 1 | python,permissions,pip,sudo | 34,874,730 | 6 | false | 1 | 0 | As you can see with sudo you run another pip script.
With sudo: /usr/bin/pip, which is the older version;
Without sudo: /usr/local/lib/python2.7/site-packages/pip, which is the latest version.
The error you encountered is sometimes caused by using different package managers; a common way to solve it is the one already proposed by @Ali:
sudo easy_install --upgrade pip | 5 | 13 | 0 | Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1 | failed in "sudo pip" | 0 | 0 | 0 | 12,302 |
34,871,994 | 2016-01-19T08:38:00.000 | 17 | 0 | 0 | 1 | python,permissions,pip,sudo | 34,872,132 | 6 | false | 1 | 0 | Try this:
sudo easy_install --upgrade pip
By executing this you are upgrading the version of pip that sudoer is using. | 5 | 13 | 0 | Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1 | failed in "sudo pip" | 1 | 0 | 0 | 12,302 |
34,871,994 | 2016-01-19T08:38:00.000 | 24 | 0 | 0 | 1 | python,permissions,pip,sudo | 39,518,909 | 6 | false | 1 | 0 | I had the same problem.
sudo which pip
sudo vim /usr/bin/pip
modify any pip==6.1.1 to pip==8.1.2 or the version you just upgraded to.
It works for me. | 5 | 13 | 0 | Please help me.
server : aws ec2
os : amazon linux
python version : 2.7.10
$ pip --version
pip 7.1.2 from /usr/local/lib/python2.7/site-packages (python 2.7)
It's OK.
But...
$ sudo pip --version
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in
from pkg_resources import load_entry_point
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3020, in
working_set = WorkingSet._build_master()
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 616, in _build_master
return cls._build_from_requirements(__requires__)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 629, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 807, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==6.1.1 | failed in "sudo pip" | 1 | 0 | 0 | 12,302 |
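For context on the answers that edit /usr/bin/pip: that file is a small setuptools entry-point wrapper, and the pinned version string in it is exactly what the traceback complains about. A sketch of its typical contents (exact text varies by system; the version numbers are examples):

```python
#!/usr/bin/python
# Typical /usr/bin/pip wrapper -- line 5 is the import seen in the traceback.
__requires__ = 'pip==6.1.1'   # edit to the installed version, e.g. 'pip==8.1.2'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(load_entry_point('pip==6.1.1', 'console_scripts', 'pip')())
```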
34,872,610 | 2016-01-19T09:09:00.000 | 2 | 0 | 1 | 0 | python,django,ubuntu | 34,872,786 | 2 | false | 1 | 0 | A lot of applications still require Python 2.7 and are not yet compatible with Python 3. So it really depends on what you do on the server (only running Django?).
One solution would be to use virtualenv so that you do not depend on which Python version is installed on your server, and you fully control all the packages.
Look for django + virtualenv; you will find a lot of tutorials. | 1 | 0 | 0 | I am on Ubuntu 15.10. I notice that I have many Python versions installed. Is it safe now to remove 2.7 completely? And how do I make 3.5 the default one? I ask this because I think it messes up my Django installation, since Django gets installed in the share directory. | can python 2.7 be removed completely now? | 0.197375 | 0 | 0 | 57
34,873,578 | 2016-01-19T09:55:00.000 | 10 | 0 | 0 | 0 | python,flask,parallel-processing,gunicorn | 34,875,434 | 1 | true | 1 | 0 | It will create 4 gunicorn workers to handle the one flask app. If you spin 4 instances of a flask app (with docker for example) you will need to run gunicorn 4 times. Finally to handle all those flask instances you will need a Nginx server in front of it acting as a load balancer.
For example, if one user is doing a registration routine that takes a lot of time due to multiple queries to the database, you still have another worker to send the next request to the flask instance.
I get your point, but Flask does not ship a production-ready WSGI server, which is the standard. Gunicorn plays that role in production, so you get more reliability than with the development Werkzeug server that comes with Flask. In other words, Gunicorn is just a wrapper around your Flask object. It just handles the requests and lets Flask do its thing. | 1 | 7 | 0 | I'm new to this, and I don't fully understand how Gunicorn + Flask work together.
When I run Gunicorn with 4 workers, does it create 4 instances of my Flask app, or 4 processes that handle web requests from Nginx and one instance of the Flask app?
If I make a simple in-memory cache implementation (a dictionary, for example) in my app, will Gunicorn create more than one instance of the app and therefore more than one instance of the cache? | How many instances of app Gunicorn creates | 1.2 | 0 | 0 | 2,571
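A sketch of the cache question: with gunicorn -w 4 app:app, each worker process gets its own copy of the module-level dictionary, so values stored by one worker are invisible to the others. The routes are invented for illustration; a shared store such as Redis or memcached avoids the problem:

```python
# app.py -- run with: gunicorn -w 4 app:app
import os
from flask import Flask

app = Flask(__name__)
cache = {}                    # lives separately in EACH worker process

@app.route("/put/<key>/<value>")
def put(key, value):
    cache[key] = value
    return "stored in pid %d" % os.getpid()

@app.route("/get/<key>")
def get(key):
    # a different worker may serve this request and miss the key
    return cache.get(key, "miss in pid %d" % os.getpid())
```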
34,873,949 | 2016-01-19T10:10:00.000 | 5 | 0 | 1 | 0 | python,hadoop,pip,yum | 34,876,964 | 1 | false | 0 | 0 | Packages which are part of your distribution should be preferred, because they have been tested to work properly on your system. These packages are installed system-wide.
However, if a suitable RPM package is not provided, go ahead and install it from e.g. PyPI or GitHub with pip, but deploy virtual Python environments whenever possible. With virtualenvs you don't have to install third-party packages system-wide. You will have several smaller sets of packages, which are much easier to manage than one big set. | 1 | 7 | 0 | I've just started administering a Hadoop cluster. We're using Bright Cluster Manager up to the O/S level (CentOS 7.1) and then Ambari together with Hortonworks HDP 2.3 for Hadoop.
I'm constantly getting requests for new python modules to be installed. Some modules we've installed at setup using yum and as the cluster has progressed some modules have been installed using pip.
What is the "right" way to do this? Always use yum and not be able to provide the latest and greatest modules? Always use pip and not have one point of truth (yum) showing which packages are installed? Or is it fine to use both pip and yum together?
I'm just worried that I'm filling the system with junk and too many versions of python modules. Any suggestions? | Python package installation: pip vs yum, or both together? | 0.761594 | 0 | 0 | 6,435 |
34,875,393 | 2016-01-19T11:18:00.000 | 0 | 0 | 0 | 0 | python,django | 34,882,612 | 1 | false | 1 | 0 | No, the dev server is just a simple server that accepts a request, passes it to the Django app and returns a response from the app. It is different from what you can find in some JavaScript libraries or frameworks, where data are held in the browser and you only hot-reload the source code while the library regenerates the page using the same data. | 1 | 0 | 0 | I wonder whether there is some optional configuration for the dev server to auto-refresh the page when files change. I know that the Django dev server auto-reloads the project when changes appear, but what I am looking for is refreshing the web page, like in Meteor for example. I googled a little and found some apps and plugins for Firefox and Chrome.
Django is designed for web development, so I suspect such a feature should be in the core of the dev server. Is it? | Autoreload webpage when source changed | 0 | 0 | 0 | 72
34,877,607 | 2016-01-19T13:06:00.000 | 0 | 0 | 0 | 0 | python,django,tastypie | 34,888,130 | 2 | false | 1 | 0 | I'd need to see:
The code making the API calls.
Any changes you've made to the resources.
What sort of stack this is deployed on.
If you happen to be using greenthreads, multiple workers, and/or multiple servers, it's possible the 2 requests are actually being processed out of order.
I strongly recommend changing your code to not perform concurrent actions on a remote resource; wait until one request is done before starting the next. | 1 | 1 | 0 | I am using Django 1.7.3 as my framework and Tastypie 0.11.1 as rest api library.
I have a basic model with name field and an api for creating this model.
My problem is with critical sections (race conditions) when trying to create the model.
I have tried transaction.atomic and set ATOMIC_REQUESTS = True at the DB level, and yet when I send two requests as a race I receive two identical rows.
Is there a way to ensure that the Tastypie save function will be atomic, or any way to ensure that requests will be atomic? | Django Tastypie atomic operation | 0 | 0 | 0 | 268
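Beyond serializing the requests, the duplicate rows can be ruled out at the database level with a uniqueness constraint; racing requests then collapse onto one row. A sketch in which the model and field names are invented, not from the question:

```python
from django.db import IntegrityError, models

class Item(models.Model):
    name = models.CharField(max_length=100, unique=True)   # DB rejects duplicates

def create_item(name):
    try:
        return Item.objects.get_or_create(name=name)
    except IntegrityError:
        # the request that loses the race lands here and fetches the winner's row
        return Item.objects.get(name=name), False
```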
34,880,558 | 2016-01-19T15:22:00.000 | 2 | 0 | 1 | 0 | python,python-2.7,python-3.x,theano | 34,882,530 | 4 | false | 0 | 0 | You haven't said which OS you're running this on, but it looks like a Debian-based Linux, maybe Ubuntu?
If so, I'd try with:
sudo apt-get install python3-numpy or
sudo apt-get install python-numpy (the Python 2 package is named python-numpy, not python2-numpy).
This would also work for pip via the python-pip and python3-pip packages.
After this, you could effectively use "pip2" or "pip3" to install your packages without having to go through the OS prebuilt modules (but the OS version of the packages is usually my preferred way to install them, if they exist in the repo). | 1 | 1 | 0 | There are Python 2.7 and Python 3.2 on my computer. The default version is 2.7 because python -V gives 2.7 as the version.
But when I use apt-get to install numpy, scipy and pip, why does it install them into the python3.2 folder? After that I used pip to install the modules into the 3.2 folder.
I also installed Theano this way, but in the end it showed a message saying that there is no module named Theano installed, although it is in the python3.2 folder. | Installing Python 2.x and python 3.x on the same computer | 0.099668 | 0 | 0 | 1,221
34,881,105 | 2016-01-19T15:49:00.000 | 5 | 0 | 1 | 0 | python-3.x,anaconda,python-idle,conda | 47,986,798 | 3 | false | 0 | 0 | Type idle3 instead of idle from your conda env. | 1 | 6 | 0 | For running python2 all I do is activate the required conda environment and just type idle. It automatically opens IDLE for python 2.7. But I can't figure out how to do this for Python 3. I have python 3.5 installed in my environment.
I used conda create -n py35 anaconda for installing Python 3.5. | How to run IDLE for python 3 in a conda environment? | 0.321513 | 0 | 0 | 14,423
34,882,862 | 2016-01-19T17:09:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,numpy | 34,883,536 | 1 | true | 0 | 0 | Without seeing any code, this is what I would try.
Make an identically sized 2D array with just Booleans all set to True (available) by default
When your code randomly generates an X,Y location in your 2D array, check the Availability array first:
If the value at that location is True (available), return that value in the other Array (whatever values are stored there) and then set that available value to False.
If the value at that location is False (not available), keep trying the next value in the array until you find one available. (Do this instead of hitting the random number generator again. The fewer elements available, the more you'd have to "re-roll", which would eventually become painfully slow.)
Make sense?
EDIT: I can think of at least 2 other ways of doing this that might be faster or more efficient, but this is the simple version. | 1 | 0 | 1 | I have a very big multi-dimensional array, for example 2D, inside a for loop.
I would like to return one element from this array at each iteration, and this element should not have been returned before. I mean each element should be returned at most once across the iterations. | How do I return a random element from 2 numpy array without repeats? | 1.2 | 0 | 0 | 38
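For the numpy case specifically, a permutation of the flat indices gives the "each element exactly once, in random order" behaviour without any bookkeeping array. A sketch:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)            # example 2-D array
order = np.random.permutation(a.size)      # shuffled flat indices, no repeats

for flat_index in order:
    element = a.flat[flat_index]           # each element returned exactly once
    print(element)
```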
34,883,612 | 2016-01-19T17:47:00.000 | 1 | 1 | 0 | 1 | android,python,linux,shared-libraries | 34,883,727 | 1 | false | 0 | 1 | Most likely not. It's very probable that the Android device you pulled it from runs on the ARM architecture, and therefore the .so library was compiled for that architecture.
Unless your desktop machine is also an ARM machine (yours is most likely x86, and the library would even have to match a specific ARM variant such as ARMv7), the .so binary will be incompatible with your desktop.
Depending on what the .so library actually is, you may be able to grab the source code and compile it for your x86 machine.
Disclaimer: Even if you obtain a library compiled for the same architecture as your desktop (from x86 phone), there is no guarantee it will work. It may rely on other libraries provided only by Android, and this may be the start of a very deep rabbit hole. | 1 | 3 | 0 | I have an .so file which I pulled from an Android APK (Not my app, so I don't have access to the source, just the library)
I want to use this shared object on my 32-bit Ubuntu machine and call some functions from it (preferably with Python). Is it possible to convert an Android .so to a Linux .so?
Or is there any simple solution to accessing the functions in the .so without resorting to a hefty virtual machine or something?
Thanks | How to use Android shared library in Ubuntu | 0.197375 | 0 | 0 | 1,289 |
34,884,896 | 2016-01-19T19:01:00.000 | 1 | 0 | 0 | 1 | python,tomcat,docker,application-server | 34,886,874 | 2 | false | 1 | 0 | You can use Docker Machine to create a Docker development environment on Mac or Windows. This is really good for trial and error. There is no need for an Ubuntu VM.
A Docker container does one thing only, so your application would consist of multiple containers, one for each component. You've also clearly identified the different containers for your application. Here is how the workflow might look:
Create a Dockerfile for Tomcat container, nginx, postgres, tornado
Deploy the application to Tomcat in Dockerfile or by mapping volumes
Create an image for each of the containers
Optionally push these images to Docker hub
If you plan to deploy these containers on multiple hosts then create an overlay network
Use Docker Compose to start these containers together. It would use the network created previously. Alternatively you can also use --x-networking for Docker Compose to create the network. | 1 | 3 | 0 | I'm trying to "dockerize" my java web application and finally run the docker image on EC2.
My application is a WAR file and connects to a database. There is also a python script which the application calls via REST. The python side uses the tornado webserver
Question 1:
Should I have the following Docker containers?
Container for Application Server (Tomcat 7)
Container for HTTP Server (nginx or httpd)
Container for postgres db
Container for python script (this will have tornado web server and my python script).
Question 2:
What is the best way to build a Dockerfile? I will have to do trial and error to find which commands need to go into the Dockerfile for each container. Should I have an Ubuntu VM on which I do trial and error, and once I nail down which commands I need, put them into the Dockerfile for that container? | how many docker containers should a java web app w/ database have? | 0.099668 | 0 | 0 | 1,322
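A sketch of the docker-compose.yml that step 6 of the answer above refers to. The service names, images, and build paths are illustrative assumptions for the WAR + nginx + postgres + tornado layout in the question:

```yaml
version: "2"
services:
  db:
    image: postgres:9.5
  app:
    build: ./tomcat        # Dockerfile that deploys the WAR into Tomcat 7
    depends_on: [db]
  tornado:
    build: ./python        # tornado web server plus the python script
  web:
    image: nginx
    ports: ["80:80"]
    depends_on: [app, tornado]
```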
34,885,784 | 2016-01-19T19:54:00.000 | 5 | 0 | 1 | 0 | python,list | 34,885,830 | 2 | false | 0 | 0 | input always returns text - even if it contains only digits.
Use int(Telephone) to convert to an integer, or Telephone.isdigit() to check whether the text contains only digits.
BTW: both methods fail if the user puts spaces or - in the phone number ;)
Maybe you will have to remove spaces and - using replace() before you use int() or isdigit(). | 1 | 0 | 0 | I ask the user for input, and when they type letters it comes up with an error message (as I want), but even when the user enters a number it comes up with an error. I want it to come out of this loop and move on to the next prompt, for the name, if correct data is entered. | Python 3.4.2 loops not | 0.462117 | 0 | 0 | 70
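A sketch of the validation loop the answer describes, combining replace() with isdigit(); the prompt text is invented:

```python
while True:
    telephone = input("Telephone: ")
    cleaned = telephone.replace(" ", "").replace("-", "")
    if cleaned.isdigit():
        break                              # valid: leave the loop, move on to name
    print("Error: please enter digits only")
name = input("Name: ")
```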
34,886,081 | 2016-01-19T20:12:00.000 | 0 | 0 | 1 | 0 | python-2.7 | 34,921,873 | 2 | false | 0 | 0 | OK
So I did some heavy research, and the conclusion is:
if you want to access a file saved at a different location,
use
f = open('E:/somedir/somefile.txt', 'r')
r = f.read()
NOTE: Don't use '\'; that was where I went wrong. Windows writes paths with '\', so be careful. | 1 | 0 | 0 | I have been experimenting with python by creating some programs. The thing is, I have no idea how to import something from outside the default python directory. | How do I import files from other directory in python 2.7 | 0 | 0 | 0 | 713
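The answer covers reading a file by full path; for actually importing a module from another directory (what the title asks), the directory also has to be on sys.path. A sketch in which somemodule is a hypothetical module name:

```python
import sys

# Reading a file from another location: give open() the full path.
with open('E:/somedir/somefile.txt', 'r') as f:   # forward slashes work on Windows
    text = f.read()

# Importing a module from another directory: put that directory on sys.path.
sys.path.append('E:/somedir')
import somemodule                                  # finds E:/somedir/somemodule.py
```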
34,894,261 | 2016-01-20T07:45:00.000 | 3 | 1 | 1 | 0 | python,eclipse,pydev | 51,273,106 | 1 | false | 0 | 0 | In Navigator view, right-click the project name and select PyDev > Set as source folder (add to PYTHONPATH).
After that I find PyDev search works. Very handy as a quicker search function. | 1 | 1 | 0 | I have Eclipse Luna Release (4.4.0), and I have been using it for years.
For the first time, and only on a specific project, the PyDev search doesn't work properly. As an example, if I try to search for the name of a function that I know to be there, it doesn't find it, and there are no typos. This happens for most searches, even if some of them give the expected result.
The weirdest thing is that if I use the file search, then it works. Why is that? Do you know any way to solve this? | PyDev Search doesn't work properly on Eclipse | 0.53705 | 0 | 0 | 720
34,895,738 | 2016-01-20T09:09:00.000 | 0 | 0 | 0 | 1 | python,mysql,amazon-web-services,nas | 34,899,601 | 1 | false | 1 | 0 | You can also use MongoDB (it provides several APIs), and you can also store files in an S3 bucket using multipart upload. | 1 | 0 | 0 | As part of a big system, I'm trying to implement a service that (among other tasks) will serve large files (up to 300MB) to other servers (running in Amazon).
This files service needs to have more than one machine up and running at each time, and there are also multiple clients.
Service is written in Python, using Tornado web server.
My first approach was using MySQL, but I figured I'm going to have hell saving such big BLOBs because of memory consumption.
Tried to look at Amazon's EFS, but it's not available in our region.
I heard about SoftNAS, and am currently looking into it.
Any other good alternatives I should be checking? | Serving large files in AWS | 0 | 1 | 0 | 191 |
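A sketch of the S3 route the answer mentions, assuming boto3 is available; bucket and key names are placeholders. boto3's upload_file switches to multipart upload automatically for large files, and a presigned URL lets the other servers fetch directly from S3:

```python
import boto3

s3 = boto3.client("s3")
s3.upload_file("big_file.bin", "my-bucket", "files/big_file.bin")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "files/big_file.bin"},
    ExpiresIn=3600,                      # temporary download link, 1 hour
)
```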
34,896,043 | 2016-01-20T09:23:00.000 | 0 | 0 | 0 | 0 | python-2.7,debian,wget | 34,896,780 | 1 | false | 0 | 0 | If the files are different in size then wget isn't getting the right file. Many websites now rely on javascript to handle links which wget can't emulate. I suspect that if you look at the file with less you'll see some HTML source as opposed to the start of a zipfile. | 1 | 0 | 0 | I'm using wget to download Excel file with xlsx extension. The thing is that when I want to deal with the file using openpyxl, I get the above mentioned error. But when I download the file manually using fire fox, I don't have any problems.
So I checked the difference between the two downloaded files. I found that the manually downloaded one is much bigger (269.2 kB) compared to the wget one (7.3 kB), though both files show the same content when opened in Excel 2013.
I don't add any options to wget; I just use it like wget <downloadLink>
What's wrong with wget and Excel files? | wget causes "BadZipfile: File is not a zip file" for openpyxl | 0 | 1 | 0 | 1,390 |
34,898,422 | 2016-01-20T11:10:00.000 | 0 | 0 | 1 | 0 | python,pycharm,rdkit | 68,302,546 | 2 | false | 0 | 0 | Another option is to select the existing virtual environment when you create a new project in PyCharm. Once you go through the steps that Anna laid out above, the "Previously configured interpreter" section of the "Create Project" screen should show the ~/anaconda/envs/my-rdkit-env/bin/python as an option. | 1 | 5 | 0 | So, I am trying to add RDKit to my project in PyCharm. I have found that if you are using the interpreter /usr/bin/python2.7, PyCharm will try to install packages using pip, while RDKit requires conda. I have tried to change the interpreter to conda, but RDKit is either not on the list or it can't open the URL with the repo. Does anyone know how to fix that?
By the way, is it possible while keeping the interpreter /usr/bin/python2.7 to make it use anything else (not pip), while installing stuff? | How to add RDKit to project in PyCharm? | 0 | 0 | 0 | 3,938 |
34,903,129 | 2016-01-20T14:46:00.000 | 1 | 0 | 1 | 0 | python,debugging,pdb | 38,818,813 | 1 | false | 0 | 0 | Use run, or restart, which is an alias of run. This will restart the program with all the new changes but preserve the breakpoints and other debugger information. | 1 | 4 | 0 | I am debugging a python program using the python debugger pdb.
E.g. python -m pdb myscript.py
Is there a way to rerun the script propagating the new changes in myscript.py? The command run does not do this.
In gdb, I believe there was a way to compile (from within gdb) and the restart the debugging session.
I was hoping for a similar feature in pdb, so that I do not need to exit pdb and then start it again just to get my changes in myscript.py to be propagated. | Restarting the python debugger, pdb, while propagating program changes | 0.197375 | 0 | 0 | 1,090 |
34,903,359 | 2016-01-20T14:57:00.000 | -2 | 0 | 0 | 0 | python,appium,robotframework | 34,927,269 | 1 | false | 1 | 0 | Switch to (webview) context resolved this issue. | 1 | 1 | 0 | I am trying to automate native android app using robot framework + appium with AppiumLibrary and was able to successfully open application ,from there my struggle begins, not able to find any element on the screen through UI automator viewer since the app which I was testing is web-view context and it shows like a single frame(no elements in it is being identified) . I have spoken to dev team and they gave some html static pages where I could see some element id's for that app. So I have used those id's ,But whenever I ran the test it throws error as element doesn't match . The same app is working with java + appium testNG framework. Only difference I could see between these two is, using java + appium framework complete html code is getting when we call page source method for the android driver object but in robot its returning some xml code which was displayed in UI automator viewer(so this xml doesn't contain any HTML source code with element id's and robot is searching the id's in this xml code and hence it is failing). I am totally confused and got stuck here. Can some one help me on this issue. | robot framework with appium ( not able identify elements ) | -0.379949 | 0 | 1 | 825 |
34,905,744 | 2016-01-20T16:46:00.000 | 0 | 0 | 0 | 0 | python,django,postgresql,amazon-ec2 | 34,922,249 | 1 | false | 1 | 0 | OK, thanks for your answers. I used:
find . -name "postgresql.conf" to find the configuration file, which was located in the "/etc/postgresql/9.3/main" folder. There is also pg_lsclusters if you want to show the data directory of each cluster.
Then I edited that file to put in the new path, restarted Postgres, and imported my old DB. | 1 | 1 | 0 | I have a Django website running on an Amazon EC2 instance. I want to add an EBS. In order to do that, I need to change the location of my PGDATA directory, if I understand well. The new PGDATA path should be something like /vol/mydir/blabla.
I absolutely need to keep the data safe (some kind of dump could be useful).
Do you have any clues on how I can do that ? I can't seem to find anything relevant on the internet.
Thanks | Django PostgreSQL : migrating database to a different directory | 0 | 1 | 0 | 62 |
34,906,243 | 2016-01-20T17:10:00.000 | 1 | 0 | 1 | 0 | python,python-2.7,python-3.x,pip | 34,907,113 | 3 | true | 0 | 0 | First make a new environment variable:
Go to your system properties
Under Advanced tab click Environment Variables...
Under System variables section click New...
Variable name: (whatever you can remember for example p27s)
Variable value: your python 2.7 scripts folder ("C:\Python27\Scripts\")
From now on whenever you want to install a package for python 2.7 you can do it this way: %your_variable_name%pip install package_name
For example: C:>%p27s%pip install Pyro4
This way you can install any package for python 2.7 and use default pip for python 3.4 | 2 | 0 | 0 | I am running Windows x64 bit.
I downloaded the Pyro4 package via pip install Pyro4. It downloaded the packages successfully and they are all present in my "C:\Python34\Scripts" folder as I've kept Python3.4 as default.
Now when I went to that "C:\Python27\Scripts" folder, the Pyro4 package is not to be found. This is as expected, but I would like to work on both Python 2.7 and 3.4, as Pyro4 is compatible with both.
How do I change my pip command to download the package to Python 2.7's installation scripts directory? | How to configure pip to install packages for python 2.x and python 3.x separately | 1.2 | 0 | 0 | 1,369 |
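An alternative worth knowing that neither answer mentions: the py launcher installed with Windows Python 3.3+ can address either interpreter's pip directly, with no environment variables:

```
:: installs into C:\Python27
py -2.7 -m pip install Pyro4
:: installs into C:\Python34
py -3.4 -m pip install Pyro4
```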
34,906,243 | 2016-01-20T17:10:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,python-3.x,pip | 34,906,420 | 3 | false | 0 | 0 | You will need to go to your environment variables in the control panel and change the path from C:\Python34\Scripts to C:\Python27\Scripts. After that change, when you type 'python' in the command prompt it will be using Python 2.7. Next, install pip like you initially did. | 2 | 0 | 0 | I am running Windows x64 bit.
I downloaded the Pyro4 package via pip install Pyro4. It downloaded the packages successfully and they are all present in my "C:\Python34\Scripts" folder as I've kept Python3.4 as default.
Now when I went to that "C:\Python27\Scripts" folder, the Pyro4 package is not to be found. This is as expected, but I would like to work on both Python 2.7 and 3.4, as Pyro4 is compatible with both.
How do I change my pip command to download the package to Python 2.7's installation scripts directory? | How to configure pip to install packages for python 2.x and python 3.x separately | 0 | 0 | 0 | 1,369 |
34,906,338 | 2016-01-20T17:14:00.000 | 5 | 0 | 1 | 0 | python,installation,anaconda,conda | 34,906,375 | 3 | false | 0 | 0 | The Python installation on a Mac is not affected at all when installing Anaconda. However, Anaconda manipulates the $PATH environment variable. No need to uninstall Python. | 1 | 26 | 0 | I found an old windows xp machine running Python 2.5.2. I would like to use Anaconda instead. Can I just install Anaconda on it and do I have to uninstall Python 2.5.2? Similarly, I have a Mac system with Python 2.7.9 working with some NLT libraries and I'd like to get Anaconda running on it too. What's the best course of action to get Anaconda over an existing system that already has python? | Installing anaconda over existing python system? | 0.321513 | 0 | 0 | 37,278 |
34,907,014 | 2016-01-20T17:47:00.000 | 1 | 0 | 0 | 0 | python,django,model | 51,751,466 | 3 | false | 1 | 0 | I would suggest storing JSON as a string in the database; that way it can be as extensible as you want, and the field list can grow very long.
Edit:
If you are using other DB backends you can use django-jsonfield. If you are using Postgres, then it has native JSONField support for enhanced querying, etc.
Edit 2:
Using django mongodb connector can also help. | 1 | 3 | 0 | I am just starting with Django and want to create a model for an application.
I find Djangos feature to
- automatically define validations and html widget types for forms according to the field type defined in the model and
- define a choice set for the field right in the model
very useful, and I want to make the best use of it. Also, I want to make the best use of the admin interface.
However, what if I want to allow the user of the application to add fields to the model? For example, consider a simple address book. I want the user to be able to define additional attributes for all of his contacts in the admin settings, i.e. add a fax number field, so that a fax number can be added to all contacts.
From a relational DB perspective, I would have a table with attributes (PK: atr_ID, atr_name, atr_type) and an N:N relation between attributes and contacts with foreign keys from attributes and contacts - i.e. it would result in 3 tables in the DB. Right?
But that way I cannot define the field types directly in the Django model. Now what is best practice here? How can I make use of Django's functionality AND allow the user to add additional/custom fields via the admin interface?
Thank you! :)
Best
Teconomix | Django: allow user to add fields to model | 0.066568 | 0 | 0 | 3,132 |
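A minimal sketch of the JSON-as-text approach from the answer above (model and field names are illustrative assumptions):
import json
from django.db import models

class Contact(models.Model):
    name = models.CharField(max_length=100)
    # user-defined extra attributes, serialized as JSON text
    extra_fields = models.TextField(default="{}")

    def get_extra_fields(self):
        return json.loads(self.extra_fields)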
34,910,086 | 2016-01-20T20:41:00.000 | 0 | 0 | 0 | 0 | python,pygame,resizable,surface,blit | 58,898,639 | 3 | false | 0 | 1 | Recently, with pygame 2.0, you can use the SCALED flag.
For example: Say I have a window of 200 x 200 and I blit a button at window_width/2 and window_height/2. The button would be in the center of the window at 100 x 100. Now if I resize the window to 300 x 300 the button stays at 100 x 100 instead of 150 x 150.
I tried messing around with pygame.Surface.get_width etc., but had no luck.
Basically I'm trying to resize a program's window and have all blit images stay proportionate. | Pygame. How do I resize a surface and keep all objects within proportionate to the new window size? | 0 | 0 | 0 | 12,690 |
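A hedged sketch of the SCALED flag mentioned in the last answer (requires pygame 2.0+; the window size is illustrative):
import pygame

pygame.init()
# the logical surface stays 200x200; pygame scales all blits to fit the resized window
screen = pygame.display.set_mode((200, 200), pygame.RESIZABLE | pygame.SCALED)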
34,911,264 | 2016-01-20T21:48:00.000 | 0 | 0 | 1 | 0 | python,nlp,nltk | 46,003,443 | 1 | false | 0 | 0 | Summing it up, you have the following options:
Correcting the tag in the post-processing - a bit ugly but quick and easy.
Employ an external Named Entity Recognizer (Stanford NER, as @Bob Dylan has thoughtfully suggested) - this one is more involved, particularly because Stanford NER is in Java and is not particularly fast.
Retrain a POS Tagger on domain-specific data (do you have a large enough annotated dataset to use it for that?)
Use WSD (Word Sense Disambiguation) approach - for a start you need to have a good domain dictionary to use. | 1 | 5 | 1 | I'm doing some NLP where I'm finding out when patients were diagnosed with multiple sclerosis.
I'd like to use nltk to tell me that the noun of a sentence was multiple sclerosis. Problem is, doctors frequently refer to multiple sclerosis as MS which nltk picks up as a proper noun.
For example, this sentence, "His MS was diagnosed in 1999." Is tagged as: [('His', 'PRP$'), ('MS', 'NNP'), ('was', 'VBD'), ('diagnosed', 'VBN'), ('in', 'IN'), ('1999', 'CD'), ('.', '.')]
MS should be a noun here. Any suggestions? | Is there a way to tell NLTK that a certain word isn't a proper noun but a noun? | 0 | 0 | 0 | 436 |
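A minimal sketch of option 1 from the answer (post-processing the tag); the set of domain abbreviations is an assumption:
import nltk

tokens = nltk.word_tokenize("His MS was diagnosed in 1999.")
tagged = nltk.pos_tag(tokens)
# force known domain abbreviations to be tagged as common nouns
domain_nouns = {"MS"}
tagged = [(w, "NN") if w in domain_nouns else (w, t) for (w, t) in tagged]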
34,911,638 | 2016-01-20T22:11:00.000 | 0 | 0 | 0 | 1 | python,django,amazon-web-services,celery | 34,942,922 | 1 | true | 1 | 0 | Noob mistake:
Turns out that I had another environment with similar code consuming from the same rabbitMQ server. Seems this other environment was picking up the retries. | 1 | 0 | 0 | Am trying to implement retries in one of my celery tasks which works fine on my local development environment but doesn't execute retries when deployed to AWS beanstalk. | Celery Retry not working on AWS Beanstalk running Docker ver.1.6.2(Multi container) | 1.2 | 0 | 0 | 83 |
34,912,784 | 2016-01-20T23:40:00.000 | 1 | 0 | 1 | 1 | python,installation,pip,upgrade,six | 34,912,892 | 2 | false | 0 | 0 | I, too, have had some issues with installing modules, and I sometimes find that it helps just to start over. In this case, it looks like you already have some of the 'six' module, but isn't properly set up, so if sudo pip uninstall six yields the same thing, go into your directory and manually delete anything related to six, and then try installing it. You may have to do some digging where you have your modules are stored (or have been stored, as pip can find them in different locations). | 1 | 5 | 0 | When I run sudo pip install --upgrade six I run into the issue below:
2016-01-20 18:29:48|optim $ sudo pip install --upgrade six
Collecting six
Downloading six-1.10.0-py2.py3-none-any.whl
Installing collected packages: six
Found existing installation: six 1.4.1
Detected a distutils installed project ('six') which we cannot uninstall. The metadata provided by distutils does not contain a list of files which have been installed, so pip does not know which files to uninstall.
I have Python 2.7, and I'm on Mac OS X 10.11.1.
How can I make this upgrade successful?
(There are other kind of related posts, but they do not actually have a solution to this same error.)
EDIT:
I am told I can remove six manually by removing things from site-packages. These are the files in site-packages that begin with six:
six-1.10.0.dist-info, six-1.9.0.dist-info, six.py, six.py.
Are they all correct/safe to remove?
EDIT2:
I decided to remove those from site-packages, but it turns out the existing six that cannot be uninstalled is actually in
/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python.
There I see the files:
six-1.4.1-py2.7.egg-info, six.py, six.pyc
but doing rm on them (with sudo, even) gives Operation not permitted.
So now the question is, how can I remove those files, given where they are? | Python - Cannot upgrade six, issue uninstalling previous version | 0.099668 | 0 | 0 | 6,127 |
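For reference, the workaround commonly suggested for the "distutils installed project" error is to install on top of the old files instead of uninstalling them (use with care on a system Python):
sudo pip install --ignore-installed six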
34,914,142 | 2016-01-21T02:03:00.000 | 0 | 0 | 0 | 0 | python,tkinter | 34,914,501 | 1 | true | 0 | 1 | I do not see the associated code that actually displays the current position of the snake on the screen and removes it after movement, but this is where you can change the size: make the length of the snake a variable, and have it drawn and removed iteratively. When food is eaten, simply increase the snake-length variable and pause the erasing of the tail as the snake proceeds along its vector until the desired growth has occurred, at which point removal can resume at the new length. Please clarify the part of the code that actually renders the snake's position. | 1 | 1 | 0 | I'm currently creating a Snake game in Python using the TKinter library.
So right now, I've implemented the movements, the food system, and the score system. I still need some help on how I can make the snake grow when it eats the food. | Need help for a Python Snake game | 1.2 | 0 | 0 | 1,681 |
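A minimal growth sketch in the spirit of the answer (names and geometry are illustrative assumptions; rendering is omitted):
# snake as a list of (x, y) segments; grow counts pending segments from eaten food
snake = [(100, 100), (90, 100), (80, 100)]
grow = 0

def step(dx, dy):
    global grow
    snake.insert(0, (snake[0][0] + dx, snake[0][1] + dy))  # add the new head
    if grow > 0:
        grow -= 1        # skip erasing the tail: the snake gets longer
    else:
        snake.pop()      # normal move: erase the old tail segment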
34,915,058 | 2016-01-21T03:48:00.000 | 0 | 0 | 0 | 0 | python,google-chrome,selenium | 34,937,195 | 1 | false | 1 | 0 | I figured it out. I was being dumb. I saved off the HTML as a file and opened that file with Chrome, and it displayed the normal page. I just hadn't realized it was the normal page when looking at it directly. Thanks all 15 people for your time. | 1 | 0 | 0 | Question: yikyak.com returns some sort of "browser not supported" landing page when I try to view source code in chrome (even for the page I'm logged in on) or when I write it out to the Python terminal. Why is this and what can I do to get around it?
Edit for clarification: I'm using the chrome webdriver. I can navigate around the yik yak website by clicking on it just fine. But whenever I try to see what HTML is on the page, I get an HTML page for a "browser not supported" page.
Background: I'm trying to access yikyak.com with selenium for python to download yaks and do fun things with them. I know fairly little about web programming.
Thanks!
Secondary, less important question: If you're already here, are there particularly great free resources for a super-quick intro to the certification knowledge I need to store logins and stuff like that to use my logged in account? That would be awesome. | Trying to view html on yikyak.com, get "browser out of date" page | 0 | 0 | 1 | 62 |
34,919,957 | 2016-01-21T09:35:00.000 | 0 | 0 | 0 | 0 | javascript,python,azure,azure-api-management | 35,052,618 | 3 | false | 1 | 0 | It seems that we cannot hide the subscription ID in the JS code, because the JS code has to send the HTTP request with this key, and anyone can capture the subscription ID with Fiddler.
An alternative approach is to send this HTTP request from the server side: call a server-side method via Ajax, have the server call the Azure Management API, and keep the subscription key on the server. Using this method, others cannot see the subscription ID in the JS code.
Now anyone who has access to this key (via developer tools or view source) can access the APIs as well.
Is there a way to hide the subscription key in ajax calls? | Secure Azure API rest calls in javascript | 0 | 0 | 0 | 421 |
34,919,957 | 2016-01-21T09:35:00.000 | 1 | 0 | 0 | 0 | javascript,python,azure,azure-api-management | 35,021,640 | 3 | false | 1 | 0 | You can use JSON Web Token (JWT) in the request, which has a signature and expiration time. | 2 | 1 | 0 | we are using Azure API management which maps to python flask api. We are making the javascript ajax calls (Azure APIs). We are now placing the subscription key directly in the query parameter of the ajax calls.
Now anyone who has access to this key (via developer tools or view source) can access the APIs as well.
Is there a way to hide the subscription key in ajax calls? | Secure Azure API rest calls in javascript | 0.066568 | 0 | 0 | 421 |
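A hedged sketch of the JWT idea using the PyJWT package (the secret and claims are illustrative assumptions):
import time
import jwt  # PyJWT

token = jwt.encode(
    {"sub": "client-id", "exp": int(time.time()) + 300},  # expires in 5 minutes
    "server-side-secret",
    algorithm="HS256",
)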
34,921,411 | 2016-01-21T10:39:00.000 | 5 | 0 | 0 | 0 | python-2.7,web-applications,tkinter | 34,924,538 | 2 | true | 0 | 1 | You will have to rewrite your app. There is simply no way to convert a tkinter application to run on the web. You could potentially use pyjs to convert some of the business logic, but the entire GUI will have to be rewritten. | 1 | 6 | 0 | I have wrote python(2.7) GUI desktop application using TKinter library and it is working fine. Now i want to convert it into web application. I have looked into pyjaco and pyjamas but not getting it done.
How can I convert it into a web app?
Thanks in advance.. | How to Convert python Tkinter desktop App to Web App | 1.2 | 0 | 0 | 9,730 |
34,921,554 | 2016-01-21T10:45:00.000 | 0 | 1 | 1 | 0 | python-2.7 | 35,057,956 | 1 | false | 0 | 0 | Is the dir(ClassInQuestion) function the one you are looking for? You should get all Methods and Properties. | 1 | 1 | 0 | I have a few modules that i want to import dynamically.
These modules contain classes that in turn have their own methods.
Is there a way to list the classes' methods and, moreover, get access to their internal variables?
Thanks | How to dynamically list all methods from a class whithin a module | 0 | 0 | 0 | 29 |
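A minimal sketch combining dynamic import with introspection (the module name is an assumption):
import importlib
import inspect

mod = importlib.import_module("some_module")
for name, cls in inspect.getmembers(mod, inspect.isclass):
    print(name, dir(cls))   # dir() lists the methods and attributes, as the answer suggests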
34,925,511 | 2016-01-21T13:45:00.000 | 2 | 0 | 1 | 0 | java,python,git,github,packaging | 34,925,646 | 1 | true | 1 | 0 | Since you'll generally be compiling and distributing them separately, I'd suggest separate repos. They are, in that case, separate projects.
One artifact per project keeps things nice and simple: one build command per output. Having multiple projects in one repo means a complex directory structure, lots of build tool customisation (in the case of, say, Maven) and possibly complex build commands.
It does mean, however, that any communication changes will need to be made in two projects; but as client and server are in different languages, you'd need to do that anyway.
My question is are there guidelines (e.g. by github, subversion, etc.) stating that these two should or should not co-exist in the same git repo? | Should a Java program and python program that are related co-exist in same git repo? | 1.2 | 0 | 0 | 666 |
34,925,917 | 2016-01-21T14:04:00.000 | 1 | 0 | 0 | 0 | python,django,asynchronous,global-variables | 34,926,130 | 2 | true | 1 | 0 | You can store your token in django cache, it will be faster from database or disk storage in most of the cases.
Another approach is to use redis.
You can also calculate your token:
save some random token in settings of both servers
calculate token based on current timestamp rounded to 10 seconds, for example using:
import hashlib, time
rounded_timestamp = int(time.time()) // 10 * 10   # current time rounded to 10 seconds
token = hashlib.sha1(secret_token.encode())        # hash functions need bytes
token.update(str(rounded_timestamp).encode())
token = token.hexdigest()
if token generated on remote server when POSTing request match token generated on local server, when getting response, request is valid and can be processed. | 2 | 0 | 0 | I'm using django to develop a website. On the server side, I need to transfer some data that must be processed on the second server (on a different machine). I then need a way to retrieve the processed data. I figured that the simplest would be to send back to the Django server a POST request, that would then be handled on a view dedicated for that job.
But I would like to add some minimum security to this process: When I transfer the data to the other machine, I want to join a randomly generated token to it. When I get the processed data back, I expect to also get back the same token, otherwise the request is ignored.
My problem is the following: How do I store the generated token on the Django server?
I could use a global variable, but I had the impression browsing here and there on the web, that global variables should not be used for safety reason (not that I understand why really).
I could store the token on disk/database, but it seems to be an unjustified waste of performance (even if in practice it would probably not change much).
Is there third solution, or a canonical way to do such a thing using Django? | Django, global variables and tokens | 1.2 | 0 | 0 | 382 |
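A minimal sketch of the cache option from the accepted answer (the key and timeout are illustrative):
from django.core.cache import cache

cache.set("processing_token", token, timeout=600)  # keep the token for 10 minutes
stored = cache.get("processing_token")             # compare against the POSTed token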
34,925,917 | 2016-01-21T14:04:00.000 | 1 | 0 | 0 | 0 | python,django,asynchronous,global-variables | 34,926,433 | 2 | false | 1 | 0 | The simple obvious solution would be to store the token in your database. Other possible solutions are Redis or something similar. Finally, you can have a look at distributed async tasks queues like Celery... | 2 | 0 | 0 | I'm using django to develop a website. On the server side, I need to transfer some data that must be processed on the second server (on a different machine). I then need a way to retrieve the processed data. I figured that the simplest would be to send back to the Django server a POST request, that would then be handled on a view dedicated for that job.
But I would like to add some minimum security to this process: When I transfer the data to the other machine, I want to join a randomly generated token to it. When I get the processed data back, I expect to also get back the same token, otherwise the request is ignored.
My problem is the following: How do I store the generated token on the Django server?
I could use a global variable, but I had the impression browsing here and there on the web, that global variables should not be used for safety reason (not that I understand why really).
I could store the token on disk/database, but it seems to be an unjustified waste of performance (even if in practice it would probably not change much).
Is there third solution, or a canonical way to do such a thing using Django? | Django, global variables and tokens | 0.099668 | 0 | 0 | 382 |
34,925,948 | 2016-01-21T14:06:00.000 | 2 | 0 | 0 | 0 | python,django,django-admin | 34,926,127 | 1 | false | 1 | 0 | This might happen due to the following combination of circumstances:
The view you are accessing requires authentication (check for the @login_required decorator on the view)
Therefore, when you access anonymously it is trying to redirect you to the login page (check your LOGIN_URL setting in settings.py - that's where @login_required sends anonymous users)
Then, when your browser tries to reach this login page, it is not found (404)
So remove @login_required if it isn't really necessary, or make sure your login redirect is well configured and pointing to a url that actually provides a login page. | 1 | 0 | 0 | I have a weird problem with my Django application and I have no idea where to look to fix it. Whenever I run my application, before I can login to the application and do stuff on it I have to login to the admin site or else it throws a "Page not found (404)" error when I try to login to the application as a normal (non-admin) user.
Any ideas on what may be causing this and how I can fix it? | Django: I have to login to admin site before the application | 0.379949 | 0 | 0 | 42 |
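For reference, a settings sketch that avoids the redirect-to-404 described above (the URLs are illustrative):
# settings.py
LOGIN_URL = "/accounts/login/"      # where @login_required sends anonymous users
LOGIN_REDIRECT_URL = "/"            # where users land after logging in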
34,930,986 | 2016-01-21T17:59:00.000 | 0 | 0 | 0 | 0 | python,matplotlib,size,seaborn | 34,931,076 | 1 | false | 0 | 0 | Seaborn sizing options vary by the plot type, which can be a bit confusing, so this is a useful universal approach.
First run this: import matplotlib.pyplot as plt
Then add the line plt.figure(figsize=(9, 9)) in the notebook cells for each of the plots. You can adjust the integer values as you see fit. | 1 | 1 | 1 | seaborn has a conveninent keyword named size=, that aims to make the plots a certain size. However, the plots significantly differ in size depending on the xy-ticks and the axis labels. What is the best way to generate plots with exactly the same dimensions regardless of ticks and axis labels? | seaborn plots the same size? | 0 | 0 | 0 | 219 |
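Putting the answer together, a minimal sketch (df is assumed to exist):
import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(9, 9))   # fixed canvas size regardless of ticks and labels
sns.boxplot(data=df)
plt.tight_layout()           # normalizes the padding taken up by labels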
34,933,833 | 2016-01-21T20:38:00.000 | 1 | 0 | 0 | 0 | python,apache-flink | 37,719,013 | 2 | false | 1 | 0 | For users who are unaware, Apache Flink added this feature a couple of months back.
Here is the short doc from Flink:
The default parallelism can be overwritten for an entire job by calling setParallelism(int parallelism) on the ExecutionEnvironment or by passing -p to the Flink command-line frontend. It can be overwritten for single transformations by calling setParallelism(int parallelism) on an operator.
But when I try the same in Python it does not work. This is my code: myDataSet.write_text(output_file, write_mode=WriteMode.OVERWRITE).set_degree_of_parallelism(1)
Is there a possibility to achieve this behaviour in Python? | Set degree of parallelism for a single operation in Python | 0.099668 | 0 | 0 | 370
34,934,223 | 2016-01-21T21:00:00.000 | 7 | 0 | 1 | 0 | markdown,ipython-notebook,jupyter,jupyter-notebook | 34,934,421 | 1 | true | 0 | 0 | Make the hashtags come after the hyphen, like so: - ## item1
This renders "item1" as a large heading inside the bullet item.
- Item1
- Item2
which appears as :
Item1
Item2
Now say I want the font of the list to be bigger, e.g. like the one provided by heading 2 (##), is there a way to do it? What I tried
# - Item1
# - Item2
fails and appears simply as header text and not as a bigger list :
- Item1
- Item2 | How to create a list with bigger font in markdown | 1.2 | 0 | 0 | 3,021 |
34,935,100 | 2016-01-21T21:57:00.000 | 0 | 0 | 1 | 0 | python,python-2.7,tkinter,tk | 34,935,285 | 2 | false | 0 | 1 | I use both. root.destroy() is used to destroy all windows (parent, child) within the root tkinter instance, but it doesn't end the python program.
sys.exit() raises SystemExit and ends the Python interpreter itself, not just the Tk windows.
In short, if your python code runs a tkinter GUI exclusively and it's functionality ends after the window is closed then use both root.destroy() followed by sys.exit() to effectively end the session. | 1 | 2 | 0 | Is there a problem with using sys.exit() to stop a Tkinter program?
I know normally people use root.destroy() why is that? | Can I use sys.exit() to stop a Tkinter program | 0 | 0 | 0 | 2,199 |
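A minimal sketch of both approaches (Python 2 spelling, per the tags; use "tkinter" on Python 3):
import sys
import Tkinter as tk

root = tk.Tk()
# destroy() tears down the Tk windows; mainloop() then returns
tk.Button(root, text="Quit", command=root.destroy).pack()
root.mainloop()
sys.exit()   # runs once mainloop returns, ending the whole program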
34,936,039 | 2016-01-21T23:02:00.000 | 0 | 0 | 0 | 0 | python-2.7,python-3.x,selenium-webdriver,robotframework | 34,997,071 | 2 | false | 1 | 0 | With python 2.7.9 you can only install robotframework 2.9
With python 3.X you can install robotframework 3.x+ but as Bryan Oakley said, Selenium2Library is not yet supported ;) | 1 | 0 | 0 | Env: Windows 10 Pro
I installed python 2.7.9 and using pip installed robotframework and robotframework-selenium2library and it all worked fine with no errors.
Then I was doing some research and found that unless there is a reason for me to use 2.x versions of Python, I should stick with 3.x versions. Since 3.4 support already exists for selenium2library (read somewhere), so I decided to switch to it.
I uninstalled python 2.7.9 and installed python 3.4 version. When I installed robotframerwork, I am getting the following:
C:\Users\username>pip install robotframework
Downloading/unpacking RobotFramework
Running setup.py (path:C:\Users\username\AppData\Local\Temp\pip_build_username\RobotFramework\setup.py) egg_info for package RobotFramework
no previously-included directories found matching 'src\robot\htmldata\testdata'
Installing collected packages: RobotFramework
Running setup.py install for RobotFramework
File "C:\Python34\Lib\site-packages\robot\running\timeouts\ironpython.py", line 57
raise self._error[0], self._error[1], self._error[2]
^
SyntaxError: invalid syntax
File "C:\Python34\Lib\site-packages\robot\running\timeouts\jython.py", line 56
raise self._error[0], self._error[1], self._error[2]
^
SyntaxError: invalid syntax
no previously-included directories found matching 'src\robot\htmldata\testdata'
replacing interpreter in robot.bat and rebot.bat.
Successfully installed RobotFramework
Cleaning up...
When I did pip list I do see robotframework is installed.
C:\Users\username>pip list
pip (1.5.4)
robotframework (3.0)
setuptools (2.1)
Should I be concerned and stick to Python 2.7.9? | python version for robot framework selenium2library (Windows10) | 0 | 0 | 1 | 2,741 |
34,937,925 | 2016-01-22T02:21:00.000 | 2 | 0 | 0 | 0 | python,tkinter,terminal,window | 34,939,110 | 1 | false | 0 | 1 | Save your file with ".pyw" extension instead of ".py". | 1 | 1 | 0 | When I run my Tkinter application, a command line/terminal opens. How can I run my application without it invoking a command line/terminal session? | How to run a Tkinter program without opening a terminal | 0.379949 | 0 | 0 | 1,518 |
34,939,193 | 2016-01-22T04:46:00.000 | 0 | 0 | 1 | 0 | python,ipython | 34,939,512 | 4 | false | 0 | 0 | By default ipython and jupyter set the top of the working tree to the current directory at the time of launching the notebook server. You can change this by setting c.NotebookApp.notebook_dir in the either the .ipython/profile_XXX/ipython_notebook_config.py or .jupyter/jupyter_notebook_config.py (*nix/Mac - not sure where these are located on Windows).
As long as the top of working tree includes the subdirectory with your scripts then you can just use the cmd/explorer to move the .ipynb to your scripts directory and then browse http://localhost:XXXX/tree to open the ipython notebook. | 1 | 0 | 0 | I am currently working on a machine learning project, and I would like to save my IPython files with the rest of my scripts. However, I have been unable to find any information on how to change the path that IPython files are saved to. "ipython locate" only gives me the location they are saved to, and does not appear to give me a way to change it, and the iPython editor does not have a file selector that I can use to change the save path. I am using Windows 10. Any help would be appreciated. | Save IPython file to a particular directory | 0 | 0 | 0 | 1,705 |
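For reference, the setting the answer describes, placed in ~/.jupyter/jupyter_notebook_config.py (the directory path is an illustrative assumption):
c.NotebookApp.notebook_dir = r"C:\Users\me\ml-scripts"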
34,939,388 | 2016-01-22T05:06:00.000 | 1 | 1 | 0 | 0 | python,import,pickle,dill | 37,116,604 | 3 | true | 0 | 0 | The import latency is most likely due to loading the dependent shared objects of the GEOS-library.
Optimising this could maybe be done, but it would be very hard. One way would be to build a statically compiled custom Python interpreter with all DLLs and extension modules built in. But maintaining that would be a major PITA (trust me - I do it for work).
Another option is to turn your application into a service, thus only incurring the runtime-cost of starting the interpreter up once.
It depends on your actual problem if this is suitable. | 1 | 1 | 1 | Can pickle/dill/cpickle be used to pickle an imported module to improve import speed? The Shapely module for example takes 5 seconds on my system to find and load all of the required dependencies, which I'd really like to avoid.
Can I pickle my imports once, then reuse that pickle instead of having to do slow imports every time? | Can Python's pickle/cpickle/dill speed up imports? | 1.2 | 0 | 0 | 627 |
34,945,959 | 2016-01-22T11:43:00.000 | 0 | 0 | 1 | 0 | python,c++,python-3.x,boost | 44,780,469 | 1 | false | 0 | 0 | I had the same problem, these are options that worked well for me:
Go to boost/python/detail/config.hpp and change BOOST_LIB_NAME to boost_python3 instead of boost_python.
Turn auto-linkage off by defining BOOST_ALL_NO_LIB and then explicitly set boost_python3...lib as a linker dependency. | 1 | 1 | 0 | I used dependency walker and found out that VS did not link to boost_python3_... but to boost_python_.... I removed the non-3 version, but now the linker complains that it cannot find boost_python-vc140-mt-1_60.lib. What do I have to do to link with the Python 3 boost library? Or are the non-3 versions also used for python 3? | Boost Python, Visual Studio links to wrong boost dll | 0 | 0 | 0 | 152
34,946,330 | 2016-01-22T12:05:00.000 | 1 | 0 | 0 | 0 | python,sockets,network-programming | 34,946,501 | 2 | false | 0 | 0 | When a new connection is made to your server, your protocol will have to specify some way for the client to authenticate. Ultimately there is nothing that the network infrastructure can do to determine what sort of process initiated the connection, so you will have to specify some exchange that allows the server to be sure that it really is talking to a valid client process. | 2 | 1 | 0 | I'm writing a Socket Server in Python, and also a Socket Client to connect to the Server.
The Client interacts with the Server in a way that the Client sends information when an action is invoked, and the Server processes the information.
The problem I'm having, is that I am able to connect to my Server with Telnet, and probably other things that I haven't tried yet. I want to disable connection from these other Clients, and only allow connections from Python Clients. (Preferably my custom-made client, as it sends information to communicate)
Is there a way I could set up authentication on connection to differentiate Python Clients from others?
Currently there is no code, as this is a problem I want to be able to solve before getting my hands dirty. | Only allow connections from custom clients | 0.099668 | 0 | 1 | 38 |
34,946,330 | 2016-01-22T12:05:00.000 | 0 | 0 | 0 | 0 | python,sockets,network-programming | 34,952,884 | 2 | true | 0 | 0 | @holdenweb has already given a good answer with basic info.
If a terminal (or any other software) sends the bytes that your application expects as valid identification, your app will never know whether it is talking to an original client or to anything else.
A possible way to test for valid clients could be, that your server sends an encrypted and authenticated question (should be different at each test!), e.g. something like "what is 18:37:12 (current date and time) plus 2 (random) hours?"
Encryption/Authentication would be another issue then.
If you keep this algorithm secret, only your clients can answer it and validate themselves successfully. It can be hacked/reverse engineered, but it is safe against basic attackers. | 2 | 1 | 0 | I'm writing a Socket Server in Python, and also a Socket Client to connect to the Server.
The Client interacts with the Server in a way that the Client sends information when an action is invoked, and the Server processes the information.
The problem I'm having, is that I am able to connect to my Server with Telnet, and probably other things that I haven't tried yet. I want to disable connection from these other Clients, and only allow connections from Python Clients. (Preferably my custom-made client, as it sends information to communicate)
Is there a way I could set up authentication on connection to differentiate Python Clients from others?
Currently there is no code, as this is a problem I want to be able to solve before getting my hands dirty. | Only allow connections from custom clients | 1.2 | 0 | 1 | 38 |
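A minimal challenge-response sketch in the spirit of the second answer, using an HMAC in place of the "secret algorithm" (the shared secret is an assumption known only to the server and real clients):
import hashlib
import hmac
import os

SECRET = b"shared-client-secret"

def make_challenge():                 # server side: send with each new connection
    return os.urandom(16)

def answer_challenge(challenge):      # client side: prove knowledge of SECRET
    return hmac.new(SECRET, challenge, hashlib.sha256).hexdigest()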
34,949,364 | 2016-01-22T14:44:00.000 | 1 | 0 | 0 | 1 | python,tornado,upgrade | 34,960,704 | 1 | false | 0 | 0 | Easy way, do it with nginx.
Start a latest tornado server.
Redirect all new connections to the new tornado server (change the nginx configuration file and reload with nginx -s reload).
Tell the old tornado server shutdown itself if all connections are closed.
Hard way
If you want to change your server on the fly, maybe you could find a way by reading nginx's source code and figuring out how nginx -s reload works, but I think you would need to do a lot of work.
I have no idea how to do it.
Could you get me any clue? | Graceful reload of python tornado server | 0.197375 | 0 | 0 | 583 |
34,950,994 | 2016-01-22T16:07:00.000 | 0 | 0 | 1 | 0 | python,selenium,selenium-webdriver | 34,969,465 | 1 | false | 1 | 0 | No, it's not possible.
The only way is to use several browsers.
As an example:
run phantomJS (a headless browser)
do the needed actions
run firefox, use it to perform the login
copy the cookies after login
paste the cookies into phantomJS
close firefox
I only want to display the window to the user when the title of the page is for exemple "XXX", so then he type something one the window and than the window close again, and the robot continue making what he should make on background.
Is it possible?
Thanks, | How to display the webdriver window only in one specific condition using selenium (python)? | 0 | 0 | 1 | 33 |
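A hedged sketch of the cookie hand-off between a visible and a headless driver described above (URLs are illustrative; PhantomJS was current at the time):
from selenium import webdriver

visible = webdriver.Firefox()
visible.get("https://example.com/login")
# ... the user logs in by hand in the visible window ...
cookies = visible.get_cookies()
visible.quit()

headless = webdriver.PhantomJS()
headless.get("https://example.com")
for c in cookies:
    headless.add_cookie(c)   # continue the session without a visible window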
34,956,823 | 2016-01-22T22:14:00.000 | 1 | 1 | 1 | 0 | python,linux,raspberry-pi,modbus,plc | 34,964,777 | 5 | false | 0 | 0 | I don't know if you can do this in the specific configuration you are discussing; in fact you don't say which PLC you are using, so I doubt any respondent can tell you.
But under the assumption you can technically connect the pieces, you will probably discover the performance is not adequate to really carry out reliable mechanical control.
Normally PLCs run through their program hundreds of times per second, each time sampling inputs and computing new outputs. This is fast enough so mechanics effectively see "smooth" control. (5 Hz would likely cause mechanical chatter and jerky movements of hardware).
If you "involve" Python to compute that, somehow you have pay bus communication times to/from the PLC to the Python, the Python wakeup time, Python execution time, and Python message packing/unpacking time. I doubt you can achieve all of this at several hundred times per second reliably (what happens when the OS interrupts Python to write 10M of data onto the disk for some other background process)?
If you insist in involving Python somehow, it should act only in an advisory role. That is, the PLC does all the work (e.g., you need that "ladder logic/..." to be written) but the Python code sends occasional messages to the PLC to change its overall behavior, e.g, control mode, feed rates, etc. | 3 | 2 | 0 | Trying to figure out the best way of controlling industrial PLC's with Raspberry Pi/linux server - specifically using python and pymodbus (modbusTCP) over ethernet...
Once the PLC internal registry is mapped properly to modbus, can software written in python take the place of ladder logic programming in the PLC and control it completely?
Or will ladder logic/ native PLC code still need to be written? | can python software take place of logic ladder program in PLC through modbus? | 0.039979 | 0 | 0 | 13,702 |
34,956,823 | 2016-01-22T22:14:00.000 | 1 | 1 | 1 | 0 | python,linux,raspberry-pi,modbus,plc | 39,781,980 | 5 | false | 0 | 0 | Well let's assume that you have really efficient code. And you created some dictionaries, did some lambda. You can cycle through a logic set of 2000 IO points in 5ms.
I do this in Lua everyday. PLC hardware is FPGA based. But never scan faster than 10ms. Using data slows them down. And usually end up at a 25ms scan.
Python and Lua programmed correctly can scan at 1-2ms over 2600 lines of code.
You need a C wrapper to run the scan. Use TCP modbus devices. And never more than 32 IO per IP address. It's actually very easy.
Those who do not know PLC's or only know PLC's will steer you in the wrong direction. Do your homework. Learn Lua. And then prove them wrong.
Hope that helps. | 3 | 2 | 0 | Trying to figure out the best way of controlling industrial PLC's with Raspberry Pi/linux server - specifically using python and pymodbus (modbusTCP) over ethernet...
Once the PLC internal registry is mapped properly to modbus, can software written in python take the place of ladder logic programming in the PLC and control it completely?
Or will ladder logic/ native PLC code still need to be written? | can python software take place of logic ladder program in PLC through modbus? | 0.039979 | 0 | 0 | 13,702 |
34,956,823 | 2016-01-22T22:14:00.000 | 6 | 1 | 1 | 0 | python,linux,raspberry-pi,modbus,plc | 34,964,033 | 5 | true | 0 | 0 | You should not replace PLC logic with your linux server. You need a real-time OS for that. Even running a real-time OS and controlling the PLC with it is a bad idea. PLCs have all kinds of checks built in for controlling inputs/outputs, the program cycle, internal diagnostics and so on. They are a tool meant specifically for that task. IMHO ladder logic is easier to learn than a real-time OS.
You should use your server as HMI - human machine interface, that sends control data to PLC and displays it back to the user.
If your project is for learning experience or personal project then you should of course do whatever you feel like. | 3 | 2 | 0 | Trying to figure out the best way of controlling industrial PLC's with Raspberry Pi/linux server - specifically using python and pymodbus (modbusTCP) over ethernet...
Once the PLC internal registry is mapped properly to modbus, can software written in python take the place of ladder logic programming in the PLC and control it completely?
Or will ladder logic/ native PLC code still need to be written? | can python software take place of logic ladder program in PLC through modbus? | 1.2 | 0 | 0 | 13,702 |
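For reference, a minimal Modbus/TCP read with pymodbus, suitable for the advisory role the accepted answer recommends (IP, register address, count, and unit ID are illustrative assumptions):
from pymodbus.client.sync import ModbusTcpClient

client = ModbusTcpClient("192.168.0.10", port=502)
result = client.read_holding_registers(address=0, count=4, unit=1)
print(result.registers)   # raw register values read from the PLC
client.close()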
34,959,031 | 2016-01-23T02:42:00.000 | 1 | 0 | 0 | 0 | python,django,rest,oauth | 35,040,802 | 2 | true | 1 | 0 | Solution found!
In fact, the reason why /o/applications was accessible is because I had a super-admin session open.
Everything is great, then :) | 1 | 5 | 0 | I am currently writing a REST API using Django rest framework, and oauth2 for authentication (using django-oauth-toolkit). I'm very happy with both of them, making exactly what I want.
However, I have one concern. I'm passing my app to production, and realized there might be a problem with the /o/applications/ view, which is accessible to everyone!
I found myself surprised to not see anything in the doc about it, neither when I try to google it. Did I miss something?
Some ideas where to either making a custom view, requiring authentication as super-user (but this would be weird, as it would mix different kind of authentication, wouldn't it?), or add a dummy route to 401 or 403 view to /o/applications/.
But these sound quite hacky to me... isn't it any official "best" solution to do it? I'd be very surprised if I'm the first one running into this issue, I must have missed something...
Thanks by advance! | Disable or restrict /o/applications (django rest framework, oauth2) | 1.2 | 0 | 0 | 873 |
34,961,061 | 2016-01-23T07:57:00.000 | 7 | 0 | 1 | 0 | python,python-3.x,pip | 34,961,095 | 2 | false | 0 | 0 | Much like the python/python3 executables, pip also has two variants. You can use pip3 to install Python 3 packages. | 1 | 7 | 0 | OS: Mac OS X 10.11.3
I have Python 2.7.10 and Python 3.4.3.
When I pip install some_package pip installs package for Python 2.7.10, but I want to install package for Python 3.4.3.
How I can set default Python or install package for 3.4.3? | How to set default python version for pip? | 1 | 0 | 0 | 6,188 |
34,966,288 | 2016-01-23T17:04:00.000 | 0 | 0 | 1 | 0 | python-3.x,for-loop,multidimensional-array | 34,966,384 | 1 | false | 0 | 0 | 1) it knows it's a new line because your text file (usually) has new line character(s) in it at the end of each line (not visible unless you set your editor to show all hidden character(s))
2) there are a few different ways to do the same thing
3) the split() returns a list, so each line will be a list of words and your 'document' will be a list of lists. | 1 | 0 | 0 | I recently started working with Python, and i'm trying things out. I know some basic instructions in Python, and how they work. But most of the time i don't know the exceptions and small details of those instructions.
I'm trying to make an array, and to put a textfile in that array. i use this code:
document = []
with open('inputfile.txt') as f:
    for line in f:
        document.append(line.strip().split(' '))
print(document)
What this does is place the input file in variable "f", and then, for each "line" in "f", it appends that line as a separate array. I know that ".strip()" gets rid of the "\n", and ".split(' ')" tears sentences apart into separate words. My questions are:
1.) Why does Python know that the "line" variable indicates a new line? In other words: why does it "do something" for each line, and not e.g. for each word? It works with any word, so it's not that "line" is some kind of special syntax.
2.) Can I change this to something else?
3.) Why is each line added as a new array (thus creating a 2D array)? Why isn't all of the processed text crammed into one array? (I know it's better this way, but that's not the point. The point is: why?) | functioning of the for loop when processing a text file (Python) | 0 | 0 | 0 | 56
34,968,223 | 2016-01-23T20:03:00.000 | 3 | 0 | 0 | 0 | python,numpy,pandas | 57,711,857 | 2 | false | 0 | 0 | Numpy's count_nonzero function is efficient for this.
np.count_nonzero(df["c"]) | 1 | 6 | 1 | I have a df that looks something like:
   a  b  c  d  e
   0  1  2  3  5
   1  4  0  5  2
   5  8  9  6  0
   4  5  0  0  0
I would like to output the number of numbers in column c that are not zero. | counting the number of non-zero numbers in a column of a df in pandas/python | 0.291313 | 0 | 0 | 21,246 |
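Putting the answer together (the frame is illustrative):
import numpy as np
import pandas as pd

df = pd.DataFrame({"c": [3, 5, 9, 0]})
print(np.count_nonzero(df["c"]))   # 3
print((df["c"] != 0).sum())        # pandas-native equivalent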
34,970,818 | 2016-01-24T00:34:00.000 | 0 | 0 | 0 | 0 | python,neural-network,time-series,keras,recurrent-neural-network | 45,060,104 | 1 | true | 0 | 0 | I think this has more to do with your particular dataset than Bi-LSTMs in general.
You're confusing splitting a dataset for training/testing with splitting a sequence within a particular sample. It seems like you have many different subjects, each of which constitutes a different sample. For a standard training/testing split, you would split your dataset between subjects, as you suggested in the last paragraph.
For any sort of RNN application, you do NOT split along your temporal sequence; you input your entire sequence as a single sample to your Bi-LSTM. So the question really becomes whether such a model is well-suited to your problem, which has multiple labels at specific points in the sequence. You can use a sequence-to-sequence variant of the LSTM model to predict which label each time point in the sequence belongs to, but again you would NOT be splitting the sequence into multiple parts. | 1 | 1 | 1 | When it comes to normal ANNs, or any of the standard machine learning techniques, I understand what the training, testing, and validation sets should be (both conceptually, and the rule-of-thumb ratios). However, for a bidirectional LSTM (BLSTM) net, how to split the data is confusing me.
I am trying to improve prediction on individual subject data that consists of monitored health values. In the simplest case, for each subject, there is one long time series of values (>20k values), and contiguous parts of that time series are labeled from a set of categories, depending on the current health of the subject. For a BLSTM, the net is trained on all of the data going forwards and backwards simultaneously. The problem then is, how does one split a time series for one subject?
I can't just take the last 2,000 values (for example), because they might all fall into a single category.
And I can't chop the time series up randomly, because then both the learning and testing phases would be made of disjointed chunks.
Finally, each of the subjects (as far as I can tell) has slightly different (but similar) characteristics. So, maybe, since I have thousands of subjects, do I train on some, test on some, and validate on others? However, since there are inter-subject differences, how would I set up the tests if I was only considering one subject to start? | Training, testing, and validation sets for bidirectional LSTM (BLSTM) | 1.2 | 0 | 0 | 1,028 |
34,971,379 | 2016-01-24T01:58:00.000 | 7 | 0 | 1 | 1 | python,anaconda | 37,636,425 | 3 | false | 0 | 0 | use the "activate" batch file
activate c:\anaconda3
activate c:\anaconda2 | 2 | 4 | 0 | Is there an easy way to switch between using Anaconda (Python 2) and Anaconda3 (Python 3) from the command line? I am on Windows 10. | Switching between Anaconda and Anaconda3 | 1 | 0 | 0 | 14,234 |
34,971,379 | 2016-01-24T01:58:00.000 | 0 | 0 | 1 | 1 | python,anaconda | 46,770,419 | 3 | false | 0 | 0 | If you are using Linux/Mac OS, edit your ~/.bashrc. For example, if you do not want to use anaconda3, comment out the line which adds path_to_anaconda3 to your system PATH. | 2 | 4 | 0 | Is there an easy way to switch between using Anaconda (Python 2) and Anaconda3 (Python 3) from the command line? I am on Windows 10. | Switching between Anaconda and Anaconda3 | 0 | 0 | 0 | 14,234
34,975,419 | 2016-01-24T11:53:00.000 | 1 | 0 | 0 | 0 | python-2.7,cluster-analysis,igraph,k-means | 34,978,549 | 1 | false | 0 | 0 | Your approach doesn't work because the fast greedy community detection expects similarities as weights, not distances.
(Actually, this is probably only one of the reasons. The other is that the community detection algorithms in igraph were designed for sparse graphs. If you have calculated all the distances between all pairs of points, your graph is dense, and these algorithms will not be suitable). | 1 | 1 | 1 | I have a series of points (long, lat)
1) Found the haversine distance between all the points
2) Saved this to a csv file (source, destination, weight)
3) Read the csv file and generated weighted a graph (where weight is the haversine distance)
4) Used igraphs community detection algorithm - fastgreedy
I was expecting clusters with low distance to be highly each other, I was expecting something similar to kmeans (without the distinct partitions in space) but there was no order in my results.
Question:
Why does the community detection algorithm not give me results similar kmeans clustering? If im using the same points/ distances between points then why is there so much overlap between the communities? I'm just looking for some intuition as to why this isnt work as I expected.
Thanks | igraph community detection result has too much overlap | 0.197375 | 0 | 0 | 347 |
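A hedged sketch of converting distances to similarity weights before running fast greedy, per the answer above (the 1/(1+d) conversion is one common choice, not the only one; n and distances are assumed to exist):
import igraph as ig

g = ig.Graph.Full(n)                                    # dense graph over n points
g.es["weight"] = [1.0 / (1.0 + d) for d in distances]   # one similarity per edge
clusters = g.community_fastgreedy(weights="weight").as_clustering()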
34,976,025 | 2016-01-24T12:52:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,task-queue | 34,976,107 | 2 | false | 1 | 0 | Check the size of the payload (arguments) you are sending to the task queue.
If it's more than a few KB in size you need to store it in the datastore and send the key of the object holding the data to the task queue | 2 | 0 | 0 | Encounter an error "RequestTooLargeError: The request to API call datastore_v3.Put() was too large.".
After looking through the code, it happens at the place where the task queue is used.
So how can I split a large queue task into several smaller ones? | How to decrease to split data put in task queue, Google app engine with Python | 0 | 0 | 0 | 96 |
34,976,025 | 2016-01-24T12:52:00.000 | 0 | 0 | 0 | 1 | python,google-app-engine,task-queue | 34,977,778 | 2 | true | 1 | 0 | The maximum size of a task is 100KB. That's a lot of data. It's hard to give specific advice without looking at your code, but I would mention this:
If you pass a collection to be processed in a task in a loop, than the obvious solution is to split the entire collection into smaller chunks, e.g. instead of passing 1000 entities to one task, pass 100 entities to 10 tasks.
If you pass a collection to a task that cannot be split into chunks (e.g. you need to calculate totals, averages, etc.), then don't pass this collection, but query/retrieve it in the task itself. Every task is saved back to the datastore, so you don't win much by passing the collection to the task - it has to be retrieved from the datastore anyway.
If you pass a very large object to a task, pass only data that the task actually needs. For example, if your task sends an email message, you may want to pass Email, Name, and Message, instead of passing the entire User entity which may include a lot of other properties.
Again, 100KB is a lot of data. If you are not using a loop to process many entities in your task, the problem with the task queue may indicate that there is a bigger problem with your data model in general if you have to push around so much data every time. You may want to consider splitting huge entities into several smaller entities. | 2 | 0 | 0 | Encounter an error "RequestTooLargeError: The request to API call datastore_v3.Put() was too large.".
After looking through the code, it happens at the place where the task queue is used.
So how can I split a large queue task into several smaller ones? | How to decrease to split data put in task queue, Google app engine with Python | 1.2 | 0 | 0 | 96 |
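A minimal fan-out sketch of the splitting advice above (the worker URL, chunk size, and key encoding are assumptions):
from google.appengine.api import taskqueue

def chunks(seq, size=100):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

for batch in chunks(entity_keys):   # entity_keys: string keys, assumed to exist
    taskqueue.add(url="/worker", params={"keys": ",".join(batch)})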
34,976,058 | 2016-01-24T12:56:00.000 | 19 | 0 | 1 | 0 | python,pycharm | 55,835,599 | 7 | false | 0 | 0 | In some cases, this is because Pycharm scans and indexes the PYTHONPATH. I figured out that some shared script I was running got changed by some nincompoop (may his severed head soon decorate our moat) and the /homes directory got into the PYTHONPATH.
How to get it out:
Go to File->Settings->Project:[your project]->Project Interpreter
On the right hand side you'll see a cogwheel, click it, then select Show all...
In the next window, your environment will be selected. There are a few icons on the right hand side of this window, one of them is a directory tree. Click it.
You'll find a list of all interpreter paths. Remove the directory that is causing your problem, dance a little victory dance, and resume work. | 5 | 73 | 0 | I am using PyCharm Community Edition 5.0.1
It was working fine till yesterday. But it has been stuck at 'Scanning files to index' for a very long time now. Since yesterday.
I have tried re-installing it, and also tried invalidating cache.
I can make changes to programs and use it as a text editor but unable to run any file. | Pycharm: "scanning files to index" is taking forever | 1 | 0 | 0 | 47,293 |
34,976,058 | 2016-01-24T12:56:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 56,941,797 | 7 | false | 0 | 0 | Try to make sure you have no compressed files in your directory, as removing these might give a significant improvement in speed. It worked for me! | 5 | 73 | 0 | I am using PyCharm Community Edition 5.0.1
It was working fine till yesterday. But it has been stuck at 'Scanning files to index' for a very long time now. Since yesterday.
I have tried re-installing it, and also tried invalidating cache.
I can make changes to programs and use it as a text editor but unable to run any file. | Pycharm: "scanning files to index" is taking forever | 0 | 0 | 0 | 47,293 |
34,976,058 | 2016-01-24T12:56:00.000 | 9 | 0 | 1 | 0 | python,pycharm | 34,976,393 | 7 | false | 0 | 0 | Maybe there are some issues in project files? Try to remove .idea folder inside your project (but this will also purge all project settings). | 5 | 73 | 0 | I am using PyCharm Community Edition 5.0.1
It was working fine till yesterday. But it has been stuck at 'Scanning files to index' for a very long time now. Since yesterday.
I have tried re-installing it, and also tried invalidating cache.
I can make changes to programs and use it as a text editor but unable to run any file. | Pycharm: "scanning files to index" is taking forever | 1 | 0 | 0 | 47,293 |
34,976,058 | 2016-01-24T12:56:00.000 | 83 | 0 | 1 | 0 | python,pycharm | 34,976,320 | 7 | true | 0 | 0 | Exclude the folders you do not want to index. You can do this by right-clicking the folder you want to exclude, then choose Mark Directory As > Excluded and PyCharm will not index those files. | 5 | 73 | 0 | I am using PyCharm Community Edition 5.0.1
It was working fine till yesterday. But it has been stuck at 'Scanning files to index' for a very long time now. Since yesterday.
I have tried re-installing it, and also tried invalidating cache.
I can make changes to programs and use it as a text editor but unable to run any file. | Pycharm: "scanning files to index" is taking forever | 1.2 | 0 | 0 | 47,293 |
34,976,058 | 2016-01-24T12:56:00.000 | 0 | 0 | 1 | 0 | python,pycharm | 71,699,540 | 7 | false | 0 | 0 | In my case, I tried every solution mentioned above and everything I could find on the internet.
Lastly, I checked the environment variables and removed old entries related to Python.
Then it stopped indexing.
It was working fine till yesterday. But it has been stuck at 'Scanning files to index' for a very long time now. Since yesterday.
I have tried re-installing it, and also tried invalidating cache.
I can make changes to programs and use it as a text editor but unable to run any file. | Pycharm: "scanning files to index" is taking forever | 0 | 0 | 0 | 47,293 |
34,978,896 | 2016-01-24T17:17:00.000 | 0 | 0 | 0 | 0 | python,sqlite,sqlalchemy | 34,979,208 | 3 | false | 0 | 0 | If you haven't yet decided what kind of database to use, I advise you to pick MongoDB as the database server and the mongoengine module to persist data; it's what you need. mongoengine has a DictField in which you can store a Python dict directly, and it's very easy to learn. | 1 | 0 | 0 | I have a series of python objects, each associated with a different user, e.g., obj1.userID = 1, obj2.userID = 2, etc. Each object also has a transaction history expressed as a python dict, i.e., obj2.transaction_record = {"itemID": 1, "amount": 1, "date": "2011-01-04"} etc.
I need these objects to persist, and transaction records may grow over time. Therefore, I'm thinking of using an ORM like sqlalchemy to make this happen.
What kind of database schema would I need to specify to store these objects in a database?
I have two alternatives, but neither seems like the correct thing to do:
Have a different table for each user:
CREATE TABLE user_id ( itemID INT PRIMARY KEY, amount INT, date CHARACTER(10) );
Store the transaction history dict as a BLOB of json:
CREATE TABLE foo ( userID INT PRIMARY KEY, trasaction_history BLOB);
Is there a cleaner way to implement this? | What kind of database schema would I use to store users' transaction histories? | -0.066568 | 1 | 0 | 358 |
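A minimal mongoengine sketch of the DictField suggestion from the first answer (the model name is illustrative):
import mongoengine as me

class UserTransactions(me.Document):
    userID = me.IntField(primary_key=True)
    transaction_history = me.DictField()   # stores the Python dict directly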
34,979,145 | 2016-01-24T17:42:00.000 | 9 | 0 | 1 | 0 | python,django,pycharm | 34,993,725 | 3 | true | 1 | 0 | You can clean out old PyCharm interpreters that are no longer associated with a project via Settings -> Project Interpreter, click on the gear in the top right, then click "More". This gives you a listing where you can get rid of old virtualenvs that PyCharm thinks are still around. This will prevent the "(1)", "(2)" part.
You don't want to make the virtualenv into the content root. Your project's code is the content root.
As a suggestion:
Clear out all the registered virtual envs
Make a virtualenv, outside of PyCharm
Create a new project using PyCharm's Django template
You should then have a working example. | 3 | 3 | 0 | How do I remove all traces of legacy projects from PyCharm?
Background: I upgraded from PyCharm Community Edition to PyCharm Pro Edition today.
Reason was so I could work on Django projects, and in particular, a fledgling legacy project called 'deals'.
I deleted the legacy project folders.
I then opened the Pro Edition and went through the steps of creating a Django project called 'deals' with a python3.4 interpreter in a virtualenv.
It didn't work, I got an error message saying something about a missing file, and in the PyCharm project explorer, all I could see was
deals
.ideas
So I deleted it (ie. deleted the folders in both ~/.virtualenvs/deals and ~/Projects/deals).
I tried again, although this time I got an interpreter with a number suffix, ie. python3.4 (1).
I continued, and got the same empty file structure.
I deleted both folders again, went and cleaned out the intepreters in Settings > Project Interpreters .
I then tried again, getting 'new' interpreters,until I finally had python3.4 (5)
Plus, along the way I also invalidated the caches and restarted.
(ie. File > Invalidate Caches/Restart)
Then to prove if it works at all, I tried a brand new name 'junk'.
This time it worked fine, and I could see the Django folders in the PyCharm explorer. Great.
But I really want to work on a project called 'deals'.
So I deleted all the 'deal's folders again, and tried to create a deals Django project again.
Same result.
After googling, I went to the Settings > Project Structure > + Content Root, and pointed it to the folder at ~/.virtual/deals.
Ok, so now I could see the files in the virtual env, but there's no Django files, and plus, the project folder was separate to the virtualenv folder, eg
deals
deals (~/project/deals) <- separate
deals (~/.virtualenvs/deals) <- separate
deals
init.py
settings.py
urls.py
wsgi.py
manage.py
Really stuck now.
Any advice on how to get this working please?
Eg. how do I
(i) get it back to 'cleanskin' so that I can start up a Django project and get the proper folders in the project space.
(ii) get it working with virtualenv, and ensure that the interpreter doesn't have a number suffix, such as python3.4(6)
Many thanks in advance,
Chris | PyCharm & VirtualEnvs - How To Remove Legacy | 1.2 | 0 | 0 | 16,475 |
34,979,145 | 2016-01-24T17:42:00.000 | 0 | 0 | 1 | 0 | python,django,pycharm | 60,949,461 | 3 | false | 1 | 0 | In addition to the answer above, which removed the Venv from the Pycharm list, I also had to go into my ~/venvs directory and delete the associated directory folder in there.
That did the trick. | 3 | 3 | 0 | How do I remove all traces of legacy projects from PyCharm?
PyCharm & VirtualEnvs - How To Remove Legacy | 0 | 0 | 0 | 16,475
34,979,145 | 2016-01-24T17:42:00.000 | 0 | 0 | 1 | 0 | python,django,pycharm | 63,129,392 | 3 | false | 1 | 0 | When a virtualenv is enabled, there will be a 'V' symbol active at the bottom of PyCharm, on the same line as Terminal and TODO. When you click the 'V', the first entry will be enabled with a tick mark. Just click it again and it will be disabled. As simple as that. | 3 | 3 | 0 | How do I remove all traces of legacy projects from PyCharm?
PyCharm & VirtualEnvs - How To Remove Legacy | 0 | 0 | 0 | 16,475
34,979,846 | 2016-01-24T18:41:00.000 | 0 | 1 | 0 | 1 | python,unix,sudo,remote-server | 36,052,105 | 2 | false | 0 | 0 | The other way is to use paramiko, as below:
import paramiko

# host, user and keyfile are placeholders for your own connection details
un_con = paramiko.SSHClient()
un_con.set_missing_host_key_policy(paramiko.AutoAddPolicy())
un_con.connect(host, username=user, key_filename=keyfile)
stdin, stdout, stderr = un_con.exec_command("sudo -H -u sudo_user bash -c 'command'")
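To get the end result into a variable for later verification, a minimal follow-up sketch reusing the stdout object from above:
# read the remote command's output and strip the trailing newline
result = stdout.read().decode().strip()
un_con.close()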
login to remote system.
run sudo su - user(no need of password as its a SSH key based login)
run the code.
logout with the result assigned to varible.
I need the end result of the script stored in a variable so that i can use that back for verification. | Run python script in a remote machines as a sudo user | 0 | 0 | 0 | 1,341 |
34,980,251 | 2016-01-24T19:17:00.000 | 0 | 0 | 1 | 0 | python,python-3.x | 69,619,413 | 5 | false | 0 | 0 | I realize it is an old thread, but my comment might help someone, so here it is:
for ASCII art you do not want escape characters to be interpreted, so putting "r" before the triple quotes tells Python it is a "raw" multi-line string.
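A minimal runnable sketch (the art itself is just a placeholder):
# the r-prefix keeps the backslashes literal instead of starting escape sequences
print(r"""
 /\_/\
( o.o )
 > ^ <
""")
In general the shape is: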
print(r""" your art here """) | 1 | 49 | 0 | If I wanted to print multiple lines of text in Python without typing print('') for every line, is there a way to do that?
I'm using this for ASCII art.
(Python 3.5.1) | How to print multiple lines of text with Python | 0 | 0 | 0 | 239,567 |
34,983,832 | 2016-01-25T01:34:00.000 | 0 | 0 | 1 | 1 | python,macos,ipython | 34,984,334 | 2 | false | 0 | 0 | Try these shell commands. (I'm using Debian and don't have a Mac, so paths may differ.)
which ipython
Should show the directory where it is installed, e.g. /usr/bin/ipython.
echo $PATH
Should show the list of paths where the system looks for programs to execute; it should include the location of ipython, e.g. /usr/bin in the example above.
pip list
Should include 'ipython' in the list of installed packages.
pip show ipython
Should show data about the installed ipython package.
Your home directory should have a directory called '.ipython' if the package was installed (unless it was put somewhere else).
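Note also that the module name is case-sensitive: it is IPython, not ipython, which is why import ipython fails even when the package is installed. As a hedged sketch, you can also launch it from within Python itself:
# the import must use the capitalised package name
import IPython
IPython.start_ipython()  # starts an interactive IPython session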
If you don't find the program, the install may have failed; try again and watch for error messages. | 1 | 1 | 0 | I installed ipython using pip. I have Python 2.7 on my Mac. However, I am unable to start up ipython.
[17:26:01] ipython
-bash: ipython: command not found
Then, thinking that maybe I need to run it from within Python, I even tried that:
[17:28:10] python
>>> import ipython
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named ipython
Any idea what is wrong?
Thanks. | Unable to find ipython after installation on Mac | 0 | 0 | 0 | 2,531 |