Q_Id: int64 (2.93k to 49.7M)
CreationDate: string (lengths 23 to 23)
Users Score: int64 (-10 to 437)
Other: int64 (0 to 1)
Python Basics and Environment: int64 (0 to 1)
System Administration and DevOps: int64 (0 to 1)
DISCREPANCY: int64 (0 to 1)
Tags: string (lengths 6 to 90)
ERRORS: int64 (0 to 1)
A_Id: int64 (2.98k to 72.5M)
API_CHANGE: int64 (0 to 1)
AnswerCount: int64 (1 to 42)
REVIEW: int64 (0 to 1)
is_accepted: bool (2 classes)
Web Development: int64 (0 to 1)
GUI and Desktop Applications: int64 (0 to 1)
Answer: string (lengths 15 to 5.1k)
Available Count: int64 (1 to 17)
Q_Score: int64 (0 to 3.67k)
Data Science and Machine Learning: int64 (0 to 1)
DOCUMENTATION: int64 (0 to 1)
Question: string (lengths 25 to 6.53k)
Title: string (lengths 11 to 148)
CONCEPTUAL: int64 (0 to 1)
Score: float64 (-1 to 1.2)
API_USAGE: int64 (1 to 1)
Database and SQL: int64 (0 to 1)
Networking and APIs: int64 (0 to 1)
ViewCount: int64 (15 to 3.72M)
28,327,779
2015-02-04T17:36:00.000
3
0
1
0
0
python,python-3.x,lighttable
0
29,534,922
0
4
0
true
0
0
I had the same problem, using syntax that was only valid on Python 3.3. Go to Settings: User Behaviour and add the line (using the real path of your Python binary): [:app :lt.plugins.python/python-exe "/usr/bin/python3.4"] Then save and test in your Light Table. It worked for me :) Hope it helps.
3
6
0
0
I am trying Light Table and learning how to use it. Overall, I like it, but I noticed that the only means of making the watches and inline evaluation work in Python programs uses Python 2.7.8, making it incompatible with some of my code. Is there a way to make it use Python 3 instead? I looked on Google and GitHub and I couldn't find anything promising. I am using a Mac with OS X 10.10.2. I have an installation of Python 3.4.0 that runs fine from the Terminal.
Running Python 3 from Light Table
0
1.2
1
0
0
4,744
28,340,224
2015-02-05T09:21:00.000
0
0
0
1
0
python,weka
0
35,519,213
0
3
0
false
0
0
Before installing the weka wrapper for Python, you are supposed to install Weka itself, either with sudo apt-get install weka or by building it from source, and add its path to the environment variable using export wekahome="your weka path". This makes sure the required Weka jar file is in the expected directory.
1
0
0
0
The official website shows how the weka wrapper can be installed on 64-bit Ubuntu. I want to know how it can be installed on 32-bit Ubuntu.
How to install python weka wrapper on ubuntu 32 bit?
0
0
1
0
0
359
28,384,481
2015-02-07T16:26:00.000
1
0
1
0
0
python,arrays,numpy
0
53,429,718
0
6
0
false
0
0
Just do the following: import numpy as np; arr = np.zeros(10); arr[:3] = 5
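The one-liner above, written out as a runnable sketch (assuming numpy is installed):

```python
import numpy as np

# Ten zeros, then set the first three elements to 5, as the answer suggests.
arr = np.zeros(10)
arr[:3] = 5
```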
1
1
1
0
I'm having trouble figuring out how to create a 10x1 numpy array with the number 5 in the first 3 elements and the other 7 elements with the number 0. Any thoughts on how to do this efficiently?
Create a numpy array (10x1) with zeros and fives
0
0.033321
1
0
0
5,701
28,388,896
2015-02-07T23:46:00.000
1
0
1
1
0
python
0
28,388,912
0
1
0
false
0
0
Okay, the first point is that you can't share memory among machines unless you're on a very specialized architecture (massively parallel machines, Beowulf clusters, and so on). If you mean to share code, then package your code into a real Python package and distribute it with a tool like Chef, Puppet or Docker. If you mean to share data, use a database of some sort that all your workers can access. I'm fond of MongoDB because it's easy to match to an application, but there are a million other databases. A lot of people use mysql or postgresql.
1
2
0
0
My python processes run on different machines. The setup consists of a manager and many workers. The workers on each machine are multi-threaded and need to update some data, such as their status, to the manager residing on another machine. I didn't want to use mysql because many other processes are already executing many queries on it and it would reach its max_connections limit. I have two methods in mind: For each worker thread, write the data to a local text file; a separate bash script runs a while-loop, checks for file changes and scps this file to all other machines. For each worker thread, write the data to shared memory and have it replicated to all other machines. I am not sure how to do this: in python, how can I write to shared memory, and how can I replicate shared memory?
python - How to share files between many computers
0
0.197375
1
0
0
516
28,417,806
2015-02-09T19:34:00.000
1
0
0
0
0
python,cassandra,datastax-enterprise
0
28,419,293
0
2
0
true
0
0
The details depend on your file format and C* data model, but it might look something like this: read the file from S3 into an RDD with val rdd = sc.textFile("s3n://mybucket/path/filename.txt.gz"), manipulate the rdd as needed, then write it to a Cassandra table: rdd.saveToCassandra("test", "kv", SomeColumns("key", "value"))
1
0
1
0
I launched a Spark/Cassandra cluster with DataStax DSE in the AWS cloud, so my dataset is stored in S3. But I don't know how to transfer the data from S3 to my Cassandra cluster. Please help me.
How import dataset from S3 to cassandra?
0
1.2
1
1
0
1,657
28,422,520
2015-02-10T01:19:00.000
0
0
0
1
0
python,django,macos,pip
0
35,575,253
0
2
0
false
1
0
Try adding sudo. sudo pip install Django
2
0
0
0
I recently installed Python 3.4 on my Mac and now want to install Django using pip. I tried running pip install Django==1.7.4 from the command line and received the following error: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/commands/install.py", line 347, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_set.py", line 549, in install **kwargs File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 754, in install self.move_wheel_files(self.source_dir, root=root) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 963, in move_wheel_files isolated=self.isolated, File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 234, in move_wheel_files clobber(source, lib_dir, True) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 205, in clobber os.makedirs(destdir) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/django' Obviously my path is pointing to the old version of Python that came preinstalled on my computer, but I don't know how to run the pip on the new version of Python. I am also worried that if I change my file path, it will mess up other programs on my computer. Is there a way to point to version 3.4 without changing the file path? If not how do I update my file path to 3.4?
Mac OSX Trouble Running pip commands
0
0
1
0
0
797
28,422,520
2015-02-10T01:19:00.000
0
0
0
1
0
python,django,macos,pip
0
70,606,374
0
2
0
false
1
0
Try to create a virtual environment. This can be achieved by using python modules like venv or virtualenv. There you can change your python path without affecting any other programs on your machine. If the error is still that you do not have permission to read files, try sudo pip install, but only as a last resort, since pip recommends not running it as root.
2
0
0
0
I recently installed Python 3.4 on my Mac and now want to install Django using pip. I tried running pip install Django==1.7.4 from the command line and received the following error: Exception: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/basecommand.py", line 232, in main status = self.run(options, args) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/commands/install.py", line 347, in run root=options.root_path, File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_set.py", line 549, in install **kwargs File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 754, in install self.move_wheel_files(self.source_dir, root=root) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/req/req_install.py", line 963, in move_wheel_files isolated=self.isolated, File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 234, in move_wheel_files clobber(source, lib_dir, True) File "/Library/Python/2.7/site-packages/pip-6.0.8-py2.7.egg/pip/wheel.py", line 205, in clobber os.makedirs(destdir) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/django' Obviously my path is pointing to the old version of Python that came preinstalled on my computer, but I don't know how to run the pip on the new version of Python. I am also worried that if I change my file path, it will mess up other programs on my computer. Is there a way to point to version 3.4 without changing the file path? If not how do I update my file path to 3.4?
Mac OSX Trouble Running pip commands
0
0
1
0
0
797
28,444,272
2015-02-11T00:08:00.000
0
0
0
0
0
python,csv,numpy
0
64,579,432
0
5
0
false
0
0
While there is no such parameter in numpy.loadtxt to ignore quoted or otherwise escaped commas, one alternative that has not been suggested yet would be the following... Perform a find and replace using some text editor to replace commas with tabs, OR save the file in Excel as tab delimited. When you use numpy.loadtxt, you simply specify delimiter='\t' instead of comma delimited. A simple solution that could save you some code...
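As an alternative to the editor-based find-and-replace described above, Python's built-in csv module already understands quoted commas; a minimal sketch with a made-up row from the question:

```python
import csv
import io

# Hypothetical line from the question: the quoted field contains a comma.
raw = '10,"Apple, Banana",20\n'
row = next(csv.reader(io.StringIO(raw)))
# csv.reader keeps "Apple, Banana" as a single field,
# so the column indices stay stable.
```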
1
7
1
0
I have a csv file where a line of data might look like this: 10,"Apple, Banana",20,... When I load the data in Python, the extra comma inside the quotes shifts all my column indices around, so my data is no longer a consistent structure. While I could probably write a complex algorithm that iterates through each row and fixes the issue, I was hoping there was an elegant way to just pass an extra parameter to loadtxt (or some other function) that will properly ignore commas inside quotes and treat the entire quote as one value. Note, when I load the CSV file into Excel, Excel correctly recognizes the string as one value.
numpy.loadtxt: how to ignore comma delimiters that appear inside quotes?
0
0
1
0
0
5,651
28,451,428
2015-02-11T10:01:00.000
1
0
1
0
0
python,gdb
0
34,778,855
0
2
0
false
0
0
I had trouble with int(value) giving me a signed number even when the type was explicitly unsigned. I used this to solve the problem: int(value) & 0xffffffffffffffff
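The masking trick from that answer can be checked in plain Python (a 64-bit width is assumed here):

```python
# Reinterpret a negative (signed) value as its unsigned 64-bit
# two's-complement equivalent, as the answer does.
value = -1
unsigned = value & 0xFFFFFFFFFFFFFFFF
```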
1
5
0
0
I want to cast, in Python code, a gdb.Value of a complicated type to int (not a general type, but one defined in our code: a kind of bitfield). When I type "p var" in the gdb shell itself, it prints the value as an int, i.e. gdb knows how to cast that bitfield to int. (In addition, something strange happens when I try the command int(var.cast(i.type)), where i is a gdb.Value whose type is int: var indeed becomes an int, but its value becomes the same as the value of i.) So does someone know how to cast this gdb.Value using gdb's own knowledge (or some other way)? Thanks. This is my first question on StackOverflow; sorry for the confusion and my weak English.
How to cast type of gdb.value to int
0
0.099668
1
0
0
3,058
28,456,493
2015-02-11T14:13:00.000
1
0
1
0
0
python,pip,setuptools
1
28,460,274
0
1
0
true
0
0
I think you can't (at least not without tricks like the ones you described). The Python package system has (to my knowledge) no such notion as "allowed" packages. There could be a person who invents a different package C that they call B, but with totally different functionality; such a notion would prohibit users of your package A from using package C (alias B). So I would communicate to users of A that B is no longer needed, and make sure that your newer code does not reference B at all. Then, when somebody installs B, it is like a third-party library that has nothing to do with yours. Of course, when the functionality of A and B is very intermingled, and the other user code also has to deal with B directly and has (allowed) side effects on A, you could get into trouble when an old B is still installed. But then your initial design was not the best either. In such a case (and when you really have to merge the packages; see below) I would recommend that you pick a totally new package name like "newA" to underscore the fact that something has fundamentally changed (and thus an intermingling between the old A and B would also more likely be detected). But of course, I second the argument of msw that you are creating this problem yourself. Normally it is also better to have smaller packages (of a reasonable size, of course) instead of big "I manage the world" packages; you can combine smaller packages more flexibly for different applications.
1
4
0
0
I have two python packages A and B that I would like to merge together into A, i.e. all functionality of B is now reachable in A.B. Previously, A-1.0 depended on B-1.0. Now I want to avoid, that users of A-2.0 still have B-1.0 installed and I don't know how to handle this properly. Different solutions/ideas I came up with: Include some code in A-2.0 that tries to import B, if an ImportError is raised, catch the exception and go on, otherwise throw a RuntimeError that B is installed in parallel Somehow mark B as a blocker for A-2.0 (is this possible?) Create a "fake" successor for B, so people who update their virtual environments or install the "latest" version of B get an empty package that throws an exception upon importing. I welcome your opinion, and experiences
Python dependencies: Merging two packages into one
0
1.2
1
0
0
1,572
28,467,558
2015-02-12T01:08:00.000
1
0
0
1
0
python,google-app-engine,capedwarf
0
28,474,564
0
2
0
false
1
0
Yes, as Alex posted - CapeDwarf is Java only.
1
1
0
0
I'm trying to deploy my GAE application, written in Python, on CapeDwarf (WildFly_2.0.0.CR5), but all the documentation talks only about Java applications. So, can CapeDwarf deploy a Python application? If it can, how do I do it? If not, is there any other application that can?
Is CapeDwarf compatible with Python GAE?
0
0.099668
1
0
0
105
28,494,629
2015-02-13T07:36:00.000
0
0
0
0
0
python,django
0
28,510,476
0
2
0
false
1
0
Instead I ended up using forms.HiddenInput() and did not display the field with null, which fits my case perfectly well!
1
0
0
0
In a django model form, how do I display blank for null values, i.e. prevent the form from displaying (None)? Using postgresql, django 1.6.5. I do not wish to store a space in the model instance for the allowed null values.
django modelform display blank for null
0
0
1
0
0
432
28,512,929
2015-02-14T06:00:00.000
0
0
1
0
0
python,c,pycharm,python-2.x
1
28,513,155
0
3
0
false
0
0
Variables are nothing but reserved memory locations to store values. This means that when you create a variable you reserve some space in memory. Based on the data type of a variable, the interpreter allocates memory and decides what can be stored in the reserved memory. Therefore, by assigning different data types to variables, you can store integers, decimals or characters in these variables. Python variables do not have to be explicitly declared to reserve memory space. The declaration happens automatically when you assign a value to a variable. The equal sign (=) is used to assign values to variables. The operand to the left of the = operator is the name of the variable and the operand to the right of the = operator is the value stored in the variable.
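To the question's specific point: Python 3 integers have arbitrary precision, so the interpreter never has to pick a wider type for whole-number multiplication (Python 2 silently promoted int to long). A quick check:

```python
# Both products are plain ints; the big one silently grows past 32/64 bits
# instead of overflowing as it would in C.
small = 98 * 76
big = 987654321 * 123457789
```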
1
1
0
0
In C, I have to set the proper type, such as int, float or long, for a simple arithmetic operation like multiplying two numbers; otherwise it will give me an incorrect answer. But in Python, it basically gives me the correct answer automatically. I tried debugging a simple 987*456 calculation to see the source code: I set a breakpoint at that line in PyCharm, but I cannot step into the source code, it just finishes right away. How can I see the source code? Is it possible? And how does Python do that multiplication? I mean, how does Python handle the difference in number type between the results of 98*76 and 987654321*123457789? Does Python detect some out-of-range error and try another number type?
How does Python know which number type to use in order to Multiply arbitrary two numbers?
1
0
1
0
0
444
28,533,515
2015-02-16T02:00:00.000
0
0
0
1
0
python,subprocess,dicom,pydicom
0
28,565,218
0
2
1
false
0
0
Oh, this was due to a syntax error using pydicom. I wanted to access the 0019,109c tag. The syntax should be ds[0x0019,0x109c].value, not ds[aaaa,bbbb].value.
1
0
0
0
I am trying to read a dicom header tag in a dicom file. There are two ways to read this tag: 1) using the pydicom package in python, which apparently is not working well on my installed Python version (Python 3), or 2) calling the AFNI function 'dicom_hinfo' from the command line, which gives me the tag value. The syntax to call the AFNI function in the terminal is as follows: dicom_hinfo -tag aaaa,bbbb filename.dcm, with output: fgre. Now how should I call this dicom_hinfo -tag aaaa,bbbb filename.dcm in a Python script? I guess subprocess might work, but I'm not sure how to use it in this case.
How to call command line command (AFNI command)?
0
0
1
0
0
411
28,543,193
2015-02-16T14:04:00.000
0
1
0
0
0
python,swig,python-extensions
0
60,458,804
0
2
0
false
0
0
I'd absolutely recommend doing some basic testing of the wrapped code. Even a basic "can I instantiate my objects" test is super helpful; you can always write more tests as you find problem areas. Basically what you're testing is the accuracy of the SWIG interface file, which is code that you wrote in your project! If your objects are at all interesting it is very possible to confuse SWIG. It's also easy to accidentally skip wrapping something, or for a wrapper to use a different typemap than you hoped.
1
4
0
0
I need to create a python wrapper for a library using SWIG and write unit tests for it, and I don't know how to do this. My first take on this problem is to mock the dynamic library with the same interface as the library that I'm writing the wrapper for. This mock library can log every call or return some generated data, and the logs and generated data can then be checked by the unit tests.
How should I unit-test python wrapper generated by SWIG
0
0
1
0
0
501
28,551,063
2015-02-16T21:58:00.000
0
0
0
0
0
python,django
0
28,551,333
0
1
0
false
1
0
Depends on how you intend to use them after storing them in the database; two methods I can think of are: Option 1) models.IntegerField(unique=True): the trick is then loading the data and parsing it, since you would have to concatenate the two numbers and have a way to split them back out. The faster route would be Option 2) models.CommaSeparatedIntegerField(max_length=1024, unique=True): not sure how it handles unique values; likely '20,40' is not equal to '40,20', so those two sets would be unique. Or just implement it yourself with a custom field/functions in the model.
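The concatenate-and-split step that Option 1/2 alludes to can be sketched outside Django with two helpers (the function names are mine, not Django API):

```python
def pack(pair):
    # Store a pair of positive ints as a comma-separated string,
    # e.g. for a CharField-style column.
    return "%d,%d" % pair

def unpack(text):
    # Split the stored string back into the original tuple of ints.
    x, y = text.split(",")
    return (int(x), int(y))
```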
1
0
0
0
I am defining the models for my Django app, and I would like to have a field for a model consisting of a tuple of two (positive) integers. How can I do this? I'm looking at the Django Models API reference but I can't see any way of doing this.
Django - how can I have a field in my model consisting of tuple of integers?
0
0
1
0
0
339
28,553,439
2015-02-17T01:52:00.000
0
0
0
0
0
python,pygame
0
28,553,503
0
2
0
false
0
1
Each of your sprites should have velocity attributes. If your velocity is negative, turn one way. If your velocity is positive, turn the other. Because velocity indicates the direction of movement, this can be used to determine angular direction.
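A minimal sketch of that idea using only the math module; the resulting angle could then be fed to pygame.transform.rotate (the negated vy accounts for screen coordinates, where y grows downward):

```python
import math

def facing_angle(vx, vy):
    # Angle in degrees the sprite should face, given its velocity.
    # Screen y grows downward, so vy is negated for the usual math convention.
    return math.degrees(math.atan2(-vy, vx))
```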
1
0
0
0
I'm fairly new to python. I'm creating a game similar to Asteroids, and so far I've gotten the spaceship to move around with the arrow keys. Can someone explain what I would do to make the spaceship rotate in the direction it's moving? Thanks in advance!
python/pygame: how would i get my sprite to rotate in the direction it's moving?
0
0
1
0
0
774
28,555,246
2015-02-17T05:25:00.000
3
0
1
0
0
python,python-3.x
0
28,555,400
0
4
0
false
0
0
If you can get the equation of the line in slope-intercept form, you should be able to set up an inequality. Say we have points (x1, y1) and (x2, y2), and the line y = mx + b. You should be able to plug the x and y values of both points into the equation: if both satisfy y < mx + b, or both satisfy y > mx + b, they are on the same side. If either point satisfies y = mx + b, that point is on the line. if (y1 > m * x1 + b and y2 > m * x2 + b) or (y1 < m * x1 + b and y2 < m * x2 + b): return True # both on same side
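The cross-product approach the question mentions also works, and it avoids the vertical-line case where slope-intercept breaks down; a sketch (the function name and point format are my own choices):

```python
def same_side(a, b, p1, p2):
    # a, b: endpoints of the segment; p1, p2: the points to test.
    # The sign of the 2-D cross product tells which side of line ab a point is on;
    # a zero product below means at least one point lies on the line itself.
    (ax, ay), (bx, by) = a, b
    def side(p):
        return (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
    return side(p1) * side(p2) > 0
```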
1
1
0
0
If I have a definite line segment and am then given two random different points, how would I determine whether they are on the same side of the line segment? I want to use a function def same_side(line, p1, p2). I know from geometry that cross products can be used, but I am unsure how to do so with python.
Finding out if two points are on the same side
0
0.148885
1
0
0
2,656
28,594,933
2015-02-18T22:10:00.000
0
1
1
0
0
python,c,automation,automated-tests
0
28,594,984
0
1
0
false
0
0
You can use the subprocess module in Python to spawn other programs, retrieve their output and feed them arbitrary input. The relevant pieces will most likely be the Popen.communicate() method and/or the .stdin and .stdout file objects; ensure when you do this that you passed PIPE as the argument to the stdin and stdout keywords on creation.
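A runnable sketch of that approach; here a tiny Python one-liner stands in for the student's compiled C program (in the real autograder you would pass the path to the compiled binary instead):

```python
import subprocess
import sys

# Spawn the program with piped stdin/stdout, feed it the input it would
# normally read from the keyboard (scanf), and capture what it prints.
proc = subprocess.Popen(
    [sys.executable, "-c", "a, b = input().split(); print(int(a) + int(b))"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
out, _ = proc.communicate("3 4\n")   # what the student would have typed
```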
1
0
0
0
I need to score my students' C programming homework, so I want to write an autograder script which scores the homework automatically. I plan to write this script in python. My question is: in some parts of the homework, students read input from the keyboard with scanf. How can I handle this in an autograder? Is there any way to read from a text file when the scanf line runs in the homework? Any idea is appreciated.
Autograder script - reading keyboard inputs from textfile
0
0
1
0
0
160
28,597,575
2015-02-19T02:11:00.000
3
0
0
0
0
postgresql,psycopg2,python-db-api
0
28,602,221
0
1
0
true
0
0
You can re-register a plain string type caster for every single PostgreSQL type (or at least for every type you expect a string for in your code): when you register a type caster for an already registered OID, the new definition takes precedence. Just have a look at the source code of psycopg (both C and Python) to find the correct OIDs. You can also compile your own version of psycopg with type casting disabled. I don't have the source code here right now, but it's probably just a couple of line changes.
1
2
0
0
Query results for some Postgres data types are converted to native types by psycopg2. Neither pgdb (PostgreSQL) nor cx_Oracle seems to do this. …so my attempt to switch pgdb out for psycopg2cffi is proving difficult, as there is a fair bit of code expecting strings, and I need to continue to support cx_Oracle. The psycopg2 docs explain how to register additional types for conversion, but I'd actually like to remove that conversion if possible and get the strings as provided by Postgres. Is that doable?
Can one disable conversion to native types when using psycopg2?
0
1.2
1
1
0
274
28,606,809
2015-02-19T12:46:00.000
1
1
0
0
0
python,twisted,irc,twisted.internet,twisted.words
0
28,607,141
0
1
0
false
0
0
Found it: when I override RPL_WHOISUSER, I can get the information after issuing an IRCClient.whois. (And yes, I did search for this before I posted my question, but I had an epiphany right after posting it...)
1
1
0
0
I'm trying to get the hostmask for a user, to allow some authentication in my IRCClient bot. However, it seems to be removed from all responses? I've tried 'whois', but it only gives me the username and the channels the user is in, not the hostmask. Any hint on how to do this?
How to get a user's hostmask with Twisted IRCClient
0
0.197375
1
0
1
37
28,648,230
2015-02-21T16:41:00.000
3
1
0
1
0
python-2.7,ubuntu,mono,ironpython,fcntl
0
28,673,847
0
1
0
true
0
0
As far as I can see, the fcntl module of cPython is a builtin module (implemented in C) - those modules need to be explicitly implemented for most alternative Python interpreters like IronPython (in contrast to the modules implemented in plain Python), as they cannot natively load Python C extensions. Additionally, it seems that there currently is no such fcntl implementation in IronPython. There is a Fcntl.cs in IronRuby, however, maybe this could be used as a base for implementing one in IronPython.
1
1
0
0
I have the latest IronPython version built and running on Ubuntu 14.04 through Mono. Building IronPython and running it with Mono seems trivial, but I am not convinced I have the proper sys.path entries or permissions for IronPython to import modules, especially modules like fcntl. Running ensurepip runs a subprocess that wants to import fcntl. There are numerous posts out there already, but mostly regarding Windows. As I understand it, fcntl is part of the Unix Python 2.7 standard library. The main problem seems to be that IronPython has no idea where this is, but I also suspect that since fcntl seems to be a C extension, or at least not pure Python, there is more to the story. So my related sys.path questions are: In Ubuntu, where should I install IronPython (the IronLanguages folder)? Are there any permissions I need to set? What paths should I add to sys.path to get IronPython's standard library found? What paths should I add to sys.path to get Ubuntu's installed Python 2.7 modules? What paths should I add to sys.path, or what methods should I use, to get fcntl to import properly in IronPython? Any clues on how to work around known issues installing pip through ensurepip using mono ipy.exe X:Frames ensurepip? Thanks!
Ubuntu and Ironpython: What paths to add to sys.path AND how to import fcntl module?
0
1.2
1
0
0
456
28,665,515
2015-02-23T01:05:00.000
1
0
1
0
0
python
0
28,665,549
0
3
0
false
0
0
Every dict has a value for every key; there is no such thing as "no value" for one or more keys. However, you can set the value to a placeholder, e.g. None, and then simply reassign it once you know exactly what you want the value for each given key to be.
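A sketch of that placeholder idea with dict.fromkeys (the file contents here are made up):

```python
# Keys come from the first file; values stay None until the second file is read.
names = ["alice", "bob"]          # e.g. parsed from the names file
accounts = dict.fromkeys(names)   # {'alice': None, 'bob': None}

# Later, fill in values from the second file: account number, money, id.
accounts["alice"] = ("123456", 50.0, "id-9")
```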
1
0
0
0
Is there a way to import a text file and assign the keys without values? I'm doing this so I can import values from another text file and assign them as the values for the keys. For example, I would get a name from one text file and assign it as a key, then get their bank account number, amount of money, and id number from another text file and assign those as the values for that key. I know how to import text files WITH the key and value from a single text file, but I don't know how to do it separately and assign it to one key-value pair. Thanks.
In Python I'm trying to import a text file into a dictionary but only assign the key, no value
0
0.066568
1
0
0
74
28,668,641
2015-02-23T07:18:00.000
1
0
0
1
0
python,hadoop,mapreduce,hadoop-streaming
0
28,762,585
0
1
1
true
0
0
This question seems very generic to me. Chains of many map-reduce jobs are the most common pattern for production-ready solutions. But as programmers, we should always try to use as few MR jobs as possible to get the best performance (you have to be smart in selecting your key-value pairs for the jobs in order to do this), though of course it depends on the use case. Some people use different combinations of Hadoop Streaming, Pig, Hive, Java MR, etc. jobs to solve one business problem. With the help of a workflow management tool like Oozie, or bash scripts, you can set the dependencies between the jobs. And for exporting/importing data between an RDBMS and HDFS, you can use Sqoop. This is a very basic answer to your query; if you want further explanation of any point, let me know.
1
2
0
0
I have a huge txt data store on which I want to gather some stats. Using Hadoop-streaming and Python I know how to implement a MapReduce for gathering stats on a single column, e.g. count how many records there are for each of a 100 categories. I create a simple mapper.py and reducer.py, and plug them into the hadoop-streaming command as -mapper and -reducer respectively. Now, I am at a bit of a loss at how to practically approach a more complex task: gathering various stats on various other columns in addition to categories above (e.g. geographies, types, dates, etc.). All that data is in the same txt files. Do I chain the mapper/reducer tasks together? Do I pass key-value pairs initially long (with all data included) and "strip" them of interesting values one by one while processing? Or is this a wrong path? I need a practical advice on how people "glue" various MapReduce tasks for a single data source from within Python.
Python & MapReduce: beyond basics -- how to do more tasks on one database
0
1.2
1
0
0
107
28,679,250
2015-02-23T17:06:00.000
0
0
0
0
0
python,ajax,json,node.js
0
28,687,176
0
2
1
false
1
0
If you need to process the results of Python server requests in your nodejs application, then call the Python server from the nodejs app with the request library and process the results there. Otherwise, simply call the Python server's resources through client-side ajax requests. Thanks
1
1
0
0
I need to get data (json) into my html page with the help of Ajax. I have a Nodejs server serving requests, and I have to get the json from python code that processes and produces json as output. So: should I save the json in a db and access it there? (Seems complicated just for one single use.) Should I run a python server to serve the requests with json as the result (calling it directly from html via ajax)? Should I serve requests with nodejs alone, by calling a python method from nodejs? If so, how do I call the python method, and if calling python requires running a server, which one is preferred (zeropc, or some kind of web framework)? Which is the best solution, or which is preferred over the others in what scenario and on what factors?
Nodejs Server, get JSON data from Python in html client with Ajax
0
0
1
0
1
932
28,738,062
2015-02-26T08:57:00.000
1
1
0
0
1
python,webkit,gtk,raspberry-pi,startup
0
28,744,914
0
1
0
false
0
0
If you run a DE, then use the DE's session manager. If you want to run only your application, in fullscreen mode, use ~/.xinitrc to launch it.
1
0
0
0
I want to run a python script on a raspberry pi when it is turned on. How can I do this? My script uses the "Webkit" and "Gtk" modules. I've tried many methods but it's still not working; the code works perfectly through the python IDLE.
how to run a python script automatically after startx on raspberrypi
0
0.197375
1
0
0
711
28,756,204
2015-02-27T01:50:00.000
0
0
1
0
0
python,multiprocessing
0
61,904,891
0
1
0
false
0
0
It seems the Beep function can't play several notes at the same time. You can record the sound of Beep through the sounddevice library and save the sounds in wav files. Then use subprocess.Popen to achieve multiple processes; the subprocesses would play the wav files.
1
0
0
0
I am writing a basic piano style program that uses functions with winsound.Beep to play different note. I am new to multi-processing, and was wondering how I would be able to play two notes at once. If that is not possible, perhaps there is a way to combine frequencies that I do not know. Thanks for reading ~Jimnebob
How do I use the multiprocess library with winsound in python?
1
0
1
0
0
142
28,765,398
2015-02-27T12:42:00.000
1
0
0
0
0
javascript,python,html,web-scraping,beautifulsoup
0
28,852,463
0
2
0
false
1
0
The Python binding for Selenium and phantomjs (if you want to use a headless browser as backend) are the appropriate tools for this job.
2
0
0
0
I want to fetch a few data values from a website. I have used beautifulsoup for this, and the fields are blank when I try to fetch them from my Python script, whereas when I inspect the elements of the webpage I can clearly see the values in the table row data. When I looked at the HTML source I noticed it is blank there too. I came up with a reason: the website is using Javascript to populate the values in their corresponding fields from its own database. If so, how can I fetch them using Python?
How to fetch data from a website using Python that is being populated by Javascript?
0
0.099668
1
0
1
272
28,765,398
2015-02-27T12:42:00.000
0
0
0
0
0
javascript,python,html,web-scraping,beautifulsoup
0
28,850,142
0
2
0
false
1
0
Yes, you can scrape JS data, it just takes a bit more hacking. Anything a browser can do, Python can do. If you're using Firebug, look at the Network tab to see which particular request your data comes from. In Chrome's element inspection, you can find this information in a tab named Network, too. Just hit Ctrl-F to search the response content of the requests. If you find the right request, the data might be embedded in JS code, in which case you'll have some regex parsing to do. If you're lucky, the format is XML or JSON, in which case you can just use the associated builtin parser.
2
0
0
0
I want to fetch few data/values from a website. I have used beautifulsoup for this and the fields are blank when I try to fetch them from my Python script, whereas when I am inspecting elements of the webpage I can clearly see the values are available in the table row data. When i saw the HTML Source I noticed its blank there too. I came up with a reason, the website is using Javascript to populate the values in their corresponding fields from its own database. If so then how can i fetch them using Python?
How to fetch data from a website using Python that is being populated by Javascript?
0
0
1
0
1
272
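The "data embedded in JS code" case from the answer above can be handled with a regex plus the builtin JSON parser. A sketch; the variable name and sample page are made up, since the real ones come from inspecting the site's network traffic:

```python
import json
import re

def extract_json_var(html, var_name):
    """Pull a JSON object assigned to a JS variable out of page source.

    var_name and the sample page below are assumptions; in practice you
    find the real variable by searching the response in the Network tab.
    """
    match = re.search(r'%s\s*=\s*(\{.*?\});' % re.escape(var_name), html, re.DOTALL)
    return json.loads(match.group(1)) if match else None

page = '<script>var chartData = {"price": 1182.4, "unit": "USD"};</script>'
print(extract_json_var(page, 'chartData'))  # {'price': 1182.4, 'unit': 'USD'}
```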
28,786,932
2015-02-28T21:09:00.000
0
0
0
0
0
python,html,css,python-3.x
0
28,787,299
0
2
0
false
1
0
To achieve your goal, you need good knowledge of JavaScript, the language for dynamic web pages. You should be familiar with dynamic web techniques: AJAX, the DOM, JSON. So the main part is on the browser side. Practically any Python web server fits. To "bridge the gap", the keyword is templates. There are quite a few for Python, so you can choose which one suits you best. Some frameworks like Django bring their own templating engine. And for your second question: when a web site doesn't offer an API, perhaps the owner of the site does not want their data to be reused by others.
1
1
0
0
I am working on making a GUI front end for a Python program using HTML and CSS (sort of similar to how a router is configured using a web browser). The program assigns values given by the user to variables and performs calculations with those variables, outputting the results. I have a few snags to work out: How do I design the application so that the data is constantly updated, in real time? I.e. the user does not need to hit a "calculate" button nor refresh the page to get the results. If the user changes the value in a field, all other fields are simultaneously updated, et cetera. How can a value for a variable be fetched from an arbitrary location on the internet, and also constantly updated if/when it is updated at the source? A few of the variables in the program are based on current market conditions (e.g. the current price of gold). I would like to make those values be automatically entered after retrieving them from certain sources that, let us assume, do not have APIs. How do I build the application to display via HTML and CSS? Considering Python cannot be implemented like PHP, I am seeking a way to "bridge the gap" between HTML and Python, without such a "heavy" framework as Django, so that I can run Python code on the server side for the webpage. I have been looking into this for quite some time now, and have found a wide range of what seem to be solutions, or that are nearly solutions but not quite. I am having trouble picking what I need specifically for my application. These are my best findings: Werkzeug - I believe Werkzeug is the best lead I have found for putting the application together with HTML and Python. I would just like to be reassured that I am understanding it correctly and that it is, in fact, a solution for what I am trying to do. WebSockets - For displaying the live data from an arbitrary website, I believe I could use this protocol. But I am not sure how I would implement this in practice. I.e. I do not understand how to target the value, and then continuously send it to my application. I believe this is called scraping?
How to make a webpage display dynamic data in real time using Python?
1
0
1
0
0
6,330
28,790,032
2015-03-01T04:17:00.000
0
0
0
0
0
python,scikit-learn
0
28,871,022
0
1
0
true
0
0
Currently there is no way to directly get the optimum number of estimators from GradientBoostingClassifier. If you also pass n_estimators in the parameter grid to GridSearchCV it will only try the exact values you give it, and return one of these. We are looking to improve this, by searching over the number of estimators automatically.
1
0
1
0
With GradientBoostingClassifier suppose I set n_estimators to 2000 and use GridSearchCV to search across learning_rate in [0.01, 0.05, 0.10] - how do I know the number of boosting iterations that produced the optimal result - is the model always going to fit 2000 trees for each value of learning_rate or is it going to use the optimal number of boosting iterations for each of these values? It wouldn't make sense to also include n_estimators in the grid search and search for all values in [1,2000].
Obtain optimal number of boosting iterations in GradientBoostingClassifier using grid search
0
1.2
1
0
0
1,119
28,791,278
2015-03-01T07:41:00.000
0
0
1
0
0
python-3.x
0
28,791,357
0
1
0
false
0
0
What you describe was a very popular type of game a long time ago. The most difficult portion of this type of game is interpreting user input. You can start with a list of possible commands, iterating through each token of the user's input. If I type "look left", the game can begin by calling a method or function which interprets the available visual data. You will want to look into your database implementation for what is left in relation to the current player's position. The game can respond with "You see a wooden box on the floor. It is locked." You get the idea. As the user, I will probably want to check my inventory for a key. You can implement a subroutine in your parser for searching, categorizing, and using your inventory. The key to writing anything like the described program is spending at least twice as much time testing and debugging the game as you spent coding the first working version. The second key is to make sure you are thinking ahead. Example: "How can I modularize the navigation subroutine so that maps can be modified or expanded easily?" Comment if you have any questions. Good luck coding!
1
0
0
0
Metaphorically speaking, I'm learning python in a community college for game programming; and our second assignment is to make a text based game. I'm stuck trying to figure out how to get the code to run if the player has something in their inventory, then display these options or print these options if they don't have that certain item in their inventory.
Key and lock code [game programming]?
0
0
1
0
0
117
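The item-conditional options described in the answer above can be sketched in a few lines (the item and action names are placeholders):

```python
inventory = {'rusty key'}   # the player's current items (example data)

def available_actions(door_locked):
    """Show the unlock option only when the matching item is held."""
    actions = ['look around', 'check inventory']
    if door_locked:
        if 'rusty key' in inventory:
            actions.append('unlock the door')
        else:
            actions.append('search for a key')
    return actions

print(available_actions(True))
# ['look around', 'check inventory', 'unlock the door']
```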
28,799,663
2015-03-01T22:02:00.000
2
1
1
1
0
python,multithreading
1
28,799,878
0
2
0
true
0
0
To me, this looks like a pristine application for the subprocess module. I.e. do not run the test-scripts from within the same python interpreter, rather spawn a new process for each test-script. Do you have any particular reason why you would not want to spawn a new process and run them in the same interpreter instead? Having a sub-process isolates the scripts from each other, like imports, and other global variables. If you use the subprocess.Popen to start the sub-processes, then you have a .terminate() method to kill the process if need be.
1
0
0
0
I have a python program run_tests.py that executes test scripts (also written in python) one by one. Each test script may use threading. The problem is that when a test script unexpectedly crashes, it may not have a chance to tidy up all open threads (if any), hence the test script cannot actually complete due to the threads that are left hanging open. When this occurs, run_tests.py gets stuck because it is waiting for the test script to finish, but it never does. Of course, we can do our best to catch all exceptions and ensure that all threads are tidied up within each test script so that this scenario never occurs, and we can also set all threads to daemon threads, etc, but what I am looking for is a "catch-all" mechanism at the run_tests.py level which ensures that we do not get stuck indefinitely due to unfinished threads within a test script. We can implement guidelines for how threading is to be used in each test script, but at the end of the day, we don't have full control over how each test script is written. In short, what I need to do is to stop a test script in run_tests.py even when there are rogue threads open within the test script. One way is to execute the shell command killall -9 <test_script_name> or something similar, but this seems to be too forceful/abrupt. Is there a better way? Thanks for reading.
Best way to stop a Python script even if there are Threads running in the script
0
1.2
1
0
0
238
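A Python 3 sketch of the accepted approach: run_tests.py starts each test script in its own interpreter and enforces a time limit, so any rogue threads die with the child process (the timeout values are arbitrary):

```python
import subprocess
import sys
import tempfile

def run_test_script(path, timeout):
    """Run one test script in its own interpreter so rogue threads die with it.

    Returns the exit code, or None if the script had to be stopped on timeout.
    """
    proc = subprocess.Popen([sys.executable, path])
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.terminate()          # polite termination request first
        try:
            proc.wait(timeout=5)
        except subprocess.TimeoutExpired:
            proc.kill()           # then the hard kill
        return None

# quick demo with a throwaway script that exits cleanly
with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write('import sys; sys.exit(0)')
print(run_test_script(f.name, timeout=30))  # 0
```

Popen.wait(timeout=...) needs Python 3.3+; on Python 2 you would poll proc.poll() in a loop instead.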
28,850,205
2015-03-04T08:57:00.000
0
0
0
0
0
python,windows,visual-studio,opencv,login
0
28,854,271
0
1
0
false
0
0
You can store the snapshots in an array, run your recognition on each image and see if the user is recognized as one of the users you have trained your model on. If not then prompt the user for their name, if the name matches one of the users you trained your model on, add these snapshots to their training set and re-train, so the model is more up to date, if it does not match any names, then you assume this is a new user and create a new label and folder for them, and retrain your recognizer. Beware though that each time you add more snapshots the training time will increase, so perhaps limit each snapshot capture to 1 FPS or something.
1
0
1
0
maybe some of you can point me in the right direction. I've been playing around with OpenCV FaceRecognition for some time with Eigenfaces to let it learn to recognize faces. Now I would like to let it run during windows logon. Precisely, I want to make Snapshots of Faces when I log into a user so after the software has learned the faces after x logins and to which user it belongs, it will log me in automatically when it recognises me during typing. As a prototype it would be enough to somehow get a textfield and get textinput working to "simulate" login in. Im using python. I hope you understand what i want to achieve and can help me out. Edit: Another Question/Idea: If i want to build an Application in Visual Studio, can I reuse my python Code or do I have to use c++? I could make a Windows Store app or something like that.
opencv FaceRecognition during login
0
0
1
0
0
211
28,854,821
2015-03-04T12:42:00.000
0
0
0
0
0
python,pandas,binary,hex
0
28,858,326
0
1
0
false
0
0
If I understood, in column 1 you have 00, column 2: 55, and so on. If I am right, you first need to concatenate the three columns into a string, value = str(col1)+str(col2)+str(col3), and then convert it to binary.
1
1
1
0
I am pretty new to Python and pandas library, i just learned how to read a csv file using pandas. my data is actually raw packets i captured from sensor networks, to analyze corrupt packets. what i have now is, thousands of rows and hundreds of columns, literally, and the values are all in Hex. i need to convert all the values to binary with trailing zeros. i am at lost on how to accomplish that once i have read the CSV file successfully using pandas. I'd appreciate every kind of help, even a simple direction. here is my sample data: 00 FF FF 00 00 29 00 89 29 41 88 44 22 00 FF FF 01 00 3F 06 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 0A 00 FF FF 00 00 29 00 89 29 41 88 45 22 00 FF FF 01 00 3F 06 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 0A 00 FF FF 00 00 29 00 89 29 41 88 46 22 00 FF FF 01 00 3F 06 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 0A 00 FF FF 00 00 29 00 89 29 41 88 47 22 00 FF FF 01 00 3F 06 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 55 0A
converting dataframe from Hex to binary in using python
0
0
1
0
0
778
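Once the CSV is read, the hex-to-binary conversion itself needs no pandas at all; a sketch for one row (note the question says "trailing zeros", but fixed-width bytes are normally padded with leading zeros, as here):

```python
def hex_row_to_binary(row):
    """Convert one row of space-separated hex bytes ('00 FF ...') to
    8-bit binary strings, zero-padded on the left."""
    return [format(int(byte, 16), '08b') for byte in row.split()]

print(hex_row_to_binary('00 FF 55 0A'))
# ['00000000', '11111111', '01010101', '00001010']
```

With a DataFrame already loaded, something like df.applymap(lambda h: format(int(h, 16), '08b')) would apply the same conversion to every cell.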
28,874,356
2015-03-05T09:32:00.000
7
0
0
1
0
python,c++,macos,installation,blpapi
0
29,039,670
0
1
0
false
0
0
There is a missing step in the Python SDK README file; it instructs you to set BLPAPI_ROOT in order to build the API wrapper, but this doesn't provide the information needed at runtime to be able to load it. If you unpacked the C/C++ SDK into '/home/foo/blpapi-sdk' (for example), you will need to set DYLD_LIBRARY_PATH to allow the runtime dynamic linker to locate the BLPAPI library. This can be done as so: $ export DYLD_LIBRARY_PATH=/home/foo/blpapi-sdk/Darwin
1
3
0
0
I am trying to install and run successfully Bloomberg API Python 3.5.5 and I have also downloaded and unpacked C++ library 3.8.1.1., both for the Mac OS X. I'm running Mac OS X 10.10.2. I am using the Python native to Mac OS X, Python 2.7.6 and I had already installed, via Xcode, the Command line gcc compiler, GCC 4.2.1. I did, on an administrator account, sudo python setup.py install. I also had changed the setup.py ENVIRONMENT variable BLPAPI_ROOT to the directory for the C++ headers, blpapi_cpp_3.8.1.1. The setup was successful. I changed to another directory as suggested by the Python's README file, to avoid 'Import Error: No module named _internals'. When I go to python and enter the command import blpapi, I obtain the following error: import blpapi Traceback (most recent call last): File "", line 1, in File "/Library/Python/2.7/site-packages/blpapi/init.py", line 5, in from .internals import CorrelationId File "/Library/Python/2.7/site-packages/blpapi/internals.py", line 50, in _internals = swig_import_helper() File "/Library/Python/2.7/site-packages/blpapi/internals.py", line 46, in swig_import_helper _mod = imp.load_module('_internals', fp, pathname, description) ImportError: dlopen(/Library/Python/2.7/site-packages/blpapi/_internals.so, 2): Library not loaded: libblpapi3_64.so Referenced from: /Library/Python/2.7/site-packages/blpapi/_internals.so Reason: image not found I check the directory for /Library/Python.../blpapi/ and there is no _internals.so only *.py files. Is that the problem? I don't know how to proceed.
Bloomberg API Python 3.5.5 with C++ 3.8.1.1. on Mac OS X import blpapi referencing
0
1
1
0
0
3,384
28,877,499
2015-03-05T12:05:00.000
0
0
0
0
1
python,django,pagination
0
28,939,075
0
2
1
true
1
0
You've all been really helpful. However, my confusion came from the fact that paginator was being added to the context, yet there was a statement {% load paginator %} at the top of the template. I thought they were the same, but no. The paginator from the context was unused, and the load statement pulled in the bad paginator, which was registered with the templating engine. The fix is obvious: remove the load statement, keep the context paginator, and use that one.
1
0
0
0
My code imports Paginator from django.core.paginator. (Django 1.6.7) However, when run, somehow it is calling a custom paginator. I don't want it to do this, as the custom paginator template has been removed in an upgrade. I just want Django's Paginator to be used, but I can't find how to workout where it's overriding Django's Paginator with our broken one. This may not be so much of a Django question and more of a generic Python question. All the usual things like grepping the code, inserting ipdb's, judicious use of find etc yield no help.
Django keeps calling another package to paginate -- how?
0
1.2
1
0
0
36
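For the generic-Python part of the question, "where is this name coming from", you can ask the imported object for its defining file. Demonstrated with a stdlib class, since Django is not importable here; the same call works on django.core.paginator.Paginator:

```python
import inspect
import json

# Ask any imported class where its source actually lives:
print(inspect.getsourcefile(json.JSONEncoder))
# e.g. .../json/encoder.py - a path pointing somewhere other than
# django/core/ would reveal an overriding package.
```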
28,877,646
2015-03-05T12:14:00.000
0
0
0
1
0
python,python-2.7,celery,py2exe
0
28,878,537
0
1
0
false
0
0
if __name__ == '__main__': app.start() should be added to the entry point of the script.
1
0
0
0
Is it possible to use Celery to run an already compiled (py2exe) python script, if yes, how I can invoke it ?
Is it possible to use Celery to run an already compiled (py2exe) python script
1
0
1
0
0
94
28,892,506
2015-03-06T04:47:00.000
2
0
1
0
0
python,travis-ci,anaconda,conda
0
28,909,473
0
2
0
true
0
0
The requirements in the meta.yaml can be any conda packages (which don't have to be Python packages). If you have a conda package for your dependency, you can specify it.
1
3
0
0
I am writing a meta.yaml file for a python package to be used in conda packages in a way that works with CI systems. how can i specify external software requirements for the package? meaning software that is not a python library but is required for the package unit tests to pass? to clarify: the required module is not a python package, but the python package depends on it.
how to specify external software requirements with conda/meta.yaml in Python?
1
1.2
1
0
0
1,203
28,903,187
2015-03-06T16:28:00.000
0
0
0
0
0
django,facebook,rest,django-rest-framework,python-social-auth
0
38,586,945
0
1
0
false
1
0
I didn't understand the first case. When you use Facebook login, Facebook does the authentication, and you register the user with the access token it provides. Whenever the user logs in, you are not worried about the password; authentication is not done on your end. So whenever the user tries to log in, the app contacts Facebook, and if everything goes well there, it will give you a token with which the user can log in.
1
1
0
0
I have to implement a REST backend for mobile applications. I will have to use Django REST Framework. Among the features that I need to implement there will be the registration user and login. Through the mobile application the user can create an account using ONLY the Facebook login. Then, the application will take the information from Facebook using the token-facebook and it will send this data to my server. I tried using python_social about Facebook authentication and user registration using the Facebook token. At this point I have doubts: think there could be two choices: 1: The mobile application use the Facebook-login to retrieve user data and will send a request to my server to create a new user with the Facebook data user and passing the Facebook-token. In this case, in the server side, it will not be integrated python_social and facebook-token is a simple profile field. Doubts: how can you implement the next login (which password Is necessary to use?) 2: The second possibility is to use python_social. In this way there are no problems for subsequent logins. The token Facebook will be used to retrieve the data (and validate the user) by calling: do_auth But in this case, for each user, the server will have to make a request to Facebbok (which actually is possible to avoid: the mobile application has already recovered all the data) What is the best case? What do you usually use for authentication backend rest with Facebook?
Django REST Backend for mobile app with Facebook login
1
0
1
0
0
779
28,909,929
2015-03-07T00:27:00.000
0
0
1
0
0
python,pandas,datanitro
0
28,910,528
0
1
0
false
0
0
Right click in the shell, click "mark", highlight the region you want to copy, and press enter. It'll then be in your clipboard and you can paste it into an editor. (This is the way to copy things from a windows command prompt in general, which is where the shell is running.)
1
2
0
0
I'm using DataNitro and would like to be able to copy output from the Excel Shell and paste into my editor. However, I can't find a way to do that. Any idea how I might do that?
DataNitro - copying from the shell into an editor
0
0
1
0
0
110
28,915,681
2015-03-07T14:04:00.000
1
0
0
0
0
python,qt,pyqt
0
28,923,741
0
3
0
true
0
1
QWidget.sizeHint holds the recommended size for the widget. Its default implementation returns the layout's preferred size if the widget has a layout. So if your dialog has a layout, just use sizeHint to get the recommended size, which is the default one.
1
0
0
0
When I create a QMainWindow without explicitly specifying its dimensions, PyQt will give it a -let's say- "standard size" which is not the minimum that the window can get. Can I set this size at will in any way? My goal is to get this "standard size" according to the currently visible widgets, when I set the visibility of some widgets on/off.
how to get the default size of a window in pyqt4
1
1.2
1
0
0
1,830
28,921,997
2015-03-08T01:11:00.000
1
0
0
0
1
python,android,pygame,renpy
1
28,922,150
0
1
0
false
0
1
I e-mailed PyTom, the creator of Ren'Py, and he responded instantly: "Edit the rapt/blacklist.txt file, and delete the line that says **.py." It worked! Thanks, PyTom.
1
0
0
0
I have this game which I have coded in python with pygame. When I launch my game through the renpy launcher, it works just fine. When I emulate my game through the renpy android emulator, it works just fine. When I go through each of the build steps (install sdk, configure, build package, install package) it completes successfully. The package is installed on my connected android device. When I open the game on my android device, it says: "ImportError: No module named renpygame." When I build the game and play it on my desktop I don't have this problem. I'm guessing somehow renpygame is not getting included in the package when I go through the build process in renpy for Android, but I don't know why or how to fix it. I'm not even getting an error message, I don't know where to start.
Ren'Py with Renpygame for Android
0
0.197375
1
0
0
1,556
28,938,162
2015-03-09T08:59:00.000
5
0
0
0
0
python,pydub
0
28,943,471
0
1
0
true
0
0
Your best bet is to grab chunks from the stream (I'd advise 50 millisecond chunks since one complete wave form at 20Hz is 50ms), and construct an AudioSegment using this data. Once you've done that you'll be able to use the AudioSegment().dBFS property to get a rough measure of the average loudness of that chunk. Once you get a sense for where the highs and lows are you can set a threshold below which will be considered silence. You can of course determine the silence threshold automatically as well, but that'll probably require keeping track of loudest and quietest signal level in the last X seconds, and probably using some kind of decay as well. Note: The method I've described above is definitely not the fastest way to do this, but pydub does not natively handle streaming. That said, it's probably the simplest way to accomplish your goal with pydub.
1
3
0
0
I want to monitor an audio streaming for silences. Any idea how I can do this ? it's a stream, not an audio file.
How to detect silence in an audio stream with pydub?
0
1.2
1
0
0
6,105
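pydub's AudioSegment.dBFS does the loudness measurement for you; to make the threshold idea concrete, here is the same math written out for one chunk of raw signed 16-bit samples (the -40 dBFS default is an assumption you would tune against your stream):

```python
import math

def is_silent(samples, threshold_dbfs=-40.0, full_scale=32768.0):
    """Return True if a chunk of signed 16-bit samples is below the threshold.

    Mirrors the kind of value AudioSegment.dBFS gives you; the -40 dBFS
    default is an assumption to tune against the actual stream.
    """
    if not samples:
        return True
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return True
    return 20 * math.log10(rms / full_scale) < threshold_dbfs

print(is_silent([0, 1, -1, 2]))         # True  - near-silent chunk
print(is_silent([20000, -20000] * 50))  # False - loud chunk
```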
28,940,711
2015-03-09T11:18:00.000
5
0
0
0
1
python,opencv,colors,python-2.x
0
28,940,905
0
1
0
true
0
0
JPEG is a lossy format; you need to save your images as PNG, which is a lossless format.
1
1
1
0
I am wondering seriously about the effects of cv2.imwrite() function of OpenCV. I noticed that when I read pictures with cv2.imread() and save them again with cv2.imwrite() function, their quality is not the same any more for the human eyes. I ask you how can I keep the quality of the image the same as the original after saving it using cv2.imwrite() function. I ask this question because I have really a serious issue in a larger program and when I checked the quality of the pictures saved by this function, I guesses that my problem comes certainly from this function. For example, I draw using the mouse movements small red (Red=255) squares on picture . When I save the picture and count the number of pixels that have Red color equal to 255 I get very few of them only even if I draw a lot of them in pure red color. But when I check the image by my eyes, I notice the red pixels I drawed are not correctly saved in the correct red color I chosed (255). Any one does know how to resolve this problem ? I mean to save the pictures using OpenCV without degrading its quality.
Not losing the quality of pictures saved with cv2.imwrite()
0
1.2
1
0
0
2,752
28,949,290
2015-03-09T18:31:00.000
1
1
0
0
1
java,python,intellij-idea,pycharm,ide
0
66,222,468
1
8
0
false
1
0
If there are any file watchers active (Preferences>Tools>File Watchers), make sure to check their Advanced Options. Disable any Auto-save files to trigger the watcher toggles. This option supersedes the Autosave options from Preferences>Appearance & Behaviour>System Settings.
2
78
0
0
I have done a fair amount of googling about this question and most of the threads I've found are 2+ years old, so I am wondering if anything has changed, or if there is a new method to solve the issue pertaining to this topic. As you might know when using IntelliJ (I use 14.0.2), it often autosaves files. For me, when making a change in Java or JavaScript files, its about 2 seconds before the changes are saved. There are options that one would think should have an effect on this, such as Settings > Appearance & Behavior > System Settings > Synchronization > Save files automatically if application is idle for X sec. These settings seem to have no effect for me though, and IntelliJ still autosaves, for example if I need to scroll up to remember how a method I am referencing does something. This is really frustrating when I have auto-makes, which mess up TomCat, or watches on files via Grunt, Karma, etc when doing JS development. Is there a magic setting that they've put in recently? Has anyone figured out how to turn off auto-save, or actually delay it?
Turning off IntelliJ Auto-save
0
0.024995
1
0
0
59,393
28,949,290
2015-03-09T18:31:00.000
8
1
0
0
1
java,python,intellij-idea,pycharm,ide
0
37,813,276
1
8
0
false
1
0
I think the correct answer was given as a comment from ryanlutgen above: The behaviour of "auto-saving" your file is not due to the auto-save options mentioned. IJ saves all changes to your build sources to automatically build the target. This can be turned off in: Preferences -> Build,Execution,Deployment -> Compiler -> Make project automatically. Note: you now have to initiate the project build manually (e.g. by using an appropriate key-shortcut). (All other "auto-save" options just fine-tune the built-in auto-save behaviour.)
2
78
0
0
I have done a fair amount of googling about this question and most of the threads I've found are 2+ years old, so I am wondering if anything has changed, or if there is a new method to solve the issue pertaining to this topic. As you might know when using IntelliJ (I use 14.0.2), it often autosaves files. For me, when making a change in Java or JavaScript files, its about 2 seconds before the changes are saved. There are options that one would think should have an effect on this, such as Settings > Appearance & Behavior > System Settings > Synchronization > Save files automatically if application is idle for X sec. These settings seem to have no effect for me though, and IntelliJ still autosaves, for example if I need to scroll up to remember how a method I am referencing does something. This is really frustrating when I have auto-makes, which mess up TomCat, or watches on files via Grunt, Karma, etc when doing JS development. Is there a magic setting that they've put in recently? Has anyone figured out how to turn off auto-save, or actually delay it?
Turning off IntelliJ Auto-save
0
1
1
0
0
59,393
28,960,249
2015-03-10T09:37:00.000
2
0
0
0
0
python,cryptography
0
28,960,426
0
1
0
false
0
0
Writing the bytes literally as '\x00\x00\xff' should work; so does decoding from hex, with "0000ff".decode('hex') in Python 2 or bytes.fromhex("0000ff") in Python 3.
1
0
0
0
I am trying to carry out a padding oracle attack. I am aware that I have to modify the bytes from 00 to the point where it succeeds, to find the correct padding. But, how do I represent 00-FF in python? When I try representing it as a part of the string, 00 is taken as 2 bytes. P.S - This is a homework problem.
padding oracle attack - how to represent hexadecimal as one byte in python
0
0.379949
1
0
0
289
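A sketch of iterating one byte over all 256 values, which is the core loop of the attack (the '0000' prefix is just example data):

```python
def candidate_blocks(prefix_hex):
    """Yield the prefix with its final byte set to each value 0x00-0xFF.

    Python 3 shown; in Python 2 you would use 'ff'.decode('hex') and
    chr(guess) instead of bytes.fromhex / bytes([guess]).
    """
    prefix = bytes.fromhex(prefix_hex)
    for guess in range(256):
        yield prefix + bytes([guess])

blocks = list(candidate_blocks('0000'))
print(blocks[255])   # b'\x00\x00\xff'
print(len(blocks))   # 256
```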
28,967,747
2015-03-10T15:31:00.000
0
0
0
0
1
python,django,django-models,django-admin
0
29,432,774
0
2
0
true
1
0
I was using Django 1.6, which did not support overriding the get_fields method. I updated to 1.7 and this method worked perfectly.
1
1
0
0
I want to create a dynamic admin site, that based on if the field is blank or not will show that field. So I have a model that has a set number of fields, but for each individual entry will not contain all of the fields in my model and I want to exclude based on if that field is blank. I have a unique bridge identifier, that correlates to each bridge, and then all of the various different variables that describe the bridge. I have it set up now that the user will go to a url with the unique bridgekey and then this will create an entry of that bridge. So (as i am testing on my local machine) it would be like localhost/home/brkey and that code in my views.py that corresponds to that url is However, not every bridge is the same and I have a lot more variables that I would like to include in my model but for now I am just testing on two : prestressed_concrete_deck and reinforced_concrete_coated_bars. What I want is to dynamically create the admin site to not display the prestressed_concrete_deck variable if that field is blank. So instead of displaying all of the variables on the admin site, I want to only display those variables if that bridge has that part, and to not display anything if the field is blank. Another possible solution to the problem would be to get that unique identifier over to my admins.py. I cant figure out either how to get that individual key over as then I could query in the admins.py. If i knew how to access the bridgekey, I could just query in my admins.py dynamically. So how would I access the brkey for that entry in my admins.py (Something like BridgeModel.brkey ?) I have tried several different things in my admin.py and have tried the comments suggestion of overwriting the get_fields() method in my admin class, but I am probably syntactically wrong and I am kind of confused what the object it takes exactly is. Is that the actual entry? Or is that the individual field?
Create a dynamic admin site
0
1.2
1
0
0
84
28,971,180
2015-03-10T18:17:00.000
0
1
0
1
0
python,ssh
0
28,971,821
0
3
0
true
0
0
If you want the python script to exit, I think your best bet would be to continue doing a similar thing to what you're doing; print the credentials in the form of arguments to the ssh command and run python myscript.py | xargs ssh. As tdelaney pointed out, though, subprocess.call(['ssh', args]) will let you run the ssh shell as a child of your python process, causing python to exit when the connection is closed.
1
1
0
0
I am writing a little script which picks the best machine out of a few dozen to connect to. It gets a users name and password, and then picks the best machine and gets a hostname. Right now all the script does is print the hostname. What I want is for the script to find a good machine, and open an ssh connection to it with the users provided credentials. So my question is how do I get the script to open the connection when it exits, so that when the user runs the script, it ends with an open ssh connection. I am using sshpass.
Open SSH connection on exit Python
0
1.2
1
0
1
1,186
29,017,156
2015-03-12T18:05:00.000
1
0
0
0
1
python,openerp,aptana,odoo
0
29,017,479
0
1
0
true
1
0
It could happen if you run your server in "run" mode and not "debug" mode. If you are in "run" mode the breakpoints would be skipped. In Aptana, go to the "run" -> "debug" to run it in debug mode.
1
0
0
0
I have created Odoo v8 PyDev project in Aptana. When I run the openerp server from Aptana, and set a breakpoint in my file product_nk.py, the program does not stop at this break point although I navigated to the Odoo web pages where the functionality is linked to the code with breakpoint. What am I possibly missing in the setup and what I need to do to have the program stop at the set breakpoint in Python code?
OpenERP, Aptana - debugging Python code, breakpoint not working
0
1.2
1
0
0
371
29,018,843
2015-03-12T19:40:00.000
1
0
1
0
0
python,numpy,matrix,scipy
0
29,026,455
0
1
0
true
0
0
With standard dict methods you can get a list of the keys, and another list of the values. Pass the 2nd to numpy.array and you should get a 100 x 7000 array. The keys list could also be made into array, but it might not be any more useful than the list. The values array could be turned into a sparse matrix. But its size isn't exceptional, and arrays have more methods. Tomorrow I can add sample code if needed.
1
1
1
0
I have a very large dictionary of the following format {str: [0, 0, 1, 2.5, 0, 0, 0, ...], str: [0, 0, 0, 1.1, 0, 0, ...], ...}. The number of elements for each str key can be very big so I need an effective way to store and make calculations over this data. For example right now my dict of str keys has 100 keys. Each key has one value which is a list of 7000 float elements. The length of str keys and values is constant. So, let's say str key is of length 5 and its value (which is a list) is 7000. After some reading I found that scipy.sparse module has a nice collection of various matrices to store sparse data but scipy documentation is so sparse that I can barely understand what's going on. Can you provide an example of how to convert the dictionary above to correct matrix type?
How to convert a sparse dict to scipy.sparse matrix in python?
1
1.2
1
0
0
1,646
29,019,387
2015-03-12T20:15:00.000
0
0
1
0
0
macos,python-3.x,ipython
0
30,279,651
0
2
0
false
0
0
Check in the hidden directory ".ipynb_checkpoints" inside of the directory that used to hold the notebook. If you had recently been running the notebook prior to deleting it, you may be able to find a recent copy of it saved at the last "checkpoint".
2
1
0
0
Just installed iPython/Jupyter and accidentally deleted pictures from a file that was living on my desktop. I don't know how to undo what I just deleted and can't seem to find any of the pictures in my trash. Is there anyway I can recover them? My instance of iPython/Jupyter is still open. Thanks.
Accidentally deleted a folder's contents in iPython/Jupyter and I can't recover it from trash on Mac OS X Yosemite. Is there any way I can get it back?
0
0
1
0
0
3,792
29,019,387
2015-03-12T20:15:00.000
1
0
1
0
0
macos,python-3.x,ipython
0
29,019,518
0
2
0
false
0
0
No, you can't easily recover the files. The files are gone. Your option is to restore from a backup, or use a data recovery tool of some sort.
2
1
0
0
Just installed iPython/Jupyter and accidentally deleted pictures from a file that was living on my desktop. I don't know how to undo what I just deleted and can't seem to find any of the pictures in my trash. Is there anyway I can recover them? My instance of iPython/Jupyter is still open. Thanks.
Accidentally deleted a folder's contents in iPython/Jupyter and I can't recover it from trash on Mac OS X Yosemite. Is there any way I can get it back?
0
0.099668
1
0
0
3,792
29,041,571
2015-03-13T20:40:00.000
5
1
0
0
1
python,windows,file-extension,file-association
0
63,965,668
0
2
0
false
0
0
Press the Windows key, type cmd, then right-click the result and choose "run as administrator". Then run: assoc .foo=foofile and ftype foofile="C:\Users\<user>\AppData\Local\Programs\Python\PYTHON~1\python.exe" "C:\<whatever>\fooOpener.py" "%1" %* Use pythonw.exe if it's a .pyw file (to prevent a cmd window from spawning). If you want to use an existing file type, you can find its alias by not assigning anything. For example, assoc .txt returns .txt=txtfile.
1
17
0
0
I want to do the following: Save numeric data in a CSV-like formatting, with a ".foo" extension; Associate the ".foo" file extension with some python script, which in turns opens the .foo file, reads its content, and plots something with a plotting library (matplotlib most probably). The use-case would be: double-click the file, and its respective plot pops up right away. I wonder how I should write a python script in order to do that. Besides, the windows "open with" dialog only allows me to choose executables (*.exe). If I choose "fooOpener.py", it doesn't work.
Associate file extension to python script, so that I can open the file by double click, in windows
0
0.462117
1
0
0
8,512
29,049,985
2015-03-14T14:23:00.000
1
0
0
0
0
python,pandas,time-series
0
29,050,296
0
3
0
false
0
0
Use this function to create the new column... DataFrame.shift(periods=1, freq=None, axis=0, **kwds) Shift index by desired number of periods with an optional time freq
1
1
1
0
I realize this is a fairly basic question, but I couldn't find what I'm looking for through searching (partly because I'm not sure how to summarize what I want). In any case: I have a dataframe that has the following columns: * ID (each one represents a specific college course) * Year * Term (0 = fall semester, 1 = spring semester) * Rating (from 0 to 5) My goal is to create another column for Previous Rating. This column would be equal to the course's rating the last time the course was held, and would be NaN for the first offering of the course. The goal is to use the course's rating from the last time the course was offered in order to predict the current semester's enrollment. I am struggling to figure out how to find the last offering of each course for a given row. I'd appreciate any help in performing this operation! I am working in Pandas but could move my data to R if that'd make it easier. Please let me know if I need to clarify my question.
Pandas Time-Series: Find previous value for each ID based on year and semester
0
0.066568
1
0
0
1,352
29,056,302
2015-03-15T01:35:00.000
0
0
0
0
0
python,numpy,casting
0
29,058,672
0
1
0
true
0
0
You can't change the type of parts of an ordinary ndarray. An ndarray requires all elements in the array to have the same numpy type (the dtype), so that mathematical operations can be done efficiently. The only way to do this is to change the dtype to object, which allows you to store arbitrary types in each element. However, this will drastically reduce the speed of most operations, and make some operations impossible or unreliable (such as adding two arrays).
1
1
1
0
I have an arff file as input. I read the arff file and put the element values in a numpy ndarray. Now my arff file contains some '?' as some of the elements. Basically these are property values of matrices calculated by anamod. Whichever values anamod cannot calculate, it plugs in a '?' character for those. I want to do a Naive Bayes, Random Forest, etc. prediction for my data. So to handle the '?' I want to use an imputer, which is like: Imputer(missing_values='NaN', strategy='mean', axis=0). The missing_values above is of type string, of course. My question is how to change the type of a few numpy ndarray elements from float to string. I used my_numpy_ndarray.astype('str') == 'NaN' to check for NaN values and I could do it successfully, but I am not sure how to change the type of a numpy ndarray float element to string.
change the type of numpyndarray float element to string
0
1.2
1
0
0
117
29,060,962
2015-03-15T13:11:00.000
8
0
0
0
0
python,numpy,scikit-learn
1
29,061,597
0
1
0
false
0
0
OK, I got it. After I used Imputer(missing_values='NaN', strategy='median', axis=1) and imp.fit(X2), I also had to write X2 = imp.fit_transform(X2). The reason is that sklearn.preprocessing.Imputer.fit_transform returns a new array; it doesn't alter the argument array.
1
9
1
0
I'm using numpy for reading an arff file and I'm getting the following error: ValueError: Input contains NaN, infinity or a value too large for dtype('float64'). I used np.isnan(X2.any()) and np.isfinite(X2.all())to check if it's a nan or infinite case. But it's none of these. This means it's the third case, which is infinity or a value too large for dtype('float64'). I would appreciate if someone could tell me how to take care of this error. Thanks.
a value too large for dtype('float64')
0
1
1
0
0
26,758
29,068,229
2015-03-16T01:12:00.000
0
0
0
0
1
python,opencl,pyopencl
0
42,962,569
0
5
0
false
0
0
CodeXL from AMD works very well.
1
5
1
0
I am trying to optimize a pyOpenCL program. For this reason I was wondering if there is a way to profile the program and see where most of the time is needed for. Do you have any idea how to approach this problem? Thanks in advance Andi EDIT: For example nvidias nvprof for CUDA would do the trick for pyCuda, however, not for pyOpenCL.
Is there a way to profile an OpenCL or a pyOpenCL program?
1
0
1
0
0
2,168
29,073,907
2015-03-16T09:57:00.000
0
1
0
0
0
python,email,smtp
0
29,073,968
0
1
0
false
0
0
Short of sending an email and having someone respond to it, it is impossible to verify that an email address exists. You can verify that the SMTP server has a whois address, but that's it.
1
0
0
0
I want to check whether a given email ID really exists on the SMTP server. Is it possible to check this or not? If it is possible, please give me a suggestion on how we can do it.
How to check if an email exists or not in Python
1
0
1
0
1
1,440
29,079,698
2015-03-16T14:45:00.000
2
0
1
1
0
python,ansible,kvm,libvirt
0
29,079,838
0
1
0
false
0
0
Of course, if you have SSH access to it. Yes, you can run Ansible using its Python API or through a command-line call. About passing a YAML file: also yes.
1
1
0
0
I have a specific requirement where I can use only Ansible on my host machine, without Vagrant. Two questions associated with it: Is it possible to spin up a VM on the host machine with libvirt/KVM as the hypervisor using Ansible? I know there is a module called virt in Ansible which is capable of doing this, but I couldn't find any real example of how to use it. I would appreciate it if someone could point me to an example YAML file through which I can spin up a VM. With Ansible, is it possible to run my playbook from Python code? If I am not wrong, there is a Python API supported by Ansible, but is it possible to give a YAML file as input to this API which executes tasks from the YAML?
Spin up VM using Ansible without Vagrant
1
0.379949
1
0
0
561
29,085,298
2015-03-16T19:24:00.000
0
0
0
0
0
python,windows,matlab,file,shared
0
29,085,388
0
3
0
true
0
0
I am not sure about Windows' API for locking files. Here's a possible solution: While Matlab has the file open, you create an empty file called "data.lock" or something to that effect. When Python tries to read the file, it will check for the lock file, and if it is there, it will sleep for a given interval. When Matlab is done with the file, it can delete the "data.lock" file. It's a programmatic solution, but it is simpler than digging through the Windows API and finding the right calls in Matlab and Python.
1
2
1
0
I have a Matlab application that writes in to a .csv file and a Python script that reads from it. These operations happen concurrently and at their own respective periods (not necessarily the same). All of this runs on Windows 7. I wish to know : Would the OS inherently provide some sort of locking mechanism so that only one of the two applications - Matlab or Python - have access to the shared file? In the Python application, how do I check if the file is already "open"ed by Matlab application? What's the loop structure for this so that the Python application is blocked until it gets access to read the file?
Shared file access between Python and Matlab
0
1.2
1
0
0
212
29,088,344
2015-03-16T22:50:00.000
0
0
0
0
0
python,web-development-server
0
37,997,045
0
1
0
false
1
0
For that you would need a web framework like Bottle or Flask. Bottle is a simple WSGI-based web framework for Python. Using either of these you may write simple REST-based APIs, one for set and the other for get. Your "set" API could accept data from your client side and store it in your database, whereas your "get" API should return the data by reading it from your DB. Hope it helps.
1
0
0
0
I am in the middle of my personal website development and I am using python to create a "Comment section" which my visitors could leave comments at there in public (which means, everybody can see it, so don't worry about the user name registration things). I already set up the sql database to store those data but only thing I haven't figured out yet was how to get the user input (their comments) from the browser. So, is there any modules in python could do that? (Like, the "Charfield" things in django, but unfortunately I don't use django)
How I can get user input from browser using python
1
0
1
0
1
754
29,102,422
2015-03-17T14:56:00.000
1
0
0
0
0
python,mysql,django,eclipse,pydev
0
29,493,720
0
1
0
true
1
0
I am a Mac user. I have luckily overcome the issue of connecting Django to MySQL Workbench. I assume that you have already installed the Django package and created your project directory, e.g. mysite. Initially, after installation of MySQL Workbench, I created a database: create database djo; Go to mysite/settings.py and edit the following block. NOTE: Keep the engine name "django.db.backends.mysql" while using the MySQL server, and STOP the other Django MySQL service which might be running. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'djo', # Or path to database file if using sqlite3. # The following settings are not used with sqlite3: 'USER': 'root', 'PASSWORD': '****', # Replace **** with your set password. 'HOST': '127.0.0.1', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP. 'PORT': '3306', # Set to empty string for default. } } Now run manage.py to sync your database: $ python mysite/manage.py syncdb bash-3.2$ python manage.py syncdb Creating tables ... Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_groups Creating table auth_user_user_permissions Creating table auth_user Creating table django_content_type Creating table django_session Creating table django_site You just installed Django's auth system, which means you don't have any superusers defined. Would you like to create one now? (yes/no): yes Username (leave blank to use 'ambershe'): root Email address: [email protected] /Users/ambershe/Library/Containers/com.bitnami.django/Data/app/python/lib/python2.7/getpass.py:83: GetPassWarning: Can not control echo on the terminal. passwd = fallback_getpass(prompt, stream) Warning: Password input may be echoed. Password: **** Warning: Password input may be echoed. Password (again): **** Superuser created successfully. Installing custom SQL ... Installing indexes ... Installed 0 object(s) from 0 fixture(s)
1
0
0
0
I am new to this, so a silly question: I am trying to make a demo website using Django, and for that I need a database. I have downloaded and installed MySQL Workbench for the same. But I don't know how to set this up. Thank you in advance :) I tried googling stuff but didn't find any exact solution for the same. Please help
Connect MySQL Workbench with Django in Eclipse in a mac
0
1.2
1
1
0
1,235
29,114,480
2015-03-18T05:02:00.000
0
0
0
0
1
python-2.7,scrapy,virtualenv,virtualenvwrapper
0
29,121,164
0
1
0
false
1
0
You need to pip install all of the required packages within the virtualenv.
1
0
0
0
I created a virtualenv named ScrapyProject. When I use the scrapy command or the pip command it does not work, but when I enter the python command it works. Here is what it shows me: (ScrapyProject) C:\Users\Jake\ScrapyProject>scrapy (ScrapyProject) C:\Users\Jake\ScrapyProject>pip (ScrapyProject) C:\Users\Jake\ScrapyProject>python python2.7.6 etc. Here is how my paths are in the virtualenv: C:\Users\Jake\ScrapyProject\Scripts; then some Windows paths, then some Python paths. There are no extra spaces between them, I am sure! And here is how my Python paths look: C:\python27;C:\Python27\Lib\site-packages. Can anybody help me? If you need some extra information, I will give it to you. I totally did not understand it!
Commands not working in ScrapyProject
0
0
1
0
0
40
29,124,446
2015-03-18T14:16:00.000
0
0
0
1
0
python,mongodb,tornado
0
29,156,089
0
1
0
false
0
0
You need to understand how Tornado works asynchronously. Every time you yield a Future object, Tornado suspends the current coroutine and jumps to the next coroutine. Doing queries synchronously or asynchronously depends on the situation. If your query is fast enough, you can use a synchronous driver. Also, keep in mind that jumping between coroutines has a cost, too. If it is not fast enough, you might consider making asynchronous calls.
1
0
0
0
Basically, what is a Future in Tornado's approach? I've read in some StackOverflow threads that a Tornado coroutine must return a Future, but if I return a Future, how do my db queries work? Using Futures, will my Tornado app be waiting for the query to return anything, like blocking I/O, or will it just dispatch the request and change the context until the query returns? And this Motorengine solution? Do I need to use Futures or just make the queries?
Do I need to use Tornado Futures with Motorengine?
0
0
1
0
0
113
29,137,043
2015-03-19T04:05:00.000
0
1
1
0
0
python,hash
0
29,137,224
0
1
0
true
0
0
Search on StackOverflow for code to recursively list full file names in Python. Search on StackOverflow for code to return the hash checksum of a file. Then list files using an iterator function. Inside the loop: get the hash checksum of the current file in the loop, then iterate through every hash; inside that loop, compare with the current file's checksum. Algorithms? Don't worry about it. If you iterate through each line of the file, it will be fine. Just don't load it all at once, and don't load it into a data structure such as a list or dictionary, because you might run out of memory.
1
0
0
0
I have a text file of size 2.5 GB which contains hash values of some standard known files. My task is to find the hash of all files on my file system and compare it with the hashes stored in the text file. If a match is found I need to print Known on the screen and if no match is found then I need to print unknown on the screen. Thus the approach for the task is quite simple but the main issue is that the files involved in the process are very huge. Can somebody suggest how to accomplish this task in an optimized way. Should I import the text file containing hashes to a database. If yes, then please provide some link which could possibly help me accomplish it. Secondly what algorithm can I use for searching to speed up the process? My preferred language is Python.
Perform searching on a very large file programmatically in Python
0
1.2
1
0
0
62
29,148,746
2015-03-19T15:32:00.000
1
0
0
0
1
python,scikit-learn,cluster-analysis,mean-shift
0
29,149,537
0
1
1
true
0
0
The standard deviation of the clusters isn't 1. You have 8 dimensions, each of which has a stddev of 1, so you have a total standard deviation of sqrt(8) or something like that. Kernel density estimation does not work well in high-dimensional data because of bandwidth problems.
1
1
1
0
I am working with the Mean Shift clustering algorithm, which is based on the kernel density estimate of a dataset. I would like to generate a large, high dimensional dataset and I thought the Scikit-Learn function make_blobs would be suitable. But when I try to generate a 1 million point, 8 dimensional dataset, I end up with almost every point being treated as a separate cluster. I am generating the blobs with standard deviation 1, and then setting the bandwidth for the Mean Shift to the same value (I think this makes sense, right?). For two dimensional datasets this produced fine results, but for higher dimensions I think I'm running into the curse of dimensionality in that the distance between points becomes too big for meaningful clustering. Does anyone have any tips/tricks on how to get a good high-dimensional dataset that is suitable for (something like) Mean Shift clustering? (or am I doing something wrong? (which is of course a good possibility))
Generating high dimensional datasets with Scikit-Learn
0
1.2
1
0
0
681
29,170,268
2015-03-20T15:33:00.000
0
0
0
0
1
python,sql
0
29,170,744
0
2
0
false
0
0
For the first question (how do I make sure I'm not duplicating ingredients?): if I understand correctly, the answer is basically to put your primary key as (i_id, name) in the Ingredients table. This way you guarantee that it is impossible to insert an ingredient with the same key (i_id, name). Now for the second question (how do I insert the data so that the ingredient is linked to the id of the current Recipe object?): I don't really understand this question very well. What I think you want is to link recipes with ingredients. This can be done with the RecipeIngredients table. When you want to do that, you simply insert a new row into that table with the id of the recipe and the id of the ingredient. If this isn't what you want, sorry, but I really don't understand.
1
0
0
0
I'm not sure what exactly the wording for the problem is so if I haven't been able to find any resource telling me how to do this, that's most likely why. The basic problem is that I have a webcrawler, coded in Python, that has a 'Recipe' object that stores certain data about a specific recipe such as 'Name', 'Instructions', 'Ingredients', etc. with 'Instructions' and 'Ingredients' being a string array. Now, the problem I have comes when I want to store this data in a database for access from other sources. A basic example of the database looks as follows: (Recipes) r_id, name, .... (Ingredients) i_id, name, .... (RecipeIngredients) r_id, i_id. Now, specifically my problem is, how do I make sure I'm not duplicating ingredients and how do I insert the data so that the ingredient is linked to the id of the current Recipe object? I know my explanation is bad but I'm struggling to put it into words. Any help is appreciated, thanks.
Inserting data into SQL database that needs to be linked
0
0
1
1
0
45
29,178,949
2015-03-21T03:37:00.000
1
0
1
0
0
python,git
0
29,179,099
0
1
0
true
0
0
Absolute paths that depend on your specific computer do not belong in version control. A good solution would be to have your program read an environment variable and use it as the path. Make sure to set a sensible default if the environment variable is unset.
1
1
0
0
I am using a python program on 2 different computers. On computer 1 some path (e.g., to an image or something), used by the program, is, say, a/b/ On computer 2, the equivalent path is different, say, b/a/ (the image, e.g., is in a different folder) When I want to run the script on computer 1 I pull the code and set the path to a/b/. Then I make changes and push. Then I go to computer 2 and pull. Now the path is a/b/ but actually I want the pull to not change the path (all the rest should change though of course). Q1: Is there a way to automatically do this (prevent the changes in the path)? Also I keep getting merge conflicts just due to the path being different. Q2: I might not even be doing this in an optimal way, how do people do this? My procedure could be wrong causing these issues.
Ignore part of file in git when using on 2 different computers (python)
1
1.2
1
0
0
48
29,186,447
2015-03-21T18:30:00.000
0
0
1
0
0
python,audio
0
29,192,486
0
1
0
false
0
0
I don't know how you recorded your text, but if you call every word on a separate line, you can type import time at line 1 and then after every audio file you could set time.sleep(0.3) (0.3 stands for 0.3 seconds). But that may take a while. It would be useful to see your code, so can you send it, maybe?
1
0
0
0
I want to make a simple script that uses audio files to talk a user through a process. When the user has finished the current step, it should stop trying to explain that step and move into the next. This is easy to do, except that it sounds very ugly and unnatural when the audio stops mid-word. I've noticed in Grand Theft Auto 5, when a character is talking and something unexpected happens, the audio pauses between words and then resumes at the beginning of the sentence a few seconds later -- which sounds very natural, because that's how people really speak. I'd like to find a way to do this with my script. It doesn't need to resume at the beginning of the sentence, because it's responding the the user being ready to move on, but I'd like it to somehow find the spaces between words and pause there. Is there a simple way to do this in Python, or maybe an easier way to do it in something else, assuming that it's not simple in Python? Edit: The audio has yet to be recorded, so if there's some way I should record it to make this happen, I can do that. I'm aware that GTA5 had a team of programmers, and I'm just one guy making a simple script -- but it seems like there should be a simple solution, like something that looks for silence in the audio, or maybe marking the spaces between words by hand?
python - stop audio between words
0
0
1
0
0
100
29,190,350
2015-03-22T02:39:00.000
0
0
1
0
1
python-2.7,pygame,importerror
1
29,221,166
0
1
0
false
0
1
There is really only one way to do this: Create a folder. Put the Python interpreter you are using in that folder. Put the PyGame module you are using in that same folder. And your problem is now solved. I hope this helps you!
1
0
0
0
Okay, so I am brand new at this and I really need for this to be dumbed down for me. My python version is 2.7.9 and I downloaded pygame-1.9.1.win32-py2.7.msi and I am on a windows computer. I really need someone to explain why this is not working. I was reading on some of the other questions that you have to change the path of python and I have no idea how to do that. I am literally screwed because I have to turn something in for a major grade and I have literally been trying to figure this out for like about four weeks now. When I try to import pygame, I get this error message and I have absolutely no idea what I am doing wrong: Traceback (most recent call last): File "C:\Users\Tiffany\Documents\NetBeansProjects\NewPythonProject\src\newpythonproject.py", line 3, in import pygame ImportError: No module named pygame
ImportError: No module named pygame and how to change the path of pygame?
1
0
1
0
0
3,786
29,197,996
2015-03-22T18:11:00.000
0
0
1
0
1
c#,python,windows-8,windows-store-apps,ironpython
0
29,978,493
0
1
0
true
0
1
I don't think MS allows this functionality. As an alternative, you can have the user put their mouse over the window and press a keyboard shortcut (What I am doing). That is the best one can do.
1
0
0
0
Background I am working on a program that needs to find a list of open Metro apps. I originally tried using pure python with ctypes and using win32 api. Unfortunately, I couldn't get the names of Metro apps. So I moved on to IronPython thinking I could leverage a .net function or two to get what I want. No luck. Where I am at I can easily get a list of running processes and their PID, but filtering the Metro apps from the non-metro apps is proving near impossible. Also, I need to get the HWND of that window so I can interact with it. What Won't Work EnumWindows doesn't seem to work. FindWindowEx seems promising, but I am not sure how to do this. Thanks in advance. EDIT: I am now able to get what I want using IsImmersiveProcess, but the process list is doesn't include the Windows Apps.
IronPython-List Open Metro Apps only
0
1.2
1
0
0
121
29,208,955
2015-03-23T11:20:00.000
0
1
0
0
0
oauth,openid,python-social-auth
0
30,144,643
0
1
0
false
1
0
You could simply use the yahoo site's favicon. It's already squared and gets as close to official as it can get.
1
0
0
0
I want my users to be able to login using all common OpenIds. But there seems to be a forest of clauses on how to use logos from google, facebook, yahoo and twitter. Actually I'd prefer a wider button with the text on it, but it seems that all these buttons don't have the aspect ratio. And some of them I am not allowed to scale. Also some pages switched to javascript buttons which seems incredibly stupid, because that causes the browser to download all the javascripts and the page looks like 1990 where you can watch individual elements loading. So finally I surrendered and went for the stackoverflow approach of using square logos with the text "login with" next to them. But now I can't find an official source for a square yahoo icon. The question: Can you recommend any articles or do you have any tipps on how to make OpenId look uniform? And: Have you seen an official source for a square Icon that I'm allowed to use? PS: I'm using python-social-auth for Django.
How to make most common OpenID logins look the same?
0
0
1
0
0
21
29,221,836
2015-03-23T22:44:00.000
0
1
0
1
0
python,openshift
0
29,236,634
0
2
0
false
0
0
You cannot use the "yum" command to install packages on OpenShift. What specific issue are you having? I am sure that at least some of those packages are already installed in OpenShift Online (such as wget). Have you tried running your project to see what specific errors you get about what is missing?
1
0
0
0
Hello, I want to install these dependencies in OpenShift for my app: yum -y install wget gcc zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel libffi-devel libxslt libxslt-devel libxml2 libxml2-devel openldap-devel libjpeg-turbo-devel openjpeg-devel libtiff-devel libyaml-devel python-virtualenv git libpng12 libXext xorg-x11-font-utils But I don't know how. Is it through rhc? If so, how?
How to install dependencies in OpenShift?
0
0
1
0
0
461
29,232,101
2015-03-24T11:58:00.000
0
0
1
0
0
python,windows,anaconda
0
43,085,233
0
3
0
false
0
0
Install newest version of Anaconda from their website. This will upgrade your Anaconda and python as well. Also, uninstall the previous version of Anaconda.
1
6
0
0
After successful installation of Anaconda on Windows 7 I realized the default Python version is 2.7.8. However I need 2.7.9. So how do I upgrade?
How do I upgrade python 2.7.8 to 2.7.9 in Anaconda without conflicting other components in its environment?
0
0
1
0
0
6,678
29,246,687
2015-03-25T02:58:00.000
0
0
1
0
0
python,python-2.7,dictionary,key
0
29,246,758
0
6
0
false
0
0
A key has only one value; you would need to make the value a tuple, list, etc. If you know you are going to have multiple values for a key, then I suggest you make the values capable of handling this when they are created.
1
5
0
0
The current code I have is category1[name]=(number); however, if the same name comes up, the value in the dictionary is replaced by the new number. How would I make it so that instead of the value being replaced, the original value is kept and the new value is also added, giving the key two values now? Thanks.
How to add to python dictionary without replacing
0
0
1
0
0
10,467
29,252,233
2015-03-25T09:47:00.000
3
0
1
0
0
python,python-3.x,asynchronous,python-asyncio
0
29,729,638
0
2
0
true
0
0
The standard way to deal with this is by starting multiple server processes (each with its own event loop), with a load balancer in front. Each such process typically cannot utilize more than one CPU core, so you might want to have as many processes as you have cores.
1
6
0
0
I am trying to create an asynchronous application using Python's asyncio module. However, all implementations I can find on the documentation is based on a single Event Loop. Is there any way to launch multiple Event Loops running the same application, so I can achieve high availability and fault tolerance? In other words, I want to scale-out my application by inserting new nodes that would share the execution of coroutines behind a load balancer. I understand there is an inherent issue between asynchronous programming and thread-safety, perhaps what I have in mind isn't even possible. If so, how to avoid this kind of SPOF on asynchronous architectures?
High availability for Python's asyncio
0
1.2
1
0
0
1,433
29,262,859
2015-03-25T17:46:00.000
3
0
1
0
0
python,date
0
29,263,021
0
3
0
false
0
0
You're almost there: take the date of Dec 28. If there is a Monday after that, it will only have 3 days in the old year and hence be week 1 of the new year.
1
6
0
0
In Python, how can we find out the number of calendar weeks in a year? I didn't find a function in the standard library. I then thought about date(year, 12, 31).isocalendar()[1], but For example, 2004 begins on a Thursday, so the first week of ISO year 2004 begins on Monday, 29 Dec 2003 and ends on Sunday, 4 Jan 2004, so that date(2003, 12, 29).isocalendar() == (2004, 1, 1) and date(2004, 1, 4).isocalendar() == (2004, 1, 7).
The number of calendar weeks in a year?
0
0.197375
1
0
0
7,322
29,284,282
2015-03-26T16:48:00.000
8
0
1
1
0
python,numpy,module
1
33,483,629
0
2
0
false
0
0
The easiest solution is to unzip the .whl file using 7-Zip. In the unzipped directory you will find the module, which you can copy and paste into C:/Python34/Lib/site-packages/ (or wherever else you have installed Python).
2
5
0
1
I recently tried to re-install numpy for python 3.4, since I got a new computer, and am struggling. I am on windows 8.1, and from what I remember I previously used a .exe file that did everything for me. However, this time I was given a .whl file (apparently this is a "Wheel" file), which I cannot figure out how to install. Other posts have explained that I have to use PIP, however the explanations of how to install these files that I have been able to find are dreadful. The command "python install pip" or "pip install numpy" or all the other various commands I have seen only return an error that "python is not recognized as an internal or external command, operable program or batch file", or "pip is not recognised as an internal...." etc. I have also tried "python3.4", "python.exe" and many others since it does not like python. The file name of the numpy file that I downloaded is "numpy-1.9.2+mkl-cp34-none-win_amd64.whl". So can anybody give me a detailed tutorial of how to use these, as by the looks of things all modules are using these now. Also, why did people stop using .exe files to install these? It was so much easier!
How to install in python 3.4 - .whl files
0
1
1
0
0
22,006
29,284,282
2015-03-26T16:48:00.000
6
0
1
1
0
python,numpy,module
1
29,286,276
0
2
0
true
0
0
Python 3.4 comes with PIP already included in the package, so you should be able to start using PIP immediately after installing Python 3.4. Commands like pip install <packagename> only work if the path to PIP is included in your path environment variable. If it's not, and you'd rather not edit your environment variables, you need to provide the full path. The default location for PIP in Python 3.4 is in C:\Python34\Scripts\pip3.4.exe. If that file exists there (it should), enter the command C:\Python34\Scripts\pip3.4.exe install <numpy_whl_path>, where <numpy_whl_path> is the full path to your numpy .whl file. For example: C:\Python34\Scripts\pip3.4.exe install C:\Users\mwinfield\Downloads\numpy‑1.9.2+mkl‑cp34‑none‑win_amd64.whl.
2
5
0
1
I recently tried to re-install numpy for python 3.4, since I got a new computer, and am struggling. I am on windows 8.1, and from what I remember I previously used a .exe file that did everything for me. However, this time I was given a .whl file (apparently this is a "Wheel" file), which I cannot figure out how to install. Other posts have explained that I have to use PIP, however the explanations of how to install these files that I have been able to find are dreadful. The command "python install pip" or "pip install numpy" or all the other various commands I have seen only return an error that "python is not recognized as an internal or external command, operable program or batch file", or "pip is not recognised as an internal...." etc. I have also tried "python3.4", "python.exe" and many others since it does not like python. The file name of the numpy file that I downloaded is "numpy-1.9.2+mkl-cp34-none-win_amd64.whl". So can anybody give me a detailed tutorial of how to use these, as by the looks of things all modules are using these now. Also, why did people stop using .exe files to install these? It was so much easier!
How to install in python 3.4 - .whl files
0
1.2
1
0
0
22,006
29,290,361
2015-03-26T22:57:00.000
0
1
1
0
0
python,camera,raspberry-pi
0
29,363,197
0
2
0
false
0
0
You do realise that the Pi Camera board has a fixed focus from 1 m to infinity? If you want focus under 1 m, you're going to have to manually twist the lens, after removing the glue that fixes it in the housing.
2
0
0
0
I just got my raspberry pi yesterday and have gone through all the setup and update steps required to enable the camera extension (which took me like 6 hrs; yes, I am a newbie). I need to create a focus point on the camera's output and use the arrow keys to move this point around the screen. My question is how to access the pixel addresses in order to achieve this (if that is even how to do it). Please can one of you gurus out there point me in the right direction.
A focus point on the raspberry pi camera
0
0
1
0
0
1,081
29,290,361
2015-03-26T22:57:00.000
0
1
1
0
0
python,camera,raspberry-pi
0
38,924,580
0
2
0
false
0
0
If you need to take close-ups, or are capturing an image very close to the camera lens, then you need to set the focus manually.
2
0
0
0
I just got my raspberry pi yesterday and have gone through all the setup and update steps required to enable the camera extension (which took me like 6 hrs; yes, I am a newbie). I need to create a focus point on the camera's output and use the arrow keys to move this point around the screen. My question is how to access the pixel addresses in order to achieve this (if that is even how to do it). Please can one of you gurus out there point me in the right direction.
A focus point on the raspberry pi camera
0
0
1
0
0
1,081
29,298,191
2015-03-27T10:14:00.000
0
0
1
0
0
python,xml-parsing,python-2.4
0
29,298,306
0
1
0
false
0
0
You don't need to upgrade the system Python. Install your own local version of 2.7 into your home directory, create a virtualenv using that version, and install your libraries there.
1
1
0
0
I want to use a library which is available from python 2.7 onwards on a system with python 2.4. I cannot upgrade the system to python 2.7, as many other libraries and programs are written for python 2.4. For instance, I want to use the xml.etree library in python 2.4. Can I take the source code of that library, make a few changes, and compile it on 2.4? If yes, could you please tell me how to proceed?
how to use a library that's compiled for python 2.7 in python 2.4
0
0
1
0
0
52
29,307,829
2015-03-27T18:34:00.000
1
0
1
0
0
python,python-2.7,pandas,pycharm,anaconda
0
29,353,457
0
1
0
true
0
0
If you are using conda, use the --offline flag when installing the conda package that you downloaded, like conda install --offline pandas-0.15.2-np19py27.tar.bz2.
1
1
0
0
Hi, I have both PyCharm and Anaconda installed on my computer and I would like to upgrade the package Pandas to the latest version (I have version 0.14). Unfortunately my computer has strong firewall restrictions and I am not able to use the internet to update it. I can download the source files, though. Is there a way to install the package manually? I use Win 7 64-bit, by the way. Many thanks!!!
how to update a Python package (Pandas) manually?
0
1.2
1
0
0
1,167
29,317,107
2015-03-28T12:24:00.000
0
0
0
0
0
python,vim
0
29,332,655
0
3
0
false
0
0
Thanks everyone who replied. I hoped .swp would be documented, maybe even code available to access it, seems not. The suggestion to write a plugin in Python probably makes the most sense, I bet it's possible to hook to something like 'on_keystroke' and maintain a mirror I can understand. 'shell out' as in 'write this chunk of text to a temp file, open that file in vim, when vim exits, write the temp file contents to a replacement chunk of text' PS: $100 is way too much to shell out for those tickets
2
3
0
1
An application wants to shell text out to vim and know what edits are being made, in real time. The .swp file provides this information. Can anyone provide guidance on how to read this binary file, say, with Python?
What is the structure of a vim .swp file?
0
0
1
0
0
2,549
29,317,107
2015-03-28T12:24:00.000
1
0
0
0
0
python,vim
0
29,321,744
0
3
0
false
0
0
I wouldn't rely on the swapfile contents to get real-time updates. Its format is geared towards Vim's uses, and its format isn't documented other than by its implementation. You would have to duplicate large parts of the algorithm, and maintain that whenever the internal format changes (without prior notice). Alternatively, I would use one of the embedded languages (e.g. Python) to interface with the outside program that wants to get real-time updates. A Python function could periodically send along the entire buffer contents on a socket, for example.
2
3
0
1
An application wants to shell text out to vim and know what edits are being made, in real time. The .swp file provides this information. Can anyone provide guidance on how to read this binary file, say, with Python?
What is the structure of a vim .swp file?
0
0.066568
1
0
0
2,549
29,319,546
2015-03-28T16:27:00.000
0
0
1
0
0
python,visual-studio,ptvs
0
47,253,726
0
2
0
false
0
0
Use the Python Interactive Window (CTRL-ALT-F8 or Debug Menu). You will have the code output on the python interactive shell (where you can obviously interact). The win terminal will not appear anymore.
1
12
0
0
In PTVS the default behavior is for the program to print to the Python console window and the Visual Studio Debug Output window. Realizing that it won't be able to accept user input, how do I suppress the Python console window?
How do I suppress the console window when debugging python code in Python Tools for Visual Studio (PTVS)?
0
0
1
0
0
9,244
29,337,928
2015-03-30T03:25:00.000
3
0
1
1
0
python,windows,anaconda
0
50,202,529
0
14
0
false
0
0
It looks like some files and some registry keys were left behind. You can run a cleaner tool such as Revo Uninstaller to remove those entries as well. Reboot and install again; it should work now. I also faced this issue, and a complete clean-up got rid of it.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
0.042831
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
18
0
1
1
0
python,windows,anaconda
0
45,360,960
0
14
0
false
0
0
If a clean re-install/uninstall did not work, this is because the Anaconda install is still listed in the registry. Start -> Run -> Regedit. Navigate to HKEY_CURRENT_USER -> Software -> Python. You may see 2 subfolders, Anaconda and PythonCore. Expand both and check the "Install Location" in the Install folder; it will be listed on the right. Delete the Anaconda folder, or both of them, or the entire Python folder, and the registry path that installs your Python packages to Anaconda will be gone.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
1
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
3
0
1
1
0
python,windows,anaconda
0
61,464,989
0
14
0
false
0
0
For Windows: in the Control Panel, choose Add or Remove Programs or Uninstall a program, and then select Python 3.6 (Anaconda) or your version of Python. Use Windows Explorer to delete the envs and pkgs folders in the root of your installation prior to running the uninstall.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
0.042831
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
7
0
1
1
0
python,windows,anaconda
0
48,980,402
0
14
0
false
0
0
Using Uninstall-Anaconda.exe in C:\Users\username\Anaconda3 is a good way.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
1
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
1
0
1
1
0
python,windows,anaconda
0
58,889,740
0
14
0
false
0
0
Uninstall Anaconda from the Control Panel. Delete related folders, cache data and configurations from Users/user. Delete the AppData folder from the hidden list. To remove the Start menu entry, go to C:/ProgramData/Microsoft/Windows/ and delete the Anaconda folder, or search for Anaconda in the Start menu and right-click Anaconda Prompt -> Show in Folder. This cleans up almost every Anaconda file on your system.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
0.014285
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
3
0
1
1
0
python,windows,anaconda
0
63,536,711
0
14
0
false
0
0
There is a Start menu folder on the C:\ drive. Remove your Anaconda3 folder there and you are good to go. In my case I found it at "C:\Users\pravu\AppData\Roaming\Microsoft\Windows\Start Menu\Programs".
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
0.042831
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
3
0
1
1
0
python,windows,anaconda
0
56,087,895
0
14
0
false
0
0
On my machine (Win10), the uninstaller was located at C:\ProgramData\Anaconda3\Uninstall-Anaconda3.exe.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
0.042831
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
1
0
1
1
0
python,windows,anaconda
0
54,453,907
0
14
0
false
0
0
Go to C:\Users\username\Anaconda3 and search for Uninstall-Anaconda3.exe which will remove all the components of Anaconda.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
0.014285
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
19
0
1
1
0
python,windows,anaconda
0
37,898,464
0
14
0
false
0
0
On my computer there wasn't an uninstaller in the Start menu either, but it worked in Control Panel > Programs > Uninstall a program, selecting Python (Anaconda 64-bit) in the list. (Note that I'm using Win10.)
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
1
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
184
0
1
1
0
python,windows,anaconda
0
39,490,516
0
14
0
false
0
0
In the folder where you installed Anaconda (for example C:\Users\username\Anaconda3) there should be an executable called Uninstall-Anaconda.exe. Double-click on this file to start uninstalling Anaconda. That should do the trick as well.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
1
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
8
0
1
1
0
python,windows,anaconda
0
29,377,301
0
14
0
false
0
0
Anaconda comes with an uninstaller, which should have been installed in the Start menu.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
1
1
0
0
368,077
29,337,928
2015-03-30T03:25:00.000
15
0
1
1
0
python,windows,anaconda
0
29,377,433
0
14
0
true
0
0
Since I didn't have the uninstaller listed - the solution turned out to be to reinstall Anaconda and then uninstall it.
13
102
0
0
I installed Anaconda a while ago but recently decided to uninstall it and just install basic python 2.7. I removed Anaconda and deleted all the directories and installed python 2.7. But when I go to install PyGTK for Windows it says it will install it to the c:/users/.../Anaconda directory - this doesn't even exist. I want to install it to the c:/python-2.7 directory. Why does it think Anaconda is still installed? And how can I change this?
How to remove anaconda from windows completely?
0
1.2
1
0
0
368,077