Schema (column: dtype, observed range or string lengths):

Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
30,519,299
2015-05-29T00:34:00.000
0
0
0
0
python,postgresql,amazon-rds,gevent
30,519,353
2
false
1
0
You could try this from within psql to get more details on query timing: EXPLAIN sql_statement (or EXPLAIN ANALYZE sql_statement to see actual execution times). Also turn on more database logging. MySQL has slow query analysis; PostgreSQL's equivalent is the log_min_duration_statement setting, which logs every statement slower than a given threshold.
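The EXPLAIN suggestion above is PostgreSQL-specific and run from psql, but the same workflow can be sketched with the stdlib sqlite3 module; the table and values below are invented purely for illustration.

```python
# Hypothetical illustration (invented table and data): PostgreSQL's EXPLAIN
# runs inside psql, but the same idea can be shown with stdlib sqlite3.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(1000)])

# Ask the planner how it will execute the query
# (the PostgreSQL analogue is EXPLAIN ANALYZE <query> in psql).
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE id = ?", (500,)
).fetchall()

# Crude client-side timing around the query itself.
start = time.perf_counter()
row = conn.execute("SELECT total FROM orders WHERE id = ?", (500,)).fetchone()
elapsed = time.perf_counter() - start
```

Timing on the client like this can also help tell apart "the query is slow" from "the connection pool or driver is slow", which matters for the gevent setup in the question.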
1
0
0
First, the server setup: nginx frontend to the world gunicorn running a Flask app with gevent workers Postgres database, connection pooled in the app, running from Amazon RDS, connected with psycopg2 patched to work with gevent The problem I'm encountering is inexplicably slow queries that are sometimes running on the order of 100ms or so (ideal), but which often spike to 10s or more. While time is a parameter in the query, the difference between the fast and slow query happens much more frequently than a change in the result set. This doesn't seem to be tied to any meaningful spike in CPU usage, memory usage, read/write I/O, request frequency, etc. It seems to be arbitrary. I've tried: Optimizing the query - definitely valid, but it runs quite well locally, as well as any time I've tried it directly on the server through psql. Running on a larger/better RDS instance - I'm currently working on an m3.medium instance with PIOPS and not coming close to that read rate, so I don't think that's the issue. Tweaking the number of gunicorn workers - I thought this could be an issue, if the psycopg2 driver is having to context switch excessively, but this had no effect. More - I've been working for a decent amount of time at this, so these were just a couple of the things I've tried. Does anyone have ideas about how to debug this problem?
Inconsistently slow queries in production (RDS)
0
1
0
1,181
30,519,737
2015-05-29T01:34:00.000
3
0
1
0
python,python-3.x
30,519,763
3
false
0
0
You can't change the values in tuples; tuples are immutable. You would need to make them lists, or create a new tuple with the value you want and store that.
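A sketch of the "create new tuples" approach this answer describes, using the data from the question:

```python
# Since tuples are immutable, "incrementing" means building new tuples.
# Input data taken from the question above.
data = [(('d', 0), ('g', 0)), (('d', 0), ('d', 1)), (('i', 0), ('g', 0))]

# One comprehension replaces the single for loop the asker mentioned.
bumped = [tuple((letter, num + 1) for letter, num in pair) for pair in data]
```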
1
11
0
Suppose that I have tuples of the form [(('d',0),('g',0)),(('d',0),('d',1)),(('i',0),('g',0))] Then how do I increment the numbers inside the tuple that they are of the form:- [(('d',1),('g',1)),(('d',1),('d',2)),(('i',1),('g',1))] ? I am able to do this in a single for loop. But I am looking for shorter methods. P.S. You are allowed to create new tuples
how to increment inside tuple in python?
0.197375
0
0
13,989
30,524,215
2015-05-29T07:58:00.000
0
0
1
0
windows,python-2.7,append,sys.path
30,524,216
1
true
0
0
I figured it out whilst typing my question. Just use a double backslash before directories starting with numbers, i.e. sys.path.append("C:\Postgrad\\2015\Records\\20150528\RAMP_UP2") which appends 'C:\Postgrad\2015\Records\20150528\RAMP_UP2' as intended.
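For context, the mangling happens because sequences like \2 and \20 are octal escapes in an ordinary string literal; doubling the offending backslashes works, and a raw string avoids escaping altogether. A small sketch:

```python
# Doubling every backslash and using a raw string produce the same path;
# the question's bug came from "\201..." being read as an octal escape (\x81).
escaped = "C:\\Postgrad\\2015\\Records\\20150528\\RAMP_UP2"
raw = r"C:\Postgrad\2015\Records\20150528\RAMP_UP2"
```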
1
0
0
I am having trouble using the sys.path.append() function. When I try to append a path which contains directories names starting with numbers the directories are not correctly named. For example: sys.path.append("C:Postgrad\2015\Records\20150528\RAMP_UP2") Returns: 'C:\Postgrad\x815\Records\x8150528\RAMP_UP2' In the sys.path directory. Is there a way to ensure that the path is correctly appended?
Error Using sys.path.append - Function copies path incorrectly
1.2
0
0
124
30,526,510
2015-05-29T09:52:00.000
2
0
0
0
python,django,python-2.7,django-models
62,368,839
1
false
1
0
To track history for a model, use django-simple-history; installing it should fix your issue: pip install django-simple-history
1
2
0
I am using django-simple-history for maintaining the history for each model. Now when I import the model in my management command it gives import error "cannot find import name 'ModelName'". Any Help regarding this.?
django-simple-history cannot import name error for model import in management command
0.379949
0
0
1,268
30,528,852
2015-05-29T11:49:00.000
2
0
0
0
python,numpy,nan
30,529,135
3
false
0
0
From what I understand NaN represents anything that is not a number, while a masked array marks missing values OR values that are numbers but are not valid for your data set. I hope that helps.
2
13
1
In numpy there are two ways to mark missing values: I can either use a NaN or a masked array. I understand that using NaNs is (potentially) faster while masked array offers more functionality (which?). I guess my question is if/ when should I use one over the other? What is the use case of np.NaN in a regular array vs. a masked array? I am sure the answer must be out there but I could not find it...
numpy: difference between NaN and masked array
0.132549
0
0
2,235
30,528,852
2015-05-29T11:49:00.000
1
0
0
0
python,numpy,nan
52,755,040
3
false
0
0
Keep in mind that the strange np.nan behaviours mentioned by jrmyp include unexpected results, for example when using functions from the statsmodels module (e.g. ttest) or the numpy module (e.g. average). From experience, most of those functions have workarounds for NaNs, but it has the potential of driving you mad for a while. This seems like a reason to use masked arrays whenever possible.
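A small sketch of the difference (assuming numpy is available): the plain mean is poisoned by a NaN, while both the nan-aware function and a masked array skip it.

```python
import numpy as np

values = np.array([1.0, np.nan, 3.0])

# Plain mean is poisoned by the NaN; the nan-aware variant skips it.
plain = np.mean(values)
nan_aware = np.nanmean(values)

# A masked array gets the same result from the ordinary .mean() call,
# which is why masking composes better with generic numeric code.
masked = np.ma.masked_invalid(values)
masked_mean = masked.mean()
```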
2
13
1
In numpy there are two ways to mark missing values: I can either use a NaN or a masked array. I understand that using NaNs is (potentially) faster while masked array offers more functionality (which?). I guess my question is if/ when should I use one over the other? What is the use case of np.NaN in a regular array vs. a masked array? I am sure the answer must be out there but I could not find it...
numpy: difference between NaN and masked array
0.066568
0
0
2,235
30,533,189
2015-05-29T15:15:00.000
1
0
1
0
python,string,null,escaping,backslash
30,533,575
1
false
0
0
Considering all the comments, it looks like an incorrectly used PIL/Pillow API, namely the Image.open function, which expects a file name (or a file-like object), not the raw file data itself.
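If the socket payload really is the file's bytes, the usual fix is to wrap them in a file-like object instead of passing them where a file name is expected; PIL's Image.open accepts such an object too, as in Image.open(io.BytesIO(payload)). A sketch with stand-in bytes (not a real image):

```python
import io

# Simulate "file data received over a socket": raw bytes, not a path.
# These are stand-in bytes, not an actual decodable image.
payload = b"\x89PNG\r\n\x1a\nrest-of-data"

# Wrapping the bytes in a file-like object is what APIs expecting an
# open file (rather than a file name) can consume directly.
buffer = io.BytesIO(payload)
first_bytes = buffer.read(4)
```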
1
1
0
I'm using Python 2.6 and I have a variable which contains a string (I have sent it thorugh sockets and now I want to do something with it). The problem is that I get the following error: TypeError: file() argument 1 must be encoded string without NULL bytes, not str After I looked it up I found out that the problem is probably that the string I'm sending contains '\0' but it isn't a literal string that I can just edit with double backslash or adding a 'r' before hand, so is there a way to tell python to ignore the escape sequences and treat the whole thing as string? (For example - I don't want python to treat the sequence \0 as a null char, but rather I want it to be treated as a backslash char followed by a zero char)
Ignoring escape sequences
0.197375
0
0
666
30,533,263
2015-05-29T15:18:00.000
3
0
1
1
python,vim,nerdtree,python-mode,netrw
30,533,662
3
true
0
0
But having a file opened, if I open netrw by typing :E and open another file by hitting <enter> VIM closes the old one and opens the new one in the same window. [...] How can I open multiple files/buffers in the same window using netrw? Buffers are added to a list, the buffer list, and facultatively displayed in one or more windows in one or more tab pages. Since a window can only display one buffer, the only way to see two separate buffers at the same time is to display them in two separate windows. That's what netrw's o and v allow you to do. When you use <CR> to edit a file, the previous buffer doesn't go away: it is still in the buffer list and can be accessed with :bp[revious].
1
2
0
I have recently switched to VIM using NERDTree and python-mode. As NERDTree seems to have a conflict with python-mode and breaks my layout if I close one out of multiple buffers, I decided to switch to netrw since it is shipped with VIM anyway. But having a file opened, if I open netrw by typing :E and open another file by hitting <enter> VIM closes the old one and opens the new one in the same window. And if I hit <o> in the same situation VIM adds another buffer but adds a new window in a horizontal split. How can I add multiple files/buffers to the buffer list and only show the last added buffer in the active window (without new splits) using netrw? #edited# Thanks in advance! I hope I haven't missed something trivial from the manual.. ;-)
VIM + netrw: open multiple files/buffers in the same window
1.2
0
0
3,905
30,535,810
2015-05-29T17:38:00.000
0
0
0
0
python,visual-studio,python-3.x,matplotlib,ptvs
30,576,274
2
true
0
0
All of the attempts sound futile... Even removing VS is challenging... and, as people say, changing the OS is the most reliable way to get rid of VS in the presence of such anomalies in its libraries... So... I changed the OS... and installed VS and PTVS again...
2
1
1
Here is a Python 3.4 user, in VS 2013 and PTVS... I'd written a program to plot something by Matplotlib... The output had been generating and everything was ok... So, I closed VS and now I've opened it again after an hour, running the very script, but this time as soon as I press F5, a window appears and says Python has stopped working... There is a short log in the output window, which asserts that: The program '[9952] python.exe' has exited with code -1073741819 (0xc0000005) 'Access violation'. Who could decrypt this error, please?!... Kind Regards ......................................... Edit: I just tested again with no change... The error has been changed to: The program '[11284] python.exe' has exited with code -1073741819 (0xc0000005) 'Access violation'. Debug shows that when the program points to the drawing command of the matloptlib, i.e. plt.show(), this crash will happen...
Python access violation
1.2
0
0
659
30,535,810
2015-05-29T17:38:00.000
-1
0
0
0
python,visual-studio,python-3.x,matplotlib,ptvs
30,536,536
2
false
0
0
It seems to be a problem with your Python and PTVS. Try removing every .pyc file and having another go at it.
2
1
1
Here is a Python 3.4 user, in VS 2013 and PTVS... I'd written a program to plot something by Matplotlib... The output had been generating and everything was ok... So, I closed VS and now I've opened it again after an hour, running the very script, but this time as soon as I press F5, a window appears and says Python has stopped working... There is a short log in the output window, which asserts that: The program '[9952] python.exe' has exited with code -1073741819 (0xc0000005) 'Access violation'. Who could decrypt this error, please?!... Kind Regards ......................................... Edit: I just tested again with no change... The error has been changed to: The program '[11284] python.exe' has exited with code -1073741819 (0xc0000005) 'Access violation'. Debug shows that when the program points to the drawing command of the matloptlib, i.e. plt.show(), this crash will happen...
Python access violation
-0.099668
0
0
659
30,535,831
2015-05-29T17:39:00.000
0
1
0
0
python,libraries,coordinate-systems,satellite,sgp4
57,221,666
1
false
0
0
Michael mentioned it in his comment, but I believe PyEphem is deprecated as of the current Python 3 version. That being said, if you are going to use TLEs, SGP4 was made to handle TLEs in particular. The non-Keplerian and non-Newtonian terms you see in TLEs are specifically passed into the SGP4 propagator (B* drag, second derivative of mean motion, etc.). Once you get outside the Earth's neighborhood (beyond GEO), SGP4 is not meant to handle those cases: SGP4 is inherently a near-Earth propagator that does not scale well to an interplanetary or even cislunar regime. In fact, if both apogee and perigee extend beyond GEO, I would tend to avoid SGP4. It is important to note that SGP4 outputs coordinates in the TEME frame (true equator, mean equinox), which is an inertial frame. If you want ECEF coordinates, you will need a package that converts from inertial to fixed frames. Regardless of whether or not you want earth-fixed coordinates, I highly recommend making this conversion so you can then convert to your inertial frame of choice.
1
5
0
The landscape of Python tools that seem to accomplish the task of propagating Earth satellites/celestial bodies is confusing. Depending on what you're trying to do, PyEphem or Python-SGP4 may be more suitable. Which of these should I use if: I want ECEF/ECI coordinates of an Earth satellite I want general sky coordinates of a celestial object Near Earth vs. far away objects Want to use two-line element sets Do any of these accomplish precise orbit determination? If not, where do I go/what resources are there out there for precise orbit determination? I kind of know the answers here. For instance, POD is not part of any of these libraries. These computations seem to be very involved. POD for many objects are available from IGS. The main reason I ask is for documentation purposes. I'm not familiar with python-skyfield, but I have a hunch it accomplishes what these other two do. --Brandon Rhodes, I await your expertise :)
Which should I use: Python-sgp4, PyEphem, python-skyfield
0
0
0
2,484
30,536,946
2015-05-29T18:48:00.000
1
0
1
0
pip,python-3.4
48,930,745
6
false
0
0
If you have problems with the python command, you need to add the route C:\Python34 (or wherever you have Python installed) to your PATH: Right click on "My computer", click "Properties", click "Advanced system settings" in the side panel, click "Environment Variables", click "New" below system variables, then find the Path variable and edit it, adding ;C:\Python34 (with the semicolon). Now you can run these commands: cd C:\Python34 python -m pip install requests
4
20
0
What command should I use in command prompt to install requests module in python 3.4 version ??? pip install requests is not useful to install requests module in python 3.4 version. Because while running the script below error is coming ImportError : no module named 'requests'
How to install requests module in python 3.4 version on windows?
0.033321
0
1
108,064
30,536,946
2015-05-29T18:48:00.000
58
0
1
0
pip,python-3.4
30,537,052
6
true
0
0
python -m pip install requests or py -m pip install requests
4
20
0
What command should I use in command prompt to install requests module in python 3.4 version ??? pip install requests is not useful to install requests module in python 3.4 version. Because while running the script below error is coming ImportError : no module named 'requests'
How to install requests module in python 3.4 version on windows?
1.2
0
1
108,064
30,536,946
2015-05-29T18:48:00.000
3
0
1
0
pip,python-3.4
42,662,073
6
false
0
0
On Windows, I found navigating to my Python folder via CMD worked cd C:\Python36\ and then running the commandline: python -m pip install requests
4
20
0
What command should I use in command prompt to install requests module in python 3.4 version ??? pip install requests is not useful to install requests module in python 3.4 version. Because while running the script below error is coming ImportError : no module named 'requests'
How to install requests module in python 3.4 version on windows?
0.099668
0
1
108,064
30,536,946
2015-05-29T18:48:00.000
0
0
1
0
pip,python-3.4
64,150,710
6
false
0
0
After installing Python, which comes with pip, open a command prompt and input pip install requests. That should do it.
4
20
0
What command should I use in command prompt to install requests module in python 3.4 version ??? pip install requests is not useful to install requests module in python 3.4 version. Because while running the script below error is coming ImportError : no module named 'requests'
How to install requests module in python 3.4 version on windows?
0
0
1
108,064
30,538,062
2015-05-29T19:56:00.000
0
0
0
0
python,function,matplotlib,dataset
30,674,814
1
false
0
0
In the end the solution was easier than I thought. I simply had to define the continuous variable through the discrete data, for example: w = x/y, then define the function as already said: exfunct = w**4, and finally plot the "continuous-discrete" function: plt.plot(x, x/exfunct, 'k-', color='red', lw=2). I hope this can be useful.
1
0
1
I need to plot a ratio between a function introduced thorough a discrete data set, imported from a text file, for example: x,y,z=np.loadtxt('example.txt',usecols=(0,1,2),unpack=True), and a continuous function defined using the np.arange command, for example: w=np.arange(0,0.5,0.01) exfunct=w**4. Clearly, solutions as plt.plot(w,1-(x/w),'k--',color='blue',lw=2) as well plt.plot(y,1-(x/w),'k--',color='blue',lw=2) do not work. Despite having looked for the answer in the site (and outside it), I can not find any solution to my problem. Should I fit the discrete data set, to obtain a continuous function, and then define it in the same interval as the "exfunct"? Any suggestion? Thank you a lot.
matplotlib discrete data versus continuous function
0
0
0
1,808
30,538,356
2015-05-29T20:17:00.000
1
1
0
0
python,robotframework
30,659,802
2
false
0
0
I took a relatively quick look through the sources, and it seems that the execution context does not have any reference to the currently executing keyword. So, the only way I can think of resolving this is: Your library also needs to be a listener, since listeners get events when a keyword is started. You then need to go through robot.libraries.BuiltIn.EXECUTION_CONTEXT._kw_store.resources to find out which resource file contains the keyword currently executing. I did not do a POC, so I am not sure whether this is actually doable, but that's the solution that comes to my mind currently.
1
1
0
I want to create a python library with a 0 argument function that my custom Robot Framework keywords can call. It needs to know the absolute path of the file where the keyword is defined, and the name of the keyword. I know how to do something similar with test cases using the robot.libraries.BuiltIn library and the ${SUITE_SOURCE} and ${TEST NAME} variables, but I can't find anything similar for custom keywords. I don't care how complicated the answer is, maybe I have to dig into the guts of Robot Framework's internal classes and access that data somehow. Is there any way to do this?
Robot Framework location and name of keyword
0.099668
0
0
2,627
30,538,673
2015-05-29T20:40:00.000
2
0
1
0
python,list,python-2.7,list-comprehension
30,538,735
2
false
0
0
You can use the builtin any function for that: [item for item in some_list_of_objects if any(s in item.id for s in string_list)]
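A runnable sketch of this pattern, with a minimal stand-in class (Item is invented here) to mirror the question's item.id attribute:

```python
# Minimal stand-in for the question's objects, which expose an .id string.
class Item:
    def __init__(self, id):
        self.id = id

some_list_of_objects = [Item("foo_somestring"), Item("bar"), Item("etc_baz")]
string_list = ["somestring", "another_string", "etc"]

# any() short-circuits as soon as one candidate substring matches.
matches = [item for item in some_list_of_objects
           if any(s in item.id for s in string_list)]
```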
1
2
0
I have a situation where I am using list comprehension to scan one list and return items that match a certain criteria. [item for item in some_list_of_objects if 'thisstring' in item.id] I want to expand this and have a list of things that can be in the item, the list being of unknown length. Something like this: string_list = ['somestring', 'another_string', 'etc'] [item for item in some_list_of_objects if one of string_list in item.id] What is a pythonic way to accomplish this? I know I could easily rewrite it to use the standard loop structure, but I would like to keep the list comprehension if I can do so without producing very ugly code. Thanks in advance.
Python: List Comprehension with a Nested Loop
0.197375
0
0
144
30,540,776
2015-05-30T00:00:00.000
1
0
1
0
python,python-3.x,currency
30,541,140
2
false
0
0
Look into the regular expressions module. You can compile a pattern that matches your dollars/cents format and extract the floating-point number from it.
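A sketch along these lines; dollar_string_to_float is the hypothetical helper name borrowed from the question, and the input line is invented:

```python
import re

# Compiled pattern for the $1,234.56 format mentioned in the question.
money = re.compile(r"\$[\d,]+\.\d{2}")

def dollar_string_to_float(text):
    # Strip the '$' and thousands separators, then convert.
    return float(text.lstrip("$").replace(",", ""))

line = "Opening balance $1,234.56 as of May"
found = money.search(line).group()
amount = dollar_string_to_float(found)
```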
1
2
0
I've looked through the 'currency' threads, but they're all for going the other way. I'm reading financial data that comes in as $1,234.56 &c, with everything a string. I split the input line, and want to convert the value item to float for add/subtract (I'm mot worried about roundoff error). Naturally, the float() throws an error. I could write a function to call as 'amount = float(num(value_string)), but woder if there's a "dollar_string_to_float()" function in one of the 32,000 Python modules.
How to read '$1,234.56' as 1234.56
0.099668
0
0
292
30,540,872
2015-05-30T00:16:00.000
1
0
0
0
python,image,pillow
30,541,163
3
false
0
1
I haven't used Pillow, and I haven't seen your images or code, but let's say you have an image with a resolution of 400x200 and you want to resize it to 200x100, then each of the new pixels needs to have some color. Since the new image is smaller, the colors from the original will have to be mashed together to form the new colors. So, in this case where it gets smaller by a factor of two in each dimension, the color of each pixel will be the average of four pixels from the original. Similarly, if you resize to a larger image, depending on how that is done, the new pixels could be blocky (like when you zoom in to any pixel image) or smooth, which would mean that they are some interpolation of the pixels from the original image.
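To make the averaging idea concrete, here is a toy 2x downscale in plain Python, where each output pixel is the average of a 2x2 block; real resize filters use more sophisticated kernels, but the principle is the same.

```python
# Toy 4x4 grayscale "image" (invented values).
original = [
    [10, 30, 50, 70],
    [20, 40, 60, 80],
    [90, 110, 130, 150],
    [100, 120, 140, 160],
]

def downscale_2x(pixels):
    # Each new pixel averages a 2x2 block of the original.
    out = []
    for r in range(0, len(pixels), 2):
        row = []
        for c in range(0, len(pixels[0]), 2):
            block = (pixels[r][c] + pixels[r][c + 1]
                     + pixels[r + 1][c] + pixels[r + 1][c + 1])
            row.append(block // 4)
        out.append(row)
    return out

small = downscale_2x(original)
```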
1
0
0
I'm experiencing something strange when using Image.resize(). I have two images, one is a cropped version of the other. I've been working out the aspect ratio so that I can resize both images by the same factor and then resizing them separately. Now for some reason, the larger of the two images is resized fine but the cropped version has some colour distortion. It almost appears like the image has been slightly saturated. Has anyone experienced this before or know why it might be happening? Thanks for reading.
Using Python to resize images using Pillow, why are the colours changing?
0.066568
0
0
1,342
30,543,041
2015-05-30T06:23:00.000
1
0
1
1
python,macos
30,543,046
1
true
0
0
python is installed on OSX by default and you just need to open terminal and write ‘python’ command, then you can start your python coding
1
1
0
I'm new to computer programming! I want to learn python and write my program and run it on my mac OS X machine. How can i setup python programming tools on OS X and how can i use that ? I before this never use any other programming language.
How can i install python on OSX?
1.2
0
0
36
30,545,494
2015-05-30T11:18:00.000
0
0
0
0
python,wxpython
30,546,887
1
false
0
1
I found out that wx has a EVT_LIST_ITEM_ACTIVATED event that is triggered when a list item is double clicked. I used this to capture the selected item and display the data accordingly.
1
0
0
I have been working on my first wxpython app and this app has a search functionality. The app has to search for elements from database, list them and display details of one single element, when the element is clicked from the list. I found there is ListView or ObjectListView to be used for this. But what should be used so that on click of a single element in that list, I should display the panel which displays the dynamic data for that particular element?
Display single element page on click of list element wxpython
0
0
0
33
30,547,102
2015-05-30T14:02:00.000
2
0
0
0
python,django,python-2.7,numpy
30,549,380
1
true
1
0
I assume your question is "how do I do these calculations in the REST framework for Django?", but I think in this case you need to move away from that idea. You did everything correctly, but RESTful APIs serve resources, basically your models; a computation, however, is nothing like that. As I see it, you have two ways of achieving what you want: 1) write a model that represents the results of a computation and is served using the RESTful framework, making the computation itself a resource (this can work nicely if you store the results in your database as a way of caching), or 2) add a route/endpoint to your API that is meant to serve the results of that computation. Path 1: Computation as Resource. Create a model that handles the computation upon instantiation. You could even set up an inheritance structure for computations and implement an interface for your computation models. This way, when the resource is requested and the RESTful framework wants to serve it, the computational result will be served. Path 2: Custom Endpoint. Add a route for your computation endpoints, like /myapi/v1/taxes/compute. In the underlying controller of this endpoint, load the models you need for your computation, perform the computation, and serve the result however you like (probably as a JSON response). You can still implement computations with the above-mentioned inheritance structure; that way, you can instantiate the Computation object based on a parameter (in the above case, taxes). Does this give you an idea?
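A framework-agnostic sketch of path 2: all names, the fake data, and the route are hypothetical; in a real project the view function would be a Django REST framework view and the computation would use pandas/numpy against real model queries.

```python
# Sketch of a custom computation endpoint: the controller loads data,
# runs the heavy calculation, and returns a JSON-able result.
import json

FAKE_DB = {"taxes": [100, 250, 75]}  # stands in for Django model queries

def compute_taxes(amounts, rate=0.2):
    # The heavy pandas/numpy work would live here.
    return {"total": sum(amounts), "tax": sum(a * rate for a in amounts)}

def taxes_compute_view(request_params):
    # What a view mapped to a route like /myapi/v1/taxes/compute might do.
    amounts = FAKE_DB["taxes"]
    result = compute_taxes(amounts, rate=request_params.get("rate", 0.2))
    return json.dumps(result)

response = taxes_compute_view({})
```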
1
1
0
I have developed a RESTful API using the Django-rest-framework in python. I developed the required models, serialised them, set up token authentication and all the other due diligence that goes along with it. I also built a front-end using Angular, hosted on a different domain. I setup CORS modifications so I can access the API as required. Everything seems to be working fine. Here is the problem. The web app I am building is a financial application that should allow the user to run some complex calculations on the server and send the results to the front-end app so they can be rendered into charts and other formats. I do not know how or where to put these calculations. I chose Django for the back-end as I expected that python would help me run such calculations wherever required. Basically, when I call a particular api link on the server, I want to be able to retrieve data from my database, from multiple tables if required, and use the data to run some calculations using python or a library of python (pandas or numpy) and serve the results of the calculations as response to the API call. If this is a daunting task, I at least want to be able to use the API to retrieve data from the tables to the front-end, process the data a little using JS, and send it to a python function located on the server with this processed data, and this function would run the necessary complex calculations and respond with results which would be rendered into charts / other formats. Can anyone point me to a direction to move from here? I looked for resources online but I think I am unable to find the correct keywords to search for them. I just want a shell code kind of a thing to integrate into my current backed using which I can call some python scripts that I write to run these calculations. Thanks in advance.
Running complex calculations (using python/pandas) in a Django server
1.2
0
0
2,516
30,553,766
2015-05-31T04:12:00.000
11
0
1
0
python
30,553,899
5
true
0
0
Types aren't used the same way in Python as in statically typed languages. A hashable object is simply one with a valid __hash__ method. The interpreter simply calls that method; there is no type checking or anything. From there on out, standard hash map principles apply: for an object to fulfill the contract, it must implement both the __hash__ and __eq__ methods.
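A sketch of that contract: any object implementing __hash__ and __eq__ can sit in a dict alongside strings and ints (Point here is an invented example class).

```python
# Mixed key types coexist because each key only needs __hash__ and __eq__.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __hash__(self):
        return hash((self.x, self.y))
    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

some_obj = Point(1, 2)
foo = {'a': 0, 1: 2, some_obj: 'c'}   # the dict shape from the question

# Lookup calls hash() on the key, then == on candidates in the same bucket,
# so a *different* but equal Point still finds 'c'.
value = foo[Point(1, 2)]
```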
1
8
0
I understand that the following is valid in Python: foo = {'a': 0, 1: 2, some_obj: 'c'} However, I wonder how the internal works. Does it treat everything (object, string, number, etc.) as object? Does it type check to determine how to compute the hash code given a key?
Python: how does a dict with mixed key type work?
1.2
0
0
16,230
30,554,687
2015-05-31T06:52:00.000
0
0
1
0
python,nlp,nltk
46,657,638
2
false
0
0
The first step is to try to do this job yourself, by hand, with a pencil. Try it on not just one but a collection of news stories. You really do have to do this and not just think about it. Draw the graphics just as you'd want the computer to. What this does is force you to create rules about how information is transformed into graphics. This is NOT always possible, so doing it by hand is a good test. If you can't do it, then you can't program a computer to do it. Assuming you have found a paper-and-pencil method, what I like to do is work BACKWARDS. Your method starts with the text. No. Start with the numbers you need to draw the graphic. Then you think about where these numbers are in the stories and what words you have to look at to get them. Your job is now more like a hunting trip: you know the data is there, but you have to find it. Sorry for the lack of details, but I don't know your exact problem; this approach works in every case, though. First learn to do the job yourself on paper, then work backwards from the output to the input. If you try to design this software in the forward direction you get stuck quickly, because you can't possibly know what to do with your text when you don't know what you need; it's like pushing a rope, it doesn't work. Go to the other end and pull the rope. Do the graphic work FIRST, then pull the needed data from the news stories.
1
0
0
Objective: I am trying to do a project on Natural Language Processing (NLP), where I want to extract information and represent it in graphical form. Description: I am considering news article as input to my project. Removing unwanted data in the input & making it in Clean Format. Performing NLP & Extracting Information/Knowledge Representing Information/Knowledge in Graphical Format. Is it Possible?
How to extract Information?
0
0
0
301
30,554,696
2015-05-31T06:53:00.000
2
0
0
0
python,python-2.7,turtle-graphics
30,554,792
2
true
0
1
By default, turtle has a draw delay of 10 milliseconds. Every time it updates the canvas, it will pause 10 milliseconds as a simple way of controlling the animation speed. This delay is independent of the speed of the turtle itself. If you want to speed up the animation, you can set a shorter delay, e.g. with turtle.delay(3) or turtle.delay(0). Note that turtle graphics are more of an educational tool than a serious way to do graphics. If you don't have a specific reason to use turtle, consider switching to other graphics libraries.
2
3
0
I got program that draws spaceship (Turtle Graphics) forward,backward etc. By using a lot of orders and lines drawing spaceship takes 5 seconds using turtle.speed(0). And whenever you click the right/left key it draws it again in other direction. It major thing in my project. Is there a way to draw it faster? Thanks in advance.
Faster drawing in python
1.2
0
0
1,123
30,554,696
2015-05-31T06:53:00.000
1
0
0
0
python,python-2.7,turtle-graphics
30,555,631
2
false
0
1
You can use screen.tracer(n), where a bigger n value means a faster drawing speed but fewer intermediate updates shown.
2
3
0
I got program that draws spaceship (Turtle Graphics) forward,backward etc. By using a lot of orders and lines drawing spaceship takes 5 seconds using turtle.speed(0). And whenever you click the right/left key it draws it again in other direction. It major thing in my project. Is there a way to draw it faster? Thanks in advance.
Faster drawing in python
0.099668
0
0
1,123
30,555,149
2015-05-31T07:59:00.000
0
1
0
0
python,database,algorithm
30,555,758
2
false
0
0
Try a hack on the client side: record each email attempt to a log file. Then you can read that file to count the frequency of emails sent. You could also keep the data in memory in a dict for some time, say 5 or 10 minutes, and then send it to the DB, thus not putting the load of frequent writes on the DB. If you put a check in your code for a sudden surge of email from a particular domain, that might provide a solution to your problem.
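A sketch of the in-memory approach, using a per-sender deque as a sliding window; the threshold comes from the question's "more than 25 mails in 60 minutes" rule, and the sender address is made up.

```python
# In-memory sliding-window counter: keep send timestamps per sender,
# drop entries older than the window, and alert past a threshold.
from collections import defaultdict, deque

WINDOW = 60 * 60      # 60 minutes, in seconds
THRESHOLD = 25        # rule from the question: alert on >25 mails per hour

sent = defaultdict(deque)

def record_send(sender, now):
    q = sent[sender]
    q.append(now)
    while q and q[0] <= now - WINDOW:
        q.popleft()               # evict timestamps outside the window
    return len(q) > THRESHOLD     # True means "trigger an alert"

# Simulate 26 mails, one per minute, from the same address.
alerts = [record_send("a@example.com", t) for t in range(0, 26 * 60, 60)]
```

The same structure extends to the 180-minute, 12-hour, and 24-hour rules by keeping one deque per window length, and the dict can be flushed to the database periodically as the answer suggests.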
1
1
0
Sorry if the title is misleading. I am trying to write a program that calculates frequency of emails being sent out of different email ids. We need to trigger alerts based on number and frequency of mails sent. For example for a particular email if in past 60 minutes more than 25 mails were sent, a trigger needs to be sent. A different trigger for another directory based on another rule. Fundamental rules are about how many mails sent over past 60 minutes, 180 minutes, 12 hours and 24 hours. How do we come up with a strategy to calculate frequency and store it without too much of system/cpu/database overheads. The actual application is a legacy CRM system. We have no access to the Mail Server to hack something inside the Postfix or MTA. Moreover there are multiple domains involved, so any suggestion to do something on the mail server may not help. We have ofcourse access to every attempt to send a mail, and can look at recording them. My challenge is on a large campaign database writes would be frequent, and doing some real time number crunching resource intensive. I would like to avoid that and come up with an optimal solution Language would be Python, because CRM is also written using the same.
Strategies for storing frequency of dynamic data
0
0
0
50
30,555,943
2015-05-31T09:38:00.000
6
0
1
0
python,conda,tox
30,555,944
5
true
0
0
While tox can't make use of conda, you can use conda to "install" different Python versions where tox can find them (like it would find "normal" Python installations in those folders). The following is tested on Windows: You need virtualenv installed via pip in the root conda environment. I suspect this is the virtualenv that will be used by tox. (I had to install virtualenv using pip install virtualenv to get the virtualenv command to work, even though conda list showed it as installed.) Install the Python versions you want to test. This is easily done using conda create. tox will autodetect Python binaries on Windows in C:\python27, C:\python33, etc., so create environments using conda create -p C:\python27 python=2.7 etc.
1
29
0
The Python testing tool tox seems to be designed to work with virtualenv. Can it also work on conda/anaconda-based Python installations?
Is it possible to use tox with conda-based Python installations?
1.2
0
0
7,948
30,560,147
2015-05-31T17:05:00.000
0
0
0
0
python,widget,pyqt,pyqt4
30,563,200
1
false
0
1
Use a Qt layout (like a QVBoxLayout, QHBoxLayout, or a QGridLayout)
1
0
0
I'm working on developing a PyQt4 application that will require a lot of widgets, and I have run into an issue. When you tell a widget where to move (such as btn.move(100, 100)), it moves properly, but if you resize the window you may no longer be able to see it. I'm not sure how to fix this. I don't want to restrict resizing of the window, but I can't have widgets not showing up on screen. So if the user resizes the program window to 600x600, how can I have widgets automatically change their location?
How to update PyQt4 widget locations based on window size?
0
0
0
42
30,561,194
2015-05-31T18:38:00.000
0
0
1
0
python
30,561,999
4
false
0
0
del a[:] and a = [] behave differently, and the difference is about object identity, not space. del a[:] empties the existing list object in place: id(a) stays the same, and any other name bound to that same list sees it emptied too. a = [] instead rebinds the name a to a completely different, brand-new list object (with a different id), leaving the old list untouched, so any other references still point at the old, still-populated list.
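A short demonstration of that identity difference (variable names are illustrative):

```python
a = [1, 2, 3]
b = a          # b is a second name for the same list object
del a[:]       # empty the existing object in place
print(a is b, b)   # -> True [] : b sees the change, same object

a = [1, 2, 3]
b = a
a = []         # rebind a to a brand-new list
print(a is b, b)   # -> False [1, 2, 3] : b still holds the old data
```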
1
34
0
Please, what is the most efficient way of emptying a list? I have a list called a = [1,2,3]. To delete the content of the list I usually write a = []. I came across a statement in Python called del. I want to know if there is a difference between del a[:] and what I use.
what is the difference between del a[:] and a = [] when I want to empty a list called a in python?
0
0
0
5,503
30,561,907
2015-05-31T19:42:00.000
1
0
1
0
python,canopy
31,528,989
1
false
0
0
This is what I did: install Canopy 1.5.5, open the Canopy terminal, then go to Package Manager -> Updates -> Install all 7 updates. Once you install all the updates, you will see both the kernel and pip getting updated. I have not encountered the 'Python kernel has crashed' error since then. Note: I am on a Linux OS.
1
1
0
When I restart my kernel in Canopy (latest version), it goes into a loop where the kernel will crash repeatedly, and even when restarting repeatedly, it does not exit this loop. It's very annoying to have to do this when something doesn't work, and I'm trying to avoid having to re-install. Any suggestions? P.S.: I have contacted Enthought.
Python Canopy kernel keeps crashing
0.197375
0
0
4,429
30,563,177
2015-05-31T22:01:00.000
1
1
1
0
file,python-2.7,encryption,binary
42,172,135
3
false
0
0
Your binary file is coming out looking like text because the file is being treated as if it were encoded in an 8-bit encoding (ASCII or Latin-1, etc.). Also, in Python 2, bytes and (text) characters are used interchangeably: a string is just an array of bytes. You should look up the differences between Python 2 and Python 3 text handling and you will quickly see why anomalies such as the one you are encountering can develop. Most of the Python 2 encryption modules use these byte strings. Your "binary" non-text files are not really being treated any differently from the text ones; they just don't map to an intelligible encoding that you recognize, whereas the text ones do.
1
3
0
I was trying to build an encryption program in Python 2.7. It would read the binary from a file and then use a key to encrypt it. However, I quickly ran into a problem. Files like image files and executables read as hex values. However, text files do not when using open(). Even if I run file = open("myfile.txt", "rb") and out = file.read(), it still comes out as just text. I'm on Windows 7, not Linux, which I think may make a difference. Is there any way I could read the binary from ANY file (including text files), not just image and executable files?
Python read text file as binary?
0.066568
0
0
8,188
30,564,015
2015-06-01T00:10:00.000
8
0
0
0
python,distribution,point
30,564,059
7
false
0
0
FIRST ANSWER: An easy solution would be to check whether the result satisfies your equation before proceeding. Generate x, y (there are ways to randomize within a chosen range), then check whether (x−500)^2 + (y−500)^2 < 250000 is true; if not, regenerate. The only downside is inefficiency. SECOND ANSWER: Alternatively, you could do something similar to Riemann sums as used for approximating integrals: approximate your circle by dividing it up into many rectangles (the more rectangles, the more accurate) and use your rectangle algorithm for each rectangle within your circle.
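The first approach (rejection sampling) in a few lines — the centre and radius match the question's inequality:

```python
import random

def random_point_in_circle(cx=500, cy=500, r=500):
    """Draw uniformly from the bounding square until the point
    lands inside the circle (accepted ~78.5% of the time)."""
    while True:
        x = random.uniform(cx - r, cx + r)
        y = random.uniform(cy - r, cy + r)
        if (x - cx) ** 2 + (y - cy) ** 2 < r ** 2:
            return x, y
```

Each accepted point is uniform over the disc, unlike the tempting shortcut of picking a random angle and a plain random radius, which clusters points near the centre.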
1
14
1
I am wondering how I could generate random numbers that follow a circular distribution. I am able to generate random points in a rectangular distribution, such that the points are generated within the square (0 <= x < 1000, 0 <= y < 1000). How would I go about generating the points within a circle, such that (x−500)^2 + (y−500)^2 < 250000?
How to generate random points in a circular distribution
1
0
0
31,584
30,565,431
2015-06-01T03:57:00.000
2
0
0
0
python,amazon-web-services,websocket,webserver,amazon-elastic-beanstalk
30,565,453
1
true
1
0
AWS doesn't "know" anything about your content. The webserver you install will be configured to point to the "root" directory in which index.html (or an equivalent) should be. Since it depends on which framework/server (Django, Flask, Jinja, etc.) you install, you should look up its documentation!
1
1
0
I'm deploying a Python web server on AWS now and I have some questions about it. I'm using websockets to communicate between the back end and the front end. Do I have to use a framework like Django or Flask? If not, where should I put the index.html file? In other words, after deploying, how does AWS know the default page of my application? Thanks in advance.
Deploy python web server on AWS Elastic Beanstalk
1.2
0
0
284
30,565,824
2015-06-01T04:53:00.000
1
1
0
0
python,polling,remote-server
30,566,324
1
true
0
0
There are so many options. Polling is generally a bad idea, as it burns CPU. You could have your script send you status changes. Or you could have your script write its actual status into a (remote) file (either overwriting it or appending to a log file) and look into that file; this is probably the easiest way. You can monitor the file with tail -f file over the link. And there are many more, more complicated, options.
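A minimal sketch of the status-file idea (the path and messages are illustrative; over the link you would fetch or tail the remote file):

```python
def report_status(path, message):
    """Called by the long-running script to append a progress line."""
    with open(path, 'a') as f:
        f.write(message + '\n')

def last_status(path):
    """Called by the monitor to read the most recent progress line."""
    with open(path) as f:
        lines = f.read().splitlines()
    return lines[-1] if lines else None
```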
1
0
0
I'm looking for a best way to invoke a python script on a remote server and poll the status and fetch the output. It is preferable for me to have this implementation also in python. The python script takes a very long time to execute and I want to poll the status intermediately on what all actions are being performed. Any suggestions?
Invoke a python script on a remote server and poll the status
1.2
0
1
235
30,567,284
2015-06-01T06:48:00.000
1
0
0
0
python,http,cron,connection,monitor
30,567,385
2
true
0
0
I would suggest just a GET request (you only need a ping to indicate that the PC is on) sent periodically to, say, a Django server; if you query a page on that server, it shows a webpage indicating the status of each PC. In the Django server, record the time each GET is received; if the gap between the last GET and the current time is too large, set that PC's flag to offline. That flag is then visible when the status URL is queried, via the views. I don't think this would end up sloppy; it's just a trivial solution where you don't really have to dig too deep to make it work.
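The timeout logic reduces to a last-seen timestamp per machine — a framework-agnostic sketch (the timeout value and PC names are illustrative):

```python
import time

ONLINE_TIMEOUT = 120  # seconds without a ping before a PC counts as offline

last_ping = {}  # pc name -> time its most recent GET arrived

def register_ping(pc, now=None):
    """Call this from the view that receives the periodic GET."""
    last_ping[pc] = time.time() if now is None else now

def is_online(pc, now=None):
    """Call this from the status page view."""
    now = time.time() if now is None else now
    return pc in last_ping and (now - last_ping[pc]) <= ONLINE_TIMEOUT
```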
1
1
0
I have two PCs and I want to monitor the internet connectivity on both of them and make it available on a page, showing whether they're currently online and running. How can I do that? I'm thinking of a cron job executed every minute that sends a POST to a file located on a server, which in turn would write the connectivity status "online" to a file. The page where the statuses are displayed would read from both status files and display whether they're online or not. But this feels like a sloppy idea. What alternative suggestion do you have? (The answer doesn't necessarily have to be code; I'm not looking for copy-paste solutions. I just want an idea, a nudge in the right direction.)
How to monitor the Internet connectivity on two PCs simultaneously?
1.2
0
1
80
30,568,433
2015-06-01T08:00:00.000
1
0
1
0
python,tarfile
30,568,538
1
true
0
0
Internal structure. tar stands for "tape archive", and the big design point is the ability to work sequentially with small RAM, while writing to (or reading from) a sequential-access IO device (also known as tape): loading everything into memory and then processing it in some specific order was not possible. Thus, files are extracted in the order they are found in the archive, by reading the archive in order.
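You can see the sequential ordering directly by building a small archive in memory and reading the member names back (the member names here are made up to mirror the per-person layout described in the question):

```python
import io
import tarfile

# Write three members, then re-open the archive: the names come back
# in exactly the order they were written, not sorted or grouped.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tar:
    for name in ['person1/notes.txt', 'person1/data/a.txt', 'person2/notes.txt']:
        data = b'hello'
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf, mode='r') as tar:
    names = tar.getnames()  # archive order == insertion order
```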
1
0
0
I have a simple question yet I didn't manage to find a lot of information about it or understand it very well. When I open a tarfile in python using the tarfile.open() method, how exactly are the files in the tarfile read? I have a tarfile with data on people, each person has his own folder and in that folder his data is divided between different folders. Will the files be accessed depending on internal structure or is there another way to determine which file will be accessed next when I use tarfile.extractfile()? Thank you in advance
Order of opening files when using tarfile.open() in Python
1.2
0
0
293
30,568,605
2015-06-01T08:11:00.000
2
0
1
0
python,python-2.7,pycharm,anaconda
30,578,236
1
true
0
0
Anaconda installs a completely separate Python, so there is no need to do anything with the old one. The Anaconda installer sets the PATH variable automatically. As to the packages, your best bet if there is a package you need that doesn't come with Anaconda is to install it with conda, or using pip if it isn't available with conda. I don't know about PyCharm but if you search StackOverflow you should find another question about the same.
1
2
0
Currently I have python 2.7 installed, and I decided to install Anaconda (same version). My questions are: What is the safest way to do it? Uninstall python 2.7 first? Can I move packages installed on my old python version manually (without reinstalling them again)? Should I change something in my PATH variable afterwards? I'm working with Pycharm. Is there a way to change automatically the interpreter of all existing projects? My motivation: I'm in charge of ~50 students (Python noobies) in a university course. Since I'm having some difficulties supporting the installation of each and every one of them, I thought that moving to Anaconda can help me save some time and future problems. Since some of them already started working on their projects, I want to do this transition as clean as possible. Thanks!
installing Anaconda on windows
1.2
0
0
1,408
30,568,702
2015-06-01T08:17:00.000
1
0
0
1
python,google-app-engine,flask,google-authentication,google-app-engine-python
30,570,660
1
false
1
0
You can't reuse the Users service authentication across different applications. A possible solution could be using a OAuth2 (or a similar mechanism) - create an application (or use one of you existing applications) which will be the authentication provider. Each application will redirect to the authentication provider where the user will get authenticated, and then redirected back. If the user is already authenticated, they will not need to authenticate again when switching applications. This way you won't be able to use the Users service on the end applications, only in the provider, so you will need to rely on another way to store the currently logged in user in each application (like datastore+memcache).
1
0
0
Let's say I run two different apps on different domains, coded in Python flask and running as GAE instances; one is site.co.uk and one is site.us. If I use the GAE authentication on one site is it possible to have them authenticated for the other site too? I don't really want to make them have to authenticate for each country specific domain.
GAE - Share user authentication across apps
0.197375
0
0
51
30,575,409
2015-06-01T13:53:00.000
1
0
0
0
python,django,django-queryset
30,579,869
1
true
1
0
You can reload each object in the start of the loop body. Just use TheModel.objects.get(pk=curr_instance.pk) to do this.
1
0
0
I have a queryset which I iterate through in a loop. I use data from the queryset to change data inside the queryset, which might be needed in a later step of the loop. However, the queryset is only loaded once, at the beginning of the loop. How can I make Django reload the data in every iteration?
Django how to not use cached queryset while iterating?
1.2
0
0
49
30,577,055
2015-06-01T15:11:00.000
3
0
1
0
python,multithreading,python-multithreading,python-multiprocessing
30,577,516
1
true
0
0
The actual limitation is a CPython limitation, not a language one. Given that, threading can be used to run concurrent tasks, as long as (in CPython) only one task at a time needs to be executing Python bytecode. Besides IO, an example would be C extensions that perform lengthy computations and release the GIL (in CPython's case).
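A small demonstration that IO-style waits overlap under threading even with the GIL — time.sleep stands in for blocking IO and releases the GIL while waiting (the durations are illustrative):

```python
import threading
import time

def io_task(results, i):
    time.sleep(0.2)       # releases the GIL, like real blocking IO would
    results[i] = 'done'

results = {}
threads = [threading.Thread(target=io_task, args=(results, i)) for i in range(4)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start   # ~0.2s, not 4 * 0.2s, because the waits overlap
```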
1
2
0
I'm trying to get my head around this question. I did some research and found out that multithreading a CPU-bound task can be a lot slower than running it sequentially. The same is mentioned by the respected David Beazley in some of his concurrency talks. To achieve similar (kind of) behavior I can spawn a new process using the multiprocessing module, but spawning a process takes more time than spawning a thread. So, I'm wondering what the use cases are where I can use the threading module, other than I/O-bound tasks. Please help me understand this.
What is the use of threading in Python despite of the limitations imposed by GIL?
1.2
0
0
303
30,579,022
2015-06-01T16:54:00.000
1
0
1
0
python,multithreading
30,584,402
2
false
0
0
As far as I know, there's no built-in way to retrieve a thread by its thread id. Your best bet is to store a reference to the Thread object itself.
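A sketch of that approach — keep the Thread objects in a dict keyed by their ident, so the status-tracking thread can look them up later (the function names here are made up):

```python
import threading

registry = {}  # thread id -> Thread object

def start_tracked(target, *args):
    """Start a thread and remember it by its id."""
    t = threading.Thread(target=target, args=args)
    t.start()
    registry[t.ident] = t   # ident is assigned once the thread has started
    return t.ident

def is_thread_alive(thread_id):
    """Look the thread up by id and report whether it is still running."""
    t = registry.get(thread_id)
    return t.is_alive() if t is not None else False
```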
1
1
0
I have a multithreaded Python (2.7) program, which runs multiple threads for different tasks. I am storing the thread ids, for tracking the status of the threads, in a separate status-tracking thread. How can I check whether a thread is alive or not (isAlive()) when all I have is the thread id?
python: Check status of thread by ID
0.099668
0
0
1,081
30,579,494
2015-06-01T17:23:00.000
-1
1
0
0
python,p2p
30,579,803
1
false
0
0
I think the simplest way to do this is with a socket server in the battleship game. But there is a problem: you will have trouble connecting in the case where your IP is not reachable from the internet.
1
0
0
this is a conceptual question. As part hobby, part art project I'm looking to build a Python script that allows two people to play battleships between their computers (across the net, without being on the same network). The idea would be you could run the program something like: python battleships.py 192.168.1.1 Where the IP address would be the computer you wanted to do battle with. I have some modest Python coding abilities but I'm curious how hard it would be to build this and how one might go about it? One key goal is that it must require almost zero set-up: I'm hoping anyone can download the python script, open the terminal and play battleships with someone else. Thanks!
Conceptual: how to code battleships between two computers in Python?
-0.197375
0
0
282
30,580,929
2015-06-01T18:46:00.000
0
0
0
0
python,class,tree,nodes
30,581,078
4
false
0
0
Pretty much both of your solutions are what is done in practice. Your first solution, just incrementing a number, gives you uniqueness as long as you don't overflow (with Python's big integers this isn't really a problem). The disadvantage of this approach is that if you start doing concurrency, you have to use locking to prevent data races when incrementing and reading your external value. The other approach, where you generate a random number, works well in the concurrency situation. The larger the number of bits you use, the less likely a collision becomes; in fact, you can pretty much guarantee you won't have collisions if you use, say, 128 bits for your id. An approach you can use to further guarantee no collisions is to make your unique ids something like TIMESTAMP_HASHEDMACHINENAME_PROCESSID/THREADID_UNIQUEID. Then you pretty much can't have collisions unless you generate two of the same UNIQUEID on the same process/thread within 1 second. MongoDB does something like this, where it just increments the UNIQUEID. I am not sure what it does in the case of an overflow (which I assume doesn't happen too often in practice); one solution might be just to wait until the next second before generating more ids. This is probably overkill for what you are trying to do, but it is a somewhat interesting problem indeed.
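Both options are essentially one-liners with the standard library — a counter for the incrementing case and uuid4 for the collision-resistant 128-bit case:

```python
import itertools
import uuid

# Option 1: a process-wide incrementing counter (Python ints never overflow).
_next_id = itertools.count()

def sequential_id():
    return next(_next_id)

# Option 2: 128 random bits; collision probability is negligible in practice.
def random_id():
    return uuid.uuid4().hex
```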
1
2
0
I am making a class in Python that relates a lot of nodes and edges together. I also have other operations that can take two separate objects and merge them into a single object of the same type, and so on. However, I need a way to give every node a unique ID for easy lookup. Is there a "proper way" to do this, or do I just have to keep an external ID variable that I increment and pass into my class methods every time I add more nodes to any object? I also considered generating a random string for each node upon creation, but there is still a risk of collision error (even if this probability is near-zero, it still exists and seems like a design flaw, if not a longwinded overengineered way of going about it anyway).
Giving unique IDs to all nodes?
0
0
1
1,452
30,581,807
2015-06-01T19:36:00.000
0
0
1
0
python,multithreading
30,582,034
2
false
0
0
If you want your app to be really parallel then, because of the GIL, consider using a standalone queue (like ActiveMQ or ZeroMQ) and consuming it from your scripts running in different OS processes (with a standalone queue it is even easy to use across a network, which is a plus for scalability).
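If you stay inside a single process (the setup the question describes), the standard library's Queue is already thread-safe and needs no manual locking — a minimal producer/consumer sketch (the worker count and the None shutdown sentinel are arbitrary choices):

```python
import threading

try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

work_q = queue.Queue()      # shared, thread-safe; visible to all threads

def worker(results):
    while True:
        item = work_q.get()       # blocks until data is available
        if item is None:          # sentinel: shut this worker down
            work_q.task_done()
            break
        results.append(item * 2)  # "process" the item
        work_q.task_done()

results = []
threads = [threading.Thread(target=worker, args=(results,)) for _ in range(3)]
for t in threads:
    t.start()
for i in range(10):
    work_q.put(i)                 # the data-pulling loop feeds the queue
work_q.join()                     # wait until every item is processed
for _ in threads:
    work_q.put(None)
for t in threads:
    t.join()
```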
1
4
0
I want to write a script which consumes data over the internet and places the data which is pulled every n number of seconds in to a queue/list, then I will have x number of threads which I will create at the start of the script that will pick up and process data as it is added to the queue. My questions are: How can I create such a global variable (list/queue) in my script that is then accessible to all my threads? In my threads, I plan to check if the queue has data in it, if so then retrieve this data, release the lock and start processing it. Once the thread is finished working on the task, go back to the start and keep checking the queue. If there is no data in the queue, sleep for a specified number of time and then check the queue again.
Having a global queue (or list) that is available to all threads
0
0
1
1,389
30,590,085
2015-06-02T07:36:00.000
4
0
1
0
python,django
30,590,516
2
true
1
0
A Django app is actually a Python package that follows the Django conventions. django-admin startapp is just a helper command that creates the files following those conventions. If you want to create an app without using startapp, you can create a folder, add an __init__.py file, and create the necessary files (for views and models). Then you should include it in INSTALLED_APPS. That's all.
2
1
0
If I create a normal python package (with __init__.py), instead of manage.py startapp won't I still be able to use it like a django app.?
How is a django app different from a python package?
1.2
0
0
55
30,590,085
2015-06-02T07:36:00.000
1
0
1
0
python,django
30,590,448
2
false
1
0
Yes, you will be able to use it as a django app. Django is a web framework, hence its main aim is to allow their users to focus on their applications rather than to make them hard-code every single bit of information.
2
1
0
If I create a normal python package (with __init__.py), instead of manage.py startapp won't I still be able to use it like a django app.?
How is a django app different from a python package?
0.099668
0
0
55
30,590,100
2015-06-02T07:37:00.000
1
0
0
0
python,robotframework
30,597,578
3
false
1
0
You want to add top-level metadata, and that metadata would be an HTML link. Create a suite setup for the master suite (create a file called __init__.robot in the parent test folder) and in it:

*** Settings ***
Documentation    The main init phase for all robot framework tests.
Suite Setup    Setup

*** Keywords ***
Setup
    Set Suite Metadata    Link to my cool external site    http://www.external.com    top=True
1
2
0
How can I customize my robot framework log.html and output so that I can add some external links to my output files like log.html and output.xml file.
How to add some external links to ROBOT Framework Test Statistics in log.html and output.xml?
0.066568
0
0
2,191
30,591,965
2015-06-02T09:16:00.000
1
0
0
0
python,django,database-design
30,592,097
1
false
1
0
django.contrib.auth has groups and group permissions, so all you have to do is define landlords and tenants groups with the appropriate permissions; then, in your models' save() method (or using signals or otherwise), add your Landlord and Tenant instances to their respective groups.
1
0
0
I know how permissions/groups/user work together in a "normal" way. However, I feel incomfortable with this way to do in my case, let me explain why. In my Django models, all my users are extended with models like "Landlord" or "Tenant". Every landlord will have the same permissions, every tenant will have other same permissions.. So it seems to me there is not interest to handle permission in a "user per user" way. What I'd like to do is link the my Tenant and Landlord models (not the instances) to lists of permissions (or groups). Is there a way to do this? Am I missing something in my modelisation? How would you do that?
Can I link Django permissions to a model class instead of User instances?
0.197375
0
0
111
30,592,411
2015-06-02T09:36:00.000
3
0
0
0
python,html,boto,bottle
30,595,791
1
false
1
0
What I ended up doing to fix this issue was using bottle to create a URL that performs the needed function, then making an HTML button that links to the relevant URL.
1
1
0
I'm currently trying to write a Python script that overnight turns off all of our EC2 instances; then in the morning my QA team can go to a webpage and press a button to turn the instances back on. I have written my Python script that turns the servers off using boto. I also have a function which, when run, turns them back on. I have an HTML doc with buttons on it. I'm just struggling to work out how to get these buttons to call the function. I'm using bottle rather than flask and I have no JavaScript experience, so I would like to avoid Ajax if possible. I don't mind if the whole page has to reload after the button is pressed; after the single press the webpage isn't needed anyway.
html buttons calling python functions using bottle
0.53705
0
1
617
30,595,908
2015-06-02T12:20:00.000
1
0
0
0
python,user-interface,events,kivy
56,684,592
3
false
0
1
I've dealt with a similar problem and creating a new thread didn't do the trick. I had to use the Clock.schedule_once(new_func) function. It schedules the function call for the next frame, so it will run almost immediately after the callback ends.
1
9
0
I am writing a Kivy UI for cmd line utility I have developed. Everything works fine, but some of the processes can take from a few seconds to a few minutes to process and I would like to provide some indication to the user that the process is running. Ideally, this would be in the form of a spinning wheel or loading bar or something, but even if I could update my display to show the user that a process is running, it would be better than what I have now. Currently, the user presses a button in the main UI. This brings up a popup that verifies some key information with the user, and if they are happy with those options, they press a 'run' button. I have tried opening a new popup to tell them that the process is running, but because the display doesn't update until the process finishes, this doesn't work. I have a lot of coding experience, but mostly in the context of math and engineering, so I am very new to the designing of UIs and having to handle events and threads. A simple self-contained example would be greatly appreciated.
Building a simple progress bar or loading animation in Kivy?
0.066568
0
0
12,311
30,596,353
2015-06-02T12:38:00.000
1
0
1
0
python,python-3.x,pyside
38,494,564
1
false
0
0
Right-click the file, click properties, under general it says "opens with:"... Click the "Change" button to the right of that, and then click more options. On that menu there should be an option called "pythonw" click that. Then on the bottom-right click "apply", then "OK". Then just double-click on the file and it should run with no console window so you won't be able to see it running.
1
1
0
Someone gave me a python file to open and use as a resource. The only issue is I don't know anything about python, it's very different from my basic knowledge of coding. The file is not a normal .py file, but rather a console-less .pyw file. I have imported the newest version of python and installed PySide, but I have had no successful attempts at opening the file. I was wondering if someone might know how to open this kind of file? Does it need to be somewhere specific?
How to open PYW files in Windows 8
0.197375
0
0
3,111
30,600,487
2015-06-02T15:35:00.000
0
0
1
0
python,class
30,694,447
3
false
0
0
If the search method you are talking about is really so specific and you will never need to reuse it somewhere else, I do not see any reason to make it static. The fact that it doesn't require access to instance variables doesn't make it static by definition. If there is a possibility that this method is going to be reused, refactor it into a helper/utility class (again, not static). ADDED: Just wanted to add that when you consider whether something should be static or not, think about how the method name relates to the class name. Does this method name make more sense when used in class context or object context?
2
5
0
In Python, I have a class that I've built. However, there is one method where I apply a rather specific type of substring-search procedure. This procedure could be a standalone function by itself (it just requires a needle a haystack string), but it feels odd to have the function outside the class, because my class depends on it. What is the typical design paradigm for this? Is it typical to just have myClassName.py with the main class, as well as all the support functions outside the class itself, in the same file? Or is it better to have the support function embedded within the class at the expense of modularity?
Where is the best place to put support functions in a class?
0
0
0
659
30,600,487
2015-06-02T15:35:00.000
0
0
1
0
python,class
30,601,234
3
false
0
0
If you can think of any reason to override this function one day, make it a staticmethod, else a plain function is just ok - FWIW, your class probably depends on much more than this simple function. And if you cannot think of any reason for anyone else to ever use this function, keep it in the same module as your class. As a side note: "myClassName.py" is definitely unpythonic. First because module names should be all_lower, then because the one-module-per-class stuff is nonsense in Python - we group related classes and functions (and exceptions and whatnot) together.
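For illustration, the two placements side by side — the names here are hypothetical, just to mirror the needle/haystack search from the question:

```python
class TextSearcher(object):
    @staticmethod
    def find_needle(needle, haystack):
        # staticmethod: kept on the class so a subclass could override it
        return haystack.find(needle)

def find_needle(needle, haystack):
    # plain module-level function: same behaviour, lives next to the class
    return haystack.find(needle)
```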
2
5
0
In Python, I have a class that I've built. However, there is one method where I apply a rather specific type of substring-search procedure. This procedure could be a standalone function by itself (it just requires a needle a haystack string), but it feels odd to have the function outside the class, because my class depends on it. What is the typical design paradigm for this? Is it typical to just have myClassName.py with the main class, as well as all the support functions outside the class itself, in the same file? Or is it better to have the support function embedded within the class at the expense of modularity?
Where is the best place to put support functions in a class?
0
0
0
659
30,603,175
2015-06-02T17:52:00.000
1
0
1
1
python,python-2.7,python-3.x
30,603,317
2
true
0
0
Minor versions of python are mostly backwards compatible, however major versions do not maintain backwards compatibility. There are many libraries that work with both, but the language itself does not make that guarantee.
2
1
0
I am deciding whether to install python 3.4 or 2.7 on my home server running Ubuntu Server 14.04.2. I want to ensure that it has support from all the most used python libraries (scipy, numpy, requests, etc) but I am not sure how many of these packages fully support 3.4. Do all 2.7 packages work on 3.4? If no, what are the differences between the two that causes this errors?
Is Python 3.4 backwards compatible for 2.7 programs/libraries?
1.2
0
0
2,376
30,603,175
2015-06-02T17:52:00.000
4
0
1
1
python,python-2.7,python-3.x
30,603,204
2
false
0
0
No, only packages specifically written to support both Python 2 and 3 will run on either. It is possible to write polyglot Python, but this requires effort from the library author. Code written for Python 2.7 will not automatically work on Python 3.
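A tiny example of what "specifically written to support both" means in practice — this snippet runs unchanged on 2.7 and 3.x (the function is illustrative):

```python
def greet(name):
    # str.format works identically on Python 2.7 and 3.x (f-strings are 3.6+)
    return 'Hello, {0}!'.format(name)

# print with a single parenthesised argument parses on both major versions
print(greet('world'))
```

For multi-argument printing, polyglot code adds from __future__ import print_function at the top of the module.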
2
1
0
I am deciding whether to install python 3.4 or 2.7 on my home server running Ubuntu Server 14.04.2. I want to ensure that it has support from all the most used python libraries (scipy, numpy, requests, etc) but I am not sure how many of these packages fully support 3.4. Do all 2.7 packages work on 3.4? If no, what are the differences between the two that causes this errors?
Is Python 3.4 backwards compatible for 2.7 programs/libraries?
0.379949
0
0
2,376
30,603,407
2015-06-02T18:05:00.000
4
0
0
0
python,ibm-cloud
30,603,436
3
false
1
0
The cause of this problem was that I was not correctly telling Bluemix the configuration information my Python app needed when I pushed it out. What I ended up having to do was add a requirements.txt file and a Procfile into the root directory of my Python application, to draw the connection between my Python app and the needed libraries/packages. In the requirements.txt file I specified the library packages needed by my app:

web.py==0.37
wsgiref==0.1.2

where web.py==0.37 is the version of the web.py library that will be downloaded, and wsgiref==0.1.2 is the version of the web server gateway interface needed by the version of web.py I am using. My Procfile contains the following:

web: python myappname.py $PORT

where myappname is the name of my Python app, and $PORT is the port number my app uses to receive requests. I found out too that $PORT is optional: when I did not specify $PORT, my app ran with the port number from the VCAP_APP_PORT environment variable. From there it was just a matter of pushing my app out to Bluemix again, and this time it ran fine.
1
2
0
My Python app needs web.py to run but I'm unable to figure out how to get it up to bluemix. I see no options using cf push. I tried to "import web" and added some additional code to my app without success. When I push my Python app to bluemix without web.py it fails (naturally) since it does not have what it needs to run. I'm sure I'm just missing an import mechanism. Any help?
How to import a 3rd party Python library into Bluemix?
0.26052
0
0
1,450
30,605,118
2015-06-02T19:39:00.000
0
0
1
0
python,python-2.7
30,605,397
1
false
0
0
You can have the pexpect module in the directory where you have the tool.py and the import should work just fine.
1
1
0
I have a python script meant to run as a standalone tool invoked via: python tool.py The requirement for this tool is that it remain a standalone script. The issue I'm running into is that I'm relying heavily on a module, namely pexpect, and since it's not part of the standard Python library I can't ask users to install the module via pip or any other means and then run my script. The script in question is a command-line tool. Is there any way for me to package my script in such a way that it pulls in the code from the pexpect module? I've tried py2app etc. to no avail. This tool is meant to run on Macs. Any help would be greatly appreciated. Thanks!
How to embed python module in a script
0
0
0
1,101
30,610,850
2015-06-03T04:28:00.000
0
0
0
1
python,python-2.7,flask,tornado,bottle
32,857,200
1
false
1
0
I would recommend you use a separate process for your app that will receive REST commands (use Pyramid or Flask), and have it send messages over RabbitMQ to the real time part. I like Kombu myself for interfacing with RabbitMQ, and your message bus will nicely decouple your web/rest needs from your event driven needs. Your event driven part just gets messages off the bus, and doesn't need to know anything about REST.
1
0
0
I am working on a Python 2.7 project with a simple event loop that checks a variety of data sources (rabbitmq, mongodb, postgres, etc) for new data, processes the data and writes data to the next stage. I would like to embed a web server in the application so it can receive simple REST commands, for shutting it down, diagnosis etc. However, from reading the documentation on the available web servers it wasn’t clear if they will allow the event loop described above to function outside of the web server’s event loop. Ie. it looks like I would have to do something like launch the event loop using a REST call and have the loop live on an io thread, or similar. Can someone explain which embedded server (cherrypy, bottle, flask, etc) / concurrency framework (tornado, gevent, twisted etc.) are best suited for this problem? Thank you in advance!
Python Embed Web Server in Data Processing Node
0
0
0
124
30,610,892
2015-06-03T04:31:00.000
5
1
1
0
python,api,github
30,612,182
1
true
0
0
Why do you need to post your API key? Why not post your app code to Github without your API key and have a configuration parameter for your users to add their own API key?
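A minimal sketch of that configuration-parameter approach in Python (the environment variable name here is illustrative, not anything Tumblr- or GitHub-specific):

```python
import os

def get_api_key(env_var="TUMBLR_API_KEY"):
    """Read the API key from an environment variable so it never lands in the repo."""
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(
            "Set the %s environment variable to your own API key" % env_var)
    return key
```

Each user then exports their own key before running the app, and the repository only ever contains the lookup code.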
1
7
0
I'm writing a Python application that utilizes the Tumblr API and was wondering how I would go about hiding, or encrypting, the API key. Github warns against pushing this information to a repo, so how would I make the application available to the public and still follow that policy?
API key encryption for Github?
1.2
0
0
248
30,611,649
2015-06-03T05:39:00.000
0
0
0
1
python
30,611,798
1
false
0
0
The good part is: you have read access on the script. What you need is a python installation on your local machine and preferably a drive letter mapping to the script's folder. If not already mapped, this can be scripted with net use <letter>: \\<remote host>\<shared folder>. Then it's as easy as cd <letter>:\<path>\ ; python <script>.py. Then to the output of the script. Apparently it creates files. Can you supply the target folder on the script's command line? In that case just supply a local path.
1
1
0
I've a python script in the network machine, I can open the network through explorer and have access to the script. Through that script from network,I want to create some folders/files and write/read some files in my localhost. Is there a way to do this in python?
How to run python from a network machine to local host?
0
0
1
2,012
30,613,811
2015-06-03T07:42:00.000
0
0
1
0
python,regex,quotes
30,648,960
1
false
0
0
not sure i understood exactly what you wanted but it is possible to reuse the value of captured group in a regex. may the following pattern do the job: (['"])(.*)\1 explanation: (['"]) : a quote or double-quote is captured as first group (.*) : the second group captures everything... \1 : ...until the first group value is met again the result is available in the second group
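For example, in Python (using a non-greedy body, a small tweak beyond the pattern above so that two quoted strings in a row are not merged into one match):

```python
import re

# group 1 captures the opening quote; the backreference \1 requires the
# closing quote to be the same character as the opening one
pattern = re.compile(r"""(['"])(.*?)\1""")

text = "say 'hello' and then \"world\""
matches = pattern.findall(text)  # list of (quote, body) tuples
```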
1
0
0
I have a regex that extracts everything between 2 double quotes and another regex that does the same for 2 single quotes. The strings within the quotes can include escaped quotes. I'd like to make these 2 expressions into a single one: 1) re.findall(r'"(.*?)(?<!\\)"', string) 2) re.findall(r"'(.*?)(?<!\\)'", string) So something like: 1+2) re.findall(r"['\"](.*?)(?<!\\)['\"]", string) but this isn't working. I'd like to have 'abc\"\"' "abc\'\'" be evaluated using the same regex. 'abc\"\"" isn't expected to work. If the quotes were exchanged, allow the same regex to work on it also. Is it possible?
python combining 2 regexes that search strings within single and double quotes
0
0
0
75
30,614,994
2015-06-03T08:41:00.000
0
0
0
0
python,matplotlib,sublimetext
30,615,424
1
false
0
0
How about saving your figure to a file with plt.savefig("fig.png")? If you open that file with your image viewer it will be updated after running your program.
1
0
0
I've been using the IEP from pyzo before trying out Sublime Text (ST). There is an annoying behaviour with ST that IEP doesn't have. In IEP, much like with Matlab or Octave, the editor and the interactive console talk to each other. Typically if you compute some_stuff and plot it in a script, after execution of this script you can go to the console and check some values: print some_stuff[0:10] or modify your plot: plt.whatever() which will update your figure. Also if you run your script several times with different parameters, the figure is simply updated. However when you do so in ST, even with REPL, after execution of the script nothing is left, you can't access some_stuff[0:10] from REPL. Similarly, you can't modify your figure. And if you run your script several times with different parameters, a new figure is generated in a new window each time instead of updating the existing figure. Is there an easy work around this? Thanks!
How to update existing matplotlib (python) figures with sublime text
0
0
0
112
30,615,160
2015-06-03T08:49:00.000
3
0
1
0
python,logic,pseudocode,mathematical-expressions
30,615,220
2
true
0
0
It usually means store the value 0 on the variable p In python it would be p = 0
2
0
0
I have pseudo code I am trying to implement in Python, but I can't seem to remember what p ← 0 would mean in logic or calculus.
Mathematical expressions in pseudo code
1.2
0
0
207
30,615,160
2015-06-03T08:49:00.000
1
0
1
0
python,logic,pseudocode,mathematical-expressions
30,615,247
2
false
0
0
It is equivalent to the assignment operator, p <- 0 in python is expressed as p = 0.
2
0
0
I have pseudo code I am trying to implement in Python, but I can't seem to remember what p ← 0 would mean in logic or calculus.
Mathematical expressions in pseudo code
0.099668
0
0
207
30,615,536
2015-06-03T09:06:00.000
0
0
1
0
python,algorithm,iterator,permutation
30,615,998
2
false
0
0
This is a generic issue rather than a Python-specific one. In most languages, even when iterators are used for traversing structures, the whole structure is kept in memory. So iterators are mainly used as "functional" tools and not as "memory-optimization" tools. In Python, a lot of people end up using a lot of memory due to having really big structures (dictionaries etc.). However, all the variables/objects of the program will be stored in memory in any case. The only solution is serialization of the data (saving to the filesystem, a database, etc.). So, in your case, you could create a customized function that creates the permutation. But instead of adding each element of the permutation to a list, it would save the element either in a file (or in a database with the corresponding structure). Then you would be able to retrieve the elements one by one from the file (or the database), without bringing the whole list into memory. However, as mentioned before, you will always have to know which position of the permutation you are currently at. In order to avoid retrieving all the created elements from the database (which would create the same bottleneck), you could keep an index for each position holding the symbol used in the previously generated permutation.
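A small sketch of that save-then-stream idea (sizes here are tiny, and note this still builds the list once when writing, as the answer acknowledges; it only avoids keeping it in memory afterwards):

```python
import os
import random
import tempfile

N = 10  # stands in for the huge range

# build the permutation once and write it to disk, one element per line
path = os.path.join(tempfile.mkdtemp(), "perm.txt")
perm = list(range(1, N + 1))
random.shuffle(perm)
with open(path, "w") as f:
    for x in perm:
        f.write("%d\n" % x)
del perm  # from here on, nothing keeps the full list in memory

# stream the permutation back element by element
streamed = []
with open(path) as f:
    for line in f:
        streamed.append(int(line))
```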
1
6
1
I'd like to create a random permutation of the numbers [1,2,...,N] where N is a big number. So I don't want to store all elements of the permutation in memory, but rather iterate over the elements of my particular permutation without holding former values in memory. Any idea how to do that in Python?
Generate random permutation of huge list (in Python)
0
0
0
1,933
30,616,967
2015-06-03T10:05:00.000
1
0
1
0
python,data-structures,pickle
30,617,635
1
true
0
0
How did you serialize the data (pickle/json/...)? Also note that elements in a dictionary are not sorted (except if you used a collections.OrderedDict), so retrieving a range of elements may not give what you expect. If the amount of data you are trying to handle exceeds the memory, wouldn't it be better to use some kind of database? If your data is a dict, something like shelve or redis might be appropriate.
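If the dictionary were re-saved as a shelve, a sketch of reading just a slice without loading everything (the path and key format are illustrative):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bigdict")

# write phase: store each key/value pair; shelve keeps the data on disk
with shelve.open(path) as db:
    for i in range(1000):
        db["key%04d" % i] = "value%d" % i

# read phase: fetch only the entries you need, not the whole mapping
with shelve.open(path) as db:
    wanted = [db["key%04d" % i] for i in range(100, 200)]
```

Note the keys have to be strings (which matches the question), and "the 100th-200th element" only makes sense if you impose an ordering on the keys yourself, as above.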
1
0
0
I have a very huge dictionary that is serialized in the hard disk. I don't have enough memory to load it completely in memory. I need to read only a particular range of the dictionary (say 100th - 200th element in the dictionary). Is it possible to load only these elements from the file? Note that the keys and values of the dictionary are strings.
Is it possible to load a particular range of a serialized python dictonary from hard disk?
1.2
0
0
24
30,618,064
2015-06-03T10:54:00.000
0
0
1
0
ipython-notebook
30,618,159
1
false
0
0
By command mode? IPython has a set of predefined ‘magic functions’ that you can call with a command line style syntax. There are two kinds of magics, line-oriented and cell-oriented. Line magics are prefixed with the % character and work much like OS command-line calls: they get as an argument the rest of the line, where arguments are passed without parentheses or quotes
1
1
0
I've run an IPython cell and it seems to take a long time. Is there a way to attach to this running kernel? I'd like to see the values of the current loop iteration variable while the cell still has not finished.
How to see variables of running IPython cell?
0
0
0
64
30,619,740
2015-06-03T12:10:00.000
-1
0
0
0
python,audio,wav,wave,downsampling
65,321,335
6
false
1
0
First, you need to import the 'librosa' library. Use 'librosa.load' to resample the audio file: librosa.load(path, sr). Initially sr (sampling rate) = 22050. If you want to preserve the native sampling rate, make sr=None; otherwise the audio will be resampled to the sampling rate provided.
1
24
0
I have to downsample a wav file from 44100Hz to 16000Hz without using any external Python libraries, so preferably wave and/or audioop. I tried just changing the wav file's framerate to 16000 by using the setframerate function, but that just slows down the entire recording. How can I downsample the audio file to 16kHz and maintain the same length of the audio?
Downsampling wav audio file
-0.033321
0
0
66,340
30,625,748
2015-06-03T16:29:00.000
0
1
0
0
python,ruby-on-rails,session
31,658,762
1
true
1
0
Sorry for the delay, I have the answer to my question: I captured the HTTP traffic of some Python and Ruby on Rails applications; the most common session ids for each language are the following: -Python: sessionid -Ruby on Rails: the format of the session id is: _{Name of application}_session For example, for my "ExampleRoRApp" application, the session id is: _ExampleRoRApp_session Thanks for your comments.
1
0
0
I want to know the names of session identifiers for Python and Ruby, for example, the names of session identifier for J2EE is JSESSIONID, for PHP is PHPSESSID. Can you help me please?
What are the names of session ids for Python and Ruby?
1.2
0
0
129
30,626,789
2015-06-03T17:25:00.000
0
0
1
0
python,windows,vba,dll,com
30,715,150
1
true
0
0
Well, the --unregister command works well for unregistering COM objects, but --debug is more useful in this situation, when someone wants to modify the code. Also, if you keep getting the old instance of the COM object, just restart the programs and it'll detect the newly changed code. The --debug command will produce results in the PythonWin Trace Collector window under the Tools menu.
1
0
0
I'm developing a very simple COM server for educational purposes. I can get it to work, but every time I have to change anything (code/logic), I have to delete every instance of the COM server name in regedit.exe under various headings till it disappears from the PythonWin >> Tools Menu >> Python COM Server Browser. I have tried the --unregister command in the command prompt; it says there that the COM server is unregistered, but I can still see it in the "Python COM Server Browser". Moreover, even after deleting all instances of the COM server from regedit.exe, re-registering the COM server brings me back to the OLD code instead of the new saved code which I want to run (i.e. new objects won't be detected, etc). So every time I make a change, I have to register a new COM server in a new file with a new name. Can someone please tell me what I'm doing wrong? Is there an easier way of doing this?
How to unregister Python COM server
1.2
0
0
831
30,631,062
2015-06-03T21:25:00.000
1
0
0
0
python,html,css,django
30,631,241
3
false
1
0
You can do this in many ways. In general you need to return some variable from your view to the HTML and, depending on this variable, select a style sheet. If your variable name matches your style sheet's name you can do "{{variable}}.css"; if not, you can use jQuery.
1
4
0
Sorry in advance if there is an obvious answer to this, I'm still learning the ropes with Django. I'm creating a website which has 6 pre determined subjects (not stored in DB) english, civics, literature, language, history, bible each subject is going to be associated with a unique color. I've got a template for a subject.html page and a view that loads from the url appname/subject/subjectname what I need to do is apply particular css to style the page according to the subject accessed. for example if the user goes to appname/subject/english I want the page to be "themed" to english. I hope I've made myself clear, also I would like to know if there is a way I can add actual css code to the stylesheet and not have to change attributes one by one from the back-end. thanks very much!
Changing css styles from view in Django
0.066568
0
0
6,764
30,631,885
2015-06-03T22:27:00.000
1
0
1
0
python,zip,ziparchive,openpyxl
30,635,916
1
false
0
0
openpyxl does not modify the files in place because you can't do this with zipfiles. You must extract, modify and archive. We just hide this process in the library.
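What writestr() actually does can be seen with the standard zipfile module alone: opening an archive in append mode adds a new entry at the end rather than rewriting existing bytes in place (this is my own illustration, not openpyxl's code):

```python
import io
import zipfile

buf = io.BytesIO()

# create an archive with one member
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("hello.txt", "first version")

# "modify" it: append mode writes a second member with the same name;
# the original bytes are never edited in place (a duplicate-name warning
# may be emitted, but both entries end up in the archive)
with zipfile.ZipFile(buf, "a") as zf:
    zf.writestr("hello.txt", "second version")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()  # the same name now appears twice
```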
1
0
0
I've been told in the past that there is simply no easy way to write a string to a zip file. It's okay to READ from a zip archive, but if you want to write to a zip file, the best option is to extract it, make the changes, and then zip it back up again. However, the library I am using (openpyxl) accomplishes the feat of writing to a zip file without any extraction. This package uses the writestr() function in the Python zipfile library to make changes. Can someone explain to me how exactly this is possible? I know it has something to do with writing bytes but I can't find a good explanation. I'm aware of the vagueness of this question, but that's a circumstance of my lack of knowledge on the topic.
Writing data to a zip archive in Python
0.197375
0
0
226
30,632,089
2015-06-03T22:40:00.000
0
0
1
0
python,pdf,reportlab
31,349,780
2
false
0
0
The python program img2pdf will store images into PDFs (it might not work on some PNGs, though), and there are several n-up examples in Python for putting multiple images on a page. My own library pdfrw has a 4-up example script with it. (The generic term for n-up for printing purposes is "page imposition", so that is sometimes a good search term.)
1
1
0
I am relatively new to the Python-PDF relationship. If I had a list of .PNG files/pictures, how would I go about creating a PDF document with these files/pictures? And is it possible to have, for example, 4 per page? I would not need any other formatting... the requirement is very basic. Thanks
Python Multiple .PNG Files per PDF Page
0
0
0
1,725
30,632,560
2015-06-03T23:27:00.000
3
0
1
0
python,multithreading,timer
30,632,605
2
false
0
0
sleep() does not guarantee scheduling right after the time given. It just guarantees to sleep at least that long. It can very well sleep longer, depending on system load. Just compare against the system time from time.time (or time.clock, depending on the OS - read the notes in the documentation) after wakeup, and use sleep() only to avoid occupying the CPU the whole time. Edit: There are several ways to improve that: If your timer just has to wake up after the time has elapsed, you could simply sleep() until shortly before wakeup and busy-wait for the remaining time to elapse. That would block the GUI/app during the timeout - not so good. You can use threading, with one thread for display updates at ca. 1 s (or whatever you think appropriate) and another thread for the timeout as shown above. A bit complicated. A third approach would be to use sleep() for the GUI update without busy-waiting every second. That would give a slightly less accurate display, but lower CPU usage. Shortly before the time elapses, you could busy-wait for the remaining time. OK, that much for voluntary consulting.
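A sketch of the compare-against-the-clock idea: sleep in short slices and re-check the wall clock each time, so a single oversleep cannot accumulate into minutes of drift:

```python
import time

def wait_until(deadline, slice_s=0.05):
    """Block until the wall clock reaches `deadline`, re-checking after each short sleep."""
    while True:
        remaining = deadline - time.time()
        if remaining <= 0:
            return
        # never sleep past the deadline; the scheduler may still overshoot
        # one short slice, but the error does not add up across slices
        time.sleep(min(slice_s, remaining))

start = time.time()
wait_until(start + 0.3)
elapsed = time.time() - start
```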
2
0
0
I'm using wxpython and the threading module of Python to build a simple timer with a GUI. I used time.sleep(0.5) to measure the time. The problem is: once I start the timer and switch to another program like Chrome, some time later when I switch back, the time displayed on my timer is very inaccurate. For example, if I set 3 minutes to run, it turns out that my timer is 3 times slower than the operating system clock. After searching, I know that time.sleep() is unreliable, but the gap is so huge that it makes me doubt whether something is wrong. I also noticed that if I don't put the script into the background, there is only a 2-3 second difference. I can accept several seconds' difference, but not 2 "minutes". How can I solve it?
how to solve "when python script is running in background, the time.sleep() performs very bad?"
0.291313
0
0
262
30,632,560
2015-06-03T23:27:00.000
0
0
1
0
python,multithreading,timer
30,696,303
2
false
0
0
Finally I solved this problem by using event.wait() and multiprocessing module with good resolution and low cpu usage.
2
0
0
I'm using wxpython and the threading module of Python to build a simple timer with a GUI. I used time.sleep(0.5) to measure the time. The problem is: once I start the timer and switch to another program like Chrome, some time later when I switch back, the time displayed on my timer is very inaccurate. For example, if I set 3 minutes to run, it turns out that my timer is 3 times slower than the operating system clock. After searching, I know that time.sleep() is unreliable, but the gap is so huge that it makes me doubt whether something is wrong. I also noticed that if I don't put the script into the background, there is only a 2-3 second difference. I can accept several seconds' difference, but not 2 "minutes". How can I solve it?
how to solve "when python script is running in background, the time.sleep() performs very bad?"
0
0
0
262
30,632,887
2015-06-04T00:07:00.000
2
1
0
0
python,paypal,oauth,payment
30,634,280
2
true
1
0
PayPal doesn't have a way for people to sign up entirely through your site, although there are some ways to facilitate the process. You'd probably have to call PayPal to get access to some of those, as they are aimed primarily at larger businesses. However, don't neglect the easy/automatic assistance that PayPal gives you: you can pay to any email account, and if that email is not already active on a PayPal account then the payment will be waiting for them when they activate that email on an existing or new PayPal account. So you can onboard merchants to your site and leave the PayPal signup for later, when money will be waiting for them to claim. Psychologically, walking away from money is harder than deciding not to start processing :).
1
0
0
I'd like to be able to pay the users of my site using PayPal Mass Payment. I think this is pretty straightforward if they have a PayPal account. However, if they do not have a PayPal account, is there any way to have them sign up through my site, without leaving? Or just with a nice redirect? Whatever is least friction. I just don't want to lose users in the onboarding experience. This is analogous to Stripe managed accounts, but I'm not sure if PayPal has such an analogue.
managed paypal accounts with least friction?
1.2
0
0
47
30,632,967
2015-06-04T00:19:00.000
1
1
1
0
python,unit-testing,code-coverage
30,633,030
1
true
0
0
IMO, if the current framework supports attribute-based categorization, you can separate them by adding separate categories so you get separate results from old and new tests. On the other hand, you can also go for multiple frameworks if they're supported and have no conflict of interest (e.g. asserts, test reports) with the test runner in your project. But in this case you'll end up having two separate reports from your test executions.
1
0
0
My project has existing (relatively low-coverage; maybe 50%, and a couple of them can't actually test the result, only that the process completes) tests using Python's built-in unittest suite. I've worked with hypothesis before and I'd like to use that as well - but I'm not sure I want to throw out the existing tests. Has anyone tried having two completely separate testing frameworks and test sets on a project? Is this a good idea, or is it going to cause unexpected problems down the line?
Is it wise to use two completely separate unit testing suites?
1.2
0
0
25
30,638,640
2015-06-04T08:23:00.000
0
0
0
0
python,authentication,ip
30,776,062
1
false
0
0
I don't think there is a bulletproof solution, if the users are behind NAT. To differentiate those users you would need the private IP address, which you can not get on the IP level. If the users are behind NAT, you can only see the public/external IP which would be the same for all clients. You could try to get the private IP on the application level (client side), but that would be tricky (and also if the private address is obtained via DHCP, it can change between requests) The other solution I can think of is identifying users via cookies. So a HTTP response to each failed login request would contain a cookie which uniquely identifies that client. In that way you can differentiate users with the same IP. You would have to sign the cookie values to preserve their integrity. However, this does not help if the client deletes cookies after each failed request.
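A sketch of the sign-the-cookie idea using the standard hmac module (the secret and the id format here are illustrative; the secret must stay server-side):

```python
import hashlib
import hmac

SECRET = b"keep-this-on-the-server"  # hypothetical server-side key

def make_cookie(client_id):
    """Return '<id>.<mac>' so the server can later verify it issued this id."""
    mac = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return "%s.%s" % (client_id, mac)

def read_cookie(cookie):
    """Return the client id if the signature checks out, else None."""
    client_id, _, mac = cookie.rpartition(".")
    expected = hmac.new(SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return client_id if hmac.compare_digest(mac, expected) else None
```

On each failed login response, set a fresh cookie from make_cookie() if the request didn't carry a valid one; rate-limit per verified id rather than per IP.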
1
0
0
I have created a service login module by python. I want to limit login. The login based on failed attempts per IP address might not work well and annoy the users connected to the Internet through a local network since they'll have same external IP address. Is there a way to uniquely identify such users? Thanks a lot!
How to uniquely identify users with the same external IP address?
0
0
1
103
30,639,573
2015-06-04T09:06:00.000
4
0
1
0
python,apache-spark,pyspark,gil
30,650,255
1
true
0
0
Parallelization in pyspark is achieved by daemon.py calling os.fork() to create multiple worker processes, so there won't be GIL issues.
1
1
0
Generally python doesn't work well with multi threading because of the Global Interpreter Lock. Does this affect also pyspark applications running in multi threaded local mode (local[n])?
Are 'local[n]' pyspark applications effected by the GIL?
1.2
0
0
484
30,642,356
2015-06-04T11:12:00.000
5
0
0
0
python,pandas,dataframe
30,648,685
1
true
0
0
Generally creating a new object and binding it to a variable will allow the deletion of any object the variable previously referred to. del, mentioned in @EdChum's comment, removes both the variable and any object it referred to. This is an over-simplification, but it will serve.
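A tiny sketch of that del behaviour, using a weak reference to observe the object actually being freed (in CPython, reference counting frees it as soon as the last reference goes away):

```python
import gc
import weakref

class Big(object):
    """Stands in for a large DataFrame."""

obj = Big()
probe = weakref.ref(obj)  # watches the object without keeping it alive

del obj       # removes the name *and* the last reference to the object
gc.collect()  # unnecessary in CPython here, but forces collection elsewhere

freed = probe() is None  # True once the object is gone
```

Rebinding (obj = something_else) has the same effect on the old object, as long as nothing else still refers to it.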
1
2
1
There are tips for dropping columns and rows depending on some condition. But I want to drop the whole dataframe created in pandas, like in R: rm(dataframe) or in SQL: drop table. This will help release the RAM utilization.
how to drop dataframe in pandas?
1.2
0
0
9,398
30,643,240
2015-06-04T11:57:00.000
1
0
1
0
python,multithreading,timer
30,643,353
1
true
0
0
Accuracy should match the computer clock: milliseconds. The real problem is the jobs you're running. Do they finish before the period expires? That's dependent on the job and the machine load. The Timer can't help with that.
1
4
0
I have a server application that needs to schedule functions to be called at various times during the week, with a desired accuracy of plus or minus 15 seconds, let's say. threading.Timer is the simplest solution, but I'm concerned about accuracy when using intervals of several hundred thousand seconds. Everything I can find concerning Timer's accuracy focuses on timers with comparatively tiny periods. Tests using timers with intervals on the order of an hour or two yield almost perfect results, but I'd like some assurance this is something I can rely on.
How accurate is python's threading.Timer over extremely long intervals (days)?
1.2
0
0
1,002
30,645,470
2015-06-04T13:43:00.000
0
0
0
0
python,kivy,gesture
30,648,659
1
false
0
1
I'm not sure that there are; not many people use the gestures. We have a GSoC project that will probably bring some improved tools for this to Kivy core, though.
1
0
0
I'm wondering if there are any preloaded gestures on kivy, such as pinch, expand, etc. The examples have check, square, circle, and cross gestures. Is there a database for more?
Preloaded gestures on Kivy?
0
0
0
63
30,645,699
2015-06-04T13:53:00.000
2
0
1
0
python,regex,eclipse,logging,replace
30,645,815
1
false
0
0
Search for (^\s+)print (.*)$ Replace with $1logger.info($2) Python should complain pretty fast about all the places where a print goes over more than a single line. You'll have to fix those places manually. Note: This skips comments The alternative is to look into the source for 2to3.py which replaces print ... with print(...) to convert code from Python 2 to 3.
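The same replacement can be scripted with Python's own re module, in case Eclipse isn't at hand (I use \s* here instead of \s+ so unindented prints are caught too - a small deviation from the search pattern above):

```python
import re

# ^(indent)print (rest)$  ->  (indent)logger.info(rest)
pattern = re.compile(r"^(\s*)print (.*)$", re.MULTILINE)

source = '    print "Sample print %d" % timestamp\n'
converted = pattern.sub(r"\1logger.info(\2)", source)
```

As in the Eclipse version, prints spanning multiple lines (and prints inside comments) need manual fixing afterwards.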
1
0
0
I am trying to refactor a large source base in a company I work in. Instead of using the print function in Python 2.7x, I want to use the logger function. For example, replace: print "Sample print %d" % timestamp with logger.info("Sample print %d" % timestamp) So basically, I want to remove the print, and insert what remains into parentheses after logger.info (I'll assume all current prints are INFO until a full refactor is possible). Thanks in advance
Replace ends of string in eclipse
0.379949
0
0
35
30,646,650
2015-06-04T14:29:00.000
0
1
1
0
python,module,komodo
70,898,035
5
false
0
0
If the command Import math is present more than once you will get the error: UnboundLocalError: local variable 'math' referenced before assignment
2
6
0
I am pretty new in programming, just learning python. I'm using Komodo Edit 9.0 to write codes. So, when I write "from math import sqrt", I can use the "sqrt" function without any problem. But if I only write "import math", then "sqrt" function of that module doesn't work. What is the reason behind this? Can I fix it somehow?
"from math import sqrt" works but "import math" does not work. What is the reason?
0
0
0
66,714
30,646,650
2015-06-04T14:29:00.000
3
1
1
0
python,module,komodo
30,646,701
5
false
0
0
When you only use import math the sqrt function comes in under a different name: math.sqrt.
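In other words, both imports load the same function; they only differ in what name it is bound to:

```python
import math
from math import sqrt

a = math.sqrt(16)  # qualified: attribute lookup on the module object
b = sqrt(16)       # bare name created by the from-import

same_object = math.sqrt is sqrt  # both names point at one function object
```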
2
6
0
I am pretty new in programming, just learning python. I'm using Komodo Edit 9.0 to write codes. So, when I write "from math import sqrt", I can use the "sqrt" function without any problem. But if I only write "import math", then "sqrt" function of that module doesn't work. What is the reason behind this? Can I fix it somehow?
"from math import sqrt" works but "import math" does not work. What is the reason?
0.119427
0
0
66,714
30,647,336
2015-06-04T14:57:00.000
1
0
0
1
python,unix,networking,network-programming,server
30,647,917
2
true
0
0
I always found it easier to utilize a switch's 'port mirror' to copy all data in and out of the proxy's switchport to a separate port that connects to a dedicated capture box, which does the tcpdump work for you. If your switch(es) have this capability, it reduces the load on the busy proxy. If they don't, then yes, tcpdump full packets to a file: "tcpdump -i interface -s 0 -w /path/to/file". You can then (on a different machine) throw together some code to examine and report on anything you want, or even open it in wireshark for detailed analysis.
1
0
0
I have a proxy traffic server which is an extra hop on a network and is handling large quantities of traffic. I would like to calculate the cost in seconds of how long it takes the proxy server to handle incoming requests, process them and forward them on. I had been planning to write a Python script to perform a tcpdump and somehow time packets from when they enter the server until they leave. I would probably have to perform the tcpdump for a certain period of time and then analyse it to calculate times? Is this a good way of achieving what I want, or would there be a more elegant solution?
Timing packets on a traffic server
1.2
0
1
57
30,647,758
2015-06-04T15:15:00.000
1
0
0
0
python,numpy,h5py,solid-state-drive,memory-mapping
30,650,664
1
false
0
0
cat /sys/block/sda/queue/rotational is a good way of finding out if your drive is an SSD or a spinning hard disk: it prints 0 for an SSD and 1 for a rotational disk. You can change the device name in the command to check other drives, e.g. cat /sys/block/sdb/queue/rotational.
1
0
1
I want to randomly access the elements of a large array (>7GB) that I load into Python as a either an HDF5 dataset (h5py.Dataset), or a memory-mapped array (numpy.memmap). If this file lives on an spinning-platter HD, these random accesses take forever, for obvious reasons. Is there a way to check (assert) that the file in question lives on an SSD, before attempting these random accesses? I am running python in Linux (Ubuntu 14.04). I don't mind non-cross-platform solutions.
In python, can I see if a file lives on an HD or an SSD?
0.197375
0
0
889
30,647,952
2015-06-04T15:24:00.000
0
0
1
1
python,vb.net
30,648,878
1
false
1
0
1) I am not very familiar with Python, but for the .NET application you will likely want to push change notifications to it, rather than pull. system.net.webclient.downloadstring is a request (pull). As I am not a Python developer I cannot assist with that side. 3) As you are requesting data, it is possible to get read/write errors when updating and reading at the same time. Even if this does not happen, your data may be out of date as soon as you read it. This can be an acceptable problem; it just depends on how critical your data is. This is why I would do a push notification rather than a pull. If done correctly this can keep data synced and avoid some timing issues.
1
0
0
I am writing a client-server type application. The server side gathers constantly changing data from other hardware and then needs to pass it to multiple clients (say about 10) for display. The server data gathering program will be written in Python 3.4 and run on Debian. The clients will be built with VB Winforms on .net framework 4 running on Windows. I had the idea to run a lightweight web server on the server-side and use system.net.webclient.downloadstring calls on the client side to receive it. This is so that all the multi-threading async stuff is done for me by the web server. Questions: Does this seem like a good approach? Having my data gathering program write a text file for the web server to serve seems unnecessary. Is there a way to have the data in memory and have the server just serve that so there is no disk file intermediary? Setting up a ramdisk was one solution I thought of but this seems like overkill. How will the web server deal with the data being frequently updated, say, once a second? Do webservers deal with this elegantly or is there a chance the file will be served whilst it is being written to? Thanks.
Serve dynamic data to many clients
0
0
0
29
30,649,428
2015-06-04T16:35:00.000
1
0
0
0
python,django,url-routing
30,649,463
1
true
1
0
Well, that really is not how it works. Each view is separate and is only called from the URLs that map to it. If you have shared code, you probably want to either factor it out into separate functions that you can call from each view, or use something like a template tag or context processor to add the relevant information to the template automatically.
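A framework-free sketch of that refactoring, with hypothetical names (`_build_report_context` stands in for the shared ~85% of code; real Django views would return rendered responses rather than plain dicts):

```python
def _build_report_context(request):
    """The code shared by both views (the ~85% overlap from the question)."""
    return {"report": "shared data", "user": request}

def index(request):
    # index only needs the shared part
    return _build_report_context(request)

def filter_report(request):
    # filter_report adds the extra ~15% on top of the shared helper
    context = _build_report_context(request)
    context["filtered"] = True
    return context
```

Each view stays mapped to its own URL; the duplication lives in one place and either view can extend the shared context.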
1
0
0
I was wondering how to call my index(request) function that's in views.py upon every page reload. Currently index(request) only gets called when the app originally loads. Every other page reload after that calls another function in views.py called filter_report(request). The problem I am running into is that 85% of the code in filter_report(request) is also in index(request), and from my understanding you don't really want 2 functions that do a lot of the same stuff. What I would like to do is take the 15% of code that isn't in index(request) but is in filter_report(request), split it into different methods, and just have index(request) call those other methods based on certain conditionals.
Django: call index function on page reload
1.2
0
0
519
30,654,526
2015-06-04T21:30:00.000
0
0
0
0
python,machine-learning,lda,topic-modeling,gensim
30,656,766
1
false
0
0
So, you only have 11 documents, and are trying to get 2 topics out of them? It could be a case of not having enough data, but try iterating more. BTW, is the negative log-likelihood or the perplexity going down after each iteration? Just looking at the results, I think if you iterate more you will get the right result, because the algorithm has already correctly put semantically close things together in one topic (post, posts, tweets; months, years).
1
0
0
so I am relatively new working with gensim and LDA, started about two weeks ago and I am having trouble trusting these results. The following are the topics produced by using 11 1-paragraph documents. topic #0 (0.500): 0.059*island + 0.059*world + 0.057*computers + 0.056*presidential + 0.053*post + 0.047*posts + 0.046*tijuana + 0.045*vice + 0.045*tweets + 0.045*president 2015-06-04 16:22:07,891 : INFO : topic #1 (0.500): 0.093*computers + 0.064*world + 0.060*posts + 0.053*eurozone + 0.052*months + 0.049*tijuana + 0.048*island + 0.046*raise + 0.044*rates + 0.042*year These topics just don't quite seem right. In fact they seem almost nonsensical. How exactly should I read these results? Also, is it normal that the topic distributions are exactly the same for both topics?
LDA generated topics
0
0
0
118
30,655,378
2015-06-04T22:35:00.000
0
0
1
0
python,arrays,numpy,multidimensional-array,netcdf
30,713,394
1
false
0
0
After talking to a few people where I work, we came up with this solution: First we made an array of zeros with the following call: array1=np.zeros((28,5,24,4)) Then we assigned into the array by specifying where in the array we wanted to change: array1[:,0,0,0]=list1 This inserted the values of the list into the first entry in the array. Next, to write the array to a netCDF file, I created a netCDF file in the same program I made the array in, made a single variable, and gave it values like this: netcdfvariable[:]=array1 Hope that helps anyone who finds this.
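A minimal runnable sketch of the array part, using a placeholder list1 (the real lists come from the data-gathering code); the netCDF write itself is only commented out since it needs the netCDF4 package and an open Dataset:

```python
import numpy as np

# Placeholder for one of the question's four 28-element lists.
list1 = list(range(28))

# Shape (elevation=28, latitude=5, time=24, variable=4), as in the answer.
array1 = np.zeros((28, 5, 24, 4))

# Fill one slot: all 28 elevation values for latitude 0, time 0, variable 0.
array1[:, 0, 0, 0] = list1

# Writing to netCDF (sketched only; requires the netCDF4 package):
# from netCDF4 import Dataset
# ds = Dataset("out.nc", "w")
# ... create the four dimensions and a 4-d variable with this shape ...
# netcdfvariable[:] = array1

print(array1.shape)  # (28, 5, 24, 4)
```

Indexing then works exactly as the question asks: array1[0, 1, 2, 3] is the first elevation value, 2nd latitude, 3rd time, 4th list.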
1
0
1
This question has potentially two parts, but maybe only one if the first part can be encapsulated by the second. I am using python with numpy and netCDF4. First: I have four lists of different variable values (hereafter referred to as elevation values), each of which has a length of 28. These four lists are one set for each of 5 different latitude values, which in turn are one set for each of the 24 different time values. So 24 times...each time with 5 latitudes...each latitude with four lists...each list with 28 values. I want to create an array with the following dimensions (elevation, latitude, time, variable). In words, I want to be able to specify which of the four lists I access, which index in the list, and a specific time and latitude. So an index into this array would look like this: array(0,1,2,3) where 0 specifies the first index of the 4th list specified by the 3, 1 specifies the 2nd latitude, 2 specifies the 3rd time, and the output is the value at that point. I won't include my code for this part since literally the only things of mention are the lists list1=[...] list2=[...] list3=[...] list4=[...] How can I do this, is there an easier structure for the array, or is there anything else I am missing? Second: I have created a netCDF file with variables with these four dimensions. I need to set those variables to the array structure made above. I have no idea how to do this, and the netCDF4 documentation only covers a 1-d array in a fairly cryptic way. If the arrays can be written directly into the netCDF file, bypassing the need to use numpy first, by all means show me how. Thanks!
Creating and Storing Multi-Dimensional Array in a netCDF File
0
0
0
1,886
30,655,876
2015-06-04T23:27:00.000
0
0
1
0
python,powershell,virtualenv,virtualenvwrapper
47,241,664
4
false
0
0
I had the same issue today on a fresh Win 10 system. For some reason the VirtualEnvWrapper installer creates a Users folder inside the site-packages folder of your Python installation, where PowerShell can't find it. Just moving the Modules folder with all its content did the trick. In my case, I moved it from C:\Program Files (x86)\Python27\Lib\site-packages\Users\*USER*\Documents\WindowsPowerShell\Modules\VirtualEnvWrapper to C:\Users\*USER*\Documents\WindowsPowerShell\Modules\VirtualEnvWrapper, where your PowerShell profile can find it ;)
1
3
0
I have python 2.7 installed perfectly, and also pip, and I have been running the PowerShell as admin. I did: pip install virtualenv and pip install virtualenvwrapper-powershell and they were both successful. I also did this: mkdir '~.virtualenvs' However, whenever I try to: Import-Module virtualenvwrapper it always gets me this error: Import-Module: The specified module 'virtualenvwrapper' was not loaded because no valid module file was found in any module directory. I did pip install virtualenvwrapper-powershell again just to make sure, and I got this: Requirement already satisfied (use --upgrade to upgrade): virtualenvwrapper-powershell in c:\python27\lib\site-packages What could be wrong?
Error importing virtualenvwrapper to the Powershell
0
0
0
3,039
30,656,968
2015-06-05T01:34:00.000
1
0
1
0
python,visual-studio-2013,msdn
30,663,455
2
false
0
0
To clean up the installation, you can just delete its installation folder and restart VS. In case you would like to be extra careful, you can run devenv.exe /setup from a VS command prompt before restarting Let me know if that works for you.
2
1
0
Please help. I can't install Python tools on Visual Studio 2013. The installer works fine, but towards the end of installation it says "cannot find one or more components, please reinstall the application", and the installer closes with the error message "installation stopped prematurely". I've already tried devenv /resetuserdata.
cannot find one or more components. please reinstall application error while installing python tools
0.099668
0
0
1,588
30,656,968
2015-06-05T01:34:00.000
0
0
1
0
python,visual-studio-2013,msdn
68,296,663
2
false
0
0
I had the same issue, tried almost all the tricks but none worked out. In the end I re-installed the latest version of VS, repaired the old version and the issue was resolved.
2
1
0
Please help. I can't install Python tools on Visual Studio 2013. The installer works fine, but towards the end of installation it says "cannot find one or more components, please reinstall the application", and the installer closes with the error message "installation stopped prematurely". I've already tried devenv /resetuserdata.
cannot find one or more components. please reinstall application error while installing python tools
0
0
0
1,588
30,658,172
2015-06-05T04:05:00.000
1
0
1
0
python,sequence
30,658,283
1
false
0
0
Ok, so from what you are saying, it seems like you have a function f: Z_10 x Z_10 -> Z_10. A good way to represent this function is to use a dictionary data structure to hold the values. Then iterate over the sequence (most likely a list), take each element and its successor in the sequence, and use the pair to index into the dictionary. I think it is a bit elementary to code. Judging by the question, you might be a beginner. Show me what you have got, and I will point you in the right direction (in the comments)
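A minimal sketch of the dictionary approach. The pair-to-digit mapping below is hypothetical, since the question never lists the actual value of each of the 100 possible pairs; it is chosen only to reproduce the question's example:

```python
# Hypothetical mapping from a digit pair to its output digit; the real
# table would have an entry for all 100 pairs (0,0) through (9,9).
pair_value = {
    (9, 2): 8, (7, 2): 7, (8, 4): 1, (6, 2): 2, (9, 1): 3, (4, 6): 6,
}

def transform(sequence: str) -> str:
    digits = [int(c) for c in sequence]
    # Walk the sequence in non-overlapping pairs: (s0,s1), (s2,s3), ...
    pairs = zip(digits[0::2], digits[1::2])
    return "".join(str(pair_value[(a, b)]) for a, b in pairs)

print(transform("927284629146"))  # 871236
```

With a full 100-entry table the same function handles any even-length sequence, including the 1000-digit inputs mentioned in the question.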
1
0
0
I want the program to take blocks of numbers from a numeric sequence (which I enter, could be 1000 numbers). Each two numbers equals one number. For example: the numbers 8,9 in a row equal 1. Then numbers 4,8 equal 6, it goes on. Each number from 0-9 paired with another number from 0-9 has its own value. say the sequence goes like this 927284629146 I want the program to pick the two numbers in groups like this (92)(72)(84)(62)(91)(46) and return 871236 (depending on what value each group makes) I'm sorry if this might sound confusing.
Transforming number groups
0.197375
0
0
32
30,662,065
2015-06-05T08:42:00.000
3
0
0
0
python,numpy
30,662,556
1
true
0
0
You can use gnumpy.concatenate. For 1D arrays you need to reshape to 2D first.
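Since gnumpy mirrors numpy's API, the concatenate-based replacement for hstack can be sketched in plain numpy (gnumpy itself needs a GPU and is not importable here); the same calls should work on garrays:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0], [6.0]])

# For 2D arrays, hstack is just concatenate along axis 1.
assert np.array_equal(np.hstack((a, b)), np.concatenate((a, b), axis=1))

# For 1D arrays, reshape to 2D rows first, then concatenate along axis 1.
x = np.array([1.0, 2.0])
y = np.array([3.0])
stacked = np.concatenate((x.reshape(1, -1), y.reshape(1, -1)), axis=1)
assert np.array_equal(stacked.ravel(), np.hstack((x, y)))

print(stacked)  # [[1. 2. 3.]]
```

In gnumpy the equivalent would be gnumpy.concatenate on garrays, with the same reshape step for 1D inputs.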
1
1
1
I want to port Python code from the CPU to the GPU, but I failed to find the numpy function hstack in gnumpy. Can anyone give me some hints on how to add some extra rows to an existing matrix (garray), like hstack in numpy? Thank you.
Is there any implementation of hstack in gnumpy
1.2
0
0
75
30,665,447
2015-06-05T11:31:00.000
1
0
0
0
python,csv,pandas
30,667,520
1
false
0
0
In pandas.read_csv you can use the "chunksize" option; if you do, the object returned by pandas will be an iterator (of type TextFileReader) which, when iterated over, yields DataFrames each covering at most chunksize rows (I hadn't realized the option existed until I read the source code...).
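A small runnable illustration of the chunksize option, using an in-memory CSV in place of a large file:

```python
import io
import pandas as pd

csv_text = "a,b\n1,2\n3,4\n5,6\n7,8\n"

# With chunksize set, read_csv returns an iterator of DataFrames instead of
# loading the whole file into memory at once.
nchunks = 0
total = 0
for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=2):
    nchunks += 1
    total += chunk["a"].sum()

print(nchunks, total)  # 2 16
```

Each chunk is an ordinary DataFrame, so any per-chunk aggregation (sums, filters, appends to a store) runs with bounded memory regardless of file size.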
1
1
1
I've compared the built-in csv reader with Pandas's read_csv. The former is significantly slower. However, I need to stream csv files due to memory limitations. What streaming csv reader is as fast, or almost as fast, as Pandas?
What is the fastest way to stream a large csv file?
0.197375
0
0
1,231
30,667,534
2015-06-05T13:15:00.000
1
0
1
0
python,visual-studio-2013,ironpython,openpyxl
30,673,230
1
true
0
0
openpyxl does not work with IronPython. But that should not affect using it with VisualStudio. Presumably you need to set the path for the project.
1
2
0
I have installed openpyxl. Working through the examples in IDLE, I encounter no issues. Trying to use my Visual Studio Python editor, module imports fail. Does openpyxl need to be added to IronPython for this to work? If so, how?
Python module imports Visual Studio
1.2
0
0
2,102