Q_Id
int64
337
49.3M
CreationDate
stringlengths
23
23
Users Score
int64
-42
1.15k
Other
int64
0
1
Python Basics and Environment
int64
0
1
System Administration and DevOps
int64
0
1
Tags
stringlengths
6
105
A_Id
int64
518
72.5M
AnswerCount
int64
1
64
is_accepted
bool
2 classes
Web Development
int64
0
1
GUI and Desktop Applications
int64
0
1
Answer
stringlengths
6
11.6k
Available Count
int64
1
31
Q_Score
int64
0
6.79k
Data Science and Machine Learning
int64
0
1
Question
stringlengths
15
29k
Title
stringlengths
11
150
Score
float64
-1
1.2
Database and SQL
int64
0
1
Networking and APIs
int64
0
1
ViewCount
int64
8
6.81M
20,277,537
2013-11-29T02:40:00.000
6
0
0
0
python,django,nginx,gunicorn
20,278,315
2
true
1
0
When you hold down F5: You've started hundreds of requests. Those requests have filled your gunicorn request queue. The request handlers have not been culled as soon as the connection drops. Your latest requests are stuck in the queue behind all the previous requests. Nginx times out. For everyone. Solutions: Set up rate-limiting buckets in Nginx, keyed on IP, such that one malicious user can't spam you with requests and DOS your site. Set up a global rate-limiting bucket in Nginx such that you don't overfill your request queue. Make Nginx serve a nice "Reddit is under heavy load" style page, so users know that this is a purposeful event Or: Replace gunicorn with uwsgi. It's faster, more memory efficient, integrates smoothly with nginx, and most importantly: It will kill the request handler immediately if the connection drops, such that F5 spam can't kill your server.
1
8
0
I'm trying to publish a Django application on the production server using Nginx + Gunicorn. When I do a simple stress test on the server (holding the F5 key for a minute) the server returns a 504 Gateway Time-out error. Why does this happen? Does this error only appear for the user making multiple concurrent requests, or does the system become fully unavailable to everyone?
Django Nginx Gunicorn = 504 Timeout
1.2
0
0
5,881
20,281,990
2013-11-29T09:12:00.000
0
0
1
0
python,plugins,intellij-idea
34,194,349
2
false
0
0
Maybe my IntelliJ version is different from yours. On Windows I fixed this problem by: 1. File > Project Structure > Modules 2. On the module's Dependencies panel, change the Module SDK from JDK to Python 3. Done
2
3
0
I use IntelliJ with the Python plugin. When I want to import Python libs, like import random, I get an editor error: No module named random less... (Ctrl+F1) This inspection detects names that should resolve but don't. Due to dynamic dispatch and duck typing, this is possible in a limited but useful number of cases. Top-level and class-level items are supported better than instance items. When I run the code everything is OK. What can I do to make IntelliJ recognize these libs?
Intellij python no recoginiize lib
0
0
0
2,375
20,281,990
2013-11-29T09:12:00.000
3
0
1
0
python,plugins,intellij-idea
25,123,680
2
false
0
0
You may have fixed this by now, but I was having the same problem and finally solved it, so I figured I'd post the solution for anyone who came here by Googling/Binging the error message: I went to File > Project Structure > Modules, highlighted the main module, then pressed the plus sign and then "Python" under "Framework". Hope that helps you or someone else.
2
3
0
I use IntelliJ with the Python plugin. When I want to import Python libs, like import random, I get an editor error: No module named random less... (Ctrl+F1) This inspection detects names that should resolve but don't. Due to dynamic dispatch and duck typing, this is possible in a limited but useful number of cases. Top-level and class-level items are supported better than instance items. When I run the code everything is OK. What can I do to make IntelliJ recognize these libs?
Intellij python no recoginiize lib
0.291313
0
0
2,375
20,283,021
2013-11-29T10:00:00.000
1
1
0
0
python,staruml
45,105,412
1
false
0
0
There is a simple possibility: Use PyCharm, a python IDE, which has an integrated UML generator (only in the pro version which is free for students).
1
3
0
Is there a way to use StarUML for reverse engineering Python code into a class diagram? In the StarUML docs, they say there are modules for language support, but I couldn't find any further information about where and how to install and use them. Other UML tools I found didn't match my idea of how a diagram should look. I know generating class diagrams for Python is a bit problematic, because it's compiled at runtime and will probably change then. But I'm using Python to build my bachelor thesis and my Prof. loves UML. He really cares about doing this correctly. Can anybody help me, please?
StarUML for Python
0.197375
0
0
2,332
20,284,421
2013-11-29T11:12:00.000
5
0
1
0
python,interpreter
20,284,486
1
true
0
0
That's a feature of the terminal, not the interpreter. EscEsc is interpreted as Tab by your terminal, which your interpreter then further interprets as a completion request.
1
1
0
Today by accident I found pressing escesc in a python interpreter lists the contents of the directory the interpreter was started from. Stranger still is that the sequence needs to be performed twice for it to work initially but afterwards works every time. I couldn't find this feature documented anywhere and I am wondering if there are other undocumented features of the interpreter.
python interpreter command inputs
1.2
0
0
68
20,289,809
2013-11-29T16:18:00.000
1
0
0
0
python,bash,upload,download,bandwidth
20,290,664
1
true
0
0
Just running a few wget's should easily saturate your download bandwidth. For upload, you might set up a web server on your computer (carefully poking a hole through your firewall for it), and then connect to a web proxy (there are a few of these that'll anonymise your data) and back to your web server. Then connect to your web server through the proxy and download (or upload!) a bunch of stuff. It may be more effective to do these things one at a time, rather than both at the same time, as they may interfere with each other a bit.
1
0
0
As the question explains, I would like to saturate my bandwidth. For download I had an idea: download a random file (5 MB for example) in a loop n times using wget or urllib2, deleting the file after each completed download in the same loop. (For wget using a Bash script / for urllib2 using a Python script.) But I have two questions: How do I saturate the download bandwidth without saving the downloaded files? How do I saturate the upload bandwidth? (I have no idea about this.) I mean total saturation, but what if I only want partial saturation?
How do I saturate my upload and download available bandwidth?
1.2
0
1
1,937
20,291,176
2013-11-29T17:53:00.000
0
0
0
0
python,list,google-app-engine,jinja2
20,301,699
2
false
1
0
It sounds like you want to use itertools.product(list1, list2). This will create all combinations of list1 and list2. For example, if list1 = [1,2] and list2 = [1,2,3] then list(itertools.product(list1, list2)) = [(1,1), (1,2), (1,3), (2,1), (2,2), (2,3)]
1
0
0
I am using Jinja2 templates in Python for Google App Engine. I need to iterate through 2 lists, list1 and list2, in the same loop in the HTML file. I tried using zip as described in some of the posts but it is not working. Something similar in C: for(i=0, j=0; i < len(list1) && j < len(list2); i++, j++) Can anyone suggest some ways to implement the same?
Iterate through 2 loops in GAE Python templates
0
0
0
86
20,293,848
2013-11-29T21:54:00.000
-1
0
1
1
python,windows
20,293,883
2
false
0
0
You can use os.chdir(target_directory) to change your program's working directory before starting the external application.
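A minimal sketch of both approaches, assuming a hypothetical target directory; the subprocess variant is an alternative the answer above does not mention, but it avoids changing your own process's working directory:

    import os
    import subprocess

    target = r"C:\some\target\dir"        # hypothetical startup directory

    # Option 1 (as above): change this process's working directory, then launch
    os.chdir(target)
    os.system("cmd")

    # Option 2: keep your own cwd and let subprocess set the child's directory
    subprocess.Popen("cmd", cwd=target)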
1
0
0
For example, I know this method: os.system("cmd"), but it starts the console in the directory of the script or in the directory of the interpreter. Is there a way to gain control over this?
how to start external system application in python specifying startup directory?
-0.099668
0
0
46
20,294,504
2013-11-29T23:04:00.000
1
0
1
0
c#,java,python,path
20,294,526
5
true
0
0
The PATH is a list of directories on your computer, stored in an environment variable. When you install a programming language, you might need to add its directory to your system PATH variable. This means that the system looks in those directories for different things, i.e. where the executables and libraries for the code you are using are. Hope that helped!
1
6
0
This is probably a rudimentary question but I am still kinda new to programming and I've wondered for awhile. I've done multiple projects in Python, C#, and Java, and when I try to use new libraries (especially for Python) people always say to make sure its in the right PATH and such. I just followed an online tutorial on how to install Java on a new computer and it rekindled my question of what a path really is. Is the Path just were the programming language looks for a library in the file system? I get kinda confused on what it's significance is. Again, I'm sorry for the wide question, its just something that I've never quite gotten on my own programming. EDIT: I just wanted to thank everyone so much for answering my question. I know it was a pretty dumb one now that I've finally figured out what it is, but it really helped me. I'm slowly working through as many C#, Java and Python tutorials as I can find online, and it's nice to know I have somewhere to ask questions :)
Significance of a PATH explained
1.2
0
0
670
20,294,916
2013-11-29T23:58:00.000
13
0
0
0
python,image,matlab,image-processing,gaussian
20,295,628
1
true
0
0
Successively applying multiple gaussian blurs to an image has the same effect as applying a single, larger gaussian blur whose radius is the square root of the sum of the squares of the blur radii that were actually applied. In your case, s2 = sqrt(n*s1^2), and the effective blur radius is approximated as 3*si for i = 1, 2, which means pixels at a distance of more than 3*si are small enough to be considered effectively zero during the blurring process.
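As a quick worked example of that relationship (the sigma values here are hypothetical): solving s2 = sqrt(n * s1^2) for n gives n = (s2 / s1)^2.

    import math

    s1, s2 = 2.0, 6.0                            # hypothetical blur sigmas
    n = (s2 / s1) ** 2                           # n = 9.0: nine passes of sigma 2 ~ one pass of sigma 6
    assert math.isclose(math.sqrt(n * s1 ** 2), s2)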
1
10
1
I have two questions relating to repeated Gaussian blur. What happens when we repeatedly apply gaussian blur to an image keeping the sigma and the radius same ? And is it possible that after n iterations of repeatedly applying gaussian blur (sigma = s1) the image becomes the same as it would be on applying gaussian blur ( of sigma = s2; s1 < s2 ) 1 time on the original image. And if so what is the general formula for deriving that n number of times we have to apply gaussian blur with s1, given s1 and s2 (s1 < s2).
Repeated Gaussian Blur in Image Processing
1.2
0
0
2,451
20,295,446
2013-11-30T01:18:00.000
0
0
0
0
python,c,arrays,algorithm,sorting
20,295,483
6
false
0
0
Maybe use a nested for loop, with the outer one looking at the ith point; then, in the inner loop, calculate all the distances. After that, use Python's built-in sort for that row and then add it to the main 2D array.
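A short sketch of that approach with hypothetical points; the reference point is captured from the enclosing scope, so sorted()'s one-parameter key is enough:

    import math

    points = [(0, 0), (3, 4), (1, 1), (6, 8)]           # hypothetical input

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    table = []
    for p in points:                                     # outer loop: the i-th point
        row = sorted(points, key=lambda q: dist(p, q))   # inner work: sort by distance to p
        table.append(row)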
1
0
1
Let's say I am given an array of n points(pair of coordinates). I want to generate a 2D array of points, where ith row has all elements sorted according to their distance from the ith point. There may be better and more efficient algorithms to get the final result, but for some reasons I want to do it by naive algorithm, i.e., brute-force. Also I don't want to write my own sorting function. In C language, one can use the qsort function, but it's compare function only takes two parameters whereas I will be needing to pass three parameters: the reference point and two other points to be compared. In Python too, one can use sorted function, but again, it's key function only takes one parameter, whereas in this case, I will need to pass two parameters. So how do I do it?
Custom sorting in for loop
0
0
0
202
20,295,723
2013-11-30T02:10:00.000
0
0
1
0
python,oop
20,295,792
2
false
0
0
No, it's the other way round. But I can see why the doubt arises; these statements are confusing: you put self as the first argument in instance methods, and you don't put it in static methods. Note that although there is no requirement to name that first parameter self, it is better to follow that convention as it is clearer to the reader. Also, don't name the first parameter of a static method self; it is possible, yes, but you will surely confuse the reader.
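A tiny illustration of the convention described above (the class is hypothetical):

    class Counter(object):
        total = 0

        def increment(self):          # instance method: 'self' is the calling instance
            self.total += 1

        @staticmethod
        def describe():               # static method: no 'self' parameter at all
            return "counts things"

    c = Counter()
    c.increment()                     # self is passed implicitly
    Counter.describe()                # no instance needed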
1
0
0
I am learning OOP, so my question on the two statements below probably seems very basic to many, but I just want to check in case I am using Python/OO lingo in the wrong way: - You must explicitly define 'self' as the first parameter in every class method. - Python's explicit requirement of self as the first parameter of methods defined in classes, makes it so that it's clear to people what the difference between instance and static methods are. If the statements are correct concerning the 'self' keyword, is it correct of me to infer that, either as a parameter in method definitions or as a prefix to variables; the presence of the word 'self' commonly indicates that a method or variable is static, whereas the absence of the word 'self' usually indicates that the method or variable is an instance method or variable?
Is this thinking correct on how 'self' identifies between instance and static members?
0
0
0
60
20,295,961
2013-11-30T02:52:00.000
1
0
0
0
python,algorithm,indexing,subset
20,295,997
2
false
0
0
I think you could do this by recursively narrowing down ranges, right? You know that all subsets beginning with a given integer will be adjacent, and that for a given first element d there will be (n - d) choose (k-1) of them. You can skip ahead as far as necessary in the virtual list of subsets until you're in the range of subsets beginning with the first element of the target sorted subset, then recurse to narrow it down precisely. EG, suppose n=20, k=6. If your target subset is {5, 8, 12, 14, 19}, none of the subsets beginning with 1-4 are valid choices. You know that the index of the first subset beginning with 5 will be ((19 choose 5) + (18 choose 5) + (17 choose 5) + (16 choose 5)). Call that index i0. Now you have (15 choose 5) subsets that all begin with 5 to index into, and only the ones whose second element is 6 or 7 come before yours: (14 choose 4) of them have 6 as the second element, and (13 choose 4) have 7. So the index of the first subset beginning with 5, 8 will be i0 + (14 choose 4) + (13 choose 4). Etc. Writing the algorithm out is kind of painful, but I think it should work nicely with a computer keeping track of the fiddly details.
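A minimal sketch of that ranking idea (0-based index, using math.comb from Python 3.8+); the input subset is assumed to be sorted ascending, and the example values at the end are hypothetical:

    from math import comb

    def subset_rank(subset, n):
        """Index of a sorted k-subset of {1, ..., n} in lexicographic order."""
        k = len(subset)
        rank = 0
        prev = 0
        for pos, d in enumerate(subset):
            # count the subsets whose element at this position is smaller than d
            for smaller in range(prev + 1, d):
                rank += comb(n - smaller, k - pos - 1)
            prev = d
        return rank

    subset_rank([2, 4, 5], 6)   # 13: its position among all 3-subsets of {1..6}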
1
2
1
There are n choose k subsets of {1,2,...,n} of size k. These can be naturally ordered by sorting the elements and using the lexigraphical order. Is there a fast way to determine the index of a given subset, i.e. its index in the sorted list of all subsets of size k? One method would be to create a dictionary from subsets to indices by enumerating all subsets of size k, but this requires n choose k space and time. For my application n and k are infeasibly large, but I only need to determine the indices of comparatively few subsets. I'm coding in Python, but I'm more interested in a general algorithm than any specific implementation. Of course, if there's an especially fast way to do this in Python that would be great. Motivation: The subsets of {1,2,...,n} of size k correspond bijectively to basis vectors of the kth exterior power of a vector space with dimension n. I'm performing computations in the exterior algebra of a vector space and trying to convert the resulting lists of vectors into sparse matrices to do linear algebra, and to do that I need to index the vectors by integers rather than by lists.
Indexing subsets of size k
0.099668
0
0
397
20,296,712
2013-11-30T04:57:00.000
0
0
0
0
python,pygame,lines,pixels
20,298,727
2
false
0
1
Using float you would get all points without duplications - but you need int. Using int you always get duplications - int(0.1) == int(0.2) == int(0.3), etc. So you have to check whether the point is already in your list.
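A rough sketch of that idea: step along the line once per pixel of its longer axis, round to ints, and skip the duplicates (the endpoint coordinates are hypothetical):

    x1, y1, x2, y2 = 2, 3, 40, 18              # hypothetical endpoints

    steps = max(abs(x2 - x1), abs(y2 - y1))     # one sample per pixel along the longer axis
    pixels = []
    for i in range(steps + 1):
        t = i / float(steps)                    # float division also works on Python 2
        p = (int(round(x1 + t * (x2 - x1))), int(round(y1 + t * (y2 - y1))))
        if p not in pixels:                     # drop duplicates produced by int rounding
            pixels.append(p)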
1
0
0
Assuming I have a line with coordinates x1,y1 and x2,y2, and I know the length of the hypotenuse connecting those two points (thus also knowing the angle of rotation of the line through trig), if the line is 1 pixel thick how can I find every pixel on that line and store it to a list? I first proposed the simple vector calculation, stating with x1,y1 and performing line/z*math.cos(angle),line/z*math.sin(angle) (for x1 and y1 respectively) until I reached point x2,y2, but the problem with that is finding variable 'z' such that every single pixel is covered without duplicating pixels. So what would be the best way of calculating this?
How can I find every pixel on a line with pygame?
0
0
0
369
20,297,249
2013-11-30T06:18:00.000
77
0
0
0
python,python-2.7
20,297,273
6
true
0
0
bisect.bisect_left returns the leftmost place in the sorted list to insert the given element. bisect.bisect_right returns the rightmost place in the sorted list to insert the given element. An alternative question is: when are they equivalent? By answering this, the answer to your question becomes clear. They are equivalent when the element to be inserted is not present in the list. Hence, they are not equivalent when the element to be inserted is in the list.
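A small demonstration on a list with duplicates (the value 3 repeats), plus an absent value where the two functions agree:

    import bisect

    a = [1, 3, 3, 3, 5]
    bisect.bisect_left(a, 3)    # 1 -- insertion point before the run of equal values
    bisect.bisect_right(a, 3)   # 4 -- insertion point after the run of equal values
    bisect.bisect_left(a, 4)    # 4 -- value not present: both functions return the same index
    bisect.bisect_right(a, 4)   # 4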
3
41
0
In my understanding, bisect_left and bisect_right are two different ways of doing the same thing: bisection, one coming from the left and the other coming from the right. Thus, it follows that they have the same result. Under what circumstances are these two not equal, i.e. when will they return different results, assuming the list and the value that is being searched are the same?
When are bisect_left and bisect_right not equal?
1.2
0
0
39,432
20,297,249
2013-11-30T06:18:00.000
5
0
0
0
python,python-2.7
33,235,082
6
false
0
0
There are two things to be understood: bisect.bisect and bisect.bisect_right work the same way. These return the rightmost position where the element can be inserted without breaking the order of elements. But as opposed to the above, bisect.bisect_left returns the leftmost position where the element can be inserted. Use carefully.
3
41
0
In my understanding, bisect_left and bisect_right are two different ways of doing the same thing: bisection, one coming from the left and the other coming from the right. Thus, it follows that they have the same result. Under what circumstances are these two not equal, i.e. when will they return different results, assuming the list and the value that is being searched are the same?
When are bisect_left and bisect_right not equal?
0.16514
0
0
39,432
20,297,249
2013-11-30T06:18:00.000
12
0
0
0
python,python-2.7
56,491,997
6
false
0
0
To me this interpretation of bisect_left/bisect_right makes it more clear: bisect_left returns the largest index to insert the element w.r.t. < bisect_right returns the largest index to insert the element w.r.t. <= For instance, if your data is [0, 0, 0] and you query for 0: bisect_left returns index 0, because that's the largest possible insert index where the inserted element is truly smaller. bisect_right returns index 3, because with "smaller or equal" the search advances through identical elements. This behavior can be simplified to: bisect_left would insert elements to the left of identical elements. bisect_right would insert elements to the right of identical elements.
3
41
0
In my understanding, bisect_left and bisect_right are two different ways of doing the same thing: bisection, one coming from the left and the other coming from the right. Thus, it follows that they have the same result. Under what circumstances are these two not equal, i.e. when will they return different results, assuming the list and the value that is being searched are the same?
When are bisect_left and bisect_right not equal?
1
0
0
39,432
20,298,833
2013-11-30T09:53:00.000
1
1
0
0
python,api,flask,arduino
20,298,873
2
false
1
0
Configure your WSGI container or its associated web server to only allow access to the Flask application from the IP address assigned to the Arduino's network interface.
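Besides the server-level restriction, a minimal in-application sketch is also possible with a before_request hook; the Arduino's address here is hypothetical, and note that request.remote_addr may show a proxy's address if one sits in front of the app:

    from flask import Flask, request, abort

    app = Flask(__name__)
    ALLOWED_IP = "192.168.1.50"        # hypothetical Arduino address

    @app.before_request
    def limit_remote_addr():
        if request.remote_addr != ALLOWED_IP:
            abort(403)                 # reject everything not coming from the Arduino

    @app.route("/sensor", methods=["GET", "POST"])
    def sensor():
        return "ok"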
1
0
0
I am basically writing an API in Python, using Flask, and I would like to restrict access to its endpoints so that only an entity, namely an Arduino, can have GET and POST access. How should I make this possible and what should I be looking for?
Restrict Python API access to single Arduino entity
0.099668
0
0
78
20,303,411
2013-11-30T17:34:00.000
2
0
0
0
python,sockets,multiplayer
20,303,470
1
true
0
0
I think you shouldn't even consider storing data like health client-side. Doing that will allow super-easy hacks to be made, and the fact that the game is written in Python makes this a lot easier. So I think you should keep this data on the server side and use it from there.
1
2
0
I'm making a multiplayer, text-based game in Python using sockets. The game is two-player, with a server which the clients connect to. I'm not sure whether to store player information (name, health, etc.) on the server or on the client. What are the advantages of each? I was thinking about storing the information on the client and sending the player object to the server whenever it changes, though this probably isn't very efficient. Any help would be appreciated!
Should I store player information on server or client?
1.2
0
1
162
20,304,863
2013-11-30T19:46:00.000
0
0
1
0
python,database,python-2.7,beautifulsoup,mysql-python
20,305,193
4
false
1
0
You're doing it wrong! Make an object that represents a row in the database, use __getitem__ to pretend it's a dictionary. Put your database logic in that. Don't go all noSQL unless your tables are not related. Just by being tables they are ideal for SQL!
2
0
0
I'm getting different pieces of information for a particular thing and I'm storing that information in a dictionary, e.g. {property1: val, property2: val, property3: val}. Now I have several dictionaries of this type (as I get many things; each dictionary is for one thing), and I want to save the information in a DB so there would be as many columns as key:value pairs in a dictionary. What is the best or simplest way to do that? Please provide all the steps to do it (I mean the syntax for logging in to the DB, pushing data into a row or executing an SQL query, etc.; I hope there won't be more than 4 or 5 steps). PS: All dictionaries have the same keys, each key always has the same value type, and the number of columns is predefined.
How to save Information in Database using BeautifulSoup
0
1
0
861
20,304,863
2013-11-30T19:46:00.000
0
0
1
0
python,database,python-2.7,beautifulsoup,mysql-python
20,305,076
4
false
1
0
If your dictionaries all have the same keys, and each key always has the same value-type, it would be pretty straight-forward to map this to a relational database like MySQL. Alternatively, you could convert your dictionaries to objects and use an ORM like SQLAlchemy to do the back-end work.
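A minimal sketch of that straightforward mapping, using the standard-library sqlite3 module for brevity (the question mentions MySQL, but the parameterized-insert pattern is the same; the table and column names are hypothetical):

    import sqlite3

    rows = [                                             # one dict per "thing"
        {"property1": "a", "property2": 1, "property3": 2.5},
        {"property1": "b", "property2": 7, "property3": 0.1},
    ]

    conn = sqlite3.connect("things.db")
    conn.execute("CREATE TABLE IF NOT EXISTS things (property1 TEXT, property2 INTEGER, property3 REAL)")
    conn.executemany(
        "INSERT INTO things (property1, property2, property3) VALUES (:property1, :property2, :property3)",
        rows,                                            # named placeholders pull values from each dict
    )
    conn.commit()
    conn.close()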
2
0
0
I'm getting different pieces of information for a particular thing and I'm storing that information in a dictionary, e.g. {property1: val, property2: val, property3: val}. Now I have several dictionaries of this type (as I get many things; each dictionary is for one thing), and I want to save the information in a DB so there would be as many columns as key:value pairs in a dictionary. What is the best or simplest way to do that? Please provide all the steps to do it (I mean the syntax for logging in to the DB, pushing data into a row or executing an SQL query, etc.; I hope there won't be more than 4 or 5 steps). PS: All dictionaries have the same keys, each key always has the same value type, and the number of columns is predefined.
How to save Information in Database using BeautifulSoup
0
1
0
861
20,305,033
2013-11-30T20:03:00.000
0
0
1
0
python,multiprocessing
20,305,232
3
false
0
0
You could touch all your .py's, and then force an import. But why do you want to do this?
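For the second part of the question, a small sketch that walks the installation directory and removes stale bytecode so it gets regenerated on the next import (the path is the one given in the question); setting the PYTHONDONTWRITEBYTECODE environment variable or running python -B avoids writing .pyc files in the first place:

    import os

    root_dir = r"C:\Python27"                    # from the question
    for root, dirs, files in os.walk(root_dir):
        for name in files:
            if name.endswith((".pyc", ".pyo")):  # bytecode caches only; .py sources are untouched
                os.remove(os.path.join(root, name))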
2
3
0
I'm developing an scientific python application that uses multiprocessing and process pools. Sometimes I make a mistake and fork bomb myself. This causes my laptop to freeze and I need to do a hard reset. However, when I load again my python installation appears to be corrupted. I get strange errors on basic imports like import string. I traced this down to a point where it looks like python is trying to make pyc's/pyo's for some of the system modules installed in the system directory (I am working in Windows). I can temporarily fix this problem by clicking through the installation directory, sorting by time modified and manually deleting all the pyc/pyo's created on that run based on the modified date. Is there a way to force python to ignore any existing pyc/pyos and recreate them on a launch of the interpreter? Alternatively, is it safe to delete all the pyc/pyo objects in the system Python installation? In my case that's C:\Python27.
How to force python to ignore or recreate pyc/pyo files?
0
0
0
2,173
20,305,033
2013-11-30T20:03:00.000
0
0
1
0
python,multiprocessing
22,438,305
3
false
0
0
One main use case I see in addition is when you work out of a git repository and switch forth and back on branches. .py files might get checked out with different content but with older time stamp. There you have an issue if you don't force the re-creation of the .pyc files one way or the other. CKol
2
3
0
I'm developing an scientific python application that uses multiprocessing and process pools. Sometimes I make a mistake and fork bomb myself. This causes my laptop to freeze and I need to do a hard reset. However, when I load again my python installation appears to be corrupted. I get strange errors on basic imports like import string. I traced this down to a point where it looks like python is trying to make pyc's/pyo's for some of the system modules installed in the system directory (I am working in Windows). I can temporarily fix this problem by clicking through the installation directory, sorting by time modified and manually deleting all the pyc/pyo's created on that run based on the modified date. Is there a way to force python to ignore any existing pyc/pyos and recreate them on a launch of the interpreter? Alternatively, is it safe to delete all the pyc/pyo objects in the system Python installation? In my case that's C:\Python27.
How to force python to ignore or recreate pyc/pyo files?
0
0
0
2,173
20,306,249
2013-11-30T22:13:00.000
0
1
1
0
python,json,unicode,utf-8
20,464,991
2
false
0
0
Well, since you won't post your solution as an answer, I will. This question should not be left showing no answer. jsonEncoder has an option ensure_ascii. If ensure_ascii is True (the default), all non-ASCII characters in the output are escaped with \uXXXX sequences, and the results are str instances consisting of ASCII characters only. Make it False and the problem will go away.
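A small sketch of the difference (io.open works on both Python 2.7 and 3):

    import io
    import json

    data = {"name": u"caf\u00e9"}
    json.dumps(data)                              # '{"name": "caf\\u00e9"}' -- non-ASCII escaped (default)
    text = json.dumps(data, ensure_ascii=False)   # keeps the real character

    with io.open("test.json", "w", encoding="utf-8") as f:
        f.write(text)                             # written as UTF-8, no \uXXXX escapes in the file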
2
2
0
I'm using twitter python library to fetch some tweets from a public stream. The library fetches tweets in json format and converts them to python structures. What I'm trying to do is to directly get the json string and write it to a file. Inside the twitter library it first reads a network socket and applies .decode('utf8') to the buffer. Then, it wraps the info in a python structure and returns it. I can use jsonEncoder to encode it back to the json string and save it to a file. But there is a problem with character encoding I guess. When I try to print the json string it prints fine in the console. But when I try to write it into a file, some characters appear such as \u0627\u0644\u0644\u06be\u064f I tried to open the saved file using different encodings and nothing has changed. It suppose to be in utf8 encoding and when I try to display it, those special characters should be replaced with actual characters they represent. Am I missing something here? How can I achieve this? more info: I'm using python 2.7 I open the file like this: json_file = open('test.json', 'w') I also tried this: json_file = codecs.open( 'test.json', 'w', 'utf-8' ) nothing has changed. I blindly tried, .encode('utf8'), .decode('utf8') on the json string and the result is the same. I tried different text editors to view the written text, I used cat command to see the text in the console and those characters which start with \u still appear. Update: I solved the problem. jsonEncoder has an option ensure_ascii If ensure_ascii is True (the default), all non-ASCII characters in the output are escaped with \uXXXX sequences, and the results are str instances consisting of ASCII characters only. I made it False and the problem has gone away.
Python unicode file writing
0
0
1
984
20,306,249
2013-11-30T22:13:00.000
2
1
1
0
python,json,unicode,utf-8
20,528,987
2
true
0
0
jsonEncoder has an option ensure_ascii If ensure_ascii is True (the default), all non-ASCII characters in the output are escaped with \uXXXX sequences, and the results are str instances consisting of ASCII characters only. Make it False and the problem will go away.
2
2
0
I'm using twitter python library to fetch some tweets from a public stream. The library fetches tweets in json format and converts them to python structures. What I'm trying to do is to directly get the json string and write it to a file. Inside the twitter library it first reads a network socket and applies .decode('utf8') to the buffer. Then, it wraps the info in a python structure and returns it. I can use jsonEncoder to encode it back to the json string and save it to a file. But there is a problem with character encoding I guess. When I try to print the json string it prints fine in the console. But when I try to write it into a file, some characters appear such as \u0627\u0644\u0644\u06be\u064f I tried to open the saved file using different encodings and nothing has changed. It suppose to be in utf8 encoding and when I try to display it, those special characters should be replaced with actual characters they represent. Am I missing something here? How can I achieve this? more info: I'm using python 2.7 I open the file like this: json_file = open('test.json', 'w') I also tried this: json_file = codecs.open( 'test.json', 'w', 'utf-8' ) nothing has changed. I blindly tried, .encode('utf8'), .decode('utf8') on the json string and the result is the same. I tried different text editors to view the written text, I used cat command to see the text in the console and those characters which start with \u still appear. Update: I solved the problem. jsonEncoder has an option ensure_ascii If ensure_ascii is True (the default), all non-ASCII characters in the output are escaped with \uXXXX sequences, and the results are str instances consisting of ASCII characters only. I made it False and the problem has gone away.
Python unicode file writing
1.2
0
1
984
20,308,097
2013-12-01T02:33:00.000
4
0
0
0
python,mysql,mysql-python,pythonanywhere
20,309,286
1
true
1
0
It's normally because your MySQL network connection has been dropped, possibly by your network gateway/router, so you have two options. One is to always open a new MySQL connection before every query (not using a connection pool etc.). The second is to try/catch this error, then reconnect and query the DB again.
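A rough sketch of the second option using MySQLdb (the connection details are hypothetical): catch the OperationalError, reconnect once, and retry the query:

    import MySQLdb

    def connect():
        # hypothetical credentials
        return MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")

    conn = connect()

    def run_query(sql, params=()):
        global conn
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            return cur.fetchall()
        except MySQLdb.OperationalError:   # e.g. (2006, 'MySQL server has gone away')
            conn = connect()               # reconnect once and retry
            cur = conn.cursor()
            cur.execute(sql, params)
            return cur.fetchall()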
1
2
0
I am hosting a web app at pythonanywhere.com and experiencing a strange problem. Every half-hour or so I am getting the OperationalError: (2006, 'MySQL server has gone away'). However, if I resave my wsgi.py file, the error disappears. And then appears again some half-an-hour later... During the loading of the main page, my app checks a BOOL field in a 1x1 table (basically whether sign-ups should be open or closed). The only other MySQL actions are inserts into another small table, but none of these appear to be associated with the problem. Any ideas for how I can fix this? I can provide more information as is necessary. Thanks in advance for your help. EDIT Problem turned out to be a matter of knowing when certain portions of code run. I assumed that every time a page loaded a new connection was opened. This was not the case; however, I have fixed it now.
Periodic OperationalError: (2006, 'MySQL server has gone away')
1.2
1
0
2,432
20,308,674
2013-12-01T04:17:00.000
0
0
0
0
importerror,qpython
31,173,939
2
false
1
1
To install _csv or any other module, follow these steps. Here's what I did to install websocket (and all of its dependencies): To install websocket on the phone: Start QPython Click the big button Select “run local script” Select “pip_console.py” Type “pip install websocket”
2
0
1
I put google_appengine inside Android, located at /mnt/sdcard. I also put the wsgiref folder at the same location. From QPython I manage to "send control key" + "d" and I get sh $. I put in a command like the following: "$python /mnt/sdcard/google_appengine/appcfg.py" But I got ImportError: no module _csv. I feel these are not the same architecture: "/usr/lib/python2.7/lib-dynload/_csv.x86_64-linux-gnu.so" comes from Ubuntu 13.04. What do I do next? Where can I find the _csv module for the QPython+Android version? Is it possible to upload my code through Android?
ImportError: No module named _csv . Qpython for android logs
0
0
0
840
20,308,674
2013-12-01T04:17:00.000
0
0
0
0
importerror,qpython
21,482,814
2
false
1
1
You can install _csv from QPython's system. ( You can find system icon in qpython's new version 0.9.6.2 )
2
0
1
I put google_appengine inside Android, located at /mnt/sdcard. I also put the wsgiref folder at the same location. From QPython I manage to "send control key" + "d" and I get sh $. I put in a command like the following: "$python /mnt/sdcard/google_appengine/appcfg.py" But I got ImportError: no module _csv. I feel these are not the same architecture: "/usr/lib/python2.7/lib-dynload/_csv.x86_64-linux-gnu.so" comes from Ubuntu 13.04. What do I do next? Where can I find the _csv module for the QPython+Android version? Is it possible to upload my code through Android?
ImportError: No module named _csv . Qpython for android logs
0
0
0
840
20,308,788
2013-12-01T04:41:00.000
1
0
0
0
python,django
20,308,829
2
false
1
0
The views.py should be in the same folder as the urls.py. If not, then when importing views.py it's better to specify its exact path from the project root on the sys.path (the folder where you have the manage.py file), which will help.
1
1
0
I have a Django project that I wrote. I am testing it on the development server. It is saying that there is no module named views, even though the file views.py is one level higher than it in the directory views. Why is this? Does the views.py need to be in the same folder as the urls.py? Is that causing this?
Why does Django say there is no module named views?
0.099668
0
0
166
20,311,225
2013-12-01T10:36:00.000
3
0
1
0
python,python-2.7,argparse,command-line-arguments
20,314,797
1
false
0
0
Your best choice is to test for the presence of various combinations after parse_args and use parser.error to issue an argparse-compatible error message. And write your own usage line. And make sure the defaults clearly indicate whether an option has been parsed or not. If you can change the -a and -e options to command names like cmda or build, you could use subparsers. In this case you might define a command_a subparser that accepts -b, -c, and -d, and another command_e subparser that has none of these. This is the closest argparse comes to 'required together' groups of arguments. Mutually exclusive groups can define something with a usage like [-a -b -c], but that just means -b cannot occur along with -a and -c. There's nothing fancy about that mechanism. It just constructs a dictionary of such exclusions, and checks it each time it parses a new option. If there is a conflict it issues the error message and quits. It is not set up to handle fancy combinations, such as your (-e | agroup). Custom actions can also check for the absence or presence of non-default values in the namespace, much as you would after parsing. But doing so during parsing isn't any simpler, and it raises questions about order. Do you want to handle -b -c -a the same way as -a -c -b? Should -a check for the presence of the others, or should -b check that -a has already been parsed? Who checks for the presence or absence of -e? There are a number of other Stack Overflow questions about argparse groups, exclusive and inclusive, but I think these are the essential issues.
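A minimal sketch of the post-parse check described in the first sentence, covering the ([-a -b -c -d] | -e) requirement from the question:

    import argparse

    parser = argparse.ArgumentParser()
    for flag in ("-a", "-b", "-c", "-d", "-e"):
        parser.add_argument(flag)
    args = parser.parse_args()

    group = [args.a, args.b, args.c, args.d]
    given = [v for v in group if v is not None]
    if args.e is not None and given:
        parser.error("-e cannot be combined with -a/-b/-c/-d")
    if given and len(given) != len(group):
        parser.error("-a, -b, -c and -d must be given together")
    if args.e is None and not given:
        parser.error("either -e or the -a/-b/-c/-d group is required")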
1
0
0
I am using Python 2.7 and argparse for my script. I am executing the script as below: python2.7 script.py -a valuefora -b valueforb -c valueforc -d valueford Now what I want is that, if option -a is given, then the -b, -c, -d options should also be required. In addition to the above, I also want to make this group -a -b -c -d an EITHER/OR with -e, i.e. ([-a -b -c -d] | -e). Please correct me if I am wrong anywhere.
argparse: how to make group of options required only as group
0.53705
0
0
348
20,312,814
2013-12-01T13:48:00.000
1
0
1
0
python,ipython
20,312,836
2
false
0
0
Have you tried the %save magic: Save a set of lines or a macro to a given filename.
1
0
0
I've written code and many functions inside IPython and now I want to export them to a file in a structured way as in a legible script.py. Is there any mechanism in Ipython to provide such an opportunity?
Save the sequence of the written code to an export file in IPython?
0.099668
0
0
85
20,314,048
2013-12-01T15:55:00.000
2
1
0
0
python,web,amazon-web-services,server-communication
20,316,557
2
false
0
0
Since you are already using AWS, for something like this you could consider using AWS SQS to add a queue between the two hosts and communicate through it instead of directly. Using SQS, it would be easy to write a script to add messages to the SQS queue when something needs to be run on the other host, and equally easy for the second host to poll the queue looking for messages. Adding a queue between the two hosts decouples them, adds a bit of fault tolerance (i.e. one of the hosts could go offline for a bit without the messages being lost), and possibly makes it a bit easier to scale up if you need to (for example, if you ever needed multiple instances at AWS processing jobs from the other host, you could just add them and tell them to also poll the same queue, as opposed to building in a 1-1 'always on' dependency between the two). Lots of different ways to skin this cat, so maybe the approach above is overkill in your case, but I thought I'd throw it out there as something to consider.
1
2
0
I have two web hosts. One with a standard shared hosting provider, and the other: AWS Free Tier. I would like for these two servers to be able to communicate with one another. Basically, I would like the shared server to send some information to the AWS server, causing the AWS server to run certain scripts. I am familiar with python, so am wondering if there is some python library I can use to quickly cook up a script that would listen to a certain port (on AWS). Also, security is in issue: I want AWS to only listen to requests from a certain IP. Is this possible? What python libraries could I use for something like this? I am fairly new to web programming and wasn't able to google the solution to this. Thanks!
What options are there to allow one server to send information to another server?
0.197375
0
1
43
20,315,057
2013-12-01T17:29:00.000
3
0
0
0
python,xml,openerp,overwrite
20,322,373
1
true
1
0
Yes, you can. You just need to create a module with two files: one is __openerp__.py with the correct dependency on the base modules, and the other is an XML file for updating the menu name.
1
2
0
I want to change an openerp module menu name. I know how to do it, i'ts actually pretty easy, but this one is a core module sale and i don't want to touch it's code, because of updates issues and stuff. So, i'll need to inherit this view and change it's name, from another module, can i do this without a module.py or a full __init__.py file? Just the openerp manifest file and an xml to overwrite it? Thanks in advance!
Change menu name openerp
1.2
0
0
325
20,315,970
2013-12-01T19:03:00.000
1
0
1
1
java,python,jvm
20,315,993
1
false
1
0
Short of fixing the Python side so it doesn't do this, you can start a Java service which calls your code and have Python talk to it via TCP, e.g. using protobuf. This way the service can keep running all the time.
1
0
0
We've been working on a web application using Django. One library we needed to use was written in java, so I made a single jar file containing all the java code we need to use. The python script simply calls the java program using subprocess module and resumes its execution. Everytime the java program is called, it initializes the jvm, does a little work, and then uninitializes itself. This introduces some overhead which might not be that significant in the end but nevertheless having to go through this construct/destroy circle every time we need something from the java library bothers me. Is there an elegant way of doing this without the overhead i just described above?
Keeping JVM running
0.197375
0
0
469
20,317,477
2013-12-01T21:29:00.000
0
0
0
0
python,html,css,web-scraping,web-analytics
20,317,674
2
false
0
0
It sounds like you want to write a spider to do breadth-first search from the first url until you find a link to the second url. I suggest you look at the Scrapy package; it makes it very easy to do.
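Scrapy is the heavier-duty option; as a rough illustration of the same breadth-first idea using requests and BeautifulSoup instead (the URLs, timeout and depth limit are hypothetical, and a real crawl would also need politeness/robots handling):

    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def click_depth(start_url, target_url, max_depth=3):
        """Return the number of clicks from start_url to target_url, or None if not found."""
        seen = {start_url}
        queue = deque([(start_url, 0)])
        while queue:
            url, depth = queue.popleft()
            if url == target_url:
                return depth
            if depth >= max_depth:
                continue
            try:
                html = requests.get(url, timeout=10).text
            except requests.RequestException:
                continue
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link = urljoin(url, a["href"])
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
        return None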
1
2
0
I am trying to find a way to extract the depth of a website using python. Depth of a subwebsite is equal to the number of clicks required from the main website (e.g. www.ualberta.ca) in order for a user to get to the subwebsite (e.g. www.ualberta.ca/beartracks). so for instance if it takes one additional click to get to a subwebsite from the main domain, the depth of the subwebsite would be 1. is there anyway for me to measure this using python? thank you!
measuring a depth of a website using python
0
0
1
470
20,320,642
2013-12-02T04:05:00.000
1
0
0
0
python,python-2.7,sqlite,sqlalchemy
20,320,905
1
true
0
0
Aren't you over-optimizing? You don't need the best solution, you need a solution which is good enough. Implement the simplest one, using dicts; it has a fair chance to be adequate. If you test it and then find it inadequate, try SQLite or Mongo (both have downsides) and see if it suits you better. But I suspect that buying more RAM instead would be the most cost-effective solution in your case. (Not-a-real-answer disclaimer applies.)
1
2
0
Have some programming background, but in the process of both learning Python and making a web app, and I'm a long-time lurker but first-time poster on Stack Overflow, so please bear with me. I know that SQLite (or another database, seems like PostgreSQL is popular) is the way to store data between sessions. But what's the most efficient way to store large amounts of data during a session? I'm building a script to identify the strongest groups of employees to work on various projects in a company. I have received one SQLite database per department containing employee data including skill sets, achievements, performance, and pay. My script currently runs one SQL query on each database in response to an initial query by the user, pulling all the potentially-relevant employees and their data. It stores all of that data in a list of Python dicts so the end-user can mix-and-match relevant people. I see two other options: I could still run the comprehensive initial queries but instead of storing it in Python dicts, dump it all into SQLite temporary tables; my guess is that this would save some space and computing because I wouldn't have to store all the joins with each record. Or I could just load employee name and column/row references, which would save a lot of joins on the first pass, then pull the data on the fly from the original databases as the user requests additional data, storing little if any data in Python data structures. What's going to be the most efficient? Or, at least, what is the most common/proper way of handling large amounts of data during a session? Thanks in advance!
What's faster: temporary SQL tables or Python dicts for session data?
1.2
1
0
2,649
20,323,084
2013-12-02T07:38:00.000
1
1
0
1
python,python-2.7,gcc,python-3.x,include-path
20,325,445
1
true
0
0
GCC probably prefers the 3.3 version if it's installed as the default that's run when you call 'python' without a version? You could always point that binary at the 2.7 to make it the default on your system.. Looking at the m4 source, seems like you might be able to do the following on one line: PYTHON=/path/to/python2.7 PYTHON_INCLUDES="-I/usr/include/python2.7" ./configure --prefix /bla/bla
1
3
0
I am trying to compile a code which uses the Python.h header. In fact it is the lcm library. Now, I have Python2.7 and Python3.3 installed on my system. The respective header filer are found in /usr/include/python2.7/ and /usr/include/python3.3m/. The problem is that the code needs the 2.7 version, but gcc always prefers the 3.3 version. I tried setting ./configure --prefix /bla/bla CPPFLAGS=-I/usr/include/python2.7/ and export C_INCLUDE_PATH=/usr/include/python2.7, none of which worked. An intermediate workaround is to change the code to #include <python2.7/Python.h> but that makes it unportable, so it will not serve as a fix for the lcm people... There must be a way!!!
Python3.3 header preferred over Python2.7 header by gcc
1.2
0
0
806
20,324,828
2013-12-02T09:29:00.000
0
1
0
1
python,django,ubuntu
20,324,883
2
true
1
0
Sounds like an issue with your path - Python is not finding Django because it doesn't know where to look for it. Look up issues regarding the path and see if those help.
1
0
0
and thanks ahead of time. I am relatively new to Linux and am using Ubuntu 12.04.3. Basically, I've been messing around with some files trying to get Django to work. Well, I though I should do another install of Python2.7 for some reason. Stupidly, I manually installed it. Now when I open the Python shell and do 'import django', it can't be found. I just want to go back to using the Python that was on Ubuntu by default, or overwrite the one I installed manually with one using apt-get. However, I am unable to figure out how to do this nor have I found a question that could help me. Any help is much appreciated. I've been working on this for 6 hours now... --EDIT-- Ok well I'm just trying to go ahead and have the PYTHONPATH look in the right place. I've seen in other posts that you should do this in the ~/.profile file. I went into that file and added this line export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/dist-packages "import django" is still coming up with "no module found" I tried doing "import os" and then "os.environ["PYTHONPATH"], which gave me: Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python2.7/UserDict.py", line 23, in getitem raise KeyError(key) KeyError: 'PYTHONPATH' As far as I can tell, this means that I do not have a PYTHONPATH variable set, but I am unsure as to what I am doing wrong. --ANOTHER EDIT-- As I am not a very reputable member, I am not allowed to answer my own question before 8 hours from my original question, so I am putting it as an update. Hey guys, thank you all for the quick responses and helpful tips. What I did was open a python shell and type: sys.path.append('/usr/local/lib/python2.7/dist-packages') and it worked! I should have done this from the beginning instead of trying to overwrite my manual Python installation. Once again, thank you all for the help. I feel so relieved now :)
I need to overwrite an existing Python installation in ubuntu 12.04.3
1.2
0
0
533
20,325,738
2013-12-02T10:15:00.000
2
0
1
0
python,performance,coding-style,standards
20,325,839
2
false
0
0
The first one is the better of the two. The second one would get tripped up if the filename had two or more periods in it.
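A quick illustration of the difference on a name with two dots:

    import os.path

    name = "archive.tar.gz"
    os.path.splitext(name)[0] + ".new"   # 'archive.tar.new' -- strips only the last extension
    name.split('.')[0] + ".new"          # 'archive.new'     -- everything after the first dot is lost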
1
1
0
I am trying to change the extension of a file, and I have two options. os.path.splitext(os.path.basename(g_filename))[0] + ".new" os.path.basename(g_filename).split('.')[0] + ".new" Both give the same output, so I am getting a new file called oldfile.new from oldfile.old. There is no possibility of having too many '.' in the file name. Which of these two is better? What is the rule of thumb (if any)?
Python-Which command is better for changing the file extension?
0.197375
0
0
117
20,331,622
2013-12-02T15:16:00.000
0
0
0
0
python,pywinauto
20,384,256
1
false
0
1
ListView in pywinauto is inherited from HwndWrapper, which has a DoubleClick() method; try it. Also make sure you've tried Select(item) for ListView. (You've mentioned select().)
1
0
0
I want to use pywinauto to write an auto-click script. I know I can use app.ListView2.Select(4) to select an item. So I tried to use select() or Check(), but it's not working. How can I doubleclick an item?
How to doubleclick a listview item with pywinauto?
0
0
0
2,317
20,332,320
2013-12-02T15:53:00.000
4
0
1
0
python,pycharm,built-in
20,691,051
3
false
0
0
I was getting a similar issue. @property and ValueError were being shown as 'undefined'. I had mucked about with PyCharm's interpreter settings a bit beforehand, and I was able to fix it by using the File -> Invalidate Caches / Restart... and choosing the "Invalidate and Restart" command.
1
9
0
I'm using PyCharm and just trying out some simple stuff. When I try to use raw_input(), the editor is showing an unresolved reference error. I'm not sure what the issue is. Has anyone seen this before?
pycharm builtin unresolved reference
0.26052
0
0
13,327
20,334,880
2013-12-02T18:05:00.000
1
0
1
0
python
20,334,991
3
false
0
0
I also use __all__: that explicitly tells module users what you intend to export. Searching the module for names is tedious, even if you are careful to do, e.g., import os as _os, etc. A wise man once wrote "explicit is better than implicit" ;-)
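A small sketch of using both conventions together in one module (the module and the names are hypothetical):

    # mymodule.py
    __all__ = ["public_func"]      # explicit export list for "from mymodule import *"

    import os as _os               # underscore prefix: hidden from star-imports either way

    def public_func():
        return _helper()

    def _helper():                 # implementation detail, not part of the public API
        return _os.getcwd()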
2
2
0
From the perspective of an external user of the module, are both necessary? From my understanding, correctly prefixing hidden functions with an underscore essentially does the same thing as explicitly defining __all__, but I keep seeing developers doing both in their code. Why is that?
Should I define __all__ even if I prefix hidden functions and variables with underscores in modules?
0.066568
0
0
245
20,334,880
2013-12-02T18:05:00.000
0
0
1
0
python
20,334,984
3
false
0
0
Defining __all__ will override the default behaviour. There actually might be a few reasons to define __all__. When importing a module, you might want from mod import * to import only a minimal number of things. Even if you prefix everything correctly, there could be reasons not to import everything. Another problem I had once was defining a gettext shortcut. The translation function was _, which would not get imported. Even though it is prefixed "in theory", I still wanted it to get exported. One other reason, as stated above, is importing a module that itself imports a lot of things. As Python cannot tell the difference between symbols created by imports and the ones defined in the actual module, it will automatically re-export everything that can be re-exported. For that reason, it can be wise to explicitly limit what is exported to the things you want to export. With that in mind, you might also want to have some prefixed symbols exported by default. Usually, you don't need to define __all__; whenever you need it to do something unusual, then it may make sense to do it.
2
2
0
From the perspective of an external user of the module, are both necessary? From my understanding, correctly prefixing hidden functions with an underscore essentially does the same thing as explicitly defining __all__, but I keep seeing developers doing both in their code. Why is that?
Should I define __all__ even if I prefix hidden functions and variables with underscores in modules?
0
0
0
245
20,337,169
2013-12-02T20:27:00.000
1
0
1
0
python,import
20,337,294
1
true
0
0
Modules that are imported more than once are generally only initialized once, and the namespace is introduced into the scope of the importing module. So in your example above there is one fruit class and the two classes that inherit from it, and if you were to introduce 3 varieties of apple there would still only be one underlying fruit class. This is how professional packages do it. In other languages like C/C++ you need to use guards to prevent multiple inclusion; Python does it for you.
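A quick way to see that caching in action through the interpreter's sys.modules registry:

    import sys

    import json                      # first import: the module is initialized and cached
    first = sys.modules["json"]

    import json                      # second import: no re-initialization, the cached object is reused
    second = sys.modules["json"]

    assert first is second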
1
1
0
For instance, consider 3 modules namely "apple", "orange" and "fruit". Module "apple" imports "orange" and "fruit". Module "orange" imports "fruit" only. Since "fruit" is common for both, could this be done in a different way? Is this inefficient in terms of memory usage and speed? I wonder how this is done in professionally distributed packages. Say, if a standard library module (viz. httplib) is needed throughout various modules that has GUI code and other complicated stuff. importing this module in every GUI file would be impractical, wouldn't it?
Import in python, two modules sharing common resource
1.2
0
0
131
20,339,183
2013-12-02T22:28:00.000
9
0
1
0
python,setuptools,setup.py
20,339,277
1
true
0
0
develop creates an .egg-link file in the site-packages directory, which points back to the location of the project files. The same path is also added to the easy-install.pth file in the same location. Uninstalling with setup.py develop -u removes that link file again. Do note that any install_requires dependencies not yet present are also installed, as regular eggs (they are easy_install-ed). Those dependencies are not uninstalled when uninstalling the development egg.
1
10
0
I am trying to improve my workflow when developing python modules and have a rather basic question. What exactly happens when choosing either option. To my knowledge develop leaves the files in place so I can modify them and play around with the package whereas install copies them in the site-packages folder of my python installation. How is the package linked to my python installation when using the develop option.
Difference between setup.py install and setup.py develop
1.2
0
0
3,236
20,339,505
2013-12-02T22:51:00.000
3
1
1
0
python,git,github,version,fileupdate
20,339,747
1
true
0
0
Assuming all the computers you are setting this up on have accurate times, you could have it create a timestamp file for each dotfile, which just contains the local modified time of the dotfile. Then you can compare the local timestamp with the remote one. You could also just do the commit locally and try to merge with the remote branch. If the merge succeeds then assume it's ok. If it failed then there were two different changes to the same part of a file and the conflict needs to be resolved, in which case you notify yourself somehow. A potentially simpler and less error prone solution (because it's more manual) would be to have your dotfiles be symlinked to the dotfiles in the git. Then when you edit a dotfile the git one is updated, and you can manually commit and push the change easily.
1
0
0
I'm writing a python script to keep my dotfiles up to date with a repository on GitHub. It copies the dot files into a separate directory ( ~/dotfiles ) so that my home directory is not a git repo. Before copying the files, it does a filecmp.cmp( fileInLocalRepo, fileInHomeDir ) to see if the file has changed since it was last copied into the local repo. Once all the files are updated, if there have been any changes the changed files are pushed to GitHub. That works fine until I start updating dot files from more than one computer; then older files could potentially overwrite my remote ones. If I pull the files down to my local dotfiles repo first, filecmp.cmp() will still say the files are different and the script will overwrite the pulled-down file with the local one, then push because it thinks there was a change. Is there any way I can figure out which file is actually newer? I know that git doesn't preserve update times in file properties, so I can't use that. How can I pull down files from GitHub to a local repo ( ~/dotfiles ) and then compare them with the same dot files that are in my home directory to see which version of each file is actually newer?
Best way to compare remote git file versions with local file versions?
1.2
0
0
311
20,343,484
2013-12-03T05:21:00.000
3
0
1
0
python
20,343,624
1
false
0
0
The best I can tell you is importing time and using time.sleep(60)
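A minimal sketch of that; threading.Timer is another standard-library option if the repeated work should not block the main thread:

    import time

    def job():
        pass                 # hypothetical work to repeat

    while True:
        job()
        time.sleep(60)       # wait one minute between runs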
1
0
0
I need to execute a function in Python repeatedly, at intervals of one minute. Is there an alternative to the "busy wait" method? Maybe something like a timer that wakes up every minute.
Timed execution of a function
0.53705
0
0
62
20,346,189
2013-12-03T08:31:00.000
23
0
1
0
python,django,import
20,346,290
3
true
1
0
You surely must have noticed that almost all Python code does the imports at the top of the file. There's a reason for that: the overhead of importing is minimal, and the likelihood is that you will be importing the code at some point during the process lifetime anyway, so you may as well get it out of the way. The only good reason to import at function level is to avoid circular dependencies. Edit Your comments indicate you haven't understood how web apps generally work, at least in Python. They don't start a new process for each request and import the code from scratch. Rather, the server instantiates processes as required, and each one serves many requests until it is finally killed. So again it is likely that during that lifetime, all the imports will end up being needed.
2
17
0
I am developing an app in django and I had a doubt if importing a library at the global level has any impact over the memory or performance than importing at the local ( per-function) level. If it is imported per function or view, the modules that are needed alone are imported saving space right? Or are there any negatives in doing so?
Which is a better practice - global import or local import
1.2
0
0
4,974
20,346,189
2013-12-03T08:31:00.000
-2
0
1
0
python,django,import
20,346,377
3
false
1
0
When the script is run, it will store the modules in memory, i'm sure you understand that. If you're importing on a local scope, the module will be imported each time the client calls the function. But if the module is imported at a global level, there will be no need for that! So, in this case: global import wins.
2
17
0
I am developing an app in django and I had a doubt if importing a library at the global level has any impact over the memory or performance than importing at the local ( per-function) level. If it is imported per function or view, the modules that are needed alone are imported saving space right? Or are there any negatives in doing so?
Which is a better practice - global import or local import
-0.132549
0
0
4,974
20,346,778
2013-12-03T09:02:00.000
2
0
0
0
python,mysql,django,ftp
20,346,988
1
true
1
0
ftp stands for "file transfer protocol", not for "remote shell", so no, you cannot use ftp to execute a command / program / script / whatever. But why don't you just ask your hosting how to get a dump of your data ?
1
0
0
I am a newbie to python and django, but this time I need a fast solution. I've got a problem with the hosting where my django application is deployed, so I need to migrate to another server, but I have no ssh or telnet access to the server, only an ftp connection. I need to export data from the django database. I wanted to write a script and put it somewhere in the django application to export the data, but when I put my modification on the server the behavior does not change (as if nothing changed). Also when I remove .pyc files from django (for example views.pyc) there are no changes, and when I remove .py files (for example views.py) nothing changes either. As far as I have read about django, it is possible that the server is running with the "-noreload" option. So the question is: is there any possible way to dump the database only via ftp and django/python? (remote connection via mysql is disabled)
Is it possible to run scripts on django using only ftp?
1.2
0
0
219
20,347,235
2013-12-03T09:24:00.000
0
1
0
0
python,mechanize,mechanize-python
22,406,798
1
true
1
0
Answering my own question. The browser giving an alert message simply means that the node is injected into the DOM. By looking for the string that I injected in the response body, I could determine whether the given input is reflected by the browser without proper sanitization.
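A rough sketch of that check with mechanize; the URL, the form index and the field name are invented for illustration:

import mechanize

MARKER = 'xss-probe-42<script>alert(1)</script>'   # string we expect to see reflected

br = mechanize.Browser()
br.set_handle_robots(False)
br.open('http://target.example/comment')   # hypothetical page with the form under test
br.select_form(nr=0)                       # pick the first form on the page
br['comment'] = MARKER                     # hypothetical input field name
response = br.submit()

if MARKER in response.read():
    print('input is reflected without sanitization')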
1
0
0
I am trying to develop a small automated tool in python that can check Forms inputs of a web application for XSS vulnerability. I hope to do this using python mechanize library so that I can automate form filling and submit and get the response from the python code. Though mechanize is also works as a browser, is there a way to detect a browser alert message for an input containing a script. Or else is there any other library for python such that I can perform this functionality. Any sample code will be a great favor. PS : I am trying to develop this so that I can find them in an application we are developing and include them in a report and NOT for Hacking purpose. Thank you.
Identify Browser alert messges in Mechanize - Python
1.2
0
1
244
20,347,467
2013-12-03T09:37:00.000
1
0
1
0
python,file,text
20,347,640
2
false
0
0
Have you debugged your code? You should step through it and see whether it returns the Entry box value or not; if it returns a value, check what value it returns.
1
0
0
I read some strings (usernames) from a file into my Python program, then test them in an if statement to see if they are equal to the value entered in the Entry box. However, the value from the Entry box and the value from the file do not compare as equal in the if statement, even though they look the same to me.
Strings from text file not equating in if statement with program strings in Python
0.099668
0
0
84
20,348,584
2013-12-03T10:28:00.000
0
0
0
0
python,mysql,sql,database,mysql-python
20,348,851
2
false
0
0
Not sure if I understand what it is you want to do. You want to match a value from a column from one table to a value from a column from another table? If you'd have the data in two tables in a database, you could make an inner join. Depending on how big the file is, you could use a manual comparison tool like WinMerge.
2
0
0
I have two databases (infact two database dump ... db1.sql and db2.sql) both database have only 1 table in each. in each table there are few columns (not equal number nor type) but 1 or 2 columns have same type and same value i just want to go through both databases and find a row from each table so that they both have one common value now from these two rows(one from each table) i would extract some information and would write into a file. I want efficient methods to do that PS: If you got my question please edit the title EDIT: I want to compare these two tables(database) by a column which have contact number as primary key. but the problem is one table has it is as an integer(big integer) and other table has it is as a string. now how could i inner-join them. basically i dont want to create another database, i simply want to store two columns from each table into a file so I guess i dont need inner-join. do i? e.g. in table-1 = 9876543210 in table-2 = "9876543210"
Compare two databases and find common value in a row
0
1
0
1,032
20,348,584
2013-12-03T10:28:00.000
0
0
0
0
python,mysql,sql,database,mysql-python
20,348,719
2
false
0
0
You can use Join with alias name.
2
0
0
I have two databases (infact two database dump ... db1.sql and db2.sql) both database have only 1 table in each. in each table there are few columns (not equal number nor type) but 1 or 2 columns have same type and same value i just want to go through both databases and find a row from each table so that they both have one common value now from these two rows(one from each table) i would extract some information and would write into a file. I want efficient methods to do that PS: If you got my question please edit the title EDIT: I want to compare these two tables(database) by a column which have contact number as primary key. but the problem is one table has it is as an integer(big integer) and other table has it is as a string. now how could i inner-join them. basically i dont want to create another database, i simply want to store two columns from each table into a file so I guess i dont need inner-join. do i? e.g. in table-1 = 9876543210 in table-2 = "9876543210"
Compare two databases and find common value in a row
0
1
0
1,032
20,349,474
2013-12-03T11:08:00.000
3
0
0
0
python,django
20,349,538
3
false
1
0
You run the runserver command only when you develop. After you deploy, the client does not need to run python manage.py runserver command. Calling the url will execute the required view. So it need not be a concern
1
2
0
I'm developing some Python project with Django. When we render the Python/Django application, we need to open the command prompt and type in python manage.py runserver. That's ok on for the development server. But for production, it looks funny. Is there anyway to run the Python/Django project without opening the command prompt?
How to run Python server without command prompt
0.197375
0
0
1,565
20,350,171
2013-12-03T11:40:00.000
0
0
1
0
python,python-2.7,scipy,integrate
61,066,320
1
false
0
0
I had the same problem. My issue was that python-2.7 would not let me import scipy.integrate, but python-3.x would allow the import.
1
3
0
I have a reoccurring issue with importing the integrate module of scipy. Periodically, I get the Error message "ImportError: cannot import name integrate". Usually, I use the statement import scipy.integrate to import the module. Only using import scipy successfully imports scipy but without the integrate module. The funny thing is that this behavior can change each time I start Python. So sometimes it works fine even when the same script is run. Anybody has any suggestions?
Problems importing scipy.integrate module
0
0
0
2,625
20,350,989
2013-12-03T12:17:00.000
0
0
1
0
python,python-2.7,floating-point,floating-point-precision
20,351,305
2
false
0
0
The struct module can handle 64-bit floats. Decimals are another matter - the binary representation is a string of digits. Probably not what you want. You could convert it to BCD to halve the amount of storage.
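A small sketch of the struct approach; the file name and the sample value are arbitrary:

import struct

value = 3.141592653589793

# 'd' is a 64-bit IEEE-754 double; '<' pins the byte order to little-endian
with open('floats.bin', 'wb') as f:
    f.write(struct.pack('<d', value))

with open('floats.bin', 'rb') as f:
    (restored,) = struct.unpack('<d', f.read(8))

print(restored == value)    # True: the bit image round-trips exactly

numpy arrays of float64 can likewise be written bit-for-bit with arr.tofile() and read back with numpy.fromfile().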
1
0
1
In Python 2.7,I need to record high precision floats (such as np.float64 from numpy or Decimal from decimal module) to a binary file and later read it back. How could I do it? I would like to store only bit image of a high precision float, without any overhead. Thanks in advance!
How to store high precision floats in a binary file, Python 2.7?
0
0
0
239
20,355,886
2013-12-03T16:01:00.000
1
0
1
0
python,json
20,356,067
3
false
0
0
The problem is that the data structure has a list enclosing the dictionaries. If you have any control over the data source, that's the place to fix it. Otherwise, the best course is probably to post-process the data after parsing it to eliminate these extra list structures and merge the dictionaries in each list into a single dictionary. If you use an OrderedDict you can even retain the ordering of the items (which is probably why the list was used).
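A sketch of that post-processing step, reusing the file path and keys from the question:

import json
from collections import OrderedDict

with open('/home/teg/projects/test/fruits.json') as fjson:
    data = json.load(fjson)

# collapse the list of single-key dicts into one ordered mapping
fruits = OrderedDict()
for entry in data['fruits']:
    fruits.update(entry)

if 'Orange' in fruits:
    print(fruits['Orange'])        # the list of {"type": ..., "quant": ...} records
else:
    print('Orange does not exist')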
1
0
0
First, here is a sample JSON feed that I want to read in Python 2.7 with either simplejson or the built in JSON decoder. I am loading the .json file in Python and then searching for a key like "Apple" or "Orange" and when that key is found, I want to bring in the information for it like the types and quantities. Right now there is only 3 items, but I want to be able to search one that may have up to 1000 items. Here is the code: { "fruits": [ { "Apple": [ { "type": "Gala", "quant": 5 }, { "type": "Honeycrisp", "quant": 10 }, { "type": "Red Delicious", "quant": 4 } ] }, { "Banana": [ { "type": "Plantain", "quant": 5 } ] }, { "Orange": [ { "type": "Blood", "quant": 3 }, { "type": "Navel", "quant": 20 } ] } ] } My sample Python code is as follows: import simplejson as json # Open file fjson = open('/home/teg/projects/test/fruits.json', 'rb') f = json.loads(fjson.read()) fjson.close() # Search for fruit if 'Orange' in json.dumps(f): fruit = f['fruits']['Orange'] print(fruit) else: print('Orange does not exist') But whenever I test it out, it gives me this error: TypeError: list indices must be integers, not str Was it wrong to have me do a json.dumps and instead should I have just checked the JSON feed as-is from the standard json.loads? I am getting this TypeError because I am not specifying the list index, but what if I don't know the index of that fruit? Do I have to first search for a fruit and if it is there, get the index and then reference the index before the fruit like this? fruit = f['fruits'][2]['Orange'] If so, how would I get the index of that fruit if it is found so I could then pull in the information? If you think the JSON is in the wrong format as well and is causing this issue, then I am up for that suggestion as well. I'm stuck on this and any help you guys have would be great. :-)
Python: TypeError in referencing item in JSON feed
0.066568
0
0
65
20,359,021
2013-12-03T18:36:00.000
3
0
1
0
python,encryption,dictionary
20,359,187
2
false
0
0
Searching for English words in a block of undifferentiated text like that is certainly possible, and doing it efficiently is a genuinely interesting problem. But it's problematic for lots of reasons, just one of which is that your encrypted text may include text that randomly happens to form an English word completely by chance. For example, just the text you've posted here includes HELL, IF, LIP, LOG, ASP and probably others. You could trim down the alternatives by only searching for words the same length as your target word, and only for words with the same letter pattern. But that's really quite a lot of work to get around the fact that your initial output has a lot of useless data in it. You can check easily to find out whether a specific word is in an English dictionary by doing this: Read in the lines from a dictionary file (/usr/share/dict/words on most systems). Strip whitespace, convert to lowercase and store each line in a Python dictionary. After decrypting each word, check to see if it's present as a key in the Python dictionary. Taking that approach probably makes a lot more sense than trying to grovel through the unspaced initial output.
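A minimal version of that dictionary check; the candidate strings are made-up decryption attempts:

candidates = ['KHOOR', 'HELLO', 'GDKKN']

with open('/usr/share/dict/words') as f:
    english = set(line.strip().lower() for line in f)

for word in candidates:
    if word.lower() in english:
        print(word + ' is an English word')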
1
4
0
I am using python to build a Caesar cipher decrypter, It works and decrypts the already encrypted word. However, it shows all its brute force decryption attempts, for example, "HELLO" encrypted with a key of 3 is KHOOR. The results after the decryption is "KHOORJGNNQIFMMPHELLOGDKKNFCJJMEBIILDAHHKCZGGJBYFFIAXEEHZWDDGYVCCFXUBBEWTAADVSZZCURYYBTQXXASPWWZROVVYQNUUXPMTTWOLSSVNKRRUMJQQTLIPPS" I am wondering if there is a way to use a dictionary with Python to search for an English word in this output or can I improve my code to only print out known English words. Apologies if this has been asked before, I searched around and couldn't seem to find the right thing.
Finding words in a non-spaced paragraph?
0.291313
0
0
576
20,360,686
2013-12-03T20:09:00.000
24
0
1
0
python,windows,multiprocessing
20,360,812
2
false
0
0
The multiprocessing module works by creating new Python processes that will import your module. If you did not add __name__== '__main__' protection then you would enter a never ending loop of new process creation. It goes like this: Your module is imported and executes code during the import that cause multiprocessing to spawn 4 new processes. Those 4 new processes in turn import the module and executes code during the import that cause multiprocessing to spawn 16 new processes. Those 16 new processes in turn import the module and executes code during the import that cause multiprocessing to spawn 64 new processes. Well, hopefully you get the picture. So the idea is that you make sure that the process spawning only happens once. And that is achieved most easily with the idiom of the __name__== '__main__' protection.
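A minimal illustration of the guard; the worker function is arbitrary:

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    # only the original process reaches this block; the spawned workers
    # merely import the module and find the function definition above
    pool = Pool(4)
    print(pool.map(square, range(10)))
    pool.close()
    pool.join()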
2
24
0
While using multiprocessing in python on windows, it is expected to protect the entry point of the program. The documentation says "Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such a starting a new process)". Can anyone explain what exactly does this mean ?
Compulsory usage of if __name__=="__main__" in windows while using multiprocessing
1
0
0
15,856
20,360,686
2013-12-03T20:09:00.000
36
0
1
0
python,windows,multiprocessing
20,361,032
2
true
0
0
Expanding a bit on the good answer you already got, it helps if you understand what Linux-y systems do. They spawn new processes using fork(), which has two good consequences: All data structures existing in the main program are visible to the child processes. They actually work on copies of the data. The child processes start executing at the instruction immediately following the fork() in the main program - so any module-level code already executed in the module will not be executed again. fork() isn't possible in Windows, so on Windows each module is imported anew by each child process. So: On Windows, no data structures existing in the main program are visible to the child processes; and, All module-level code is executed in each child process. So you need to think a bit about which code you want executed only in the main program. The most obvious example is that you want code that creates child processes to run only in the main program - so that should be protected by __name__ == '__main__'. For a subtler example, consider code that builds a gigantic list, which you intend to pass out to worker processes to crawl over. You probably want to protect that too, because there's no point in this case to make each worker process waste RAM and time building their own useless copies of the gigantic list. Note that it's a Good Idea to use __name__ == "__main__" appropriately even on Linux-y systems, because it makes the intended division of work clearer. Parallel programs can be confusing - every little bit helps ;-)
2
24
0
While using multiprocessing in python on windows, it is expected to protect the entry point of the program. The documentation says "Make sure that the main module can be safely imported by a new Python interpreter without causing unintended side effects (such a starting a new process)". Can anyone explain what exactly does this mean ?
Compulsory usage of if __name__=="__main__" in windows while using multiprocessing
1.2
0
0
15,856
20,362,190
2013-12-03T21:36:00.000
0
0
0
0
python,pygame
20,385,564
2
false
0
1
Just to add a bit to furas' answer. Since this function is called get_ticks, you need to call it to actually get the ticks. There is one more thing that troubles me: you are trying to blit an integer to the screen, and that is not possible. What you need to do is create a surface from the string value of the milliseconds, and blit that.
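One way that rendering step can look, assuming a display surface already exists as in the question; the window size, font size and colour are placeholders:

import pygame

pygame.init()
window = pygame.display.set_mode((200, 100))
font = pygame.font.Font(None, 36)                  # default pygame font

milliseconds = pygame.time.get_ticks()             # note the (): call the function
text = font.render(str(milliseconds), True, (255, 255, 255))
window.blit(text, (0, 0))                          # blit a Surface, not an int
pygame.display.flip()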
1
0
0
I have set time=pygame.time.get_ticks, and then when blitting things to the screen I try window.blit(time, (0,0)), but where the time is supposed to appear it just says "built-in function". Why is this?
Trouble with pygame.time.get_ticks
0
0
0
244
20,363,655
2013-12-03T23:14:00.000
0
0
0
0
python,selenium
20,363,967
1
false
0
0
I'm guessing you need a $DISPLAY variable, and xauth or xhost. Selenium depends on a browser, and your browser on Linux depends on X11. $DISPLAY tells X11 where to find the X server (the thing that renders the graphics - usually this is on the computer you're sitting in front of), and xauth or xhost tells the remote host how to authenticate to the X server. If you're using putty to connect to the Linux host (or other X11-less ssh client), you'll probably need to install an X server on the machine you're sitting in front of, and then use Cygwin ssh -Y to forward xauth creds to the remote host. Another option that works pretty well for many people is to use VNC. This allows you to reboot the machine you're sitting in front of, without interrupting your Selenium tests. There are many interoperable VNC client/servers. You can easily test your X11 communication by just running "xterm &" or "xdpyinfo". If this displays a command window on the machine you're sitting in front of, X11's set up.
1
0
0
I am trying to run Python code on a Linux server, and my code involves running Selenium. Soon after I started running the code, the following error popped up: The browser appears to have exited before we could connect. The output was: Error: cannot open display: I installed firefox and selenium, but for some reason the error keeps popping up. How can I solve this issue? Thank you
running a python code that includes selenium module gives an error
0
0
1
64
20,365,356
2013-12-04T01:52:00.000
1
0
1
0
python,json,http,flask
20,365,707
1
true
1
0
I never needed to do something like this myself, but you can probably do this by subclassing the Response class. Let's say you create a JSONResponse class as a subclass of Response. The constructor of this class takes the same arguments as the parent class, but instead of a string for the body it takes a dictionary. So you do not call jsonify() at this point, just pass the dictionary with the data into the JSONResponse object. The response object takes the dictionary and stores it aside in a member variable, and then calls the parent response class and sets an empty response body for now. I think this will fool Flask into thinking the response is valid. When you get to the after_request handler you have access to the response object, so you can get the dictionary, still in its native form, and make any modifications you want. The JSONResponse class has a method that is called, say, render() that you call when you are done modifying the dictionary. This method calls jsonify() on the final version of the data and updates the response body in the parent class.
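A rough sketch of that subclass idea; the route, the payload keys and the class name are all invented, and this is only one possible way to wire it up:

import json
from flask import Flask, Response

app = Flask(__name__)

class JSONResponse(Response):
    # keeps the payload as a plain dict until after_request has had its say
    def __init__(self, payload, **kwargs):
        self.payload = payload
        super(JSONResponse, self).__init__('', mimetype='application/json', **kwargs)

@app.route('/item')
def item():
    return JSONResponse({'name': 'example'}, status=200)

@app.after_request
def render_json(response):
    if isinstance(response, JSONResponse):
        response.payload['status_code'] = response.status_code
        response.set_data(json.dumps(response.payload))   # serialize only now
    return response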
1
0
0
I'm (attempting) to create a RESTlike API via Flask and would like to include the HTTP response code inside the returned json response. I've found the flask.jsonify() very helpful in generating my responses and I can manipulate the response via the app.after_request decorator. The issue I have is that the response data is already serialized by the time I can read it in the function I decorate with app.after_request so to insert the response.status_code would require de-serializing and re-serializing every request. I'm very new to flask and am unsure of the 'correct' way to get the status_code into the response, preferably before it gets serialized into a completed response.
Modifying JSON response via Flask after_request
1.2
0
0
1,188
20,366,522
2013-12-04T03:56:00.000
0
0
0
0
python,selenium,phantomjs
26,636,714
2
false
1
0
Not sure what version of Ghostdriver you are on, but I got that error on 1.9.7 until I upgraded to selenium 2.40
2
2
0
I am running a code that has selenium component, which requires phantomJS. I am getting the following error message: Unable to start phantomjs with ghostdriver.' ; Screenshot: available In my code, I specified my phantomJS path(the bin path), but such measure didn't work. I have placed the phantomJS-osx folder at the same location as my folder for selenium - would it be the cause of my problem? thanks
Unable to start phantomjs with ghostdriver
0
0
1
2,389
20,366,522
2013-12-04T03:56:00.000
-2
0
0
0
python,selenium,phantomjs
20,742,773
2
false
1
0
This is a bug. Please use selenium 2.37.2
2
2
0
I am running a code that has selenium component, which requires phantomJS. I am getting the following error message: Unable to start phantomjs with ghostdriver.' ; Screenshot: available In my code, I specified my phantomJS path(the bin path), but such measure didn't work. I have placed the phantomJS-osx folder at the same location as my folder for selenium - would it be the cause of my problem? thanks
Unable to start phantomjs with ghostdriver
-0.197375
0
1
2,389
20,366,533
2013-12-04T03:57:00.000
2
0
0
1
python,gtk,mayavi,spectral
20,513,075
1
true
0
1
While using OS X Mavericks one has to use: ipython --pylab=wx instead of ipython --pylab=osx to avoid crashing the X11 window. I don't know why this works.
1
4
0
I updated my MacBook to Mavericks, reinstalled Macports and all Python 2.7 modules I usually use. While running Python I get the following messages: when importing mlab: from mayavi import lab (process:1146): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale. when running a mlab command such as mlab.mesh(), the display window opens, shows no content and freezes. I don't get this message while importing spectral, but I get it when running view_cube() the display window showing the image cube, freezes but shows the data cube. It seems there is something wrong with Xterm, but I can't figure it out. How can I keep the display window from freezing and get rid of the Gtk-WARNING? I checked locale and locale -a, but couldn't see anything unusual: locale: locale LANG= LC_COLLATE="C" LC_CTYPE="C" LC_MESSAGES="C" LC_MONETARY="C" LC_NUMERIC="C" LC_TIME="C" LC_ALL=
Gtk-WARNING **: Locale not supported by C library. while using several Python modules (mayavi, spectral)
1.2
0
0
5,430
20,367,296
2013-12-04T05:07:00.000
1
0
0
0
java,python,swing,user-interface
20,367,301
1
true
0
1
I would suggest you go for JavaFX. It comes with the JDK and has a lot of good features. Plus, it can inter-operate with Swing, backwards and forwards. It allows you to use CSS to pretty-paint your UI. t gives you best of both the worlds.
1
0
0
I am looking to develop an application as a personal project. What I have in mind is a network-based application that would entail writing code for a typical client-server architecture using TCP Sockets as well as a heavy use of GUIs. I believe I have quite a few choices: Swing in Java, PyQt, PyGTK, wxpython and the like in Python. I was just wondering if anyone could direct me to which language would be better in the above respects.
Which language has better support for developing GUIs coupled with network programming, Python or Java?
1.2
0
0
101
20,367,583
2013-12-04T05:26:00.000
2
0
1
0
python,for-loop
20,367,645
6
false
0
0
continue means "skip to the end of the loop body". If it's a while loop, the loop continues on to the loop test; if it's a for loop, the loop continues on to the next element of whatever it's iterating over. pass does absolutely nothing. It exists because you have to have something in the body of an empty block statement, and pass is more readable than executing 1 or None as a statement for that purpose.
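A tiny demonstration of the difference:

for ch in 'abc':
    pass                             # no-op: execution falls through to the next line
    print('after pass: ' + ch)       # runs for a, b and c

for ch in 'abc':
    continue                         # jump straight to the next iteration
    print('after continue: ' + ch)   # never reached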
2
8
0
My test shows that both pass and continue can be used equivalently to construct a empty for-loop for test purpose. Are there any difference between them?
What's the difference between pass and continue in python
0.066568
0
0
7,025
20,367,583
2013-12-04T05:26:00.000
3
0
1
0
python,for-loop
20,367,602
6
false
0
0
pass does nothing (a no-op), while continue makes control flow jump to the next cycle of the loop.
2
8
0
My test shows that both pass and continue can be used equivalently to construct a empty for-loop for test purpose. Are there any difference between them?
What's the difference between pass and continue in python
0.099668
0
0
7,025
20,369,840
2013-12-04T07:48:00.000
0
0
1
0
python,regex,email,validation,email-address
20,370,346
5
false
0
0
Doesn't the email library have functions to extract a sender and their domain? But if it doesn't, and if it's just one, known domain you want to check for, just check if the msg['from'] contains @foobar.net and be done with it.
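A small sketch of that check; the raw message text and the domain are examples, and email.utils.parseaddr handles both the "Name <addr>" and bare-address forms:

import email
from email.utils import parseaddr

raw = 'From: John Doe <john@foobar.net>\r\n\r\nhello'   # hypothetical message text
msg = email.message_from_string(raw)

_, address = parseaddr(msg['from'])
domain = address.rpartition('@')[2].lower()
print(domain == 'foobar.net')     # True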
2
2
0
I am writing a Python script that checks the inbox of an IMAP account, reads in those e-mails, and replies to specific e-mails. However, for security reasons I need to make sure that the original sender of the e-mail came from a particular domain. I am reading the e-mails in Python using the email library and its message_from_string function: msg = email.message_from_string(data[0][1]) This gives me easy access to the sender via msg['from']. After some testing, I've found that this is generally in the format John Doe <john@foobar.net> but I would think it could also be in the format john@foobar.net. As I said, I want to make sure the domain of the sender's e-mail is, say, foobar.net. What would be the best way to verify that? Should I use a regex so I can pull out the e-mail regardless of what format msg['from'] is in? Or should I just do a string split on @ and then check the next 10 characters are foobar.net? Something else entirely?
Verify the domain of an e-mail address
0
0
0
3,996
20,369,840
2013-12-04T07:48:00.000
0
0
1
0
python,regex,email,validation,email-address
20,369,926
5
false
0
0
I'd definitely recommend to use a regex. However, you should also be aware that the 'from' field in emails is set by the email client and can be spoofed without much effort. So you should think if maybe also checking for the sending mail server might be an option. Cheers Hendrik
2
2
0
I am writing a Python script that checks the inbox of an IMAP account, reads in those e-mails, and replies to specific e-mails. However, for security reasons I need to make sure that the original sender of the e-mail came from a particular domain. I am reading the e-mails in Python using the email library and its message_from_string function: msg = email.message_from_string(data[0][1]) This gives me easy access to the sender via msg['from']. After some testing, I've found that this is generally in the format John Doe <john@foobar.net> but I would think it could also be in the format john@foobar.net. As I said, I want to make sure the domain of the sender's e-mail is, say, foobar.net. What would be the best way to verify that? Should I use a regex so I can pull out the e-mail regardless of what format msg['from'] is in? Or should I just do a string split on @ and then check the next 10 characters are foobar.net? Something else entirely?
Verify the domain of an e-mail address
0
0
0
3,996
20,373,039
2013-12-04T10:38:00.000
16
0
0
0
python,numpy
20,375,614
4
false
0
0
You should use array.astype(bool) (or array.astype(dtype=bool)). Works with matrices too.
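A quick illustration of both spellings, on a made-up array:

import numpy as np

a = np.array([[0, 3, 0], [2, 0, 5]])
print(a.astype(bool))   # True wherever the entry is non-zero
print(a != 0)           # the comparison form gives the same boolean array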
2
18
1
I have an n x n matrix in numpy which has 0 and non-0 values. Is there a way to easily convert it to a boolean matrix? Thanks.
How do I convert a numpy matrix into a boolean matrix?
1
0
0
34,445
20,373,039
2013-12-04T10:38:00.000
4
0
0
0
python,numpy
20,373,327
4
false
0
0
Simply use an equality check: suppose a is your numpy matrix; use b = (a == 0) or b = (a != 0) to get the boolean matrix. In some cases, since a value may be very small but non-zero, you may want abs(a) < TH, where TH is a numerical threshold you set.
2
18
1
I have an n x n matrix in numpy which has 0 and non-0 values. Is there a way to easily convert it to a boolean matrix? Thanks.
How do I convert a numpy matrix into a boolean matrix?
0.197375
0
0
34,445
20,375,551
2013-12-04T12:35:00.000
24
0
1
0
python,regex
20,375,638
1
true
0
0
\d+ means one or more digit [0-9] (depending on LOCALE) \d- means a digit followed by a dash - \w+ means one or more word character [a-zA-Z0-9_] (depending on LOCALE) \w- means a word char followed by a dash -
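A short demonstration with the re module, on an invented sample string:

import re

s = 'abc123-456def'
print(re.findall(r'\d+', s))   # ['123', '456']      one or more digits
print(re.findall(r'\d-', s))   # ['3-']               one digit followed by a literal dash
print(re.findall(r'\w+', s))   # ['abc123', '456def'] one or more word characters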
1
9
0
As per title, what is the difference between: \d+ and \d- \w+ and \w- in regular expression terms? What influence has + and - ?
What is the difference between \d+ and \d- OR \w+ and \w- in regular expression terms?
1.2
0
0
16,412
20,380,661
2013-12-04T16:27:00.000
0
1
0
0
python,database,performance,chat
20,382,525
1
true
0
0
The only answer possible at this point is 'try it and see'. I would start with MySQL (mostly because it's the 'lowest common denominator', freely available everywhere); it should do everything you need up to several thousand users, and if you get that far you should have a far better idea of what you need and where the bottlenecks are.
1
0
0
So I was making a simple chat app with python. I want to store user specific data in a database, but I'm unfamiliar with efficiency. I want to store usernames, public rsa keys, missed messages, missed group messages, urls to profile pics etc. There's a couple of things in there that would have to be grabbed pretty often, like missed messages and profile pics and a couple of hashes. So here's the question: what database style would be fastest while staying memory efficient? I want it to be able to handle around 10k users (like that's ever gonna happen). heres some I thought of: everything in one file (might be bad on memory, and takes time to load in, important, as I would need to load it in after every change.) seperate files per user (Slower, but memory efficient) seperate files per data value directory for each user, seperate files for each value. thanks,and try to keep it objective so this isnt' instantly closed!
efficient database file trees
1.2
1
0
45
20,382,403
2013-12-04T17:51:00.000
2
0
1
0
python,module,importerror,cyclic-reference,cyclic-dependency
20,384,772
4
false
0
0
A cyclic module dependency is usually a code smell. It indicates that part of the code should be re-factored so that it is external to both modules.
3
8
0
Trying to find a good and proper pattern to handle a circular module dependency in Python. Usually, the solution is to remove it (through refactoring); however, in this particular case we would really like to have the functionality that requires the circular import. EDIT: According to answers below, the usual angle of attack for this kind of issue would be a refactor. However, for the sake of this question, assume that is not an option (for whatever reason). The problem: The logging module requires the configuration module for some of its configuration data. However, for some of the configuration functions I would really like to use the custom logging functions that are defined in the logging module. Obviously, importing the logging module in configuration raises an error. The possible solutions we can think of: Don't do it. As I said before, this is not a good option, unless all other possibilities are ugly and bad. Monkey-patch the module. This doesn't sound too bad: load the logging module dynamically into configuration after the initial import, and before any of its functions are actually used. This implies defining global, per-module variables, though. Dependency injection. I've read and run into dependency injection alternatives (particularly in the Java Enterprise space) and they remove some of this headache; however, they may be too complicated to use and manage, which is something we'd like to avoid. I'm not aware of how the panorama is about this in Python, though. What is a good way to enable this functionality? Thanks very much!
How to properly handle a circular module dependency in Python?
0.099668
0
0
1,088
20,382,403
2013-12-04T17:51:00.000
4
0
1
0
python,module,importerror,cyclic-reference,cyclic-dependency
20,387,991
4
true
0
0
As already said, there's probably some refactoring needed. According to the names, it might be ok if a logging modules uses configuration, when thinking about what things should be in configuration one think about configuration parameters, then a question arises, why is that configuration logging at all? Chances are that the parts of the code under configuration that uses logging does not belong to the configuration module: seems like it is doing some kind of processing and logging either results or errors. Without inner knowledge, and using only common sense, a "configuration" module should be something simple without much processing and it should be a leaf in the import tree. Hope it helps!
3
8
0
Trying to find a good and proper pattern to handle a circular module dependency in Python. Usually, the solution is to remove it (through refactoring); however, in this particular case we would really like to have the functionality that requires the circular import. EDIT: According to answers below, the usual angle of attack for this kind of issue would be a refactor. However, for the sake of this question, assume that is not an option (for whatever reason). The problem: The logging module requires the configuration module for some of its configuration data. However, for some of the configuration functions I would really like to use the custom logging functions that are defined in the logging module. Obviously, importing the logging module in configuration raises an error. The possible solutions we can think of: Don't do it. As I said before, this is not a good option, unless all other possibilities are ugly and bad. Monkey-patch the module. This doesn't sound too bad: load the logging module dynamically into configuration after the initial import, and before any of its functions are actually used. This implies defining global, per-module variables, though. Dependency injection. I've read and run into dependency injection alternatives (particularly in the Java Enterprise space) and they remove some of this headache; however, they may be too complicated to use and manage, which is something we'd like to avoid. I'm not aware of how the panorama is about this in Python, though. What is a good way to enable this functionality? Thanks very much!
How to properly handle a circular module dependency in Python?
1.2
0
0
1,088
20,382,403
2013-12-04T17:51:00.000
2
0
1
0
python,module,importerror,cyclic-reference,cyclic-dependency
20,389,635
4
false
0
0
So if I'm reading your use case right, logging accesses configuration to get configuration data. However, configuration has some functions that, when called, require that stuff from logging be imported in configuration. If that is the case (that is, configuration doesn't really need logging until you start calling functions), the answer is simple: in configuration, place all the imports from logging at the bottom of the file, after all the class, function and constant definitions. Python reads things from top to bottom: when it comes across an import statement in configuration, it runs it, but at this point, configuration already exists as a module that can be imported, even if it's not fully initialized yet: it only has the attributes that were declared before the import statement was run. I do agree with the others though, that circular imports are usually a code smell.
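A sketch of that bottom-of-file layout across two invented modules; here a plain import plus attribute access deferred to call time is used so it works whichever module happens to be imported first:

# --- configuration.py ------------------------------------------------
API_TIMEOUT = 30                        # plain data that the logging module needs

def reload_config(path):
    # the attribute lookup happens when this is called, long after both
    # modules have finished importing
    logging_module.log_event('reloading ' + path)

# module-level import placed last, after configuration's own names exist
import logging_module

# --- logging_module.py -----------------------------------------------
import configuration

def log_event(message):
    print('[timeout=%d] %s' % (configuration.API_TIMEOUT, message))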
3
8
0
Trying to find a good and proper pattern to handle a circular module dependency in Python. Usually, the solution is to remove it (through refactoring); however, in this particular case we would really like to have the functionality that requires the circular import. EDIT: According to answers below, the usual angle of attack for this kind of issue would be a refactor. However, for the sake of this question, assume that is not an option (for whatever reason). The problem: The logging module requires the configuration module for some of its configuration data. However, for some of the configuration functions I would really like to use the custom logging functions that are defined in the logging module. Obviously, importing the logging module in configuration raises an error. The possible solutions we can think of: Don't do it. As I said before, this is not a good option, unless all other possibilities are ugly and bad. Monkey-patch the module. This doesn't sound too bad: load the logging module dynamically into configuration after the initial import, and before any of its functions are actually used. This implies defining global, per-module variables, though. Dependency injection. I've read and run into dependency injection alternatives (particularly in the Java Enterprise space) and they remove some of this headache; however, they may be too complicated to use and manage, which is something we'd like to avoid. I'm not aware of how the panorama is about this in Python, though. What is a good way to enable this functionality? Thanks very much!
How to properly handle a circular module dependency in Python?
0.099668
0
0
1,088
20,382,484
2013-12-04T17:56:00.000
0
0
0
0
python,machine-learning,scipy,scikit-learn,angle
49,957,628
4
false
0
0
Another simpler way could be to use time as angle measurements than degree measurements (not DMS though). Since many analytics software features time as a datatype, you can use its periodicity to do your job. But remember, you need to scale 360 degrees to 24 hours.
1
8
1
I'm using Python for kernel density estimations and gaussian mixture models to rank likelihood of samples of multidimensional data. Every piece of data is an angle, and I'm not sure how to handle the periodicity of angular data for machine learning. First I removed all negative angles by adding 360 to them, so all angles that were negative became positive, -179 becoming 181. I believe this elegantly handles the case of -179 an similar being not significantly different than 179 and similar, but it does not handle instances like 359 being not dissimilar from 1. One way I've thought of approaching the issue is keeping both negative and negative+360 values and using the minimum of the two, but this would require modification of the machine learning algorithms. Is there a good preprocessing-only solution to this problem? Anything built into scipy or scikit? Thanks!
Periodic Data with Machine Learning (Like Degree Angles -> 179 is 2 different from -179)
0
0
0
3,296
20,389,291
2013-12-05T00:51:00.000
1
0
0
0
python
20,389,326
3
false
0
0
You could write a very quick CLI which loads the data once, then repeatedly asks for a python filename and evaluates that script against the already-loaded data...
2
0
1
I have a large dataset that I perform experiments on. It takes 30 mins to load the dataset from file into memory using a python program. Then I perform variations of an algorithm on the dataset. Each time I have to vary the algorithm, I have to load the dataset into memory again, which eats up 30 minutes. Is there any way to load the dataset into memory once and for always. And then each time to run a variation of an algorithm, just use that pre loaded dataset? I know the question is a bit abstract, suggestions to improve the framing of the question are welcome. Thanks. EDITS: Its a text file, contains graph data, around 6 GB. If I only load a portion of the dataset, it doesn't make for a very good graph. I do not do computation while loading the dataset.
load dataset into memory for future computation in python
0.066568
0
0
233
20,389,291
2013-12-05T00:51:00.000
0
0
0
0
python
68,645,793
3
false
0
0
One possible solution is to use Jupyter to load it once and keep the Jupyter session running. Then you modify your algorithm in a cell and always rerun that cell alone. You can operate on the loaded dataset in RAM as much as you want until you terminate the Jupyter session.
2
0
1
I have a large dataset that I perform experiments on. It takes 30 mins to load the dataset from file into memory using a python program. Then I perform variations of an algorithm on the dataset. Each time I have to vary the algorithm, I have to load the dataset into memory again, which eats up 30 minutes. Is there any way to load the dataset into memory once and for always. And then each time to run a variation of an algorithm, just use that pre loaded dataset? I know the question is a bit abstract, suggestions to improve the framing of the question are welcome. Thanks. EDITS: Its a text file, contains graph data, around 6 GB. If I only load a portion of the dataset, it doesn't make for a very good graph. I do not do computation while loading the dataset.
load dataset into memory for future computation in python
0
0
0
233
20,389,368
2013-12-05T00:59:00.000
1
0
0
0
python,sqlalchemy
20,389,560
1
true
0
0
You can look at session.new, .dirty, and .deleted to see what objects will be committed, but that doesn't necessarily represent the number of rows, since those objects may set extra rows in a many-to-many association, polymorphic table, etc.
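A small self-contained example of peeking at those collections before the commit; the model and values are invented:

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine('sqlite://')            # throwaway in-memory database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([User(name='a'), User(name='b')])
# pending inserts / updates / deletes, inspected before committing
print('%d new, %d dirty, %d deleted' % (len(session.new), len(session.dirty), len(session.deleted)))
session.commit()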
1
0
0
Is there a way to know how many rows were commited on the last commit on a SQLAlchemy Session? For instance, if I had just inserted 2 rows, I wish to know that there were 2 rows inserted, etc.
SQLAlchemy, how many rows were commited on last commit
1.2
1
0
193
20,389,982
2013-12-05T02:00:00.000
0
0
0
0
c#,python,database,bigdata
20,390,085
2
false
0
0
If you are only doing this once, your approach should be sufficient. The only improvement I would make is to read the big file in chunks instead of line by line. That way you don't have to hit the file system as much. You'd want to make the chunks as big as possible while still fitting in memory. If you will need to do this more than once, consider pushing the data into some database. You could insert all the data from the big file and then "update" that data using the second, smaller file to get a complete database with one large table with all the data. If you use a NoSQL database like Cassandra this should be fairly efficient since Cassandra is pretty good at handling writes efficiently.
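A sketch of the in-memory set idea, written for Python 2's csv module; the file names and the assumption that the join key is column 0 are made up:

import csv

with open('small.csv', 'rb') as f:
    keys = set(row[0] for row in csv.reader(f))     # the whole small file fits in RAM

with open('big.csv', 'rb') as big, open('matched.csv', 'wb') as out:
    writer = csv.writer(out)
    for row in csv.reader(big):                     # stream the big file once
        if row[0] in keys:                          # O(1) membership test
            writer.writerow(row)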
1
0
1
I have a giant (100Gb) csv file with several columns and a smaller (4Gb) csv also with several columns. The first column in both datasets have the same category. I want to create a third csv with the records of the big file which happen to have a matching first column in the small csv. In database terms it would be a simple join on the first column. I am trying to find the best approach to go about this in terms of efficiency. As the smaller dataset fits in memory, I was thinking of loading it in a sort of set structure and then read the big file line to line and querying the in memory set, and write to file on positive. Just to frame the question in SO terms, is there an optimal way to achieve this? EDIT: This is a one time operation. Note: the language is not relevant, open to suggestions on column, row oriented databases, python, etc...
Intersecting 2 big datasets
0
1
0
154
20,394,391
2013-12-05T07:54:00.000
6
0
1
0
python,caching
20,394,556
3
false
0
0
The ways to preserve data between totally separate executions of a process are: Saving a file. Handing the data off to another process such as a Memcached or Redis instance, or a database, which will keep the data in memory and/or write it to disk somewhere. Recording the data in some other, more unusual way such as changing the environment of the running operating system, printing out the data or otherwise displaying it so that the human operator can keep track of it, or something like that. When you use the word 'cache' and state that you do not wish to write the data to disk, the first thing that comes to mind is memcached or some other in-memory cache. But any file-based solution will certainly be less complex than setting up and maintaining an in-memory key-value store. Which solution you choose depends in part on what 'second time' means. Second time ever? Second time ever on a given computer? Second time since reboot? Since manual reset? Different methods of recording data are suited to different storage requirements.
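If the file route is acceptable after all, the marker can be as small as an empty file; the path chosen here is arbitrary:

import os

FLAG = os.path.expanduser('~/.myprocess_has_run')

if os.path.exists(FLAG):
    a = 2                        # any run after the first
else:
    a = 1                        # very first run on this machine
    open(FLAG, 'w').close()      # leave the marker for subsequent runs

print(a)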
2
0
0
I have a requirement like this: the first time I run the process, I need to set a=1, and for all subsequent runs of the same process, I need to set a=2. Is it possible to maintain a cache that tells me the process has already run once? I don't want another physical file to be created in my directory structure. I searched the internet, but only found caches that live within the process. Thanks in advance
How do I maintain a cache in python
1
0
0
216
20,394,391
2013-12-05T07:54:00.000
0
0
1
0
python,caching
20,394,475
3
false
0
0
Data cached inside the process dies along with the process. You'll have to cache this info elsewhere since you want it to persist longer than the process lives. A file seems reasonable.
2
0
0
I have a requirement like this: the first time I run the process, I need to set a=1, and for all subsequent runs of the same process, I need to set a=2. Is it possible to maintain a cache that tells me the process has already run once? I don't want another physical file to be created in my directory structure. I searched the internet, but only found caches that live within the process. Thanks in advance
How do I maintain a cache in python
0
0
0
216
20,395,090
2013-12-05T08:35:00.000
1
0
1
0
python,regex
20,395,227
3
true
0
0
Characters between numbers would be (?<=\d)[a-zA-Z]+(?=\d) (i. e. only those characters which are directly embraced by numbers, so for abc234def678hij it would be def), but I have the feeling you mean characters and numbers together; that would be [a-zA-Z0-9]+ plain and simple.
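The two patterns side by side, on the answer's own example and on one line from the question:

import re

print(re.findall(r'(?<=\d)[a-zA-Z]+(?=\d)', 'abc234def678hij'))   # ['def']

line = 'IP/51-0000523b ivr s 2 Up BackGround 5230f668473bd/MainIVR 4566658 00:00:22'
print(re.findall(r'[a-zA-Z0-9]+(?=/MainIVR)', line))              # ['5230f668473bd']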
1
0
0
I need a pattern to match characters between numbers, what should I define as a pattern to match characters and number together? example string(the Bold string is what I should match): IP/51-0000523b ivr s 2 Up BackGround 5230f668473bd/MainIVR 4566658 00:00:22` (None) IP/51-0000523b ivr s 2 Up BackGround dh234926b9900/MainIVR 4566658 00:00:22` (None) IP/51-0000523b ivr s 2 Up BackGround l23423y98t232/MainIVR 4566658 00:00:22` (None) IP/51-0000523b ivr s 2 Up BackGround 5230f668473bd/MainIVR 4566658 00:00:22` (None)
python regex to match characters and numbers together
1.2
0
0
67
20,395,712
2013-12-05T09:08:00.000
2
0
0
0
python,timeout,sparqlwrapper
59,155,214
4
false
0
0
As of 2018, you can use SPARQLWrapper.setTimeout() to set the timeout for SPARQLWrapper requests.
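A minimal usage sketch against the DBpedia endpoint; the query itself and the 30-second value are just examples:

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper('http://dbpedia.org/sparql')
sparql.setTimeout(30)            # give up on queries that take longer than 30 seconds
sparql.setReturnFormat(JSON)
sparql.setQuery('SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Python> ?p ?o } LIMIT 5')
results = sparql.query().convert()
print(results['results']['bindings'])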
3
3
0
I am new to python as well as new to the world of querying the semantic web. I am using SPARQLWrapper library to query dbpedia, I searched the library documentation but failed to find 'timeout' for a query fired to dbpedia from sparqlWrapper. Anyone has any idea about the same.
Python : SpaqrlWrapper, Timeout?
0.099668
0
0
459
20,395,712
2013-12-05T09:08:00.000
1
0
0
0
python,timeout,sparqlwrapper
62,625,256
4
false
0
0
As Karoo mentioned you can use SPARQLWrapper.setTimeout(timeout=(int)). If you want a timeout as a float, go to the Wrapper.py module and change self.timeout = int(timeout) to self.timeout = float(timeout) in the def setTimeout(self, timeout): function.
3
3
0
I am new to python as well as new to the world of querying the semantic web. I am using SPARQLWrapper library to query dbpedia, I searched the library documentation but failed to find 'timeout' for a query fired to dbpedia from sparqlWrapper. Anyone has any idea about the same.
Python : SpaqrlWrapper, Timeout?
0.049958
0
0
459
20,395,712
2013-12-05T09:08:00.000
0
0
0
0
python,timeout,sparqlwrapper
20,419,088
4
true
0
0
DBPedia uses Virtuoso server for it's endpoint and timeout is a virtuoso-specific option. SparqlWrapper doesn't currently support it. Next version will feature better modularity and proper vendor-specific extensions might be implemented after that, but I guess you don't have time to wait. Currently, the only way to add such parameter is to manually hardcode it into your local version of library
3
3
0
I am new to python as well as new to the world of querying the semantic web. I am using SPARQLWrapper library to query dbpedia, I searched the library documentation but failed to find 'timeout' for a query fired to dbpedia from sparqlWrapper. Anyone has any idea about the same.
Python : SpaqrlWrapper, Timeout?
1.2
0
0
459
20,397,744
2013-12-05T10:40:00.000
2
0
0
1
python,tornado
20,512,056
3
false
0
0
There is no public interface to find out whether a path is currently mapped in a Tornado Application. In general, you shouldn't be calling add_handlers after startup anyway - instead, add a wildcard rule (like (r'/game/(.*)', GameHandler)) and then in GameHandler you can check whether the requested game exists or not (and if not, raise HTTPError(404)).
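A compact version of that pattern; the set of known games and the port are placeholders:

import tornado.ioloop
import tornado.web

GAMES = {'chess', 'go'}

class GameHandler(tornado.web.RequestHandler):
    def get(self, game_name):
        if game_name not in GAMES:
            raise tornado.web.HTTPError(404)   # unknown game: no runtime add_handlers needed
        self.write('welcome to ' + game_name)

application = tornado.web.Application([
    (r'/game/(.*)', GameHandler),              # one wildcard rule covers every game URL
])

if __name__ == '__main__':
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()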
2
1
0
How do I check whether a Tornado application already listens on some url? I need to listen on a lot of urls; for each new game I create and programmatically add a handler for that url, but first I need to check. How can I check if Tornado already listens on a url?
How to check if Tornado already listen url?
0.132549
0
0
566
20,397,744
2013-12-05T10:40:00.000
1
0
0
1
python,tornado
20,493,472
3
false
0
0
I believe you'll need a fine-grained access to the active games, so better keep them in your domain model. Still, you can examine tornado.web.Application.handlers of your app.
2
1
0
How do I check whether a Tornado application already listens on some url? I need to listen on a lot of urls; for each new game I create and programmatically add a handler for that url, but first I need to check. How can I check if Tornado already listens on a url?
How to check if Tornado already listen url?
0.066568
0
0
566
20,401,798
2013-12-05T13:53:00.000
0
0
1
0
performance,python-2.7,memory,ram
20,401,961
3
false
0
0
Once you've hit 100% of your memory, you're going to start hitting the swap on your machine, which uses the disk and is far slower than memory. After your program is done running, all of the other programs need to re-read their information back into memory. If the problem doesn't go away after a couple of minutes, then you might have a separate problem going on.
3
2
0
My hardware specs are listed below, though this is probably not important I use my machine for data processing, which often involves holding large amounts of data in memory (I have made many improvements in my code lately by reading in partial files to decrease memory use, but this remains problematic), with all of my analysis being done in Python 2.7 (numpy/scipy and pandas). When RAM usage == 100% the computer performance becomes expectedly sluggish. What I don't understand is why it remains sluggish (e.g. it takes ~20s to open a ~4kb file) after my code finishes running, memory usage has dropped to ~10-50%, and the CPU sits just above idle. I have even forced all Python processes to close in task manager, which does not help. Furthermore, the sluggishness persists until reboot. From my hardware status stats I would expect better performance, so I suspect there is something going on behind the scenes. Since this only happens after I push memory usage to 100% (which only happens when I am processing data in Python) I wonder if Python is doing something behind the scenes that I am not aware of (honestly, I cannot image what this would be, but I am truly stumped). Since I only see this in Python I give it a python tag (though I think this is not a python issue). Any insight will be appreciated. I have searched online, but everything I find relates to "why my computer runs slow when CPU/RAM usage == 100%"-type queries. OS: Windows 7 Enterprise, 64-bit Python: 2.7 via Anaconda CPU: Xeon E5645, six-core RAM: 12GB
Sluggish after using 100% RAM
0
0
0
542
20,401,798
2013-12-05T13:53:00.000
2
0
1
0
performance,python-2.7,memory,ram
20,402,123
3
true
0
0
There is a concept called 'system load'. A system under a large load (comprising interrupts, IO wait, context switches, computation, etc.) takes time to recover. The load persists for some time because, unlike the core computation, the other operations that make the system work hard do not simply stop suddenly. For example, the script while(true) i++; will consume only CPU, but the script while(true) print(i++) will push i's value into something queue-like: the stdout may be a monitor, a network port or a printer queue. Now when you kill your process, the kernel frees that process's memory and stops allocating it CPU. But residue stays behind, at different levels and components, and it keeps consuming resources in turn until it is done. And because you can only kill your Python script, and not the n printing operations that you have irreversibly queued, the kernel cannot do anything about those. Understand that this is just one of the software scenarios. There are paging, swapping and thrashing operations (and a lot more) which also make the 'recovery' slower. Recovery from swap takes on the order of 1000x the time of memory (RAM) operations. All in all, a system under zero load can handle 10 tasks in x time, whereas a system under full load will take 100x time to do the same tasks, because each context switch, each housekeeping operation, is costlier. The doable thing here is to run a utility like top or htop and see how long the high load average persists.
3
2
0
My hardware specs are listed below, though this is probably not important I use my machine for data processing, which often involves holding large amounts of data in memory (I have made many improvements in my code lately by reading in partial files to decrease memory use, but this remains problematic), with all of my analysis being done in Python 2.7 (numpy/scipy and pandas). When RAM usage == 100% the computer performance becomes expectedly sluggish. What I don't understand is why it remains sluggish (e.g. it takes ~20s to open a ~4kb file) after my code finishes running, memory usage has dropped to ~10-50%, and the CPU sits just above idle. I have even forced all Python processes to close in task manager, which does not help. Furthermore, the sluggishness persists until reboot. From my hardware status stats I would expect better performance, so I suspect there is something going on behind the scenes. Since this only happens after I push memory usage to 100% (which only happens when I am processing data in Python) I wonder if Python is doing something behind the scenes that I am not aware of (honestly, I cannot image what this would be, but I am truly stumped). Since I only see this in Python I give it a python tag (though I think this is not a python issue). Any insight will be appreciated. I have searched online, but everything I find relates to "why my computer runs slow when CPU/RAM usage == 100%"-type queries. OS: Windows 7 Enterprise, 64-bit Python: 2.7 via Anaconda CPU: Xeon E5645, six-core RAM: 12GB
Sluggish after using 100% RAM
1.2
0
0
542
20,401,798
2013-12-05T13:53:00.000
0
0
1
0
performance,python-2.7,memory,ram
20,405,742
3
false
0
0
I would have thought that Task Manager, Processes, with appropriate columns selected by View, sorted by clicking on column heading (for Top), would suggest reasons, along with Performance tab info. Then there is Perfmon Resource Monitor and my favourite, logging via Perfmon, Performance, Data Collection Sets, User Defined.
3
2
0
My hardware specs are listed below, though this is probably not important I use my machine for data processing, which often involves holding large amounts of data in memory (I have made many improvements in my code lately by reading in partial files to decrease memory use, but this remains problematic), with all of my analysis being done in Python 2.7 (numpy/scipy and pandas). When RAM usage == 100% the computer performance becomes expectedly sluggish. What I don't understand is why it remains sluggish (e.g. it takes ~20s to open a ~4kb file) after my code finishes running, memory usage has dropped to ~10-50%, and the CPU sits just above idle. I have even forced all Python processes to close in task manager, which does not help. Furthermore, the sluggishness persists until reboot. From my hardware status stats I would expect better performance, so I suspect there is something going on behind the scenes. Since this only happens after I push memory usage to 100% (which only happens when I am processing data in Python) I wonder if Python is doing something behind the scenes that I am not aware of (honestly, I cannot image what this would be, but I am truly stumped). Since I only see this in Python I give it a python tag (though I think this is not a python issue). Any insight will be appreciated. I have searched online, but everything I find relates to "why my computer runs slow when CPU/RAM usage == 100%"-type queries. OS: Windows 7 Enterprise, 64-bit Python: 2.7 via Anaconda CPU: Xeon E5645, six-core RAM: 12GB
Sluggish after using 100% RAM
0
0
0
542
20,402,359
2013-12-05T14:17:00.000
0
0
0
0
listbox,wxpython
20,416,806
2
false
0
1
I don't think there's a direct way to do this, so the only way would be to do it by hand: catch click events, use HitTest to find which item was clicked, and then ignore the event if it's the "deactivated" item. (Tree controls have EVT_TREE_SEL_CHANGING, which would be useful here, but there's no analog for a ListBox as far as I know.)
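A minimal, untested sketch of that hand-rolled approach might look like the following; the choices list, the disabled_index value and the frame layout are purely illustrative assumptions, and whether swallowing the mouse event also blocks keyboard selection will depend on the platform.

```python
import wx

class DemoFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="ListBox demo")
        self.listbox = wx.ListBox(self, choices=["apple", "banana", "cherry"])
        self.disabled_index = 1  # hypothetical: treat "banana" as deactivated
        self.listbox.Bind(wx.EVT_LEFT_DOWN, self.on_click)

    def on_click(self, event):
        index = self.listbox.HitTest(event.GetPosition())
        if index == self.disabled_index:
            return  # swallow the click: do not let the item become selected
        event.Skip()  # any other item: let wx handle the click normally

if __name__ == "__main__":
    app = wx.App(False)
    DemoFrame().Show()
    app.MainLoop()
```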
2
0
0
I am trying to disable only one item in a listbox with wxPython. I have already searched the Internet for a way to do this, but found nothing... I hope you can give me a hint!
Disable only one item in a wxpython.listbox
0
0
0
389
20,402,359
2013-12-05T14:17:00.000
0
0
0
0
listbox,wxpython
20,426,709
2
false
0
1
You will need to bind to wx.EVT_LISTBOX and check if the selection is in your "deactivated" list. If so, set the selection to a different item in the control.
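A short sketch of this bounce-back idea is below; the item names, the "deactivated" index and the fallback index are assumptions made up for the example, not anything from the original question.

```python
import wx

class DemoFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title="ListBox demo")
        self.listbox = wx.ListBox(self, choices=["apple", "banana", "cherry"])
        self.disabled_index = 1   # hypothetical "deactivated" item
        self.fallback_index = 0   # item to select instead
        self.listbox.Bind(wx.EVT_LISTBOX, self.on_select)

    def on_select(self, event):
        if event.GetSelection() == self.disabled_index:
            # Bounce the selection back to an allowed item.
            self.listbox.SetSelection(self.fallback_index)
        else:
            event.Skip()

if __name__ == "__main__":
    app = wx.App(False)
    DemoFrame().Show()
    app.MainLoop()
```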
2
0
0
I am trying to disable only one item in a listbox with wxPython. I have already searched the Internet for a way to do this, but found nothing... I hope you can give me a hint!
Disable only one item in a wxpython.listbox
0
0
0
389
20,403,387
2013-12-05T15:03:00.000
53
0
1
0
python,pypi
20,403,468
3
true
0
0
Log in. Go to your packages. Check the "remove" checkbox for the particular package. Click the "Remove" button.
1
58
0
How do I remove a package from Pypi? I uploaded a package to Pypi several months ago. The package is now obsolete and I'd like to formally remove it. I cannot find any documentation on how to remove my package.
How to remove a package from Pypi
1.2
0
0
27,292
20,403,921
2013-12-05T15:28:00.000
0
0
0
1
python,asynchronous,twisted,tornado,reactor
20,429,005
2
false
1
0
So far I have found that if you merge two Twisted applications, you should remove reactor.run() from one of them, leaving only a single reactor.run() call at the end. Also make sure that the reactor implementation is the same for both applications. More comments welcome.
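A minimal sketch of what that looks like is below; the resource, protocol and port numbers are placeholder stand-ins for the two real applications (they are not SIPSimpleApp or Tornado specifics), and the point is only that both services register with one reactor and reactor.run() is called exactly once.

```python
from twisted.internet import reactor, protocol
from twisted.web import server, resource

# Stand-in for the first (twisted.web) application.
class Hello(resource.Resource):
    isLeaf = True
    def render_GET(self, request):
        return "hello from the web app"

# Stand-in for the second (plain protocol) application.
class Echo(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write(data)

echo_factory = protocol.ServerFactory()
echo_factory.protocol = Echo

# Register both services with the same reactor, then start it exactly once.
reactor.listenTCP(8080, server.Site(Hello()))
reactor.listenTCP(9000, echo_factory)
reactor.run()
```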
1
2
0
I have two applications written on the Twisted framework, for example one using twisted.web and the other using twisted.protocols.* (not web). How can I "merge" them into one, effectively sharing one reactor for both apps? What are the best practices for that task? Actually I need to connect SIPSimpleApp and TornadoWeb; they can both use the Twisted reactor.
How to "merge" two Python twisted applications?
0
0
0
247
20,406,598
2013-12-05T17:24:00.000
1
0
1
0
python
20,407,097
2
false
0
0
Builtins are just names, looked up like anything else in your namespace, so the same rule applies to them as to any other name imported into the namespace. Now consider what happens if a new name is added to the imported module, or a new builtin is added, that clashes with a name already in use in your code. With the current rule, your code still works, but if you want to modify it to use the new names, you are forced to write module.name or __builtin__.name explicitly. If the rule were that it raised an exception, your code would immediately stop working until you changed all references to the name. I think the first scenario is preferable. And of course, it's not always a mistake: being able to inject your own functions or objects into other parts of code is part of the design of dynamic languages like Python.
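To illustrate the shadowing behaviour being described, here is a small sketch; it is only an example of name lookup, not a recommendation to name variables after builtins.

```python
# Shadowing a builtin is legal; "list" here is just a module-level name
# that hides the builtin list type during lookup.
list = [1, 2, 3]
print(list)                      # [1, 2, 3]

# The builtin itself is untouched and still reachable explicitly.
import __builtin__               # the module is named "builtins" on Python 3
print(__builtin__.list("abc"))   # ['a', 'b', 'c']

# Deleting the shadowing name restores the normal lookup.
del list
print(list("abc"))               # ['a', 'b', 'c']
```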
1
4
0
So I see this a lot. People call their dictionaries dict, their lists list, and so on. I know this is frowned upon because it overwrites the built-in value in Python. The question is, why doesn't this raise an exception? I see a lot of people say to never do this, but why isn't it an error? The only conclusion I can come to is that there is a time when this sort of thing needs to happen. So, in what situation could someone gain a programming advantage by overwriting a builtin? Or, if my conclusion is wrong, why aren't there protections against such overwriting?
What is the Correct Time to Name a Variable after a Builtin?
0.099668
0
0
60