Dataset columns (name: dtype, observed minimum to maximum; for strings, the range of string lengths):

Q_Id: int64, 337 to 49.3M
CreationDate: string, lengths 23 to 23
Users Score: int64, -42 to 1.15k
Other: int64, 0 to 1
Python Basics and Environment: int64, 0 to 1
System Administration and DevOps: int64, 0 to 1
Tags: string, lengths 6 to 105
A_Id: int64, 518 to 72.5M
AnswerCount: int64, 1 to 64
is_accepted: bool, 2 classes
Web Development: int64, 0 to 1
GUI and Desktop Applications: int64, 0 to 1
Answer: string, lengths 6 to 11.6k
Available Count: int64, 1 to 31
Q_Score: int64, 0 to 6.79k
Data Science and Machine Learning: int64, 0 to 1
Question: string, lengths 15 to 29k
Title: string, lengths 11 to 150
Score: float64, -1 to 1.2
Database and SQL: int64, 0 to 1
Networking and APIs: int64, 0 to 1
ViewCount: int64, 8 to 6.81M
Q_Id: 1,285,024 | A_Id: 1,285,056 | CreationDate: 2009-08-16T18:48:00.000
Title: How can I check to see if a Python script was started interactively?
Tags: python,interactive | Topics: Other; System Administration and DevOps
Q_Score: 2 | Users Score: 11 | Score: 1 | AnswerCount: 3 | Available Count: 1 | ViewCount: 1,179 | is_accepted: false
Question: I'd like for a script of mine to have 2 behaviours, one when started as a scheduled task, and another if started manually. How could I test for interactiveness? EDIT: this could either be a cron job, or started by a windows batch file, through the scheduled tasks.
Answer: You should simply add a command-line switch in the scheduled task, and check for it in your script, modifying the behavior as appropriate. Explicit is better than implicit. One benefit to this design: you'll be able to test both behaviors, regardless of how you actually invoked the script.
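The accepted approach in the record above (an explicit switch rather than detecting interactivity) can be sketched with argparse; the flag name --scheduled is an assumption for illustration, not something from the original answer.

```python
import argparse

def main(argv=None):
    # Hypothetical flag name; the scheduled task would invoke the
    # script as "python script.py --scheduled".
    parser = argparse.ArgumentParser()
    parser.add_argument("--scheduled", action="store_true",
                        help="run in scheduled-task mode")
    args = parser.parse_args(argv)
    if args.scheduled:
        return "scheduled behaviour"
    return "interactive behaviour"

# Both code paths can be exercised by passing argv explicitly,
# which is exactly the testability benefit the answer mentions.
scheduled_mode = main(["--scheduled"])
manual_mode = main([])
```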
Q_Id: 1,285,150 | A_Id: 2,784,805 | CreationDate: 2009-08-16T19:34:00.000
Title: Implement Comet / Server push in Google App Engine in Python
Tags: python,google-app-engine,comet,server-push,channel-api | Topics: System Administration and DevOps; Web Development
Q_Score: 25 | Users Score: 1 | Score: 0.033321 | AnswerCount: 6 | Available Count: 4 | ViewCount: 11,567 | is_accepted: false
Question: How can I implement Comet / Server push in Google App Engine in Python?
Answer: 30 seconds is more than enough; either way you should return a no-op message when the time passes and no new events occur. This prevents client timeouts and is done by everybody who does Comet. Just send the request, and on the server make it wait until an event, or time out after 25 seconds.

Q_Id: 1,285,150 | A_Id: 3,918,651 | CreationDate: 2009-08-16T19:34:00.000
Title: Implement Comet / Server push in Google App Engine in Python
Tags: python,google-app-engine,comet,server-push,channel-api | Topics: System Administration and DevOps; Web Development
Q_Score: 25 | Users Score: 0 | Score: 0 | AnswerCount: 6 | Available Count: 4 | ViewCount: 11,567 | is_accepted: false
Question: How can I implement Comet / Server push in Google App Engine in Python?
Answer: Looking inside the App Engine 1.3.8-pre release, I see the Channel API service stub and more code. So it looks like we can start trying it out locally.

Q_Id: 1,285,150 | A_Id: 1,285,251 | CreationDate: 2009-08-16T19:34:00.000
Title: Implement Comet / Server push in Google App Engine in Python
Tags: python,google-app-engine,comet,server-push,channel-api | Topics: System Administration and DevOps; Web Development
Q_Score: 25 | Users Score: 3 | Score: 0.099668 | AnswerCount: 6 | Available Count: 4 | ViewCount: 11,567 | is_accepted: false
Question: How can I implement Comet / Server push in Google App Engine in Python?
Answer: At this time, I would rule out doing Comet in App Engine (any language). Comet is based on long-lived HTTP connections, and App Engine will time out any single connection in about 30 seconds or so at most; it's hard to conceive of a worse match!

Q_Id: 1,285,150 | A_Id: 4,480,891 | CreationDate: 2009-08-16T19:34:00.000
Title: Implement Comet / Server push in Google App Engine in Python
Tags: python,google-app-engine,comet,server-push,channel-api | Topics: System Administration and DevOps; Web Development
Q_Score: 25 | Users Score: 0 | Score: 0 | AnswerCount: 6 | Available Count: 4 | ViewCount: 11,567 | is_accepted: false
Question: How can I implement Comet / Server push in Google App Engine in Python?
Answer: Google App Engine has supported server push using the Channel API since 2nd December.
Q_Id: 1,286,176 | A_Id: 1,286,187 | CreationDate: 2009-08-17T04:38:00.000
Title: Where can I get technical information on how the internals of Django work?
Tags: python,django | Topics: Web Development
Q_Score: 8 | Users Score: 1 | Score: 0.033321 | AnswerCount: 6 | Available Count: 2 | ViewCount: 1,249 | is_accepted: false
Question: Where can I get the technical manuals/details of how Django's internals work? I.e., I would like to know: when a request comes in from a client, which Django function receives it? What middleware gets called? How is the request object created, and what class/function creates it? What function maps the request to the necessary view? How does your code/view get called? etc. Paul.G
Answer: I doubt there are technical manuals on the subject. It might take a bit of digging, but the API documentation and the source code are your best bets for reliable, up-to-date information.

Q_Id: 1,286,176 | A_Id: 1,286,186 | CreationDate: 2009-08-17T04:38:00.000
Title: Where can I get technical information on how the internals of Django work?
Tags: python,django | Topics: Web Development
Q_Score: 8 | Users Score: 12 | Score: 1 | AnswerCount: 6 | Available Count: 2 | ViewCount: 1,249 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: "Use the source, Luke." The beauty of open source software is that you can view (and modify) the code yourself.
Q_Id: 1,286,757 | A_Id: 1,286,777 | CreationDate: 2009-08-17T08:24:00.000
Title: how fast is python's slice
Tags: python,optimization | Topics: Python Basics and Environment
Q_Score: 10 | Users Score: 9 | Score: 1.2 | AnswerCount: 5 | Available Count: 3 | ViewCount: 7,845 | is_accepted: true
Question: In order to save space and the complexity of having to maintain the consistency of data between different sources, I'm considering storing start/end indices for some substrings instead of storing the substrings themselves. The trick is that if I do so, it's possible I'll be creating slices ALL the time. Is this something to be avoided? Is the slice operator fast enough that I don't need to worry? How about the new object creation/destruction overhead? Okay, I learned my lesson. Don't optimize unless there's a real problem you're trying to fix. (Of course this doesn't mean to write needlessly bad code, but that's beside the point...) Also, test and profile before coming to Stack Overflow. =D Thanks everyone!
Answer: Fast enough as opposed to what? How do you do it right now? What exactly are you storing, what exactly are you retrieving? The answer probably highly depends on this. Which brings us to ... Measure! Don't discuss and analyze theoretically; try and measure what is the more performant way. Then decide whether the possible performance gain justifies refactoring your database. Edit: I just ran a test measuring string slicing versus lookup in a dict keyed on (start, end) tuples. It suggests that there's not much of a difference. It's a pretty naive test, though, so take it with a pinch of salt.
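The "measure, don't guess" advice above can be followed with timeit. This sketch compares slicing a string against looking up a precomputed substring in a dict keyed on (start, end) tuples, as the accepted answer describes; the sample data and iteration counts are illustrative assumptions.

```python
import timeit

text = "abcdefghij" * 1000
pairs = [(i, i + 50) for i in range(0, 5000, 50)]
# Dict keyed on (start, end) tuples, precomputed once.
cache = {(a, b): text[a:b] for a, b in pairs}

def by_slice():
    return [text[a:b] for a, b in pairs]

def by_dict():
    return [cache[p] for p in pairs]

slice_t = timeit.timeit(by_slice, number=1000)
dict_t = timeit.timeit(by_dict, number=1000)
# Both produce identical substrings; on typical machines the timings
# are close, matching the answer's "not much of a difference" finding.
```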
Q_Id: 1,286,757 | A_Id: 1,286,918 | CreationDate: 2009-08-17T08:24:00.000
Title: how fast is python's slice
Tags: python,optimization | Topics: Python Basics and Environment
Q_Score: 10 | Users Score: -2 | Score: -0.07983 | AnswerCount: 5 | Available Count: 3 | ViewCount: 7,845 | is_accepted: false
Question: In order to save space and the complexity of having to maintain the consistency of data between different sources, I'm considering storing start/end indices for some substrings instead of storing the substrings themselves. The trick is that if I do so, it's possible I'll be creating slices ALL the time. Is this something to be avoided? Is the slice operator fast enough that I don't need to worry? How about the new object creation/destruction overhead? Okay, I learned my lesson. Don't optimize unless there's a real problem you're trying to fix. (Of course this doesn't mean to write needlessly bad code, but that's beside the point...) Also, test and profile before coming to Stack Overflow. =D Thanks everyone!
Answer: Premature optimization is the root of all evil. Prove to yourself that you really have a need to optimize code, then act.

Q_Id: 1,286,757 | A_Id: 1,289,744 | CreationDate: 2009-08-17T08:24:00.000
Title: how fast is python's slice
Tags: python,optimization | Topics: Python Basics and Environment
Q_Score: 10 | Users Score: 1 | Score: 0.039979 | AnswerCount: 5 | Available Count: 3 | ViewCount: 7,845 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: Would slices be ineffective because they create copies of the source string? This may or may not be an issue. If it turns out to be an issue, would it not be possible to simply implement a "string view": an object that has a reference to the source string and has a start and end point? Upon access/iteration, it just reads from the source string.
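The "string view" idea in the last answer above can be sketched as a small class holding a reference to the source string plus start/end offsets; the class and method names here are illustrative, not from the original.

```python
class StringView:
    """A lightweight view into a source string: no copy is made
    until the text is actually materialized."""

    def __init__(self, source, start, end):
        self.source = source
        self.start = start
        self.end = end

    def __len__(self):
        return self.end - self.start

    def __iter__(self):
        # Read characters straight out of the source on iteration.
        for i in range(self.start, self.end):
            yield self.source[i]

    def materialize(self):
        # Only here does an actual slice (copy) happen.
        return self.source[self.start:self.end]

v = StringView("hello world", 6, 11)
```

Note that CPython later grew memoryview for bytes-like objects, but plain str still copies on slicing, which is what motivates this pattern.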
Q_Id: 1,288,959 | A_Id: 2,618,041 | CreationDate: 2009-08-17T16:19:00.000
Title: Python Pyme: Simple decryption without user interaction
Tags: python,gnupg,pyme,gpgme | Topics: Other
Q_Score: 0 | Users Score: 0 | Score: 1.2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 1,048 | is_accepted: true
Question: I am using Pyme to interface with GPGME and have had no problems signing / encrypting. When I try to decrypt, however, it always brings up the prompt for the passphrase despite having set it via a c.set_passphrase_cb callback. Am I doing something wrong?
Answer: Add "c.set_armor(1)" before you set the passphrase callback.
Q_Id: 1,289,813 | A_Id: 26,808,887 | CreationDate: 2009-08-17T19:04:00.000
Title: python multiprocessing vs threading for cpu bound work on windows and linux
Tags: python,multiprocessing | Topics: Python Basics and Environment
Q_Score: 31 | Users Score: 2 | Score: 0.07983 | AnswerCount: 5 | Available Count: 3 | ViewCount: 10,563 | is_accepted: false
Question: So I knocked up some test code to see how the multiprocessing module would scale on cpu bound work compared to threading. On Linux I get the performance increase that I'd expect:

    linux (dual quad core xeon):
    serialrun took 1192.319 ms
    parallelrun took 346.727 ms
    threadedrun took 2108.172 ms

My dual core MacBook Pro shows the same behavior:

    osx (dual core macbook pro):
    serialrun took 2026.995 ms
    parallelrun took 1288.723 ms
    threadedrun took 5314.822 ms

I then went and tried it on a Windows machine and got some very different results:

    windows (i7 920):
    serialrun took 1043.000 ms
    parallelrun took 3237.000 ms
    threadedrun took 2343.000 ms

Why oh why is the multiprocessing approach so much slower on Windows? Here's the test code:

    #!/usr/bin/env python
    import multiprocessing
    import threading
    import time

    def print_timing(func):
        def wrapper(*arg):
            t1 = time.time()
            res = func(*arg)
            t2 = time.time()
            print '%s took %0.3f ms' % (func.func_name, (t2-t1)*1000.0)
            return res
        return wrapper

    def counter():
        for i in xrange(1000000):
            pass

    @print_timing
    def serialrun(x):
        for i in xrange(x):
            counter()

    @print_timing
    def parallelrun(x):
        proclist = []
        for i in xrange(x):
            p = multiprocessing.Process(target=counter)
            proclist.append(p)
            p.start()
        for i in proclist:
            i.join()

    @print_timing
    def threadedrun(x):
        threadlist = []
        for i in xrange(x):
            t = threading.Thread(target=counter)
            threadlist.append(t)
            t.start()
        for i in threadlist:
            i.join()

    def main():
        serialrun(50)
        parallelrun(50)
        threadedrun(50)

    if __name__ == '__main__':
        main()

Answer: Just starting the pool takes a long time. I have found in 'real world' programs that if I can keep a pool open and reuse it for many different processes, passing the reference down through method calls (usually using map_async), then on Linux I can save a few percent, but on Windows I can often halve the time taken. Linux is always quicker for my particular problems, but even on Windows I get net benefits from multiprocessing.

Q_Id: 1,289,813 | A_Id: 1,289,849 | CreationDate: 2009-08-17T19:04:00.000
Title: python multiprocessing vs threading for cpu bound work on windows and linux
Tags: python,multiprocessing | Topics: Python Basics and Environment
Q_Score: 31 | Users Score: 23 | Score: 1.2 | AnswerCount: 5 | Available Count: 3 | ViewCount: 10,563 | is_accepted: true
Question: identical to the record above (same Q_Id).
Answer: Processes are much more lightweight under UNIX variants. Windows processes are heavy and take much more time to start up. Threads are the recommended way of doing multiprocessing on Windows.

Q_Id: 1,289,813 | A_Id: 1,289,977 | CreationDate: 2009-08-17T19:04:00.000
Title: python multiprocessing vs threading for cpu bound work on windows and linux
Tags: python,multiprocessing | Topics: Python Basics and Environment
Q_Score: 31 | Users Score: 1 | Score: 0.039979 | AnswerCount: 5 | Available Count: 3 | ViewCount: 10,563 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: Currently, your counter() function is not modifying much state. Try changing counter() so that it modifies many pages of memory. Then run a cpu bound loop. See if there is still a large disparity between Linux and Windows. I'm not running python 2.6 right now, so I can't try it myself.
Q_Id: 1,289,941 | A_Id: 1,475,164 | CreationDate: 2009-08-17T19:33:00.000
Title: Web Application Frameworks: C++ vs Python
Tags: c++,python,wt | Topics: Web Development; GUI and Desktop Applications
Q_Score: 12 | Users Score: 0 | Score: 0 | AnswerCount: 7 | Available Count: 2 | ViewCount: 10,487 | is_accepted: false
Question: I am familiar with both Python and C++ as a programmer. I was thinking of writing my own simple web application and I wanted to know which language would be more appropriate for server-side web development. Some things I'm looking for: It has to be intuitive. I recognize that Wt exists and it follows the model of Qt. The one thing I hate about Qt is that they encourage strange syntax through obfuscated means (e.g. the "public slots:" idiom). If I'm going to write C++, I need it to be standard, recognizable, clean code. No fancy shmancy silliness that Qt provides. The less non-C++ or Python code I have to write, the better. The thing about Django (Python web framework) is that it requires you pretty much write the HTML by hand. I think it would be great if HTML forms took more of a wxWidgets approach. Wt is close to this but follows the Qt model instead of wxWidgets. I'm typically writing video games with C++ and I have no experience in web development. I want to write a nice web site for many reasons. I want it to be a learning experience, I want it to be fun, and I want to easily be able to concentrate on "fun stuff" (e.g. less boilerplate, more meat of the app). Any tips for a newbie web developer? I'm guessing web app frameworks are the way to go, but it's just a matter of picking one.
Answer: Having looked at several (Django, Pylons, web2py, Wt), my recommendation is web2py. It's a Python take on "Ruby on Rails" and easy to learn.

Q_Id: 1,289,941 | A_Id: 1,290,620 | CreationDate: 2009-08-17T19:33:00.000
Title: Web Application Frameworks: C++ vs Python
Tags: c++,python,wt | Topics: Web Development; GUI and Desktop Applications
Q_Score: 12 | Users Score: 1 | Score: 0.028564 | AnswerCount: 7 | Available Count: 2 | ViewCount: 10,487 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: I think you'd better go with Python first in your case; meanwhile you can extend CppCMS's functionality and write your own framework around it. Wt was a good design idea, but somehow not that suitable.
Q_Id: 1,290,030 | A_Id: 8,500,864 | CreationDate: 2009-08-17T19:47:00.000
Title: python: Overhead to using classes instead of dictionaries?
Tags: python,class,dictionary | Topics: Python Basics and Environment
Q_Score: 6 | Users Score: 5 | Score: 0.16514 | AnswerCount: 6 | Available Count: 4 | ViewCount: 2,887 | is_accepted: false
Question: First, I'd like to point out that I know OOP concepts and understand the differences between dictionaries and classes. My question is about what makes sense design-wise in this case: I am designing a webapp in Python and I have to represent something like a book object. Books have chapters and chapters have titles and content. For simplicity, let's say that the content is plain text. My question is, should I make the book and chapter classes or dictionaries? I know it'd look neater to use book.chapter instead of book['chapter'], and if I end up having methods in the future, it might make sense to put them in the book class. However, I'd like to know if there is any overhead to using classes instead of storing the information in dictionaries? If I don't want to instantiate a book object from a database every time and store it as a pickle, I'd have to be worried about incompatibility with past book objects if I add/remove data members from a class. I feel that it'd be easier to deal with this problem in dictionaries. Any pointers on whether/when it makes sense to use dictionaries instead of classes?
Answer: You actually summarize the trade-offs quite well. It seems lots of people get worried far too early about performance. Rather than repeat the standard advice on the subject, I'll suggest you search the web for "Knuth premature optimization". The fact of the matter is that for objects of known structure you will be very much happier using class-based objects than dicts. Again, your desire not to instantiate objects each time their (instance) data is read in from the database represents a slightly unhealthy focus on the wrong parts of your program's design. Creating an instance of one of your classes from data attributes takes very little time, and the convenience of being able to add behavior through methods is well worth the slight extra complexity. Using a dict with constant subscripts to reference the elements of a data object seems wrong-headed to me. You are effectively emulating Python's namespace mechanism and creating lots of unnecessary string constants (which won't necessarily be consolidated by the interpreter). If you're really interested in speed, then why not use a list, with symbolic constants for the field names? Answer: because it would be wrong-headed to contort your code this way for a potentially illusory increase in execution speed that in 99% of all cases (a figure I just plucked from my ass) isn't going to be noticed because the application isn't CPU-bound anyway. Write your program the easiest way you know how. If it works and runs fast enough, move on to the next task.

Q_Id: 1,290,030 | A_Id: 1,290,102 | CreationDate: 2009-08-17T19:47:00.000
Title: python: Overhead to using classes instead of dictionaries?
Tags: python,class,dictionary | Topics: Python Basics and Environment
Q_Score: 6 | Users Score: 1 | Score: 0.033321 | AnswerCount: 6 | Available Count: 4 | ViewCount: 2,887 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: If you go with objects, I wouldn't store the data pickled in the database, simply for the reasons you gave. It would be considerably worse if you underwent a change of language or similar. FWIW, I would start with a dictionary. If things get complicated or new features are needed, make it an object.

Q_Id: 1,290,030 | A_Id: 1,290,044 | CreationDate: 2009-08-17T19:47:00.000
Title: python: Overhead to using classes instead of dictionaries?
Tags: python,class,dictionary | Topics: Python Basics and Environment
Q_Score: 6 | Users Score: 4 | Score: 0.132549 | AnswerCount: 6 | Available Count: 4 | ViewCount: 2,887 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: Objects were originally created to bundle data with functionality. If you just want to store data, use dictionaries. If you want to include methods for manipulating the data, use objects.

Q_Id: 1,290,030 | A_Id: 1,290,060 | CreationDate: 2009-08-17T19:47:00.000
Title: python: Overhead to using classes instead of dictionaries?
Tags: python,class,dictionary | Topics: Python Basics and Environment
Q_Score: 6 | Users Score: 8 | Score: 1 | AnswerCount: 6 | Available Count: 4 | ViewCount: 2,887 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: A few thoughts:
  - If you start with a dictionary, you can always switch to a custom class later that implements the mapping protocol (or subclasses dict). So it's probably a good starting point.
  - You can define custom Python objects to use __slots__, which will be faster and more memory efficient if you have a large number of objects.
  - If you use a custom Python object, it will be easier to replace it in the future with an object written in C. (I've never tried it, but I would expect that subclassing dict from C would be a tricky proposition.)
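The __slots__ suggestion in the last answer above can be sketched directly; the Chapter classes mirror the question's book/chapter domain and are illustrative only.

```python
class Chapter:
    # __slots__ replaces the per-instance __dict__ with fixed
    # storage: less memory per object and slightly faster
    # attribute access when you have many instances.
    __slots__ = ("title", "content")

    def __init__(self, title, content):
        self.title = title
        self.content = content

class PlainChapter:
    # Ordinary class for comparison: every instance carries a dict.
    def __init__(self, title, content):
        self.title = title
        self.content = content

slim = Chapter("Intro", "Once upon a time...")
fat = PlainChapter("Intro", "Once upon a time...")
```

A side effect worth knowing: slim.notes = "x" raises AttributeError because a slotted instance has no __dict__, whereas fat accepts arbitrary new attributes silently.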
Q_Id: 1,290,470 | A_Id: 1,330,208 | CreationDate: 2009-08-17T21:10:00.000
Title: Django Admin Inline Change List
Tags: python,django,django-admin | Topics: Web Development
Q_Score: 7 | Users Score: 3 | Score: 0.291313 | AnswerCount: 2 | Available Count: 1 | ViewCount: 2,622 | is_accepted: false
Question: I can edit a parent-child relationship using the TabularInline and StackedInline classes; however, I would prefer to list the child relationships as a change list, as there is a lot of information and the forms are too big. Is there an inline change list available in Django admin, or a way of creating one?
Answer: There's no such functionality built in, but I don't think it would be hard to create your own AdminInline subclass (and an accompanying template for it) that would do this. Just model it off TabularInline, but display fields' data directly instead of rendering form fields.
Q_Id: 1,290,779 | A_Id: 1,290,858 | CreationDate: 2009-08-17T22:13:00.000
Title: IsPointInsideSegment(pt, line) in Python
Tags: python,geometry | Topics: Python Basics and Environment
Q_Score: 1 | Users Score: 0 | Score: 0 | AnswerCount: 3 | Available Count: 1 | ViewCount: 885 | is_accepted: false
Question: Is there a nice way to determine if a point lies within a 3D line segment? I know there are algorithms that determine the distance between a point and line segment, but I'm wondering if there's something more compact or efficient.
Answer: First, find the distance from the point to the line. If the distance from the point to the line is zero, then it's on the line.
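The distance-based test in the answer above can be made concrete for a segment (not just the infinite line) by clamping the projection to the segment before measuring the distance; the function name and tolerance handling are assumptions for illustration.

```python
import math

def point_on_segment(p, a, b, eps=1e-9):
    # True when the distance from point p to segment a-b (3D) is <= eps.
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    ab2 = sum(c * c for c in ab)
    if ab2 == 0:                        # degenerate segment: a == b
        return math.dist(p, a) <= eps
    t = sum(ap[i] * ab[i] for i in range(3)) / ab2
    t = max(0.0, min(1.0, t))           # clamp projection onto the segment
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest) <= eps
```

The clamp is what distinguishes "on the segment" from "on the line": points past either endpoint project outside [0, 1] and are rejected.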
Q_Id: 1,291,755 | A_Id: 1,291,769 | CreationDate: 2009-08-18T04:10:00.000
Title: How can I tell whether my Django application is running on development server or not?
Tags: python,django,wsgi | Topics: Web Development
Q_Score: 59 | Users Score: 2 | Score: 0.03076 | AnswerCount: 13 | Available Count: 1 | ViewCount: 26,741 | is_accepted: false
Question: How can I be certain whether my application is running on the development server or not? I suppose I could check the value of settings.DEBUG and assume that if DEBUG is True then it's running on the development server, but I'd prefer to know for sure rather than relying on convention.
Answer: settings.DEBUG could be True while running under Apache or some other non-development server. It will still run. As far as I can tell, there is nothing in the run-time environment, short of examining the pid and comparing it to pids in the OS, that will give you this information.
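The answer above notes there is no definitive runtime check. A common heuristic nonetheless (imperfect, and consistent with the answer's caveat) is to look for the runserver management command in sys.argv; this sketch is an assumption, not something the original answer endorses.

```python
import sys

def looks_like_dev_server(argv=None):
    # Heuristic only: "manage.py runserver" puts "runserver" in argv,
    # while Apache/mod_wsgi and other production servers do not.
    # It can be fooled and should not gate anything security-critical.
    argv = sys.argv if argv is None else argv
    return "runserver" in argv
```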
Q_Id: 1,291,991 | A_Id: 1,292,083 | CreationDate: 2009-08-18T05:37:00.000
Title: wxwidgets/wxPython: Resizing a wxFrame while keeping aspect ratio
Tags: wxpython,resize,wxwidgets,frame,aspect-ratio | Topics: GUI and Desktop Applications
Q_Score: 2 | Users Score: 5 | Score: 1.2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 5,993 | is_accepted: true
Question: I have a wx.Frame that has only a single child. While setting up the child's wx.Sizer, I can use the wx.SHAPED flag, which keeps the aspect ratio of the child locked. However, the Frame still has complete freedom in its aspect ratio, and will leave a blank space in the area that is left unused by the child. How can I lock the aspect ratio of the wx.Frame during resizing?
Answer: Sizers cannot be applied to top-level windows (in order to define properties of the windows themselves as opposed to their contents), so unfortunately there is no "true" way to lock in the aspect ratio. Your best bet would be to catch your window's OnSize event, get the size the user wants the window to be (which is stored in the wxSizeEvent), calculate the new width and height according to the aspect ratio, and then immediately call SetSize with the new size. This technique has the drawback of visual artifacts during resizing (the operating system will initially paint the window according to how the user has dragged it, but then immediately redraw it after you call SetSize, leading to a flickering effect), so it isn't a great solution. However, since you cannot lock top-level windows into aspect ratios (and avoid the need to resize with SetSize at all), you'll have to decide whether the flickering is worth it.
Q_Id: 1,292,817 | A_Id: 3,486,971 | CreationDate: 2009-08-18T09:23:00.000
Title: How to automate browsing using python?
Tags: python,browser-automation | Topics: Networking and APIs
Q_Score: 29 | Users Score: 19 | Score: 1 | AnswerCount: 15 | Available Count: 3 | ViewCount: 108,234 | is_accepted: false
Question: Suppose I need to perform a set of procedures on a particular website: say, fill some forms, click the submit button, send the data back to the server, receive the response, again do something based on the response and send the data back to the server of the website. I know there is a webbrowser module in Python, but I want to do this without invoking any web browser. It has to be a pure script. Is there a module available in Python which can help me do that? Thanks
Answer: Selenium will do exactly what you want, and it handles JavaScript.

Q_Id: 1,292,817 | A_Id: 20,679,640 | CreationDate: 2009-08-18T09:23:00.000
Title: How to automate browsing using python?
Tags: python,browser-automation | Topics: Networking and APIs
Q_Score: 29 | Users Score: 1 | Score: 0.013333 | AnswerCount: 15 | Available Count: 3 | ViewCount: 108,234 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: The best solution that I have found (and am currently implementing) is:
  - scripts in Python using the Selenium WebDriver
  - the PhantomJS headless browser (if Firefox is used you will have a GUI and it will be slower)

Q_Id: 1,292,817 | A_Id: 3,988,708 | CreationDate: 2009-08-18T09:23:00.000
Title: How to automate browsing using python?
Tags: python,browser-automation | Topics: Networking and APIs
Q_Score: 29 | Users Score: 0 | Score: 0 | AnswerCount: 15 | Available Count: 3 | ViewCount: 108,234 | is_accepted: false
Question: identical to the record above (same Q_Id).
Answer: httplib2 + BeautifulSoup. Use Firefox + Firebug + HttpReplay to see what the JavaScript passes to and from the browser from the website. Using httplib2 you can essentially do the same via POST and GET.
Q_Id: 1,292,931 | A_Id: 5,147,647 | CreationDate: 2009-08-18T09:56:00.000
Title: Renaming objects with Wing IDE
Tags: python,eclipse,ide | Topics: Python Basics and Environment
Q_Score: 3 | Users Score: 0 | Score: 1.2 | AnswerCount: 2 | Available Count: 1 | ViewCount: 395 | is_accepted: true
Question: I am just transitioning from Eclipse to Wing IDE for my Python code. In Eclipse I had an option to "rename" objects. I could take any object defined in my code, it could be a variable or a function or a method or whatever, and I could rename it automatically in all the files that referenced it. Is there a similar feature in Wing IDE?
Answer: WingIDE 4 now has support for refactoring.
1,293,518
2009-08-18T12:28:00.000
2
0
0
0
python,proxy,ftp,ftplib
1,293,579
6
false
0
0
The standard ftplib module doesn't support proxies. It seems the only solution is to write your own customized version of ftplib.
1
9
0
I'm developing an FTP client in Python ftplib. How do I add proxies support to it (most FTP apps I have seen seem to have it)? I'm especially thinking about SOCKS proxies, but also other types... FTP, HTTP (is it even possible to use HTTP proxies with FTP program?) Any ideas how to do it?
Proxies in Python FTP application
0.066568
0
1
18,202
1,294,382
2009-08-18T14:50:00.000
16
0
1
0
python,python-internals,gil
1,294,430
8
false
0
0
Whenever two threads have access to the same variable you have a problem. In C++, for instance, the way to avoid the problem is to define a mutex lock to prevent two threads from, let's say, entering the setter of an object at the same time. Multithreading is possible in Python, but two threads cannot be executed at the same time at a granularity finer than one Python instruction. The running thread holds a global lock called the GIL. This means that if you begin writing some multithreaded code in order to take advantage of your multicore processor, your performance won't improve. The usual workaround consists of going multiprocess. Note that it is possible to release the GIL if you're inside a method you wrote in C, for instance. The use of a GIL is not inherent to Python but to some of its interpreters, including the most common, CPython. (Edited, see comments.) The GIL issue is still valid in Python 3000.
2
265
0
What is a global interpreter lock and why is it an issue? A lot of noise has been made around removing the GIL from Python, and I'd like to understand why that is so important. I have never written a compiler nor an interpreter myself, so don't be frugal with details, I'll probably need them to understand.
What is the global interpreter lock (GIL) in CPython?
1
0
0
72,549
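One observable consequence of the "one Python instruction at a time" rule described above: in CPython, single bytecode operations such as list.append are effectively atomic, so the sketch below never loses an update even without an explicit lock. (Thread counts and tags here are arbitrary; this demonstrates GIL serialization, not a performance claim.)

```python
import threading

results = []

def worker(tag, n):
    # list.append is a single bytecode operation; under the GIL it is
    # effectively atomic in CPython, so no explicit lock is needed here.
    for i in range(n):
        results.append((tag, i))

threads = [threading.Thread(target=worker, args=(t, 1000)) for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # prints 4000: every append from every thread survived
```

The flip side, as the answer notes, is that these four threads never run Python bytecode truly in parallel; for CPU-bound speedups the usual route is the multiprocessing module.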
1,294,382
2009-08-18T14:50:00.000
63
0
1
0
python,python-internals,gil
1,294,398
8
false
0
0
Suppose you have multiple threads which don't really touch each other's data. Those should execute as independently as possible. If you have a "global lock" which you need to acquire in order to (say) call a function, that can end up as a bottleneck. You can wind up not getting much benefit from having multiple threads in the first place. To put it into a real-world analogy: imagine 100 developers working at a company with only a single coffee mug. Most of the developers would spend their time waiting for coffee instead of coding. None of this is Python-specific - I don't know the details of why Python needed a GIL in the first place. However, hopefully it's given you a better idea of the general concept.
2
265
0
What is a global interpreter lock and why is it an issue? A lot of noise has been made around removing the GIL from Python, and I'd like to understand why that is so important. I have never written a compiler nor an interpreter myself, so don't be frugal with details, I'll probably need them to understand.
What is the global interpreter lock (GIL) in CPython?
1
0
0
72,549
1,294,385
2009-08-18T14:50:00.000
0
0
0
0
python,mysql,file-io,blob
1,294,488
2
false
0
0
You can insert and read BLOBs from a DB like any other column type. From the database API's point of view there is nothing special about BLOBs.
1
15
0
I want to write a python script that populates a database with some information. One of the columns in my table is a BLOB that I would like to save a file to for each entry. How can I read the file (binary) and insert it into the DB using python? Likewise, how can I retrieve it and write that file back to some arbitrary location on the hard drive?
How to insert / retrieve a file stored as a BLOB in a MySQL db using python
0
1
0
25,537
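The "nothing special about BLOBs" point above can be shown with the standard DB-API pattern. This sketch uses the stdlib sqlite3 module for self-containment; a MySQL driver such as MySQLdb takes the identical parameterized form (with %s placeholders instead of ?), so the table name and functions here are illustrative:

```python
import sqlite3

def store_file(conn, name, path):
    """Read a file as raw bytes and insert it into a BLOB column."""
    with open(path, "rb") as f:
        blob = f.read()
    conn.execute("INSERT INTO files (name, data) VALUES (?, ?)",
                 (name, blob))   # the driver handles BLOB escaping
    conn.commit()

def fetch_file(conn, name, out_path):
    """Pull the BLOB back out and write it to an arbitrary location."""
    row = conn.execute("SELECT data FROM files WHERE name = ?",
                       (name,)).fetchone()
    with open(out_path, "wb") as f:
        f.write(row[0])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT, data BLOB)")
```

The key detail is using a parameterized query and passing the raw bytes as the parameter, rather than splicing binary data into the SQL string.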
1,294,618
2009-08-18T15:25:00.000
3
0
0
1
python,google-app-engine,registration
1,294,678
1
false
1
0
Call or write to Google! Google's policies are very exact and very strict, because they are catering to thousands of developers, and thus need those standards and uniformity. But if you have a good reason for needing more than 10, and you can get a real person at the end of a telephone line, I'd think you'd have a good chance of getting the limit raised. Alternatively, you could just get a friend or co-worker to register. That seems like it ought to be legal...but check the User Agreement first.
1
14
0
Does anyone know any "legal" way to surpass the 10-app limit Google imposes? I wouldn't mind paying, or anything, but I wasn't able to find a way to have more than 10 apps, and I can't remove one either.
how to register more than 10 apps in Google App Engine
0.53705
0
0
1,877
1,296,162
2009-08-18T20:02:00.000
3
0
0
0
python,c
1,296,188
3
false
0
0
You can embed a Python interpreter in a C program, but I think the easiest solution is to write a Python script that converts the pickles into another format, e.g. an SQLite database.
1
23
0
I am working on integrating with several music players. At the moment my favorite is exaile. In the new version they are migrating the database format from SQLite3 to an internal Pickle format. I wanted to know if there is a way to access pickle format files without having to reverse engineer the format by hand. I know there is the cPickle python module, but I am unaware if it is callable directly from C.
How can I read a python pickle database/file from C?
0.197375
1
0
27,544
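A sketch of the conversion script the answer suggests. For illustration it assumes the pickle holds a flat dict of string keys and values; the real exaile format would need inspecting first, so the table layout here is an assumption:

```python
import pickle
import sqlite3

def pickle_to_sqlite(pickle_path, db_path):
    """Load a pickled dict and dump it into an SQLite table that a
    C program can then read with the ordinary sqlite3 C API."""
    with open(pickle_path, "rb") as f:
        data = pickle.load(f)   # assumed shape: {key: value} of strings
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS items (key TEXT PRIMARY KEY, value TEXT)")
    conn.executemany("INSERT OR REPLACE INTO items VALUES (?, ?)",
                     data.items())
    conn.commit()
    conn.close()
```

Run once after each library change, this sidesteps the pickle format entirely on the C side.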
1,296,446
2009-08-18T20:52:00.000
1
1
0
0
python,email,gmail,imap,pop3
1,296,476
4
false
0
0
Just go to the Gmail web interface, do an advanced search by date, then select all and mark as read.
2
5
0
Long story short, I created a new gmail account, and linked several other accounts to it (each with 1000s of messages), which I am importing. All imported messages arrive as unread, but I need them to appear as read. I have a little experience with python, but I've only used mail and imaplib modules for sending mail, not processing accounts. Is there a way to bulk process all items in an inbox, and simply mark messages older than a specified date as read?
Parse Gmail with Python and mark all older than date as "read"
0.049958
0
1
5,833
1,296,446
2009-08-18T20:52:00.000
1
1
0
0
python,email,gmail,imap,pop3
1,296,465
4
false
0
0
Rather than try to parse the HTML, why not just use the IMAP interface? Hook it up to a standard mail client, then just sort by date and mark whichever ones you want as read.
2
5
0
Long story short, I created a new gmail account, and linked several other accounts to it (each with 1000s of messages), which I am importing. All imported messages arrive as unread, but I need them to appear as read. I have a little experience with python, but I've only used mail and imaplib modules for sending mail, not processing accounts. Is there a way to bulk process all items in an inbox, and simply mark messages older than a specified date as read?
Parse Gmail with Python and mark all older than date as "read"
0.049958
0
1
5,833
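For a scripted version of the IMAP route suggested above, the stdlib imaplib module can do the search-and-flag in bulk. The server, credentials, and mailbox below are placeholders, and the date-formatting helper is my own addition:

```python
import imaplib
from datetime import date

def imap_date(d):
    """Format a date the way IMAP SEARCH expects, e.g. 01-Aug-2009."""
    return d.strftime("%d-%b-%Y")

def mark_read_before(host, user, password, cutoff):
    """Flag every INBOX message received before `cutoff` as read."""
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, password)
    conn.select("INBOX")
    # Find message numbers for everything older than the cutoff date...
    typ, data = conn.search(None, "BEFORE", imap_date(cutoff))
    for num in data[0].split():
        # ...and set the \Seen flag (i.e. mark as read) on each.
        conn.store(num, "+FLAGS", "\\Seen")
    conn.logout()

# Hypothetical invocation (placeholders, not real credentials):
# mark_read_before("imap.gmail.com", "me@gmail.com", "app-password",
#                  date(2009, 8, 1))
```

For Gmail, IMAP access must be enabled in the account settings first.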
1,296,640
2009-08-18T21:32:00.000
3
0
1
0
python,ironpython
1,296,734
4
false
0
0
You can use the Python standard library from IronPython just fine. Here's how: install CPython, set up an environment variable named IRONPYTHONPATH that points to the standard library directory, and the next time ipy.exe is run, site.py is read and you're good to go.
1
6
0
I tried IronPython some time ago and it seemed that it implements only python language, and uses .NET for libraries. Is this still the case? Can one use python modules from IronPython?
Does IronPython implement python standard library?
0.148885
0
0
3,559
1,298,020
2009-08-19T06:01:00.000
2
0
1
0
php,python,ruby,ide,smalltalk
22,278,806
6
false
0
0
OK, so this is ancient, but I simply can't believe no one thought of opening several browsers. I don't think I've ever seen anyone programming in a Smalltalk system with just one. Class browsers, hierarchy browsers, protocol browsers... Yes, each one shows a single method's source at any one time, but just point each one at a different method!
4
4
0
I'm working on an IDE for Python, Ruby and PHP. Never having used Smalltalk myself (even though it was very popular when I was at university), I wonder whether the classic Smalltalk browser, which displays only one method, is really an improvement over classical file editing or not. I myself like to have an overview of as much as possible in a class. Right now I use a 24" 1280x1920 display in a two-column mode which can display a lot of lines at once. I have to wonder what the benefit is if you, for example, also have a good code-folding editor where a user can fold all defs (function code bodies) with one keystroke. But I see the request to make xxx more Smalltalk-ish from time to time in newsgroups. I know some might want an image-based version, but the browser was the second most distinctive Smalltalk invention.
How useful would be a Smalltalk source code browser for other programming languages?
0.066568
0
0
1,422
1,298,020
2009-08-19T06:01:00.000
3
0
1
0
php,python,ruby,ide,smalltalk
1,334,764
6
false
0
0
Eclipse offers a Smalltalk-like browser, the Java browsing perspective. Even though I am a Smalltalker myself, I almost never use it. Why? The powerful part of the Smalltalk IDE is the debugger, not the browser. When coding Smalltalk, I do everything test-first and then fix all missing methods in the debugger while running the test. Having this for any other language would be like... WOW, JUST WOW, so go ahead and do this :)
4
4
0
I'm working on an IDE for Python, Ruby and PHP. Never having used Smalltalk myself (even though it was very popular when I was at university), I wonder whether the classic Smalltalk browser, which displays only one method, is really an improvement over classical file editing or not. I myself like to have an overview of as much as possible in a class. Right now I use a 24" 1280x1920 display in a two-column mode which can display a lot of lines at once. I have to wonder what the benefit is if you, for example, also have a good code-folding editor where a user can fold all defs (function code bodies) with one keystroke. But I see the request to make xxx more Smalltalk-ish from time to time in newsgroups. I know some might want an image-based version, but the browser was the second most distinctive Smalltalk invention.
How useful would be a Smalltalk source code browser for other programming languages?
0.099668
0
0
1,422
1,298,020
2009-08-19T06:01:00.000
0
0
1
0
php,python,ruby,ide,smalltalk
1,298,447
6
false
0
0
I have a love/hate relationship with the Smalltalk browsers (Squeak in my case). At times I think they are the best things since sliced bread, and at others they make me grind my teeth. The problem with Smalltalk is that the browsers are basically all you have. You can of course write your own, but very few people go that route. Whereas with file-based languages, I have a choice of ways of looking at the code, using completely different editors or environments if I wish. However, one way of looking at the code I've never wanted is one that only lets me see one method at a time.
4
4
0
I'm working on an IDE for Python, Ruby and PHP. Never having used Smalltalk myself (even though it was very popular when I was at university), I wonder whether the classic Smalltalk browser, which displays only one method, is really an improvement over classical file editing or not. I myself like to have an overview of as much as possible in a class. Right now I use a 24" 1280x1920 display in a two-column mode which can display a lot of lines at once. I have to wonder what the benefit is if you, for example, also have a good code-folding editor where a user can fold all defs (function code bodies) with one keystroke. But I see the request to make xxx more Smalltalk-ish from time to time in newsgroups. I know some might want an image-based version, but the browser was the second most distinctive Smalltalk invention.
How useful would be a Smalltalk source code browser for other programming languages?
0
0
0
1,422
1,298,020
2009-08-19T06:01:00.000
2
0
1
0
php,python,ruby,ide,smalltalk
1,302,807
6
false
0
0
VisualAge for Java used the Smalltalk browser model for coding, and I thought they (IBM) did a great job of taking a typical file-based language and lifting it up to a higher conceptual mode. Instantiations even had a great add-on to bring good refactoring tools to VAJ (people either don't know or forget for which language refactoring tools were introduced first... take a guess ;) Of course I had cut my teeth on Smalltalk, then moved to C++ for a number of years (too many) and was pleased to see anything Smalltalk-like. When I saw that IBM was seriously moving on to Eclipse I was flabbergasted. But most of my co-workers at the time did not like not being able to see the entire text of the .java file at once. I would ask, "Why not just have only one method in a class, so that you can see all of the class file at once?" Then someone would reply, "Then I wouldn't be able to decompose my code very well at all!" To which I would reply, "If your code is decomposed well, why do you need to see every method at once?" And then I would get a response about things being slower somehow... Development environments that throw in your face the fact that the code database is a system of text files, and force you to work with the code that way, have always seemed backwards to me... particularly in the case of OO languages. Having said that, there are a number of things that I don't like about the traditional Smalltalk browser. I've often wanted a better way of navigating across the browser instances that I've opened and visited. Whenever you work with code there is invariably a context of methods and classes that you are working with (modifying and/or viewing) - it should be simpler to navigate around the context that dynamically develops while you work. I would also like to be able to easily compose a view of 2-3 method bodies together at one time - something that a code-folding editor can sort of give you, at least for one file...
4
4
0
I'm working on an IDE for Python, Ruby and PHP. Never having used Smalltalk myself (even though it was very popular when I was at university), I wonder whether the classic Smalltalk browser, which displays only one method, is really an improvement over classical file editing or not. I myself like to have an overview of as much as possible in a class. Right now I use a 24" 1280x1920 display in a two-column mode which can display a lot of lines at once. I have to wonder what the benefit is if you, for example, also have a good code-folding editor where a user can fold all defs (function code bodies) with one keystroke. But I see the request to make xxx more Smalltalk-ish from time to time in newsgroups. I know some might want an image-based version, but the browser was the second most distinctive Smalltalk invention.
How useful would be a Smalltalk source code browser for other programming languages?
0.066568
0
0
1,422
1,298,037
2009-08-19T06:07:00.000
1
0
1
0
python,shelve,temporary-files
1,298,162
1
true
0
0
I would rather inherit from shelve.Shelf and override the close method (*) to unlink the files. Notice that, depending on the specific dbm module being used, you may have more than one file that contains the shelf. One solution could be to create a temporary directory, rather than a temporary file, and remove everything in the directory when done. The other solution would be to bind to a specific dbm module (say, bsddb or dumbdbm) and remove specifically those files that these libraries create. (*) Notice that the close method of a shelf is also called when the shelf is garbage-collected. The only way you could end up with garbage files is if the interpreter crashes or gets killed.
1
1
0
Basically, I want an infinite-size (more accurately, hard-drive-bound rather than memory-bound) dict in a Python program I'm writing. It seems like the tempfile and shelve modules are naturally suited for this; however, I can't see how to use them together in a safe manner. I want the tempfile to be deleted when the shelve is GCed (or to at least guarantee deletion after the shelve is out of use, regardless of when), but the only solution I can come up with for this involves using tempfile.TemporaryFile() to open a file handle, getting the filename from the handle, using this filename for opening a shelve, keeping the reference to the file handle to prevent it from getting GCed (and the file deleted), and then putting a wrapper on the shelve that stores this reference. Anyone have a better solution than this convoluted mess? Restrictions: can only use the standard Python library and must be fully cross-platform.
Is there an easy way to use a python tempfile in a shelve (and make sure it cleans itself up)?
1.2
0
0
606
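The temporary-directory variant the accepted answer proposes can be sketched as below (the class and attribute names are mine). The whole directory is removed on close, so it doesn't matter how many files the underlying dbm backend created:

```python
import os
import shelve
import shutil
import tempfile

class TempShelf(shelve.DbfilenameShelf):
    """A shelf that lives in its own temporary directory and removes
    the whole directory (whatever files the dbm backend created)
    when closed - including via garbage collection, since Shelf's
    __del__ calls close()."""

    def __init__(self):
        self._tmpdir = tempfile.mkdtemp()
        super().__init__(os.path.join(self._tmpdir, "shelf"))

    def close(self):
        super().close()   # flush and close the underlying dbm files
        shutil.rmtree(self._tmpdir, ignore_errors=True)
```

Because Shelf.close() is a no-op on an already-closed shelf and rmtree ignores a missing directory, calling close() twice (explicitly and then at GC time) is harmless.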
1,299,018
2009-08-19T10:32:00.000
0
0
1
1
java,python,installation
1,299,061
5
false
0
0
Runtime.exec(String command) executes the specified string command in a separate process. Use it to check for Python from the command line.
3
3
0
How can I check from inside a Java program if Python is installed on Windows? Python does not add its path to the system Path, and no assumption is to be made about the probable path of installation (i.e., it can be installed anywhere).
How to check if python is installed in windows from java?
0
0
0
13,675
1,299,018
2009-08-19T10:32:00.000
2
0
1
1
java,python,installation
1,299,062
5
true
0
0
Use the Java Runtime to exec the following command "python --version". If it works, you have Python, and the standard output is the version number. If it doesn't work, you don't have Python.
3
3
0
How can I check from inside a Java program if Python is installed on Windows? Python does not add its path to the system Path, and no assumption is to be made about the probable path of installation (i.e., it can be installed anywhere).
How to check if python is installed in windows from java?
1.2
0
0
13,675
1,299,018
2009-08-19T10:32:00.000
-2
0
1
1
java,python,installation
19,837,856
5
false
0
0
This would work: Process process = Runtime.getRuntime().exec("cmd /c C:\\Python27\\python --version"); (note the escaped backslashes required in a Java string literal, and that the installation path is hard-coded).
3
3
0
How can I check from inside a Java program if Python is installed on Windows? Python does not add its path to the system Path, and no assumption is to be made about the probable path of installation (i.e., it can be installed anywhere).
How to check if python is installed in windows from java?
-0.07983
0
0
13,675
1,300,213
2009-08-19T14:09:00.000
0
0
0
0
python,django,postgresql,apache2,mod-wsgi
2,368,542
2
true
1
0
Found it! I'm using eventlet in some other code and I imported one of my modules into a django model. So eventlet was taking over and putting everything to "sleep".
1
1
0
I'm running Django through mod_wsgi and Apache (2.2.8) on Ubuntu 8.04. I've been running Django on this setup for about 6 months without any problems. Yesterday, I moved my database (Postgres 8.3) to its own server, and my Django site started refusing to load (the browser spinner would just keep spinning). It works for about 10 minutes, then just stops. Apache is still able to serve static files, just nothing through Django. I've checked the Apache error logs, and I don't see any entries that could be related. I'm not sure if this is a WSGI, Django, Apache, or Postgres issue. Any ideas? Thanks for your help!
Apache/Django freezing after a few requests
1.2
1
0
435
1,300,265
2009-08-19T14:15:00.000
7
1
1
0
c#,static,ironpython,dynamic-language-runtime
1,301,404
1
true
0
0
You can do: scope.SetVariable("math", DynamicHelpers.GetPythonTypeFromType(typeof(System.Math))); DynamicHelpers is in IronPython.Runtime.Types.
1
5
0
scope.SetVariable("math", ?? typeof(System.Math) ??); — or do I need to create a module?
How to import static class (or static method) into IronPython (or DLR) using C# code(not python)?
1.2
0
0
1,921
1,300,610
2009-08-19T15:14:00.000
2
0
1
0
php,python,substr
1,300,675
3
false
0
0
Python distinguishes between strings and numbers (and actually also between numbers of different kinds, i.e., int vs float) so the best solution depends on what type you start with (str or int?) and what type you want as a result (ditto). Int to int: abs(x) % 10 Int to str: str(x)[-1] Str to int: int(x[-1]) Str to str: x[-1]
1
1
0
I need to be able to get the last digit of a number, i.e., I need 2 to be returned from 12. In PHP it's like this: $minute = substr(date('i'), -1) — but I need this in Python. Any ideas?
Python - substr
0.132549
0
0
2,917
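The four conversions the answer lists, spelled out, plus the minute-of-the-hour case from the PHP snippet in the question:

```python
x_int = 12
x_str = "12"

assert abs(x_int) % 10 == 2    # int -> int (abs() guards negative numbers)
assert str(x_int)[-1] == "2"   # int -> str
assert int(x_str[-1]) == 2     # str -> int
assert x_str[-1] == "2"        # str -> str

# The PHP example, date('i') then last digit, done numerically:
from datetime import datetime
minute_last_digit = datetime.now().minute % 10
```

The arithmetic form (% 10) avoids the string round-trip entirely when you start from a number.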
1,301,000
2009-08-19T16:08:00.000
0
0
0
0
python,sqlalchemy,mod-python
1,301,029
2
true
1
0
Not that I've ever heard of, but it's impossible to tell without some code to look at. Maybe you initialised your result set list as a global, or shared member, and then appended results to it when the application was called without resetting it to empty? A classic way of re-using lists accidentally is to put one in a default argument value to a function. (The same could happen in mod_wsgi of course.)
1
0
0
I have been working on a website using mod_python, python, and SQL Alchemy when I ran into a strange problem: When I query the database for all of the records, it returns the correct result set; however, when I refresh the page, it returns me a result set with that same result set appended to it. I get more result sets "stacked" on top of eachother as I refresh the page more. For example: First page load: 10 results Second page load: 20 results (two of each) Third page load: 30 results (three of each) etc... Is this some underlying problem with mod_python? I don't recall running into this when using mod_wsgi.
mod_python problem?
1.2
1
0
192
1,301,346
2009-08-19T17:11:00.000
4
0
1
0
python,oop,naming-conventions,identifier
1,301,456
17
false
0
0
Your question is good, it is not only about methods. Functions and objects in modules are commonly prefixed with one underscore as well, and can be prefixed by two. But __double_underscore names are not name-mangled in modules, for example. What happens is that names beginning with one (or more) underscores are not imported if you import all from a module (from module import *), nor are the names shown in help(module).
2
1,637
0
Can someone please explain the exact meaning of having single and double leading underscores before an object's name in Python, and the difference between both? Also, does that meaning stay the same regardless of whether the object in question is a variable, a function, a method, etc.?
What is the meaning of single and double underscore before an object name?
0.047024
0
0
597,074
1,301,346
2009-08-19T17:11:00.000
415
0
1
0
python,oop,naming-conventions,identifier
1,301,409
17
false
0
0
__foo__: this is just a convention, a way for the Python system to use names that won't conflict with user names. _foo: this is just a convention, a way for the programmer to indicate that the variable is private (whatever that means in Python). __foo: this has real meaning: the interpreter replaces this name with _classname__foo as a way to ensure that the name will not overlap with a similar name in another class. No other form of underscore has meaning in the Python world. There's no difference between class, variable, global, etc. in these conventions.
2
1,637
0
Can someone please explain the exact meaning of having single and double leading underscores before an object's name in Python, and the difference between both? Also, does that meaning stay the same regardless of whether the object in question is a variable, a function, a method, etc.?
What is the meaning of single and double underscore before an object name?
1
0
0
597,074
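The __foo mangling rule described above can be verified directly (the class and attribute names here are arbitrary examples):

```python
class Widget:
    def __init__(self):
        self.public = 1      # plain name: no convention attached
        self._internal = 2   # single underscore: convention only, still accessible
        self.__private = 3   # double underscore: name-mangled by the interpreter

w = Widget()
print(w.public, w._internal)   # both reachable exactly as written
# print(w.__private)           # AttributeError: no mangling outside the class
print(w._Widget__private)      # the mangled name the interpreter actually stored
```

Note that the single-underscore name is not protected at all at runtime; only the double-underscore form is rewritten, and only inside the class body.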
1,301,352
2009-08-19T17:12:00.000
2
0
1
0
python,shell,vi
1,301,501
10
false
0
0
When working with Vim on the console, I have found that using "tabs" in Vim, instead of having multiple Vim instances suspended in the background, makes handling multiple files in Vim more efficient. It takes a bit of getting used to, but it works really well.
2
17
0
This may sound strange, but I need a better way to build Python scripts than opening a file with nano/vi, changing something, quitting the editor, and typing python script.py, over and over again. I need to build the script on a webserver without any GUI. Any ideas how I can improve my workflow?
Python IDE on Linux Console
0.039979
0
0
30,862
1,301,352
2009-08-19T17:12:00.000
5
0
1
0
python,shell,vi
1,301,418
10
false
0
0
Using emacs with python-mode you can execute the script with C-c C-c
2
17
0
This may sound strange, but I need a better way to build Python scripts than opening a file with nano/vi, changing something, quitting the editor, and typing python script.py, over and over again. I need to build the script on a webserver without any GUI. Any ideas how I can improve my workflow?
Python IDE on Linux Console
0.099668
0
0
30,862
1,301,887
2009-08-19T18:44:00.000
25
0
0
1
python,django,macos,shell,terminal
2,623,452
3
false
1
0
Maybe this is because there was an error while running Django. Sometimes the standard-input echo disappears because stty was used. You can manually hide your input by typing: $ stty -echo — now you won't see what you type. To restore this and solve your problem, just type: $ stty echo. This could help.
2
11
0
I'm confused by some behavior of my Mac OS X Terminal and my Django manage.py shell and pdb. When I start a new terminal, the Standard Input is displayed as I type. However, if there is an error, suddenly Standard Input does not appear on the screen. This error continues until I shut down that terminal window. The Input is still being captured as I can see the Standard Output. E.g. in pdb.set_trace() I can 'l' to display where I'm at in the code. However, the 'l' will not be displayed, just an empty prompt. This makes it hard to debug because I can't determine what I'm typing in. What could be going wrong and what can I do to fix it?
Why is Standard Input not displayed as I type in the Mac OS X Terminal application?
1
0
0
2,868
1,301,887
2009-08-19T18:44:00.000
3
0
0
1
python,django,macos,shell,terminal
2,018,573
3
false
1
0
If you exit pdb you can type reset and standard input echo will return. I'm not sure if you can execute something similar within pdb. It will erase what is currently displayed however.
2
11
0
I'm confused by some behavior of my Mac OS X Terminal and my Django manage.py shell and pdb. When I start a new terminal, the Standard Input is displayed as I type. However, if there is an error, suddenly Standard Input does not appear on the screen. This error continues until I shut down that terminal window. The Input is still being captured as I can see the Standard Output. E.g. in pdb.set_trace() I can 'l' to display where I'm at in the code. However, the 'l' will not be displayed, just an empty prompt. This makes it hard to debug because I can't determine what I'm typing in. What could be going wrong and what can I do to fix it?
Why is Standard Input not displayed as I type in the Mac OS X Terminal application?
0.197375
0
0
2,868
1,302,057
2009-08-19T19:13:00.000
3
0
0
0
c#,python,string,uniqueidentifier,uuid
1,302,155
5
false
1
0
If you are using MS SQL you should probably just use the uniqueidentifier datatype; it is both compact (16 bytes) and, since the SQL engine knows about it, it can optimize indexes and queries using it.
3
9
0
I need to generate a unique record id for a given unique string. I tried using the uuid format, which seems good, but we feel it is lengthy, so we need to cut the uuid string 9f218a38-12cd-5942-b877-80adc0589315 down to something smaller. By removing '-' we can save 4 chars. What is the safest part to remove from the uuid? We don't need a universally unique id, but we would like to use a uuid as a source and cut the string down. We need an id unique to the site/database (SQL Server/ADO.NET Data Services). Any idea or sample from any language is fine. Thanks in advance.
cutdown uuid further to make short string
0.119427
0
0
9,264
1,302,057
2009-08-19T19:13:00.000
2
0
0
0
c#,python,string,uniqueidentifier,uuid
1,302,532
5
false
1
0
A UUID provides (almost) 128 bits of uniqueness. You may shorten it to 16 binary bytes, or 22 base64-encoded characters. I wouldn't recommend removing any part of a UUID; otherwise, it just loses its meaning. UUIDs were designed so that all 128 bits have meaning. If you want less than that, you should use some other scheme. For example, if you could guarantee that only version 4 UUIDs are used, then you could take just the first 32 bits, or just the last 32 bits. You lose uniqueness, but you get pretty random numbers. Just avoid the bits that are fixed (version and variant). But if you can't guarantee that, you will have real problems. For version 1 UUIDs, the first bits will not be unique for UUIDs generated on the same day, and the last bits will not be unique for UUIDs generated on the same system. Even if you CRC the UUID, it is not guaranteed that you will have 16 or 32 bits of uniqueness. In that case, just use some other scheme: generate a 32-bit random number using the system random number generator and use that as your unique ID. Don't rely on UUIDs if you intend to strip them down.
3
9
0
I need to generate a unique record id for a given unique string. I tried using the uuid format, which seems good, but we feel it is lengthy, so we need to cut the uuid string 9f218a38-12cd-5942-b877-80adc0589315 down to something smaller. By removing '-' we can save 4 chars. What is the safest part to remove from the uuid? We don't need a universally unique id, but we would like to use a uuid as a source and cut the string down. We need an id unique to the site/database (SQL Server/ADO.NET Data Services). Any idea or sample from any language is fine. Thanks in advance.
cutdown uuid further to make short string
0.07983
0
0
9,264
1,302,057
2009-08-19T19:13:00.000
2
0
0
0
c#,python,string,uniqueidentifier,uuid
1,302,665
5
false
1
0
The UUID is 128 bits or 16 bytes. With no encoding, you could get it as low as 16 bytes. UUIDs are commonly written in hexadecimal, making them 32 byte readable strings. With other encodings, you get different results: base-64 turns 3 8-bit bytes into 4 6-bit characters, so 16 bytes of data becomes 22 characters long base-85 turns 4 8-bit bytes into 5 6.4-bit characters, so 16 bytes of data becomes 20 characters long It all depends on if you want readable strings and how standard/common an encoding you want to use.
3
9
0
I need to generate a unique record id for a given unique string. I tried using the uuid format, which seems good, but we feel it is lengthy, so we need to cut the uuid string 9f218a38-12cd-5942-b877-80adc0589315 down to something smaller. By removing '-' we can save 4 chars. What is the safest part to remove from the uuid? We don't need a universally unique id, but we would like to use a uuid as a source and cut the string down. We need an id unique to the site/database (SQL Server/ADO.NET Data Services). Any idea or sample from any language is fine. Thanks in advance.
cutdown uuid further to make short string
0.07983
0
0
9,264
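The base-64 arithmetic quoted in the answer above checks out with the standard library — 16 raw bytes become 24 base64 characters including two '=' padding characters, i.e. 22 after stripping, versus 32 in the usual hex form:

```python
import base64
import uuid

u = uuid.uuid4()

raw = u.bytes                                     # 16 bytes, no encoding
b64 = base64.urlsafe_b64encode(raw).rstrip(b"=")  # drop the 2 padding chars
hexed = u.hex                                     # hex form without dashes

print(len(raw), len(b64), len(hexed))             # prints: 16 22 32
```

urlsafe_b64encode is used so the result contains no '/' or '+', which matters if the id ends up in a URL; re-appending "==" lets you decode it back losslessly.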
1,303,270
2009-08-20T00:00:00.000
1
0
1
0
php,python,web-applications,lua
2,008,551
6
false
1
0
If it's only the language, then I agree with Norman. If the web development framework is important to you, then you have to consider Ruby, because RoR is a very mature framework. I love Python, but there seem to be quite a few frameworks to choose from, and none of them is dominant: CherryPy, Django, Pylons, web2py, Zope 2, Zope 3, etc. One important indicator to me is that there are more RoR jobs on the market than for any other language/framework combination.
2
7
0
I'm about to begin my next web development project and wanted to hear about the merits of Lua within the web-development space. How does Lua compare to PHP/Python/JSP/etc.. for web development? Any reason why Lua would be a poor choice for a web application language vs the others?
Lua vs PHP/Python/JSP/etc
0.033321
0
0
11,188
1,303,270
2009-08-20T00:00:00.000
7
0
1
0
php,python,web-applications,lua
1,303,300
6
false
1
0
Using Lua for web development is pretty rare... you could do it, but it will be a lot more time-consuming than using a language that has matured as a web development language (PHP) or has good web-related libraries (Python/Ruby/etc.). If you do go with Lua, this means you may end up "reinventing the wheel" a lot for what may be easily available in mature web languages. The better question is: what does Lua offer that you need which is not offered in the other languages you listed? Or do you want to help Lua become a better web development platform by creating a Lua MVC framework, like Rails did for Ruby?
2
7
0
I'm about to begin my next web development project and wanted to hear about the merits of Lua within the web-development space. How does Lua compare to PHP/Python/JSP/etc.. for web development? Any reason why Lua would be a poor choice for a web application language vs the others?
Lua vs PHP/Python/JSP/etc
1
0
0
11,188
1,303,307
2009-08-20T00:14:00.000
2
0
0
0
python,image,graphics,audio,fft
9,714,655
4
false
0
0
If you need to convert from PCM format to integers, you'll want to use struct.unpack.
1
27
0
How would I go about using Python to read the frequency peaks from a WAV PCM file and then be able to generate an image of it, for spectogram analysis? I'm trying to make a program that allows you to read any audio file, converting it to WAV PCM, and then finding the peaks and frequency cutoffs.
FFT for Spectrograms in Python
0.099668
0
0
35,268
1,303,654
2009-08-20T02:26:00.000
111
0
0
0
python,database,django,multithreading,transactions
1,346,401
1
true
1
0
After weeks of testing and reading the Django source code, I've found the answer to my own question: Transactions Django's default autocommit behavior still holds true for my threaded function. However, it states in the Django docs: As soon as you perform an action that needs to write to the database, Django produces the INSERT/UPDATE/DELETE statements and then does the COMMIT. There’s no implicit ROLLBACK. That last sentence is very literal. It DOES NOT issue a ROLLBACK command unless something in Django has set the dirty flag. Since my function was only doing SELECT statements it never set the dirty flag and didn't trigger a COMMIT. This goes against the fact that PostgreSQL thinks the transaction requires a ROLLBACK because Django issued a SET command for the timezone. In reviewing the logs, I threw myself off because I kept seeing these ROLLBACK statements and assumed Django's transaction management was the source. Turns out it's not, and that's OK. Connections The connection management is where things do get tricky. It turns out Django uses signals.request_finished.connect(close_connection) to close the database connection it normally uses. Since nothing normally happens in Django that doesn't involve a request, you take this behavior for granted. In my case, though, there was no request because the job was scheduled. No request means no signal. No signal means the database connection was never closed. Going back to transactions, it turns out that simply issuing a call to connection.close() in the absence of any changes to the transaction management issues the ROLLBACK statement in the PostgreSQL log that I'd been looking for. Solution The solution is to allow the normal Django transaction management to proceed as normal and to simply close the connection one of three ways: Write a decorator that closes the connection and wrap the necessary functions in it. Hook into the existing request signals to have Django close the connection. Close the connection manually at the end of the function. Any of those three will (and do) work. This has driven me crazy for weeks. I hope this helps someone else in the future!
1
59
0
I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
Threaded Django task doesn't automatically handle transactions or db connections?
1.2
1
0
10,095
1,304,593
2009-08-20T07:45:00.000
0
1
0
0
python,logging,handler
1,304,622
2
false
0
0
You probably need to do both. To figure this out, I suggest installing a local mail server and using that. This way, you can shut it down while your script runs and note down the error message. To keep the code maintainable, you should extend SMTPHandler in such a way that you can handle the exceptions in a single place (instead of wrapping every logger call with try-except).
1
1
0
I have setup the logging module for my new python script. I have two handlers, one sending stuff to a file, and one for email alerts. The SMTPHandler is setup to mail anything at the ERROR level or above. Everything works great, unless the SMTP connection fails. If the SMTP server does not respond or authentication fails (it requires SMTP auth), then the whole script dies. I am fairly new to python, so I am trying to figure out how to capture the exception that the SMTPHandler is raising so that any problems sending the log message via email won't bring down my entire script. Since I am also writing errors to a log file, if the SMTP alert fails, I just want to keep going, not halt anything. If I need a "try:" statement, would it go around the logging.handlers.SMTPHandler setup, or around the individual calls to my_logger.error()?
Python logging SMTPHandler - handling offline SMTP server
0
0
1
1,574
1,304,608
2009-08-20T07:50:00.000
0
0
0
0
python,django,django-models
1,304,634
2
false
1
0
One way to do this would be to break the Item model up into the parts that are individually assignable to a user. If you have fixed user types (admin, customer, team etc.) who can always see the same set of fields, these parts would be whole groups of fields. If it's very dynamic and you want to be able to set up individual fields for each user, each field is a part of its own. That way, you would have a meta-Item which consists solely of an Id that the parts can refer to. This holds together the parts. Then, you would map a user not to the Item but to the parts and reconstruct the item view from the common Id of the parts.
1
0
0
I have a model containing items, which has many different fields. There is another model which assigns a set of this field to each user using a m2m-relation. I want to achieve, that in the end, every user has access to a defined set of fields of the item model, and he only sees these field in views, he can only edit these field etc. Is there any generic way to set this up?
User-specific model in Django
0
0
0
282
1,304,638
2009-08-20T07:59:00.000
0
0
1
0
python
1,304,651
1
true
0
0
setuptools doesn't quite work on Python 3.1 yet. Try installing packages with regular distutils, or use binary packages (.exe, .msi) provided by the package author.
1
1
0
I've tried to install pip on windows, but it's not working: it gives me ImportError: No module named pkg_resources. easy_install doesn't have a version for 3.1, just 2.5, and should be replaced by pip. Is there an easy way to install it on windows?
how to install new packages with python 3.1.1?
1.2
0
0
540
1,305,182
2009-08-20T10:13:00.000
0
0
0
0
python,django
37,114,696
2
false
1
0
You can use the Django authentication system to create users and give them permissions to modify the data.
1
0
0
I have a question regarding editing/saving data in a database using Django. I have a template that shows data from the database, and after each record I have a link that takes you to an edit page. But now the question is how to edit data in the db without using the admin panel? I went through the tutorial in djangobook but I didn't see how to achieve this without using the shell. Thanks in advance!
Django - Edit data in db
0
0
0
525
1,307,014
2009-08-20T15:34:00.000
15
0
1
0
python,string,unicode,conventions
1,307,177
6
false
0
0
With the world getting smaller, chances are that any string you encounter will contain Unicode eventually. So for any new apps, you should at least provide __unicode__(). Whether you also override __str__() is then just a matter of taste.
2
224
0
Is there a python convention for when you should implement __str__() versus __unicode__(). I've seen classes override __unicode__() more frequently than __str__() but it doesn't appear to be consistent. Are there specific rules when it is better to implement one versus the other? Is it necessary/good practice to implement both?
Python __str__ versus __unicode__
1
0
0
69,908
1,307,014
2009-08-20T15:34:00.000
24
0
1
0
python,string,unicode,conventions
1,307,209
6
false
0
0
If I didn't especially care about micro-optimizing stringification for a given class I'd always implement __unicode__ only, as it's more general. When I do care about such minute performance issues (which is the exception, not the rule), having __str__ only (when I can prove there never will be non-ASCII characters in the stringified output) or both (when both are possible), might help. These I think are solid principles, but in practice it's very common to KNOW there will be nothing but ASCII characters without doing effort to prove it (e.g. the stringified form only has digits, punctuation, and maybe a short ASCII name;-) in which case it's quite typical to move on directly to the "just __str__" approach (but if a programming team I worked with proposed a local guideline to avoid that, I'd be +1 on the proposal, as it's easy to err in these matters AND "premature optimization is the root of all evil in programming";-).
2
224
0
Is there a python convention for when you should implement __str__() versus __unicode__(). I've seen classes override __unicode__() more frequently than __str__() but it doesn't appear to be consistent. Are there specific rules when it is better to implement one versus the other? Is it necessary/good practice to implement both?
Python __str__ versus __unicode__
1
0
0
69,908
1,307,322
2009-08-20T16:21:00.000
1
0
0
0
python,iphone,arrays,google-app-engine,app-store
1,307,656
1
true
1
0
Unfortunately the only API that seems to be around for Apple's app store is a commercial offering from ABTO; nobody seems to have developed a free one. I'm afraid you'll have to resort to "screen scraping" -- urlget things, use BeautifulSoup or the like for interpreting the HTML you get, and be ready to fix breakages whenever Apple tweaks their formats &c. It seems Apple has no interest in making such a thing available to developers (although as far as I can tell they're not actively fighting against it either; they appear to just not care).
1
0
0
I'd like to know how to programmatically pull lists of apps from the iPhone app store. I'd code this in Python (via the Google App Engine) or in an iPhone app. My goal would be to select maybe 5 of them and present them to the user (for instance a top 5 kind of thing, or advanced filtering or queries).
How do I programmatically pull lists/arrays of (itunes urls to) apps in the iphone app store?
1.2
0
0
151
1,308,038
2009-08-20T18:36:00.000
4
0
0
0
python,mysql
1,308,060
5
true
0
0
There's almost certainly something in either your query, your table definition, or an ORM you're using that thinks the column is numeric and is converting the results to integers. You'll have to define the column as a string (everywhere!) if you want to preserve leading zeroes. Edit: ZEROFILL on the server isn't going to cut it. Python treats integer columns as Python integers, and those don't have leading zeroes, period. You'll either have to change the column type to VARCHAR, use something like "%02d" % val in Python, or put a CAST(my_column AS VARCHAR) in the query.
1
1
0
When using Python and doing a Select statement to MYSQL to select 09 from a column the zero gets dropped and only the 9 gets printed. Is there any way to pull all of the number i.e. including the leading zero?
Python - MYSQL - Select leading zeros
1.2
1
0
1,789
1,308,376
2009-08-20T19:39:00.000
2
0
0
1
python,google-app-engine,sqlalchemy,orm
11,325,656
2
false
1
0
Nowadays they do since Google has launched Cloud SQL
1
11
0
I'd like to use the Python version of App Engine but rather than write my code specifically for the Google Data Store, I'd like to create my models with a generic Python ORM that could be attached to Big Table, or, if I prefer, a regular database at some later time. Is there any Python ORM such as SQLAlchemy that would allow this?
Do any Python ORMs (SQLAlchemy?) work with Google App Engine?
0.197375
1
0
5,333
1,308,710
2009-08-20T20:45:00.000
2
0
0
0
python,python-imaging-library,reportlab
28,385,327
3
false
0
1
I've found that mask='auto' has stopped working for me with reportlab 3.1.8. In the docs it says to pass the values that you want masked out. So what works for me now is mask=[0, 2, 0, 2, 0, 2, ]. Basically it looks like this: `mask=[red_start, red_end, green_start, green_end, blue_start, blue_end]`. The mask parameter lets you create transparent images. It takes 6 numbers and defines the range of RGB values which will be masked out or treated as transparent. For example with [0,2,40,42,136,139], it will mask out any pixels with a Red value of 0 or 1, Green of 40 or 41 and Blue of 136, 137 or 138 (on a scale of 0-255). It's currently your job to know which color is the 'transparent' or background one. UPDATE: That masks out anything that is rgb(0, 0, 0) or rgb(1, 1, 1), which obviously might not be the right solution. My problem was people uploading png images with a gray color space, so I still need to figure out a way to detect the color space of the image and only apply that mask on gray-space images.
2
29
0
I have two PNGs that I am trying to combine into a PDF using ReportLab 2.3 on Python 2.5. When I use canvas.drawImage(ImageReader) to write either PNG onto the canvas and save, the transparency comes out black. If I use PIL (1.1.6) to generate a new Image, then paste() either PNG onto the PIL Image, it composits just fine. I've double checked in Gimp and both images have working alpha channels and are being saved correctly. I'm not receiving an error and there doesn't seem to be anything my google-fu can turn up. Has anybody out there composited a transparent PNG onto a ReportLab canvas, with the transparency working properly? Thanks!
Transparency in PNGs with reportlab 2.3
0.132549
0
0
11,462
1,308,710
2009-08-20T20:45:00.000
1
0
0
0
python,python-imaging-library,reportlab
1,311,056
3
false
0
1
ReportLab uses PIL for managing images. Currently, PIL trunk has a patch applied to support transparent PNGs, but you will have to wait for a 1.1.6 release if you need a stable package.
2
29
0
I have two PNGs that I am trying to combine into a PDF using ReportLab 2.3 on Python 2.5. When I use canvas.drawImage(ImageReader) to write either PNG onto the canvas and save, the transparency comes out black. If I use PIL (1.1.6) to generate a new Image, then paste() either PNG onto the PIL Image, it composits just fine. I've double checked in Gimp and both images have working alpha channels and are being saved correctly. I'm not receiving an error and there doesn't seem to be anything my google-fu can turn up. Has anybody out there composited a transparent PNG onto a ReportLab canvas, with the transparency working properly? Thanks!
Transparency in PNGs with reportlab 2.3
0.066568
0
0
11,462
1,308,760
2009-08-20T20:56:00.000
0
0
0
0
php,python,curl,https,urllib2
1,308,768
2
false
0
0
No problem since the proxy server supports the CONNECT method.
1
2
0
I need to make a cURL request to a https URL, but I have to go through a proxy as well. Is there some problem with doing this? I have been having so much trouble doing this with curl and php, that I tried doing it with urllib2 in Python, only to find that urllib2 cannot POST to https when going through a proxy. I haven't been able to find any documentation to this effect with cURL, but I was wondering if anyone knew if this was an issue?
cURL: https through a proxy
0
0
1
9,009
1,308,879
2009-08-20T21:17:00.000
2
0
0
0
.net,python,networking,sockets
1,308,897
3
false
0
0
Normally you just listen on 0.0.0.0. This is an alias for all IP addresses.
2
6
0
I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software. How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NIC's. For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections. Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.
Simulate multiple IP addresses for testing
0.132549
0
1
9,577
1,308,879
2009-08-20T21:17:00.000
5
0
0
0
.net,python,networking,sockets
1,309,096
3
true
0
0
A. Consider using Bonjour (zeroconf) for service discovery. B. You can assign 1 or more IP addresses to the same NIC: On XP, Start -> Control Panel -> Network Connections and select Properties on your NIC (usually 'Local Area Connection'). Scroll down to Internet Protocol (TCP/IP), select it and click on [Properties]. If you are using DHCP, you will need to get a static, base IP from your IT. Otherwise, click on [Advanced] and under 'IP Addresses' click [Add..]. Enter the IP information for the additional IP you want to add. Repeat for each additional IP address. C. Consider using VMWare, as you can configure multiple systems and virtual IPs within a single, logical network of "computers". -- sky
2
6
0
I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software. How can I build a simulator that will look like it has many IP addresses on the same port without physically having many NIC's. For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections. Can I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.
Simulate multiple IP addresses for testing
1.2
0
1
9,577
1,309,355
2009-08-20T23:16:00.000
0
0
1
0
python,date
1,309,373
8
false
0
0
I would probably just loop over the days, checking if each day is Mon-Fri. Not as efficient, but easier to get right.
1
2
0
I need to calculate date (year, month, day) which is (for example) 18 working days back from another date. It would be enough to eliminate just weekends. Example: I've got a date 2009-08-21 and a number of 18 workdays as a parameter, and correct answer should be 2009-07-27. thanks for any help
How to calculate a date back from another date with a given number of work days
0
0
0
1,942
1,309,606
2009-08-21T00:59:00.000
0
0
0
0
python,django
1,309,619
4
false
1
0
Make sure that site-dependencies, django, registration, sorl, and typogrify all have __init__.py files in them.
1
1
0
I have a django project which is laid out like this... myproject apps media templates django registration sorl typogrify I'd like to change it to this... myproject apps media templates site-deps django registration sorl typogrify When I attempt it the 'site-dependencies' all break. Is there a way to implement this structure? I tried adding site-deps to the PYTHONPATH without joy...
How to tame the location of third party contributions in Django
0
0
0
330
1,310,740
2009-08-21T08:37:00.000
-1
0
1
0
python,python-3.x
1,310,787
5
true
0
0
EDIT IN 2021: This is incorrect. Do not refer to this answer.
1
62
0
I have written a function comp(time1, time2) which will return True when time1 is less than time2. I have a scenario where time1 should always be less than time2. I need time1 to have the least possible value (i.e. represent the earliest possible moment). How can I get this time?
What is the oldest time that can be represented in Python?
1.2
0
0
58,038
1,312,940
2009-08-21T16:26:00.000
0
0
1
0
python,string
1,313,334
8
false
0
0
Other answers are about nested quoting. Another point of view I've come across, but I'm not sure I subscribe to, is to use single quotes (') for characters (which are strings, but ord/chr are quite picky) and to use double quotes for strings. Which disambiguates between a string that is supposed to be one character and one that just happens to be one character. Personally I find most touch typists aren't noticeably affected by the "load" of using the shift key. YMMV on that part. Going down the "it's faster to not use the shift" road is a slippery slope. It's also faster to use hyper-condensed variable/function/class/module names. Everyone just so loves the fast and short 8.3 DOS file names too. :) Pick what makes semantic sense to you, then optimize.
4
2
0
What is the best literal delimiter in Python and why? Single ' or double "? And most important, why? I'm a beginner in Python and I'm trying to stick with just one. I know that in PHP, for example " is preferred, because PHP does not try to search for the 'string' variable. Is the same case in Python?
Python: what kind of literal delimiter is "better" to use?
0
0
0
2,994
1,312,940
2009-08-21T16:26:00.000
9
0
1
0
python,string
1,312,949
8
true
0
0
' because it's one keystroke less than ". Save your wrists! They're otherwise identical (except you have to escape whichever you choose to use, if they appear inside the string).
4
2
0
What is the best literal delimiter in Python and why? Single ' or double "? And most important, why? I'm a beginner in Python and I'm trying to stick with just one. I know that in PHP, for example " is preferred, because PHP does not try to search for the 'string' variable. Is the same case in Python?
Python: what kind of literal delimiter is "better" to use?
1.2
0
0
2,994
1,312,940
2009-08-21T16:26:00.000
0
0
1
0
python,string
1,314,042
8
false
0
0
This is a rule I have heard about: ") If the string is for human consumption, that is interface text or output, use "" ') If the string is a specifier, like a dictionary key or an option, use '' I think a well-enforced rule like that can make sense for a project, but it's nothing that I would personally care much about. I like the above, since I read it, but I always use "" (since I learned C first way back?).
4
2
0
What is the best literal delimiter in Python and why? Single ' or double "? And most important, why? I'm a beginner in Python and I'm trying to stick with just one. I know that in PHP, for example " is preferred, because PHP does not try to search for the 'string' variable. Is the same case in Python?
Python: what kind of literal delimiter is "better" to use?
0
0
0
2,994
1,312,940
2009-08-21T16:26:00.000
0
0
1
0
python,string
1,312,960
8
false
0
0
Single and double quotes act identically in Python. Escapes (\n) always work, and there is no variable interpolation. (If you don't want escapes, you can use the r flag, as in r"\n".) Since I'm coming from a Perl background, I have a habit of using single quotes for plain strings and double-quotes for formats used with the % operator. But there is really no difference.
4
2
0
What is the best literal delimiter in Python and why? Single ' or double "? And most important, why? I'm a beginner in Python and I'm trying to stick with just one. I know that in PHP, for example " is preferred, because PHP does not try to search for the 'string' variable. Is the same case in Python?
Python: what kind of literal delimiter is "better" to use?
0
0
0
2,994
1,313,000
2009-08-21T16:39:00.000
3
0
0
0
python,mysql
1,313,013
3
false
0
0
Make another table and do one-to-many. Don't try to cram a programming language feature into a database as-is if you can avoid it. If you absolutely need to be able to store an object down the line, your options are a bit more limited. YAML is probably the best balance of human-readable and program-readable, and it has some syntax for specifying classes you might be able to use.
2
0
0
I need to insert a python tuple (of floats) into a MySQL database. In principle I could pickle it and insert it as a string, but that would only let me retrieve it through Python. An alternative is to serialize the tuple to XML and store the XML string. What solutions do you think would also be possible, with an eye toward storing other stuff (e.g. a list, or an object)? Recovering it from other languages is a plus.
Inserting python tuple in a MySQL database
0.197375
1
0
2,884
1,313,000
2009-08-21T16:39:00.000
2
0
0
0
python,mysql
1,313,016
3
true
0
0
I'd look at serializing it to JSON, using the simplejson package, or the built-in json package in python 2.6. It's simple to use in python, importable by practically every other language, and you don't have to make all of the "what tag should I use? what attributes should this have?" decisions that you might in XML.
2
0
0
I need to insert a python tuple (of floats) into a MySQL database. In principle I could pickle it and insert it as a string, but that would only let me retrieve it through Python. An alternative is to serialize the tuple to XML and store the XML string. What solutions do you think would also be possible, with an eye toward storing other stuff (e.g. a list, or an object)? Recovering it from other languages is a plus.
Inserting python tuple in a MySQL database
1.2
1
0
2,884
1,313,362
2009-08-21T17:56:00.000
0
0
0
0
python,django,database
1,315,203
5
false
1
0
Just comment out the django app strings in the INSTALLED_APPS tuple in your project's settings.py file (at the beginning of the project, before running syncdb).
3
1
0
is it possible to write Django apps, for example for internal/personal use with existing databases, without having the 'overhead' of Django's own tables that are usually installed when starting a project? I would like to use existing tables via models, but not have all the other stuff that is surely useful on normal webpages. The reason would be to build small personal inspection/admin tools without being too invasive on legacy databases.
Django without additional tables?
0
0
0
409
1,313,362
2009-08-21T17:56:00.000
3
0
0
0
python,django,database
1,313,406
5
false
1
0
Django doesn't install any tables by itself. It comes with some pre-fabricated applications, which install tables, but those are easily disabled by removing them from the INSTALLED_APPS setting.
3
1
0
is it possible to write Django apps, for example for internal/personal use with existing databases, without having the 'overhead' of Django's own tables that are usually installed when starting a project? I would like to use existing tables via models, but not have all the other stuff that is surely useful on normal webpages. The reason would be to build small personal inspection/admin tools without being too invasive on legacy databases.
Django without additional tables?
0.119427
0
0
409
1,313,362
2009-08-21T17:56:00.000
0
0
0
0
python,django,database
1,313,575
5
false
1
0
Don't install any of Django's built-in apps and don't use any models.py in your apps. Your database will have zero tables in it. You won't have users, sites or sessions -- those are Django features that use the database. AFAIK you should still, however, have a SQLite database. I think that parts of Django assume you've got a database connection and it may try to establish this connection. It's an easy experiment to try.
3
1
0
is it possible to write Django apps, for example for internal/personal use with existing databases, without having the 'overhead' of Django's own tables that are usually installed when starting a project? I would like to use existing tables via models, but not have all the other stuff that is surely useful on normal webpages. The reason would be to build small personal inspection/admin tools without being too invasive on legacy databases.
Django without additional tables?
0
0
0
409
1,313,626
2009-08-21T18:44:00.000
1
0
0
1
python,google-app-engine,optimization,caching,memcached
1,348,633
2
false
1
0
A couple of alternatives to regular eviction: The obvious one: Don't evict, and set a timer instead. Even a really short one - a few seconds - can cut down on effort a huge amount for a popular app, without users even noticing data may be a few seconds stale. Instead of evicting, generate the cache key based on criteria that change when the data does. For example, if retrieving the key of the most recent announcement is cheap, you could use that as part of the key of the cached data. When a new announcement is posted, you go looking for a key that doesn't exist, and create a new one as a result.
1
2
0
I've written an application for Google AppEngine, and I'd like to make use of the memcache API to cut down on per-request CPU time. I've profiled the application and found that a large chunk of the CPU time is in template rendering and API calls to the datastore, and after chatting with a co-worker I jumped (perhaps a bit early?) to the conclusion that caching a chunk of a page's rendered HTML would cut down on the CPU time per request significantly. The caching pattern is pretty clean, but the question of where to put this logic of caching and evicting is a bit of a mystery to me. For example, imagine an application's main page has an Announcements section. This section would need to be re-rendered after: first read for anyone in the account, a new announcement being added, and an old announcement being deleted Some options of where to put the evict_announcements_section_from_cache() method call: in the Announcement Model's .delete(), and .put() methods in the RequestHandler's .post() method anywhere else? Then in the RequestHandler's get page, I could potentially call get_announcements_section() which would follow the standard memcache pattern (check cache, add to cache on miss, return value) and pass that HTML down to the template for that chunk of the page. Is it the typical design pattern to put the cache-evicting logic in the Model, or the Controller/RequestHandler, or somewhere else? Ideally I'd like to avoid having evicting logic with tentacles all over the code.
Where is the best place to put cache-evicting logic in an AppEngine application?
0.099668
0
0
306
1,313,989
2009-08-21T20:11:00.000
1
0
0
0
python,django
1,314,005
7
false
1
0
You didn't have to do anything when deploying a PHP site because your hosting provider had already installed it. Web hosts which support Django typically install and configure it for you.
1
3
0
To deploy a site with Python/Django/MySQL I had to do these on the server (RedHat Linux): Install MySQLPython Install ModPython Install Django (using python setup.py install) Add some directives on httpd.conf file (or use .htaccess) But, when I deployed another site with PHP (using CodeIgniter) I had to do nothing. I faced some problems while deploying a Django project on a shared server. Now, my questions are: Can the deployment process of Django project be made easier? Am I doing too much? Can some of the steps be omitted? What is the best way to deploy django site on a shared server?
How can Django projects be deployed with minimal installation work?
0.028564
1
0
1,978
1,315,511
2009-08-22T08:38:00.000
1
0
1
0
python,migration,packages
1,315,764
3
false
1
0
As Vinay says, there are some parts of common installations that can't be just copied over. Also, keep in mind that setup.py scripts can perform arbitrary work, for example, they could test for the version of Python, and change how they install things, or they could write registry entries, or create .rc files, etc. I concur: re-install the packages. The time you save by trying to just copy everything over will be completely lost the first time something mysteriously doesn't work and you try to debug it. Also, another benefit to re-installation: if you only do it when you need the package, then you won't bother reinstalling the packages you no longer need.
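The reinstall workflow amounts to recording what the old machine has and replaying that list on the new one. A rough sketch, with a made-up package list (on a modern setup this is just `pip freeze > requirements.txt` followed by `pip install -r requirements.txt`):

```python
# Sketch of "record, then reinstall": turn a {name: version} inventory of
# the old machine into pip-style requirement lines. The package list is
# made up for illustration.

def freeze(packages):
    """Turn {name: version} into sorted 'name==version' lines."""
    return sorted("%s==%s" % (name, ver) for name, ver in packages.items())

old_machine = {"pygame": "1.8.1", "Genshi": "0.5.1", "Pyro": "3.9"}
requirements = freeze(old_machine)
# Each line can then be fed to the installer on the new laptop.
```

Reinstalling from such a list also gives each setup.py script the chance to redo its version checks, registry entries, and .rc files correctly for the new Python.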
2
2
0
How can I quickly migrate/copy the Python packages that I have installed over time to a new machine? This is my scenario: I am upgrading from an old laptop running Python 2.5 & Django 1.0 to a new laptop on which I intend to install Python 2.6.2 & Django 1.1. Over time I have downloaded and installed many Python packages on my old machine (e.g. pygame, Pyro, Genshi, py2exe and many more). Is there an easier way to copy my packages to the new laptop without running the installation file for each individual package? Gath
How to migrate packages to a new Python installation?
0.066568
0
0
2,862
1,315,511
2009-08-22T08:38:00.000
3
0
1
0
python,migration,packages
1,315,529
3
false
1
0
If they're pure Python, then in theory you could just copy them across from one Lib\site-packages directory to the other. However, this will not work for any packages which include C extensions (as these need to be recompiled anew for every Python version). You also need to consider e.g. .pth files which have been created by the installation packages, deleting pre-existing .pyc files etc. I'd advise just reinstalling the packages.
2
2
0
How can I quickly migrate/copy the Python packages that I have installed over time to a new machine? This is my scenario: I am upgrading from an old laptop running Python 2.5 & Django 1.0 to a new laptop on which I intend to install Python 2.6.2 & Django 1.1. Over time I have downloaded and installed many Python packages on my old machine (e.g. pygame, Pyro, Genshi, py2exe and many more). Is there an easier way to copy my packages to the new laptop without running the installation file for each individual package? Gath
How to migrate packages to a new Python installation?
0.197375
0
0
2,862
1,316,357
2009-08-22T16:24:00.000
-1
0
0
0
python,compression,zlib,corruption
1,317,724
4
false
0
0
Okay, sorry I wasn't clear enough. This is win32, Python 2.6.2. I'm afraid I can't find the zlib file, but it's whatever is included in the win32 binary release. And I don't have access to the original data -- I've been compressing my log files, and I'd like to get them back. As for other software, I naively tried 7zip, but of course it failed, because it's zlib, not gzip (I couldn't find any software to decompress zlib streams directly). I can't give a carbon copy of the traceback now, but it was (traced back to zlib.decompress(data)) zlib.error: Error: -3. Also, to be clear, these are static files, not streams as I made it sound earlier (so no transmission errors). And I'm afraid again I don't have the code, but I know I used zlib.compress(data, 9) (i.e. at the highest compression level -- although, interestingly, it seems that not all the zlib output is 78DA as you might expect since I put it on the highest level) and just zlib.decompress().
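For salvaging the damaged files, one thing worth trying (a sketch, not a guarantee for these particular logs): the one-shot zlib.decompress() raises a buffer error on truncated input, but a decompress object hands back whatever it can recover:

```python
import zlib

# One-shot zlib.decompress() refuses a truncated stream, but a
# decompressobj returns the recoverable prefix instead of raising.
# The "damage" below is simulated by cutting off the trailing bytes.

original = b"some log line\n" * 100
blob = zlib.compress(original, 9)      # level 9 -> 0x78 0xDA header
damaged = blob[:-4]                    # simulate a file missing its tail

d = zlib.decompressobj()
recovered = d.decompress(damaged)      # no exception; partial data back
```

On a real corrupted log, `recovered` may stop short of the full file, but anything up to the damaged point usually comes back intact.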
1
12
0
Okay so I have some data streams compressed by python's (2.6) zlib.compress() function. When I try to decompress them, some of them won't decompress (zlib error -5, which seems to be a "buffer error", no idea what to make of that). At first, I thought I was done, but I realized that all the ones I couldn't decompress started with 0x78DA (the working ones were 0x789C), and I looked around and it seems to be a different kind of zlib compression -- the magic number changes depending on the compression used. What can I use to decompress the files? Am I hosed?
zlib decompression in python
-0.049958
0
0
82,088
1,316,386
2009-08-22T16:33:00.000
0
1
1
0
python,linguistics
1,316,803
5
false
0
0
What to use depends on what you want to translate. Texts that are part of your application, like the UI etc.: use gettext directly, or zope.i18n, which wraps gettext so it's easier to use. Arbitrary texts: the Google Translation API is the thing for you. "Content", i.e. things that the user of the application will modify and translate: well... nothing, really. You have to implement that yourself. From your description, it sounds like you are after #2.
1
14
0
Is there a Python module for the translation of texts from one human language to another? I'm planning to work with texts that are to be pre and post processed with Python scripts. What other Python-integrated approaches can be used?
Translating human languages in Python
0
0
0
31,530
1,316,767
2009-08-22T19:10:00.000
29
0
1
0
python,memory,memory-management
1,317,085
10
false
0
0
(del can be your friend, as it marks objects as deletable when there are no other references to them. Now, often the CPython interpreter keeps this memory for later use, so your operating system might not see the "freed" memory.) Maybe you would not run into any memory problem in the first place by using a more compact structure for your data. Lists of numbers are much less memory-efficient than the format used by the standard array module or the third-party numpy module. You would save memory by putting your vertices in a NumPy 3xN array and your triangles in an N-element array.
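The memory difference this answer points at can be seen with the stdlib array module alone (exact byte counts vary by platform and Python version):

```python
from array import array
import sys

# A list of Python floats pays per-object overhead for every number;
# array('d') packs the same values as raw C doubles. The 10000-vertex
# figure is arbitrary, just enough to make the gap obvious.

n = 10000
as_list = [float(i) for i in range(3 * n)]     # 3 coords per vertex
as_array = array("d", as_list)                 # same values, packed

list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
array_bytes = sys.getsizeof(as_array)
```

The list pays roughly a pointer plus a full float object per coordinate, while the array pays 8 bytes each; a NumPy array behaves like the packed case.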
4
532
0
I wrote a Python program that acts on a large input file to create a few million objects representing triangles. The algorithm is: read an input file process the file and create a list of triangles, represented by their vertices output the vertices in the OFF format: a list of vertices followed by a list of triangles. The triangles are represented by indices into the list of vertices The requirement of OFF that I print out the complete list of vertices before I print out the triangles means that I have to hold the list of triangles in memory before I write the output to file. In the meanwhile I'm getting memory errors because of the sizes of the lists. What is the best way to tell Python that I no longer need some of the data, and it can be freed?
How can I explicitly free memory in Python?
1
0
0
688,168
1,316,767
2009-08-22T19:10:00.000
127
0
1
0
python,memory,memory-management
1,316,799
10
false
0
0
Unfortunately (depending on your version and release of Python) some types of objects use "free lists" which are a neat local optimization but may cause memory fragmentation, specifically by making more and more memory "earmarked" for only objects of a certain type and thereby unavailable to the "general fund". The only really reliable way to ensure that a large but temporary use of memory DOES return all resources to the system when it's done, is to have that use happen in a subprocess, which does the memory-hungry work then terminates. Under such conditions, the operating system WILL do its job, and gladly recycle all the resources the subprocess may have gobbled up. Fortunately, the multiprocessing module makes this kind of operation (which used to be rather a pain) not too bad in modern versions of Python. In your use case, it seems that the best way for the subprocesses to accumulate some results and yet ensure those results are available to the main process is to use semi-temporary files (by semi-temporary I mean, NOT the kind of files that automatically go away when closed, just ordinary files that you explicitly delete when you're all done with them).
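The subprocess pattern can be sketched with the stdlib subprocess module: run the memory-hungry computation in a child interpreter and read its result back, so the OS reclaims everything when the child exits. The computation below is a trivial stand-in:

```python
import subprocess
import sys

# Sketch of "do the hungry work in a subprocess". The sum() here stands
# in for the real memory-hungry accumulation; a real app would have the
# child write its results to a semi-temporary file instead of stdout.

def run_in_subprocess():
    code = "print(sum(range(1000000)))"
    done = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, check=True)
    return int(done.stdout)
```

When the child exits, every byte it allocated returns to the operating system, which is the guarantee that in-process `del` and free lists cannot make.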
4
532
0
I wrote a Python program that acts on a large input file to create a few million objects representing triangles. The algorithm is: read an input file process the file and create a list of triangles, represented by their vertices output the vertices in the OFF format: a list of vertices followed by a list of triangles. The triangles are represented by indices into the list of vertices The requirement of OFF that I print out the complete list of vertices before I print out the triangles means that I have to hold the list of triangles in memory before I write the output to file. In the meanwhile I'm getting memory errors because of the sizes of the lists. What is the best way to tell Python that I no longer need some of the data, and it can be freed?
How can I explicitly free memory in Python?
1
0
0
688,168
1,316,767
2009-08-22T19:10:00.000
4
0
1
0
python,memory,memory-management
1,316,811
10
false
0
0
If you don't care about vertex reuse, you could have two output files--one for vertices and one for triangles. Then append the triangle file to the vertex file when you are done.
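A minimal sketch of the two-output idea, using in-memory buffers where this answer means real temporary files, and a made-up three-vertex mesh:

```python
import io

# Two-stream OFF output: vertices and triangles go to separate buffers
# as they are produced, then get concatenated at the end. StringIO is
# used only to keep the sketch self-contained; the answer's point is to
# use two real files so neither list ever sits fully in memory.

vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
vert_out = io.StringIO()
tri_out = io.StringIO()

for v in vertices:
    vert_out.write("%g %g %g\n" % v)
tri_out.write("3 0 1 2\n")              # one triangle over those vertices

off_text = ("OFF\n%d 1 0\n" % len(vertices)
            + vert_out.getvalue() + tri_out.getvalue())
```

With files, the final step is a streamed copy of the triangle file onto the end of the vertex file (e.g. `shutil.copyfileobj`), never a full read into memory.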
4
532
0
I wrote a Python program that acts on a large input file to create a few million objects representing triangles. The algorithm is: read an input file process the file and create a list of triangles, represented by their vertices output the vertices in the OFF format: a list of vertices followed by a list of triangles. The triangles are represented by indices into the list of vertices The requirement of OFF that I print out the complete list of vertices before I print out the triangles means that I have to hold the list of triangles in memory before I write the output to file. In the meanwhile I'm getting memory errors because of the sizes of the lists. What is the best way to tell Python that I no longer need some of the data, and it can be freed?
How can I explicitly free memory in Python?
0.07983
0
0
688,168
1,316,767
2009-08-22T19:10:00.000
25
0
1
0
python,memory,memory-management
1,316,790
10
false
0
0
You can't explicitly free memory. What you need to do is to make sure you don't keep references to objects. They will then be garbage collected, freeing the memory. In your case, when you need large lists, you typically need to reorganize the code, typically using generators/iterators instead. That way you don't need to have the large lists in memory at all.
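The generator reorganization this answer suggests might look like the following for the triangle step (the line-per-triangle input format is made up for illustration):

```python
# Stream triangles instead of materializing a list: only one triangle
# exists in memory at a time, and the consumer drives the reading.
# The "index index index" line format is hypothetical.

def triangles(lines):
    for line in lines:
        a, b, c = map(int, line.split())
        yield (a, b, c)

raw = ["0 1 2", "1 2 3"]
first_two = list(triangles(raw))   # a consumer would normally iterate
```

As long as every consumer iterates rather than calling `list()` on millions of items, the triangle data never accumulates, so there is nothing to free.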
4
532
0
I wrote a Python program that acts on a large input file to create a few million objects representing triangles. The algorithm is: read an input file process the file and create a list of triangles, represented by their vertices output the vertices in the OFF format: a list of vertices followed by a list of triangles. The triangles are represented by indices into the list of vertices The requirement of OFF that I print out the complete list of vertices before I print out the triangles means that I have to hold the list of triangles in memory before I write the output to file. In the meanwhile I'm getting memory errors because of the sizes of the lists. What is the best way to tell Python that I no longer need some of the data, and it can be freed?
How can I explicitly free memory in Python?
1
0
0
688,168
1,318,311
2009-08-23T11:13:00.000
2
1
1
0
python,py2exe
1,319,719
3
false
0
0
Look through the third-party libraries that you use. Some libraries (e.g. PIL) do tricks with conditional imports that make it hard for py2exe to bundle the right code. These issues can often be worked around, but a bit of googling up front might save you some headaches later.
1
2
0
I'm looking for a simple script that will compile to an exe, and I found py2exe. Before I decide to work with it, what do you think are the pros and cons of the py2exe tool?
what are the pros/cons of py2exe
0.132549
0
0
969
1,319,585
2009-08-23T21:15:00.000
8
0
0
0
python,sqlalchemy,sqlobject
1,319,598
3
true
0
0
That doesn't do away with the need for an ORM. That is an ORM. In which case, why reinvent the wheel? Is there a compelling reason you're trying to avoid using an established ORM?
2
3
0
Rather than use an ORM, I am considering the following approach in Python and MySQL with no ORM (SQLObject/SQLAlchemy). I would like to get some feedback on whether this seems likely to have any negative long-term consequences since in the short-term view it seems fine from what I can tell. Rather than translate a row from the database into an object: each table is represented by a class a row is retrieved as a dict an object representing a cursor provides access to a table like so: cursor.mytable.get_by_ids(low, high) removing means setting the time_of_removal to the current time So essentially this does away with the need for an ORM since each table has a class to represent it and within that class, a separate dict represents each row. Type mapping is trivial because each dict (row) being a first class object in python/blub allows you to know the class of the object and, besides, the low-level database library in Python handles the conversion of types at the field level into their appropriate application-level types. If you see any potential problems with going down this road, please let me know. Thanks.
Is this a good approach to avoid using SQLAlchemy/SQLObject?
1.2
1
0
569