| content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35-137 chars) |
---|---|---|---|---|---|---|---|---|
Q:
How to get a timestamp older than 1901
I'm trying to find a way to accurately count the number of seconds since Jan 1, 1850 to the present in a couple of languages (JavaScript, C++, and Python [don't even ask, I stopped asking these questions long ago]).
The problem is that the platforms store timestamps as 32-bit signed integers, so I can't get a timestamp for dates older than 1901 to easily subtract the present timestamp from. So how do I do what I want to do?
A:
In python, there's the datetime module. Specifically, the date class will help.
from datetime import date
print date(1850, 1, 1).weekday() # 1, which is Tuesday
# (Mon is 0)
Edit
Or, to your specific problem, working with timedelta will help out.
from datetime import datetime
td = datetime.now() - datetime(1850, 1, 1)
print (86400*td.days)+td.seconds # seconds since then till now
A:
The portable, language-agnostic approach:
Step 1. Count the number of seconds between 01/01/1850 00:00 and 01/01/1901 00:00. Save this number somewhere (call it M)
Step 2. Use available language functionality to count the number of seconds between 01/01/1901 00:00 and whatever other date and time you want.
Step 3. Return the result from Step 2 + M. Remember to cast the result as a long integer if necessary.
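For instance, a rough Python sketch of these three steps (splitting at the usual 1970 epoch rather than 1901, since the idea is identical; the variable names are illustrative):
from datetime import datetime
import time

# Step 1: the fixed offset M, computed once with datetime (no 32-bit limit there).
td = datetime(1970, 1, 1) - datetime(1850, 1, 1)
M = td.days * 86400 + td.seconds

# Step 2: the platform-provided part, seconds since the 1970 epoch.
since_epoch = int(time.time())

# Step 3: add the two parts together.
print M + since_epoch  # seconds from 1850-01-01 00:00 UTC to now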
A:
Under WIN32, you can use SystemTimeToFileTime.
FILETIME is a 64-bit unsigned integer that counts the number of 100-nanosecond intervals since January 1, 1601 (UTC).
You can convert both timestamps to FILETIME, combine each into a single 64-bit value, t.dwLowDateTime + ((ULONGLONG)t.dwHighDateTime << 32) (note the parentheses: << binds more loosely than +), and do regular arithmetic to measure the interval.
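To make the combination step concrete, here is a small sketch of the same arithmetic in Python; low and high stand in for the two DWORDs of a FILETIME, and the helper names are purely illustrative:
def filetime_to_int(low, high):
    # Shift the high DWORD first, then OR in the low DWORD; a plain
    # "low + high << 32" would add before shifting and give the wrong value.
    return (high << 32) | low

def seconds_between(start, end):
    # FILETIME counts 100-nanosecond intervals, so divide by ten million.
    return (filetime_to_int(*end) - filetime_to_int(*start)) // 10**7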
A:
Why not use Date objects instead of integers, at least to get a starting point?
function secondsSince(D){
D= new Date(Date.parse(D));
D.setUTCHours(0,0,0,0);
return Math.floor((new Date()-D)/1000);
}
//test with a date
var daystring='Jan 1, 1850',
ss= secondsSince(daystring),
diff= ss/(60*60*24*365.25);
alert('It has been '+ ss +
' seconds since 00:00:00 (GMT) on ' + daystring +
'\n\nThat is about '+diff.toFixed(2)+' years.');
A:
The egenix datetime is also worth looking at
http://www.egenix.com/products/python/mxBase/mxDateTime/
|
How to get a timestamp older than 1901
|
I'm trying to find to accurately count the number of seconds since Jan 1, 1850 to the present in a couple of languages (JavaScript, C++, and Python [don't even ask, I stopped asking these questions long ago]).
Problem is the platforms store timestamps as 32-bit signed integers, so I can't get a timestamp for dates older than 1901 to easily subtract the present timestamp from etc.. So how do I do what I want to do?
|
[
"In python, there's the datetime module. Specifically, the date class will help.\nfrom datetime import date\nprint date(1850, 1, 1).weekday() # 1, which is Tuesday \n# (Mon is 0)\n\nEdit\nOr, to your specific problem, working with timedelta will help out.\nfrom datetime import datetime\ntd = datetime.now() - datetime(1850, 1, 1)\nprint (86400*td.days)+td.seconds # seconds since then till now\n\n",
"The portable, language-agnostic approach:\nStep 1. Count the number of seconds between 01/01/1850 00:00 and 01/01/1901 00:00. Save this number somewhere (call it M)\nStep 2. Use available language functionality to count the number of seconds between 01/01/1901 00:00 and whatever other date and time you want.\nStep 3. Return the result from Step 2 + M. Remember to cast the result as a long integer if necessary.\n",
"Under WIN32, you can use SystemTimeToFileTime.\nFILETIME is a 64-bit unsigned integer that counts the number of 100-nanosecond intervals since January 1, 1601 (UTC).\nYou can convert two timestamps to FILETIME. You can convert it to ULARGE_INTEGER (t.dwLowDateTime + t.dwHighDateTime << 32), and do regular arithmetics to measure the interval.\n",
"Why not use Date objects instead of integers, at least to get a starting point.\nfunction secondsSince(D){\n D= new Date(Date.parse(D));\n D.setUTCHours(0,0,0,0); \n return Math.floor((new Date()-D)/1000);\n}\n\n//test with a date\nvar daystring='Jan 1, 1850', \nss= secondsSince(daystring), \ndiff= ss/(60*60*24*365.25);\nalert('It has been '+ ss + \n' seconds since 00:00:00 (GMT) on ' + daystring + \n'\\n\\nThat is about '+diff.toFixed(2)+' years.');\n",
"The egenix datetime is also worth looking at\nhttp://www.egenix.com/products/python/mxBase/mxDateTime/\n"
] |
[
4,
3,
1,
0,
0
] |
[] |
[] |
[
"c++",
"javascript",
"python",
"timestamp"
] |
stackoverflow_0001494231_c++_javascript_python_timestamp.txt
|
Q:
Python App Engine uploaded image content-type
I know that I can accept image uploads by having a form that POSTs to App Engine like so:
<form action="/upload_received" enctype="multipart/form-data" method="post">
<div><input type="file" name="img"/></div>
<div><input type="submit" value="Upload Image"></div>
</form>
Then in the Python code I can do something like
image = self.request.get("img")
But how can I figure out what the content-type of this image should be when later showing it to the user? It seems the most robust way would be to figure this out from the image data itself, but how to get that easily? I did not see anything suitable in the google.appengine.api images package.
Should I just look for the magic image headers in my own code, or is there already a method for that somewhere?
Edit:
Here's the simplistic solution I ended up using, seems to work well enough for my purposes and avoids having to store the image type as a separate field in the data store:
# Given an image, returns the mime type or None if could not detect.
def detect_mime_from_image_data(self, image):
if image[1:4] == 'PNG': return 'image/png'
if image[0:3] == 'GIF': return 'image/gif'
if image[6:10] == 'JFIF': return 'image/jpeg'
return None
A:
Instead of using self.request.get(fieldname), use self.request.POST[fieldname]. This returns a cgi.FieldStorage object (see the Python library docs for details), which has 'filename', 'type' and 'value' attributes.
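A rough sketch of what that looks like in a webapp handler (the handler class and field name here are assumptions, not part of the answer):
from google.appengine.ext import webapp

class UploadHandler(webapp.RequestHandler):
    def post(self):
        field = self.request.POST["img"]    # a cgi.FieldStorage, not a plain string
        data = field.value                  # the raw image bytes
        mime = field.type                   # content type as reported by the browser
        self.response.headers["Content-Type"] = mime
        self.response.out.write(data)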
A:
Try the python mimetypes module, it will guess the content type and encoding for you,
e.g.
>>import mimetypes
>>mimetypes.guess_type("/home/sean/desktop/comedy/30seconds.mp4")
('video/mp4', None)
A:
Based on my research, browsers, except for Internet Explorer (version 6 at least), determine the file mime type by using the file's extension. Given that you want image mime types, you could use a simple Python dictionary in order to achieve this.
Unfortunately I don't know of any method in Python that tries to guess the image type by reading some magic bytes (the way fileinfo does in PHP). Maybe you could apply the EAFP (Easier to Ask Forgiveness than Permission) principle with the Google appengine image API.
Yes, it appears that the image API does not tell you the type of image you've loaded. What I'd do in this case is to build that Python dictionary to map file extensions to image mime types and then try to load the image while expecting a NotImageError() exception. If everything goes well, then I assume the mime type was OK.
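A sketch of that combination might look like the following; the extension map and the choice of a cheap transform to force parsing are assumptions, not a fixed recipe:
from google.appengine.api import images

EXTENSION_TO_MIME = {
    ".png": "image/png",
    ".gif": "image/gif",
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
}

def guess_image_mime(filename, data):
    ext = filename[filename.rfind("."):].lower()
    mime = EXTENSION_TO_MIME.get(ext)
    if mime is None:
        return None
    try:
        img = images.Image(image_data=data)
        img.resize(width=1)            # any transform forces the data to be parsed
        img.execute_transforms()
    except images.NotImageError:
        return None                    # the extension lied; the bytes are not an image
    return mime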
|
Python App Engine uploaded image content-type
|
I know that I can accept image uploads by having a form that POSTs to App Engine like so:
<form action="/upload_received" enctype="multipart/form-data" method="post">
<div><input type="file" name="img"/></div>
<div><input type="submit" value="Upload Image"></div>
</form>
Then in the Python code I can do something like
image = self.request.get("img")
But how can I figure out what the content-type of this image should be when later showing it to the user? It seems the most robust way would be to figure this out from the image data itself, but how to get that easily? I did not see anything suitable in the google.appengine.api images package.
Should I just look for the magic image headers in my own code, or is there already a method for that somewhere?
Edit:
Here's the simplistic solution I ended up using, seems to work well enough for my purposes and avoids having to store the image type as a separate field in the data store:
# Given an image, returns the mime type or None if could not detect.
def detect_mime_from_image_data(self, image):
if image[1:4] == 'PNG': return 'image/png'
if image[0:3] == 'GIF': return 'image/gif'
if image[6:10] == 'JFIF': return 'image/jpeg'
return None
|
[
"Instead of using self.request.get(fieldname), use self.request.POST[fieldname]. This returns a cgi.FieldStorage object (see the Python library docs for details), which has 'filename', 'type' and 'value' attributes.\n",
"Try the python mimetypes module, it will guess the content type and encoding for you,\ne.g.\n>>import mimetypes\n>>mimetypes.guess_type(\"/home/sean/desktop/comedy/30seconds.mp4\")\n('video/mp4', None)\n",
"Based on my research, browsers, except for Internet Explorer (version 6 at least), determine the file mime type by using the file's extension. Given that you want image mime types, you could use a simple Python dictionary in order to achieve this.\nUnfortunately I don't know of any method in Python that tries to guess the image type by reading some magic bytes (the way fileinfo does in PHP). Maybe you could apply the EAFP (Easier to Ask Forgiveness than Permission) principle with the Google appengine image API.\nYes, it appears that the image API does not tell you the type of image you've loaded. What I'd do in this case is to build that Python dictionary to map file extensions to image mime types and than try to load the image while expecting for a NotImageError() exception. If everything goes well, then I assume the mime type was OK.\n"
] |
[
5,
2,
0
] |
[] |
[] |
[
"google_app_engine",
"image",
"multipart",
"python",
"upload"
] |
stackoverflow_0001409377_google_app_engine_image_multipart_python_upload.txt
|
Q:
How can I translate dates and times from natural language to datetime?
I'm looking for a way to translate 'tomorrow at 6am' or 'next monday at noon' to the appropriate datetime objects.
I thought of engineering a complex set of rules, but is there another way?
A:
parsedatetime - Python module that is able to parse 'human readable' date/time expressions.
#!/usr/bin/env python
from datetime import datetime
import parsedatetime as pdt # $ pip install parsedatetime
cal = pdt.Calendar()
now = datetime.now()
print("now: %s" % now)
for time_string in ["tomorrow at 6am", "next moday at noon",
"2 min ago", "3 weeks ago", "1 month ago"]:
print("%s:\t%s" % (time_string, cal.parseDT(time_string, now)[0]))
Output
now: 2015-10-18 13:55:29.732131
tomorrow at 6am: 2015-10-19 06:00:00
next moday at noon: 2015-10-18 12:00:00
2 min ago: 2015-10-18 13:53:29
3 weeks ago: 2015-09-27 13:55:29
1 month ago: 2015-09-18 13:55:29
A:
See what you think of this example from the pyparsing wiki. It handles the following test cases:
today
tomorrow
yesterday
in a couple of days
a couple of days from now
a couple of days from today
in a day
3 days ago
3 days from now
a day ago
now
10 minutes ago
10 minutes from now
in 10 minutes
in a minute
in a couple of minutes
20 seconds ago
in 30 seconds
20 seconds before noon
20 seconds before noon tomorrow
noon
midnight
noon tomorrow
6am tomorrow
0800 yesterday
12:15 AM today
3pm 2 days from today
a week from today
a week from now
3 weeks ago
noon next Sunday
noon Sunday
noon last Sunday
|
How can I translate dates and times from natural language to datetime?
|
I'm looking for a way to translate 'tomorrow at 6am' or 'next monday at noon' to the appropriate datetime objects.
I thought of engineering a complex set of rules, but is there another way?
|
[
"parsedatetime - Python module that is able to parse 'human readable' date/time expressions.\n#!/usr/bin/env python\nfrom datetime import datetime\nimport parsedatetime as pdt # $ pip install parsedatetime\n\ncal = pdt.Calendar()\nnow = datetime.now()\nprint(\"now: %s\" % now)\nfor time_string in [\"tomorrow at 6am\", \"next moday at noon\", \n \"2 min ago\", \"3 weeks ago\", \"1 month ago\"]:\n print(\"%s:\\t%s\" % (time_string, cal.parseDT(time_string, now)[0]))\n\nOutput\nnow: 2015-10-18 13:55:29.732131\ntomorrow at 6am: 2015-10-19 06:00:00\nnext moday at noon: 2015-10-18 12:00:00\n2 min ago: 2015-10-18 13:53:29\n3 weeks ago: 2015-09-27 13:55:29\n1 month ago: 2015-09-18 13:55:29\n\n",
"See what you think of this example from the pyparsing wiki. It handles the following test cases:\ntoday\ntomorrow\nyesterday\nin a couple of days\na couple of days from now\na couple of days from today\nin a day\n3 days ago\n3 days from now\na day ago\nnow\n10 minutes ago\n10 minutes from now\nin 10 minutes\nin a minute\nin a couple of minutes\n20 seconds ago\nin 30 seconds\n20 seconds before noon\n20 seconds before noon tomorrow\nnoon\nmidnight\nnoon tomorrow\n6am tomorrow\n0800 yesterday\n12:15 AM today\n3pm 2 days from today\na week from today\na week from now\n3 weeks ago\nnoon next Sunday\nnoon Sunday\nnoon last Sunday\n\n"
] |
[
51,
12
] |
[] |
[] |
[
"parsing",
"python",
"python_datetime"
] |
stackoverflow_0001495487_parsing_python_python_datetime.txt
|
Q:
wxpython -- threads and window events
I have a wxPython application (http://www.OpenSTV.org) that counts ballots using methods that have multiple rounds. I'd like to do two things:
(1) For a large number of ballots, this can be a bit slow, so I'd like to show the user a progress dialog so he doesn't think the application is frozen.
(2) I'd like to allow the user to break ties manually, and this requires the counting code to show a dialog window.
To achieve (1), I create a thread to run the counting code, and this allows me to present a nice progress dialog to the user.
The problem with this, however, is that the counting code is not the main thread, and only the main thread in wxPython can process window events.
I suppose I could create a thread to run the progress dialog instead, but this seems awkward. Is there a better way of accomplishing both (1) and (2)?
A:
Use Queue to communicate and synchronize among threads, with each thread "owning" and exclusively interacting with a resource that's not handy to share.
In GUI toolkits where only the main thread can really handle the GUI, the main thread should play along -- set up and start the threads doing the actual work, then do nothing but GUI work, using Queues to communicate to and from the other threads.
For (1), when your counting thread has an update, it should put it to the Queue where the main thread is waiting; when your main thread gets a suitable message on that Queue, it updates the progress dialog.
For (2), the counting thread sends the "have the user break a tie" request, main thread gets it and responds appropriately, and sends back the resolution on a separate Queue.
So in general, there are two kinds of communications: one that don't require a response, and others that do. For the former kind, just put the notification on the appropriate queue and simply proceed -- it will be acted on in due course. For the latter kind, my favorite idiom is to put on the appropriate queue a pair (request, response_queue). If otherwise identical requests differ in that some need a response and others don't, queueing (request, None) when no response is needed (and (request, q) where q's a Queue when a response IS needed) is a nice, easy, and general idiom, too.
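A compressed sketch of that idiom, with illustrative names (the counting loop, the tie choices, and the timer-driven polling are all assumptions about how it would be wired up):
import Queue
import threading
import wx

to_gui = Queue.Queue()

def count_ballots():
    for round_no in range(1, 11):
        # ... one round of heavy counting ...
        to_gui.put(("progress", round_no * 10))   # fire-and-forget update
    reply = Queue.Queue()
    to_gui.put(("break_tie", reply))              # request that needs an answer
    winner = reply.get()                          # blocks until the GUI responds

# Started from the main thread once the progress dialog exists:
threading.Thread(target=count_ballots).start()

def poll_queue(progress_dialog):
    # Called periodically in the main thread, e.g. from a wx.Timer handler.
    try:
        kind, payload = to_gui.get_nowait()
    except Queue.Empty:
        return
    if kind == "progress":
        progress_dialog.Update(payload)
    elif kind == "break_tie":
        choice = wx.GetSingleChoice("Pick a winner", "Break tie", ["A", "B"])
        payload.put(choice)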
A:
There are several ways to call the main thread wxPython thread from a process thread. The simplest is wx.CallAfter() which will always execute the functional passed to it in the main thread. You can also use wx.PostEvent() and there's an example of this in the demo (labeled: Threads), and there are several more complicated but more customizable ways which are discussed in the last chapter of wxPython in Action.
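For comparison, a bare-bones wx.CallAfter version of the progress-update part (again with made-up names):
import threading
import wx

def start_count(progress_dialog):
    def worker():
        for pct in range(0, 101, 5):
            # ... a slice of the counting work ...
            wx.CallAfter(progress_dialog.Update, pct)  # executed in the main thread
    threading.Thread(target=worker).start()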
|
wxpython -- threads and window events
|
I have a wxPython application (http://www.OpenSTV.org) that counts ballots using methods that have multiple rounds. I'd like to do two things:
(1) For a large number of ballots, this can be a bit slow, so I'd like to show the user a progress dialog so he doesn't think the application is frozen.
(2) I'd like to allow the user to break ties manually, and this requires the counting code to show a dialog window.
To achieve (1), I create a thread to run the counting code, and this allows me to present a nice progress dialog to the user.
The problem with this, however, is that the counting code is not the main thread, and only the main thread in wxPython can process window events.
I suppose I could create a thread to run the progress dialog instead, but this seems awkward. Is there a better way of accomplishing both (1) and (2)?
|
[
"Use Queue to communicate and synchronize among threads, with each thread \"owning\" and exclusively interacting with a resource that's not handy to share.\nIn GUI toolkits where only the main thread can really handle the GUI, the main thread should play along -- set up and start the threads doing the actual work, then do nothing but GUI work, using Queues to communicate to and from the other threads.\nFor (1), when your counting thread has an update, it should put it to the Queue where the main thread is waiting; when your main thread gets a suitable message on that Queue, it updates the progress dialog.\nFor (2), the counting thread sends the \"have the user break a tie\" request, main thread gets it and responds appropriately, and sends back the resolution on a separate Queue.\nSo in general, there are two kinds of communications: one that don't require a response, and others that do. For the former kind, just put the notification on the appropriate queue and simply proceed -- it will be acted on in due course. For the latter kind, my favorite idiom is to put on the appropriate queue a pair (request, response_queue). If otherwise identical requests differ in that some need a response and others don't, queueing (request, None) when no response is needed (and (request, q) where q's a Queue when a response IS needed) is a nice, easy, and general idiom, too.\n",
"There are several ways to call the main thread wxPython thread from a process thread. The simplest is wx.CallAfter() which will always execute the functional passed to it in the main thread. You can also use wx.PostEvent() and there's an example of this in the demo (labeled: Threads), and there are several more complicated but more customizable ways which are discussed in the last chapter of wxPython in Action.\n"
] |
[
5,
3
] |
[] |
[] |
[
"events",
"multithreading",
"python",
"window",
"wxpython"
] |
stackoverflow_0001496092_events_multithreading_python_window_wxpython.txt
|
Q:
How close is Python to being able to wrap it in a workbook type skin?
With my luck this question will be closed too quickly. I see a tremendous possibility for a python application that basically is like a workbook. Imagine if you will that instead of writing code you select from a menu of choices. For example, the File menu would have an open command that lets the user navigate to a file or directory of file or a webpage, even a list of web pages and specify those as the things that will be the base for the next actions.
Then you have a find menu. The menu would allow easy access to the various parsing tools, regular expression and string tools so you can specify the thing you want to find within the files.
Another menu item could allow you to create queries to interact with database objects.
I could go on and on. As the language becomes higher level, these types of features become easier to implement. There is a tremendous advantage to developing something like this. How much time is spent reinventing the wheel for mundane tasks? Programmers have functions that they have built to do many mundane tasks, but what about democratizing the power offered by a tool like Python?
I have people in my office all of the time asking how to solve problems that seem intractable to them, but when I show them how with a few lines of code their problem is solvable except for the edge cases they become amazed. I deflect their gratitude with the observation that it is not really that hard except for being able to construct the right google search to identify the right package or library to solve the problem. There is nothing amazing about my ability to use lxml and sets to pull all bolded sections from a collection of say 12,000 documents and compare across time and across unique identifiers in the collection how those bolded sections have evolved/changed or converged. The amazing piece is that someone wrote the libraries to do these things.
What is the advantage to the community for something like this. Imagine if you would an interface that looks like a workbook but interacts with an app-store. So if you want to pull something from html file you go to the app store and buy a plug-in that handles the work. If the workbook is built robustly enough it could be licensed to a machine, the 'apps' would be tied to a particular workbook.
Just imagine the creativity that could be unleashed by users if they could get over the feeling that access to this power is difficult. You guys may not see this but I see Python being so close to being able to port to something like a workbook framework. Weren't the early spreadsheet programs nothing more than a frame around some Fortran libraries that had been ported to C?
Comments? Or is there already such an application and I have just not found it?
A:
There are Python applications that are based on generating code -- probably the most amazing one is Resolver One, which focuses on spreadsheets (and hinges on IronPython). With that exception, however, interacting based on the UI paradigm you have in mind (pick one of this, one of that, etc) tends to be pretty limited in the gamut of choices it offers to let the user generate the exact application they need -- there's just so much more you can say by writing even a little script, than what you can say by point-and-grunt.
That being said, Python would surely be a great choice both to implement such an app and as the language to generate... if and when you have a UI sketch that looks like it can actually allow non-programmers to specify a large-enough spectrum of apps in a broad-enough domain!-). Spreadsheets have proven themselves in this sense, but I don't know of other niches or approaches that have actually done so -- do you?
A:
Your idea kinda reminded me of something I stumbled across months ago: http://www.ailab.si/orange/
A:
Is your concept very similar to Microsoft Access? Generally programmers tend not to write such programs because they produce such horrible code that the authors themselves would never want to use their program.
|
How close is Python to being able to wrap it in a workbook type skin?
|
With my luck this question will be closed too quickly. I see a tremendous possibility for a python application that basically is like a workbook. Imagine if you will that instead of writing code you select from a menu of choices. For example, the File menu would have an open command that lets the user navigate to a file or directory of file or a webpage, even a list of web pages and specify those as the things that will be the base for the next actions.
Then you have a find menu. The menu would allow easy access to the various parsing tools, regular expression and string tools so you can specify the thing you want to find within the files.
Another menu item could allow you to create queries to interact with database objects.
I could go on and on. As the language becomes more higher level then these types of features become easier to implement. There is a tremendous advantage to developing something like this. How much time is spent reinventing the wheel for mundane tasks? Programmers have functions that they have built to do many mundane tasks but what about democratizing the power offered by a tool like Python.
I have people in my office all of the time asking how to solve problems that seem intractable to them, but when I show them how with a few lines of code their problem is solvable except for the edge cases they become amazed. I deflect their gratitude with the observation that it is not really that hard except for being able to construct the right google search to identify the right package or library to solve the problem. There is nothing amazing about my ability to use lxml and sets to pull all bolded sections from a collection of say 12,000 documents and compare across time and across unique identifiers in the collection how those bolded sections have evolved/changed or converged. The amazing piece is that someone wrote the libraries to do these things.
What is the advantage to the community for something like this. Imagine if you would an interface that looks like a workbook but interacts with an app-store. So if you want to pull something from html file you go to the app store and buy a plug-in that handles the work. If the workbook is built robustly enough it could be licensed to a machine, the 'apps' would be tied to a particular workbook.
Just imagine the creativity that could be unleashed by users if they could get over the feeling that access to this power is difficult. You guys may not see this but I see Python being so close to being able to port to something like a workbook framework. Weren't the early spreadsheet programs nothing more than a frame around some Fortran libraries that had been ported to C?
Comments or is there such an application and I have not found it.
|
[
"There are Python application that are based on generating code -- the most amazing one probably Resolver One, which focuses on spreadsheets (and hinges on IronPython). With that exception, however, interacting based on the UI paradigm you have in mind (pick one of this, one of that, etc) tends to be pretty limited in the gamut of choices it offers to let the user generate the exact application they need -- there's just so much more you can say by writing even a little script, than what you can say by point-and-grunt.\nThat being said, Python would surely be a great choice both to implement such an app and as the language to generate... if and when you have a UI sketch that looks like it can actually allow non-programmers to specify a large-enough spectrum of apps in a broad-enough domain!-). Spreadsheets have proven themselves in this sense, but I don't know of other niches or approaches that have actually done so -- do you?\n",
"Your idea kinda reminded me of something I stumbled across months ago: http://www.ailab.si/orange/\n",
"Is your concept very similar to Microsoft Access? Generally programmers tend not to write such programs because they produce such horrible code that the authors themselves would never want to use their program.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"openaccess",
"python",
"wrapper"
] |
stackoverflow_0001496039_openaccess_python_wrapper.txt
|
Q:
How to treat the first line of a file differently in Python?
I often need to process large text files containing headers in the first line. The headers are often treated differently to the body of the file, or my processing of the body is dependent on the headers. Either way I need to treat the first line as a special case.
I could use simple line iteration and set a flag:
headerProcessed = False
for line in f:
    if headerProcessed:
        processBody(line)
    else:
        processHeader(line)
        headerProcessed = True
but I dislike a test in the loop that is redundant for all but one of the millions of times it executes. Is there a better way? Could I treat the first line differently and then have the iteration start on the second line? Should I be bothered?
A:
You could:
processHeader(f.readline())
for line in f:
processBody(line)
A:
f = file("test")
processHeader(f.next()) #or next(f) for py3
for line in f:
processBody(line)
This works.
Edit:
Changed .__next__ to next (they are equivalent, but I suppose next is more concise).
Regarding file vs open, file just seems more clear to me, therefore I will continue to prefer it over open.
A:
Use iter()
it_f = iter(f)
header = it_f.next()
processHeader(header)
for line in it_f:
processBody(line)
It works with any iterable object.
A:
Large text files with headers in the first line? So it's tabular data.
Just to make sure: Have you looked at the csv module? It should handle all tabular data except such where the fields are not delimited but defined by position. And it does the header stuff too.
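For example, a minimal sketch of letting csv consume the header line (the tab delimiter and the column name are assumptions about the data):
import csv

with open("data.txt") as f:
    reader = csv.DictReader(f, delimiter="\t")  # the first line becomes the keys
    for row in reader:
        print row["some_column"]                # each row is a dict keyed by header names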
|
How to treat the first line of a file differently in Python?
|
I often need to process large text files containing headers in the first line. The headers are often treated differently to the body of the file, or my processing of the body is dependent on the headers. Either way I need to treat the first line as a special case.
I could use simple line iteration and set a flag:
headerProcessed = false
for line in f:
if headerProcessed:
processBody(line)
else:
processHeader(line)
headerProcessed = true
but I dislike a test in the loop that is redundant for all but one of the millions of times it executes. Is there a better way? Could I treat the first line differently then get the iteration to start on the second line? Should I be bothered?
|
[
"You could:\nprocessHeader(f.readline())\nfor line in f:\n processBody(line)\n\n",
"f = file(\"test\")\nprocessHeader(f.next()) #or next(f) for py3\nfor line in f:\n processBody(line)\n\nThis works.\nEdit:\nChanged .__next__ to next (they are equivalent, but I suppose next is more concise).\nRegaring file vs open, file just seems more clear to me, therefore I will continue to prefer it over open.\n",
"Use iter()\nit_f = iter(f)\nheader = it_f.next()\nprocessHeader(header)\n\nfor line in it_f:\n processBody(line)\n\nIt works with any iterable object.\n",
"Large text files with headers in the first line? So it's tabular data. \nJust to make sure: Have you looked at the csv module? It should handle all tabular data except such where the fields are not delimited but defined by position. And it does the header stuff too.\n"
] |
[
17,
8,
3,
2
] |
[] |
[] |
[
"file",
"iteration",
"python"
] |
stackoverflow_0001496456_file_iteration_python.txt
|
Q:
Aggregating multiple feeds with Universal Feed Parser
Having great luck working with single-source feed parsing in Universal Feed Parser, but now I need to run multiple feeds through it and generate chronologically interleaved output (not RSS). Seems like I'll need to iterate through URLs and stuff every entry into a list of dictionaries, then sort that by the entry timestamps and take a slice off the top. That seems do-able, but pretty expensive resource-wise (I'll cache it aggressively for that reason).
Just wondering if there's an easier way - an existing library that works with feedparser to do simple aggregation, for example. Sample code? Gotchas or warnings? Thanks.
A:
You could throw the feeds into a database and then generate a new feed from this database.
Consider looking into two feedparser-based RSS aggregators: Planet Feed Aggregator and FeedJack (Django based), or at least how they solve this problem.
A:
There is already a suggestion above to store the data in a database, e.g. with bsddb.btopen() or any RDBMS.
Take a look at heapq.merge() and bisect.insort() or use one of B-tree implementations if you'd like to merge data in memory.
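As an illustration of the in-memory route, here is a sketch that merges already-sorted per-feed entry lists with heapq.merge; the updated_parsed field is the usual feedparser attribute, but the surrounding structure is assumed:
import calendar
import heapq
import itertools
import feedparser

def latest_entries(feed_urls, limit=20):
    streams = []
    for url in feed_urls:
        entries = feedparser.parse(url).entries
        # Key by negated timestamp so an ascending merge yields newest-first output;
        # the index breaks ties without comparing entry objects directly.
        keyed = sorted((-calendar.timegm(e.updated_parsed), i, e)
                       for i, e in enumerate(entries))
        streams.append(keyed)
    merged = heapq.merge(*streams)
    return [e for _, _, e in itertools.islice(merged, limit)]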
|
Aggregating multiple feeds with Universal Feed Parser
|
Having great luck working with single-source feed parsing in Universal Feed Parser, but now I need to run multiple feeds through it and generate chronologically interleaved output (not RSS). Seems like I'll need to iterate through URLs and stuff every entry into a list of dictionaries, then sort that by the entry timestamps and take a slice off the top. That seems do-able, but pretty expensive resource-wise (I'll cache it aggressively for that reason).
Just wondering if there's an easier way - an existing library that works with feedparser to do simple aggregation, for example. Sample code? Gotchas or warnings? Thanks.
|
[
"You could throw the feeds into a database and then generate a new feed from this database.\nConsider looking into two feedparser-based RSS aggregators: Planet Feed Aggregator and FeedJack (Django based), or at least how they solve this problem.\n",
"Here is already suggestion to store data in the database, e.g. bsddb.btopen() or any RDBMS.\nTake a look at heapq.merge() and bisect.insort() or use one of B-tree implementations if you'd like to merge data in memory.\n"
] |
[
2,
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001496067_django_python.txt
|
Q:
Problem loading transparent background sprites in pygame
I'm trying to load a transparent image into pygame using the following code:
def load_image(name, colorkey=None):
fullname = os.path.join('data', name)
try:
image = pygame.image.load(fullname)
except pygame.error, message:
print 'Cannot load image:', fullname
raise SystemExit, message
image = image.convert()
if colorkey is not None:
if colorkey is -1:
colorkey = image.get_at((0,0))
image.set_colorkey(colorkey, RLEACCEL)
return image, image.get_rect()
For some reason, every time I load the image the background is automatically changed to black. I'm not using the colorkey in this case as my image(s) will end up with a white border around them which is quite visible given that my game's background is constantly changing.
Any ideas?
Thanks,Regards
A:
You call image.convert(). From the docs for Surface.convert:
"The converted Surface will have no
pixel alphas. They will be stripped if
the original had them. See
Surface.convert_alpha - change the
pixel format of an image including per
pixel alphas for preserving or
creating per-pixel alphas."
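In other words, a variant of the loader from the question that only calls convert() when a colorkey is actually wanted (and otherwise uses convert_alpha() to keep per-pixel transparency) might look like this:
import os
import pygame

def load_image(name, colorkey=None):
    fullname = os.path.join('data', name)
    image = pygame.image.load(fullname)
    if colorkey is None:
        image = image.convert_alpha()            # preserve per-pixel alpha
    else:
        image = image.convert()                  # faster blits, no alpha channel
        if colorkey == -1:
            colorkey = image.get_at((0, 0))
        image.set_colorkey(colorkey, pygame.RLEACCEL)
    return image, image.get_rect()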
|
Problem loading transparent background sprites in pygame
|
I'm trying to load a transparent image into pygame using the following code:
def load_image(name, colorkey=None):
fullname = os.path.join('data', name)
try:
image = pygame.image.load(fullname)
except pygame.error, message:
print 'Cannot load image:', fullname
raise SystemExit, message
image = image.convert()
if colorkey is not None:
if colorkey is -1:
colorkey = image.get_at((0,0))
image.set_colorkey(colorkey, RLEACCEL)
return image, image.get_rect()
For some reason, everytime I load the image the background is automatically changed to black? I'm not using the colorkey in this case as my image(s) will end up with a white border around them which is quite visible given that my game's background is constantly changing.
Any ideas?
Thanks,Regards
|
[
"You call image.convert(). From the docs for Surface.convert:\n\n\"The converted Surface will have no\n pixel alphas. They will be stripped if\n the original had them. See\n Surface.convert_alpha - change the\n pixel format of an image including per\n pixel alphas for preserving or\n creating per-pixel alphas.\"\n\n"
] |
[
3
] |
[] |
[] |
[
"pygame",
"python"
] |
stackoverflow_0001496106_pygame_python.txt
|
Q:
Guidance on optimising Python runtime for embedded systems with low system resources
My team is incorporating the Python 2.4.4 runtime into our project in order to leverage some externally developed functionality.
Our platform has a 450Mhz SH4 application core and limited memory for use by the Python runtime and application.
We have ported Python, but initial testing has highlighted the following hurdles:
a) start-up times for the Python runtime can be as bad as 25 seconds (when importing the libraries concerned, and in turn their dependencies)
b) Python never seems to release memory to the OS during garbage collection - the only recourse is to close the runtime and restart (incurring start-up delays noted above, which often times is impractical)
If we can mitigate these issues our use of Python would be substantially improved. Any guidance from the SO community would be very valuable. Especially from anyone who has knowledge of the intrinsics of how the Python execution engine operates.
A:
Perhaps it is hard to believe, but CPython version 2.4 never releases memory to the OS. This is allegedly fixed in Python 2.5.
In addition, performance (processor-wise) was improved in Python 2.5 and Python 2.6 on top of that.
See the C API section in What's new in Python 2.5, look for the item called Evan Jones’s patch to obmalloc
Alex Martelli (whose advice should always be at least considered) says multiple processes are the only way to go to free memory. If you cannot use the multiprocessing module (added in Python 2.6), os.fork is at least available. Using os.fork in the most primitive manner (fork one worker process at the beginning, wait for it to finish, fork a new one...) is still better than relaunching the interpreter and paying 25 seconds each time.
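A bare-bones sketch of that fork-per-job pattern (POSIX-only, names illustrative): the parent stays small while the child does the memory-hungry work and then exits, handing all of its memory back to the OS.
import os

def run_in_child(job, *args):
    pid = os.fork()
    if pid == 0:
        # Child process: do the heavy work, then exit without cleanup.
        try:
            job(*args)
        finally:
            os._exit(0)
    # Parent process: wait for the child; memory usage never grows here.
    os.waitpid(pid, 0)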
|
Guidance on optimising Python runtime for embedded systems with low system resources
|
My team is incorporating the Python 2.4.4 runtime into our project in order to leverage some externally developed functionality.
Our platform has a 450Mhz SH4 application core and limited memory for use by the Python runtime and application.
We have ported Python, but initial testing has highlighted the following hurdles:
a) start-up times for the Python runtime can be as bad as 25 seconds (when importing the libraries concerned, and in turn their dependencies)
b) Python never seems to release memory to the OS during garbage collection - the only recourse is to close the runtime and restart (incurring start-up delays noted above, which often times is impractical)
If we can mitigate these issues our use of Python would be substantially improved. Any guidance from the SO community would be very valuable. Especially from anyone who has knowledge of the intrinsics of how the Python execution engine operates.
|
[
"Perhaps it is hard to believe, but CPython version 2.4 never releases memory to the OS. This is allegedly fixed in verion Python 2.5.\nIn addition, performance (processor-wise) was improved in Python 2.5 and Python 2.6 on top of that.\nSee the C API section in What's new in Python 2.5, look for the item called Evan Jones’s patch to obmalloc\nAlex Martelli (whose advice should always be at least considered), says multiprocess is the only way to go to free memory. If you cannot use multiprocessing (module in Python 2.6), os.fork is at least available. Using os.fork in the most primitive manner (fork one work process at the beginning, wait for it to finish, fork a new..) is still better than relauching the interpreter paying 25 seconds for that.\n"
] |
[
5
] |
[] |
[] |
[
"embedded",
"garbage_collection",
"memory",
"performance",
"python"
] |
stackoverflow_0001496761_embedded_garbage_collection_memory_performance_python.txt
|
Q:
Python 2.5 and 2.6 and Numpy compatibility problem
In the computers of our laboratory, which have Python 2.6.2 installed in them, my program, which is an animation of the 2D random walk and diffusion, works perfectly.
However, I can't get the exact same program to work on my laptop, which has Python 2.5. By that not working, I mean the animation is screwed; the axis always changes every time the pylab.draw() and pylab.clf() commands are called in a for loop.
I call a pylab.axis([specified axis]) command before and after draw() and clf() to fix the "field-of-view", but it's still the same - what I get is a flickering series of images instead of the smooth animation I get when I run the exact same program in our laboratory.
I tried to install Python 2.6 in my laptop, but I discovered that there is no Numpy for Py2.6. So it is a mystery to me that my program, which imports Numpy and uses many of its functions, works in our laboratory computer. What can be done with my compatibility problem?
A:
The various (matplotlib.pyplot) graphical backends do not behave in exactly the same way.
You could try setting the backend so that it is the same on both machines:
matplotlib.use('GTKagg') # Right after importing matplotlib
For a list of possible backends, you can do matplotlib.use('...').
A:
Numpy for python 2.6 appears to be downloadable from numpy sourceforge or can be compiled from source
|
Python 2.5 and 2.6 and Numpy compatibility problem
|
In the computers of our laboratory, which have Python 2.6.2 installed in them, my program, which is an animation of the 2D random walk and diffusion, works perfectly.
However, I can't get the exact same program to work on my laptop, which has Python 2.5. By that not working, I mean the animation is screwed; the axis always changes every time the pylab.draw() and pylab.clf() commands are called in a for loop.
I call a pylab.axis([specified axis]) command before and after draw() and clf() to fix the "field-of-view", but it's still the same - what I get is a flickering series of image instead of the smooth animation I get when I run the exact same program in our laboratory.
I tried to install Python 2.6 in my laptop, but I discovered that there is no Numpy for Py2.6. So it is a mystery to me that my program, which imports Numpy and uses many of its functions, works in our laboratory computer. What can be done with my compatibility problem?
|
[
"The various (matplotlib.pyplot) graphical backends do not behave in exactly the same way.\nYou could try setting the backend so that it is the same on both machines:\nmatplotlib.use('GTKagg') # Right after importing matplotlib\n\nFor a list of possible backends, you can do matplotlib.use('...').\n",
"Numpy for python 2.6 appears to be downloadable from numpy sourceforge or can be compiled from source\n"
] |
[
2,
1
] |
[] |
[] |
[
"compatibility",
"numpy",
"python"
] |
stackoverflow_0001496942_compatibility_numpy_python.txt
|
Q:
question related to reverse function and kwargs
To reverse-look-up a URL by name or view name, we use the reverse function in the views like below
reverse("calendarviewurl2", kwargs={"year":theyear,"month":themonth})
and reverse function signature is as follows
http://code.djangoproject.com/browser/django/trunk/django/core/urlresolvers.py
def reverse(self, lookup_view, *args, **kwargs)
My question is related to kwargs
when we want to send a dictionary as keyword arguments we should use the below syntax
snippet 1
kwargs={"year":2009,"month":9}
reverse("name",**kwargs)
as opposed to below code
snippet 2
reverse("name",kwargs={"year":2009,"month":9})
So my question is
Are snippet 1 and snippet 2 the same? (I feel they are not.)
In the case of the reverse function, only snippet 2 works, whereas snippet 1 does not work properly. Why is that? (Even though the proper way to send a dictionary is by using the syntax mentioned in snippet 1.)
A:
Didn't you look at the signature,
def reverse(viewname, urlconf=None, args=None, kwargs=None,
prefix=None, current_app=None):
takes no **kwargs at all.
kwargs={"year":2009,"month":9}
reverse("name",**kwargs)
means
reverse("name", year=2009, month=9)
which is completely different from
reverse("name",kwargs={"year":2009,"month":9})
When a function actually does take **kwargs, both ways to call it are the same. But that's not the case here. reverse would have to look like this to take **kwargs:
def reverse(viewname, urlconf=None, prefix=None,
current_app=None, *args, **kwargs):
|
question related to reverse function and kwargs
|
To reverse lookup an url by means of name or View_name we will use reverse function in the views like below
reverse("calendarviewurl2", kwargs={"year":theyear,"month":themonth})
and reverse function signature is as follows
http://code.djangoproject.com/browser/django/trunk/django/core/urlresolvers.py
def reverse(self, lookup_view, *args, **kwargs)
My question is related to kwargs
when we want to send a dictionary as keyword arguments we should use the below syntax
snippet 1
kwargs={"year":2009,"month":9}
reverse("name",**kwargs)
as opposed to below code
snippet 2
reverse("name",kwargs={"year":2009,"month":9})
So my question is
Do the snippet1 and snippet2 are
same? ( i feel they are not same)
In case of reverse function only
snippet 2 is working where as
snippet 1 is not properly working.Why is it so? (Even though the proper way to send a dictionary is by using syntax mentioned in snippet1.)
|
[
"Didn't you look at the signature,\ndef reverse(viewname, urlconf=None, args=None, kwargs=None, \n prefix=None, current_app=None):\n\ntakes no **kwargs at all.\nkwargs={\"year\":2009,\"month\":9}\nreverse(\"name\",**kwargs)\n\nmeans \nreverse(\"name\", year=2009, month=9)\n\nwhich is completely different from \nreverse(\"name\",kwargs={\"year\":2009,\"month\":9})\n\nWhen a function actually does take **kwargs, both ways to call it are the same. But that's not the case here. Reverse would have look like this to take **kwargs:\ndef reverse(viewname, urlconf=None, prefix=None, \n current_app=None, *args, **kwargs):\n\n"
] |
[
10
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001497258_django_python.txt
|
Q:
Comparing performance between ruby and python code
I have a memory and CPU intensive problem to solve and I need to benchmark the different solutions in ruby and python on different platforms.
To do the benchmark, I need to measure the time taken and the memory occupied by objects (not the entire program, but a selected list of objects) in both python and ruby.
Please recommend ways to do it, and also let me know if it is possible to do it without using OS-specific tools like Task Manager and ps. Thanks!
Update: Yes, I know that both Python and Ruby are not strong in performance and there are better alternatives like C, C++, Java etc. I am actually more interested in comparing the performance of Python and Ruby. And please no flame-wars.
A:
For Python I recommend heapy
from guppy import hpy
h = hpy()
print h.heap()
or Dowser or PySizer
For Ruby you can use the BleakHouse Plugin or just read this answer on memory leak debugging (ruby).
A:
If you really need to write fast code in a language like this (and not a language far more suited to CPU intensive operations and close control over memory usage such as C++) then I'd recommend pushing the bulk of the work out to Cython.
Cython is a language that makes
writing C extensions for the Python
language as easy as Python itself.
Cython is based on the well-known
Pyrex, but supports more cutting edge
functionality and optimizations.
The Cython language is very close to
the Python language, but Cython
additionally supports calling C
functions and declaring C types on
variables and class attributes. This
allows the compiler to generate very
efficient C code from Cython code.
That way you can get most of the efficiency of C with most of the ease of use of Python.
A:
If you are using Python for CPU-intensive algorithmic tasks I suggest using NumPy/SciPy to speed up your numerical calculations and the Psyco JIT compiler for everything else. Your speeds can approach those of much lower-level languages if you use optimized components.
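As a trivial illustration of what optimized components buy you, here is the same sum of squares written as an interpreted loop and as a single vectorized NumPy call (timing it is left to the reader):
import numpy as np

values = np.random.rand(1000000)

total = 0.0
for v in values:                      # interpreted Python loop: slow
    total += v * v

fast_total = np.dot(values, values)   # one vectorized call: fast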
A:
I'd be wary of trying to measure just the memory consumption of an object graph over the lifecycle of an application. After all, you really don't care about that, in the end. You care that your application, in its entirety, has a sufficiently low footprint.
If you choose to limit your observation of memory consumption anyway, include garbage collector timing in your list of considerations, then look at ruby-prof:
http://ruby-prof.rubyforge.org/
Ciao,
Sheldon.
A:
(You didn't specify Python 2.5, 2.6 or 3, or Ruby 1.8 or 1.9, JRuby, MRI; the JVM has a wealth of tools to attack memory issues. Generally it's helpful to zero in on memory depletion by posting stripped-down versions of programs that replicate the problem.)
Heapy, ruby-prof, bleak house are all good tools, here are others:
Ruby
http://eigenclass.org/R2/writings/object-size-ruby-ocaml
watch ObjectSpace yourself
http://www.coderoshi.com/2007/08/cheap-tricks-ix-spying-on-ruby.html
http://sporkmonger.com/articles/2006/10/22/a-question
(ruby and python)
http://www.softwareverify.com/
|
Comparing performance between ruby and python code
|
I have a memory and CPU intensive problem to solve and I need to benchmark the different solutions in ruby and python on different platforms.
To do the benchmark, I need to measure the time taken and the memory occupied by objects (not the entire program, but a selected list of objects) in both python and ruby.
Please recommend ways to do it, and also let me know if it is possible to do it without using OS specify tools like (Task Manager and ps). Thanks!
Update: Yes, I know that both Python and Ruby are not strong in performance and there are better alternatives like c, c++, Java etc. I am actually more interested in comparing the performance of Python and Ruby. And please no fame-wars.
|
[
"For Python I recommend heapy\nfrom guppy import hpy\nh = hpy()\nprint h.heap()\n\nor Dowser or PySizer\nFor Ruby you can use the BleakHouse Plugin or just read this answer on memory leak debugging (ruby).\n",
"If you really need to write fast code in a language like this (and not a language far more suited to CPU intensive operations and close control over memory usage such as C++) then I'd recommend pushing the bulk of the work out to Cython.\n\nCython is a language that makes\n writing C extensions for the Python\n language as easy as Python itself.\n Cython is based on the well-known\n Pyrex, but supports more cutting edge\n functionality and optimizations.\nThe Cython language is very close to\n the Python language, but Cython\n additionally supports calling C\n functions and declaring C types on\n variables and class attributes. This\n allows the compiler to generate very\n efficient C code from Cython code.\n\nThat way you can get most of the efficiency of C with most of the ease of use of Python.\n",
"If you are using Python for CPU intensive algorithmic tasks I suggest use Numpy/Scipy to speed up your numerical calculations and use the Psyco JIT compiler for everything else. Your speeds can approach that of much lower-level languages if you use optimized components.\n",
"I'd be wary of trying to measure just the memory consumption of an object graph over the lifecycle of an application. After all, you really don't care about that, in the end. You care that your application, in its entirety, has a sufficiently low footprint.\nIf you choose to limit your observation of memory consumption anyway, include garbage collector timing in your list of considerations, then look at ruby-prof:\nhttp://ruby-prof.rubyforge.org/\nCiao,\nSheldon.\n",
"(you didn't specify py 2.5, 2.6 or 3; or ruby 1.8 or 1.9, jruby, MRI; The JVM has a wealth of tools to attack memory issues; Generally it 's helpful to zero in on memory depletion by posting stripped down versions of programs that replicate the problem\nHeapy, ruby-prof, bleak house are all good tools, here are others:\nRuby \nhttp://eigenclass.org/R2/writings/object-size-ruby-ocaml\nwatch ObjectSpace yourself\nhttp://www.coderoshi.com/2007/08/cheap-tricks-ix-spying-on-ruby.html\nhttp://sporkmonger.com/articles/2006/10/22/a-question\n(ruby and python)\nhttp://www.softwareverify.com/\n"
] |
[
3,
3,
2,
1,
0
] |
[] |
[] |
[
"memory_management",
"performance",
"python",
"ruby"
] |
stackoverflow_0001490841_memory_management_performance_python_ruby.txt
|
Q:
Can Python determine the class of an object accessing a method
Is there any way to do something like this:
class A:
def foo(self):
if isinstance(caller, B):
print "B can't call methods in A"
else:
print "Foobar"
class B:
def foo(self, ref): ref.foo()
class C:
def foo(self, ref): ref.foo()
a = A();
B().foo(a) # Outputs "B can't call methods in A"
C().foo(a) # Outputs "Foobar"
Where caller in A uses some form of introspection to determine the class of the calling method's object?
EDIT:
In the end, I put this together based on some of the suggestions:
import inspect
...
def check_caller(self, klass):
frame = inspect.currentframe()
current = lambda : frame.f_locals.get('self')
while not current() is None:
if isinstance(current(), klass): return True
frame = frame.f_back
return False
It's not perfect for all the reasons supplied, but thanks for the responses: they were a big help.
A:
Assuming the caller is a method, then yes you can, by looking in the previous frame, and picking out self from the locals.
class Reciever:
def themethod(self):
frame = sys._getframe(1)
arguments = frame.f_code.co_argcount
if arguments == 0:
print "Not called from a method"
return
caller_calls_self = frame.f_code.co_varnames[0]
thecaller = frame.f_locals[caller_calls_self]
print "Called from a", thecaller.__class__.__name__, "instance"
Üglŷ as heck, but it works. Now why you would want to do this is another question altogether, I suspect that there is a better way. The whole concept of A isn't allowed to call B is likely to be a mistake.
A:
The caller is always an instance of A. The fact that you're calling it inside a B method doesn't change that. In other words: inside B.foo, ref is an instance of A, so calling ref.foo() is a call on A; B is not involved in that call (it could just as well happen at top level).
The only sane way is to pass a reference to self so A can check if it is B or not.
class A(object):
def foo(self, caller=None):
if isinstance(caller, B):
print "B can't call methods in A"
else:
print "Foobar"
class B(object):
def foo(self, ref): ref.foo(self)
class C(object):
def foo(self, ref): ref.foo(self)
a = A();
B().foo(a) # Outputs "B can't call methods in A"
C().foo(a) # Outputs "Foobar"
a.foo() # Outputs "Foobar"
|
Can Python determine the class of an object accessing a method
|
Is there anyway to do something like this:
class A:
def foo(self):
if isinstance(caller, B):
print "B can't call methods in A"
else:
print "Foobar"
class B:
def foo(self, ref): ref.foo()
class C:
def foo(self, ref): ref.foo()
a = A();
B().foo(a) # Outputs "B can't call methods in A"
C().foo(a) # Outputs "Foobar"
Where caller in A uses some form of introspection to determine the class of the calling method's object?
EDIT:
In the end, I put this together based on some of the suggestions:
import inspect
...
def check_caller(self, klass):
frame = inspect.currentframe()
current = lambda : frame.f_locals.get('self')
while not current() is None:
if isinstance(current(), klass): return True
frame = frame.f_back
return False
It's not perfect for all the reasons supplied, but thanks for the responses: they were a big help.
|
[
"Assuming the caller is a method, then yes you can, by looking in the previous frame, and picking out self from the locals.\nclass Reciever:\n def themethod(self):\n frame = sys._getframe(1)\n arguments = frame.f_code.co_argcount\n if arguments == 0:\n print \"Not called from a method\"\n return\n caller_calls_self = frame.f_code.co_varnames[0]\n thecaller = frame.f_locals[caller_calls_self]\n print \"Called from a\", thecaller.__class__.__name__, \"instance\"\n\nÜglŷ as heck, but it works. Now why you would want to do this is another question altogether, I suspect that there is a better way. The whole concept of A isn't allowed to call B is likely to be a mistake.\n",
"The caller is always an instance of A. The fact that you're calling it inside a B method doesn't change that. In other words: Insiode B.foo, ref is an instance of A, so calling ref.foo() is a call on A, B is not involved on that call (it could happen top-level).\nThe only sane way is to pass a reference to self so A can check if it is B or not.\nclass A(object):\n def foo(self, caller=None):\n if isinstance(caller, B):\n print \"B can't call methods in A\"\n else:\n print \"Foobar\"\n\nclass B(object):\n def foo(self, ref): ref.foo(self)\n\nclass C(object):\n def foo(self, ref): ref.foo(self)\n\na = A();\nB().foo(a) # Outputs \"B can't call methods in A\"\nC().foo(a) # Outputs \"Foobar\"\na.foo() # Outputs \"Foobar\"\n\n"
] |
[
6,
4
] |
[
"Something like this may meet your needs better:\nclass A(object):\n def foo(self):\n # do stuff\n\nclass B(A):\n def foo(self):\n raise NotImplementedError\n\nclass C(A):\n pass\n\n...but it's difficult to say without knowing exactly what you're trying to do.\n"
] |
[
-1
] |
[
"introspection",
"python"
] |
stackoverflow_0001497683_introspection_python.txt
|
Q:
Is it OK if objects from different classes interact with each other?
I just started to use object-oriented programming in Python. I wonder if it is OK to create a method of a class that uses objects from another class. In other words, when I call a method of the first class I give an object from the second class as one of the arguments. And then, the considered method (of the first class) can manipulate the object from the second class (to get its attributes or use its methods). Is it allowed in Python? Is it not considered a bad programming style?
Is it OK to instantiate objects from the second class within a method of the first class? In other words, when I call a method of the first class, it instantiates objects from the second class.
Thank you in advance for any help.
A:
If you're talking about passing an instance of one object to the method of a another one, then yes of course it's allowed! And it's considered fine practice.
If you want to know more about good object oriented coding, may I offer some suggested readings:
Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, John M. Vlissides
Known as the Gang Of Four book, this lays out a number of design patterns that seem to show up in object oriented code time and time again. This is a good place to go for ideas on how to handle certain problems in a good object oriented way.
Another good one:
Refactoring: Improving the Design of Existing Code by Martin Fowler, Kent Beck, John Brant , William Opdyke, Don Roberts
This is a great book for learning what NOT to do when writing object oriented code, and how to fix it to make it better when you do encounter it. It offers a list of code smells that suggest bad object oriented code and a reference section of refactorings that provide instructions on how to fix those smells and make them more object oriented.
A:
What you're talking about is fine. In fact most data types (string, int, boolean, etc.) in Python are objects, so pretty much every method works in the way you described.
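To make that concrete, here is a minimal, hypothetical sketch (the class names are made up) showing both things the question asks about: a method that receives an instance of another class as an argument, and a method that creates instances of another class itself.
class Engine(object):
    def start(self):
        return "vroom"

class Car(object):
    def __init__(self):
        self.engine = Engine()        # instantiating another class inside a class

    def start_with(self, engine):
        # receiving an instance of another class as an argument
        return engine.start()

car = Car()
print car.start_with(Engine())        # prints "vroom"
print car.engine.start()              # prints "vroom"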
A:
The answer is that it is MORE than OK; it's in fact the whole point.
What is not "OK" is when objects start fiddling with the internals of each other. You can prevent this from happening accidentally by naming things that are meant to be internal with a leading underscore (or two, which makes it internal also for subclasses). This works as a little marker for other programmers that you aren't supposed to use it, and that it's not official API and can change.
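A minimal sketch of that naming convention (the class and attribute names are made up):
class Account(object):
    def __init__(self, balance):
        self._balance = balance      # one leading underscore: internal, not official API
        self.__audit_log = []        # two leading underscores: name-mangled, internal to this class only

    def deposit(self, amount):
        self._balance += amount
        self.__audit_log.append(amount)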
A:
I see no problem with that, it happens all the time. Do you have a specific problem you're trying to solve or just asking a general question without a context?
A:
The Law of Demeter is general guidance on which methods and objects you can interact with in good faith.
It is guidance. You can make code that works that doesn't follow the LoD, but it is a good guide and helps you build "structure shy systems" -- something you will appreciate later when you try to make big changes.
http://en.wikipedia.org/wiki/Law_of_Demeter
I recommend reading up on good OO practices and principles when you're not coding. Maybe a few papers or a chapter of a book each evening or every other day. Try the SOLID principles. You can find a quick reference to them here:
http://agileinaflash.blogspot.com/2009/03/solid.html
|
Is it OK if objects from different classes interact with each other?
|
I just started to use object-oriented programming in Python. I wonder if it is OK to create a method of a class which uses objects from another class. In other words, when I call a method of the first class I give an object from the second class as one of the arguments. The method of the first class can then manipulate the object from the second class (to get its attributes or use its methods). Is it allowed in Python? Is it considered bad programming style?
Is it OK if I instantiate objects from the second class within a method of the first class? In other words, when I call a method of the first class, it instantiates objects from the second class.
Thank you in advance for any help.
|
[
"If you're talking about passing an instance of one object to the method of a another one, then yes of course it's allowed! And it's considered fine practice.\nIf you want to know more about good object oriented coding, may I offer some suggested readings:\nDesign Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, John M. Vlissides\nKnown as the Gang Of Four book, this lays out a number of design patterns that seem to show up in object oriented code time and time again. This is a good place to go for ideas on how to handle certain problems in a good object oriented way. \nAnother good one:\nRefactoring: Improving the Design of Existing Code by Martin Fowler, Kent Beck, John Brant , William Opdyke, Don Roberts\nThis is a great book for learning what NOT to do when writing object oriented code, and how to fix it to make it better when you do encounter it. It offers a list of code smells that suggest bad object oriented code and a reference section of refactorings that provide instructions on how to fix those smells and make them more object oriented. \n",
"What you're talking about is fine. In fact most data types (string, int, boolean, etc.) in Python are objects, so pretty much every method works in the way you described.\n",
"The answer is that it os MORE than OK, it's in fact the whole point.\nWhat is not \"OK\" is when objects start fiddling with the internals of each other. You can prevent this from happening accidentally, by calling things that are meant to be internal with a leading underscore (or two, which makes it internal also for subclasses). This works as a little marker for other programmers that you aren't supposed to use it, and that it's not official API and can change.\n",
"I see no problem with that, it happens all the time. Do you have a specific problem you're trying to solve or just asking a general question without a context?\n",
"The Law of Demeter is general guidance on which methods and objects you can interact with in good faith. \nIt is guidance. You can make code that works that doesn't follow the LoD, but it is a good guide and helps you build \"structure shy systems\" -- something you will appreciate later when you try to make big changes.\nhttp://en.wikipedia.org/wiki/Law_of_Demeter\nI recommend reading up on good OO practices and principles when you're not coding. Maybe a few papers or a chapter of a book each evening or every other day. Try the SOLID principles. You can find a quick reference to them here:\nhttp://agileinaflash.blogspot.com/2009/03/solid.html\n"
] |
[
8,
1,
1,
0,
0
] |
[] |
[] |
[
"oop",
"python"
] |
stackoverflow_0001498009_oop_python.txt
|
Q:
Python - importing package classes into console global namespace
I'm having a spot of trouble getting my python classes to work within the python console. I want to automatically import all of my classes into the global namespace so I can use them without any prefix.module.names.
Here's what I've got so far...
projectname/
|-__init__.py
|
|-main_stuff/
|-__init__.py
|-main1.py
|-main2.py
|
|-other_stuff/
|-__init__.py
|-other1.py
|-other2.py
Each file defines a class of the same name, e.g. main1.py will define a class called Main1.
My PYTHONPATH is the absolute path to projectname/.
I've got a python startup file that contains this:
from projectname import *
But this doesn't let me use my classes at all. Upon starting a python console I would like to be able to write:
ob=Main1()
but Main1 isn't within the current namespace, so it doesn't work.
I tried adding things to the __init__.py files...
In projectname/__init__.py:
import main_stuff
In projectname/main_stuff/__init__.py:
import other_stuff
__all__ = ["main1", "main2", "main3"]
And so on. And in my startup file I added:
from projectname.main_stuff import *
from projectname.main_stuff/other_stuff import *
But to use the classes within the python console I still have to write:
ob=main1.Main1()
I'd prefer not to need the main1. part. Does anyone know how I can automatically put my classes in the global namespace when using the python console?
Thanks.
==== EDIT ====
What I need is to import a package at the class level, but from package import * gives me everything at the module level. I'm after an easy way of doing something like this:
for module in package do:
from package.module import *
==== ANOTHER EDIT ====
I ended up adding the class imports to my python startup file individually. It's not ideal because of the overhead of maintaining it, but it does what I want.
from class1 import Class1
from class2 import Class2
from class3 import Class3
A:
You want to use a different form of import.
In projectname/main_stuff/__init__.py:
from other_stuff import *
__all__ = ["main1", "main2", "main3"]
When you use a statement like this:
import foo
You are defining the name foo in the current module. Then you can use foo.something to get at the stuff in foo.
When you use this:
from foo import *
You are taking all of the names defined in foo and defining them in this module (like pouring a bucket of names from foo into your module).
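For the "for module in package: from package.module import *" idea in the question's edit, one possible (untested) sketch is to do that loop once inside the package's __init__.py; it assumes every .py file in the package directory defines the classes you want re-exported, and relies on Python 2's implicit relative imports:
# projectname/main_stuff/__init__.py  (a sketch, not the only way to do this)
import os

_pkg_dir = os.path.dirname(__file__)
for _fname in os.listdir(_pkg_dir):
    if _fname.endswith('.py') and _fname != '__init__.py':
        _mod = __import__(_fname[:-3], globals(), locals())
        for _name in dir(_mod):
            if not _name.startswith('_'):
                globals()[_name] = getattr(_mod, _name)
After that, the startup file's "from projectname.main_stuff import *" pulls the classes (e.g. Main1) straight into the console's namespace.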
|
Python - importing package classes into console global namespace
|
I'm having a spot of trouble getting my python classes to work within the python console. I want to automatically import all of my classes into the global namespace so I can use them without any prefix.module.names.
Here's what I've got so far...
projectname/
|-__init__.py
|
|-main_stuff/
|-__init__.py
|-main1.py
|-main2.py
|
|-other_stuff/
|-__init__.py
|-other1.py
|-other2.py
Each file defines a class of the same name, e.g. main1.py will define a class called Main1.
My PYTHONPATH is the absolute path to projectname/.
I've got a python startup file that contains this:
from projectname import *
But this doesn't let me use my classes at all. Upon starting a python console I would like to be able to write:
ob=Main1()
but Main1 isn't within the current namespace, so it doesn't work.
I tried adding things to the __init__.py files...
In projectname/__init__.py:
import main_stuff
In projectname/main_stuff/__init__.py:
import other_stuff
__all__ = ["main1", "main2", "main3"]
And so on. And in my startup file I added:
from projectname.main_stuff import *
from projectname.main_stuff/other_stuff import *
But to use the classes within the python console I still have to write:
ob=main1.Main1()
I'd prefer not to need the main1. part. Does anyone know how I can automatically put my classes in the global namespace when using the python console?
Thanks.
==== EDIT ====
What I need is to import a package at the class level, but from package import * gives me everything at the module level. I'm after an easy way of doing something like this:
for module in package do:
from package.module import *
==== ANOTHER EDIT ====
I ended up adding the class imports to my python startup file individually. It's not ideal because of the overhead of maintaining it, but it does what I want.
from class1 import Class1
from class2 import Class2
from class3 import Class3
|
[
"You want to use a different form of import.\nIn projectname/main_stuff/__init__.py:\nfrom other_stuff import *\n__all__ = [\"main1\", \"main2\", \"main3\"]\n\nWhen you use a statement like this:\nimport foo\n\nYou are defining the name foo in the current module. Then you can use foo.something to get at the stuff in foo.\nWhen you use this:\nfrom foo import *\n\nYou are taking all of the names defined in foo and defining them in this module (like pouring a bucket of names from foo into your module).\n"
] |
[
21
] |
[] |
[] |
[
"console",
"global",
"import",
"namespaces",
"python"
] |
stackoverflow_0001499119_console_global_import_namespaces_python.txt
|
Q:
Django: form values not updating when model updates
I am creating a form that uses MultipleChoiceField. The values for this field are derived from another model. This method works fine, however, I am noticing (on the production server) that when I add a new item to the model in question (NoticeType), the form does not dynamically update. I have to restart the server for the new item to show up on my MultipleChoiceField.
Any changes to the NoticeType model (editing items or creating new ones) do not propagate to the form. After I restart the production server, the updates appear.
Any ideas why this might be ? The relevant portion of the form is below. Thanks.
from django import forms
from django.contrib.auth.models import User
from notification.models import NoticeType
class EditUserProfileForm(forms.Form):
    CHOICES = []
    for notice in NoticeType.objects.all():
        CHOICES.append( (notice.label,notice.display) )

    notifications = forms.MultipleChoiceField(
        label="Email Notifications",
        required=False,
        choices=( CHOICES ),
        widget=forms.CheckboxSelectMultiple,)
A:
Although mherren is right that you can fix this problem by defining your choices in the __init__ method, there is an easier way: use the ModelMultipleChoiceField which is specifically designed to take a queryset, and updates dynamically.
class EditUserProfileForm(forms.Form):
    notifications = forms.ModelMultipleChoiceField(
        label="Email Notifications",
        required=False,
        queryset = NoticeType.objects.all(),
        widget=forms.CheckboxSelectMultiple)
A:
My hunch is that the class definition is only being processed once on load rather than for each instantiation. Try adding the CHOICES computation to the init method like so:
def __init__(self, *args, **kwargs):
    super(self.__class__, self).__init__(*args, **kwargs)
    CHOICES = []
    for notice in NoticeType.objects.all():
        CHOICES.append( (notice.label, notice.display) )
    self.fields['notifications'].choices = CHOICES
|
Django: form values not updating when model updates
|
I am creating a form that uses MultipleChoiceField. The values for this field are derived from another model. This method works fine, however, I am noticing (on the production server) that when I add a new item to the model in question (NoticeType), the form does not dynamically update. I have to restart the server for the new item to show up on my MultipleChoiceField.
Any changes to the NoticeType model (editing items or creating new ones) do not propagate to the form. After I restart the production server, the updates appear.
Any ideas why this might be ? The relevant portion of the form is below. Thanks.
from django import forms
from django.contrib.auth.models import User
from notification.models import NoticeType
class EditUserProfileForm(forms.Form):
CHOICES = []
for notice in NoticeType.objects.all():
CHOICES.append( (notice.label,notice.display) )
notifications = forms.MultipleChoiceField(
label="Email Notifications",
required=False,
choices=( CHOICES ),
widget=forms.CheckboxSelectMultiple,)
|
[
"Although mherren is right that you can fix this problem by defining your choices in the __init__ method, there is an easier way: use the ModelMultipleChoiceField which is specifically designed to take a queryset, and updates dynamically.\nclass EditUserProfileForm(forms.Form):\n notifications = forms. ModelMultipleChoiceField(\n label=\"Email Notifications\",\n required=False,\n queryset = NoticeType.objects.all(),\n widget=forms.CheckboxSelectMultiple)\n\n",
"My hunch is that the class definition is only being processed once on load rather than for each instantiation. Try adding the CHOICES computation to the init method like so:\ndef __init__(self, *args, **kwargs):\n super(self.__class__, self).__init__(*args, **kwargs)\n CHOICES = []\n for notice in NoticeType.objects.all():\n CHOICES.append( (notice.label, notice.display) )\n self.fields['notifications'].choices = CHOICES\n\n"
] |
[
9,
7
] |
[] |
[] |
[
"django",
"django_forms",
"python"
] |
stackoverflow_0001498763_django_django_forms_python.txt
|
Q:
Running python script as another user
On a Linux box I want to run a Python script as another user.
I've already made a wrapper program in C++ that calls the script, since I've realized that the ownership of running the script is decided by the ownership of the python interpreter. After that I change the C++ program to a different user and run the C++ program.
This setup doesn't seem to be working. Any ideas?
A:
You can set the user with os.setuid(), and you can get the uid with pwd.
Like so:
>>> import pwd, os
>>> uid = pwd.getpwnam('root')[2]
>>> os.setuid(uid)
Obviously this only works if the user or executable has the permission to do so. Exactly how to set that up I don't know. Obviously it works if you are root. I think you may need to set the setuid flag on the Python executable, and that would leave a WHOPPING security hole. Possibly that's permittable if the user you setuid to is a dedicated restricted user that can't do anything except whatever you need to do.
Unix security, based on users and setuiding and stuff, is not very good or practical, and it's easy to leave big security holes. A more secure option is actually to do this client-server style, so you have a daemon that does everything, and the client talks to it. The daemon can then run with higher security than the users, but the users would have to give a name and password when they run the script, or identify themselves with some public/private key or some such.
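If you do go the setuid route, here is a sketch of how a root-owned wrapper could drop privileges just for the child process; the user name and script path are made-up examples:
import os
import pwd
import subprocess

def demote(username):
    entry = pwd.getpwnam(username)
    def set_ids():
        os.setgid(entry.pw_gid)   # drop the group first,
        os.setuid(entry.pw_uid)   # then the user id
    return set_ids

subprocess.call(['python', '/home/appuser/script.py'],
                preexec_fn=demote('appuser'))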
A:
Give those users the ability to sudo su $dedicated_username and tailor the permissions on your system so that $dedicated_user has sufficient, but not excessive, access.
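With sudo set up that way, the wrapper can stay tiny; a sketch (the user name and path are placeholders):
import subprocess
subprocess.call(['sudo', '-u', 'dedicated_user', 'python', '/path/to/script.py'])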
|
Running python script as another user
|
On a Linux box I want to run a Python script as another user.
I've already made a wrapper program in C++ that calls the script, since I've realized that the ownership of running the script is decided by the ownership of the python interpreter. After that I change the C++ program to a different user and run the C++ program.
This setup doesn't seem to be working. Any ideas?
|
[
"You can set the user with os.setuid(), and you can get the uid with pwd.\nLike so:\n>>> import pwd, os\n>>> uid = pwd.getpwnam('root')[2]\n>>> os.setuid(uid)\n\nObviously this only works if the user or executable has the permission to do so. Exactly how to set that up I don't know. Obviously it works if you are root. I think you may need to the the setuid flag on the Python executable, and that would leave a WHOPPING security hole. possible that's permittable if the user you setuid too is a dedicated restricted user that can't do anything except whatever you need to do.\nUnix security, based on users and setuiding and stuff, is not very good or practical, and it's easy to leave big security holes. A more secure option is actually to do this client-server typish, so you have a demon that does everything, and the client talks to it. The demon can then run with a higher security than the users, but the users would have to give a name and password when they run the script, or identify themselves with some public/private key or somesuch.\n",
"Give those users the ability to sudo su $dedicated_username and tailor the permissions on your system so that $dedicated_user has sufficient, but not excessive, access.\n"
] |
[
15,
0
] |
[
"Use the command sudo.\nIn order to run a program as a user, the system must \"authenticate\" that user.\nObviously, root can run any program as any user, and any user can su to another user with a password.\nThe program sudo can be configured to allow a group of users to sudo a particular command as a particular user.\nFor example, you could create a group scriptUsers and a user scriptRun. Then, configure sudo to let any user in scriptUsers become scriptRun ONLY to run your script.\n"
] |
[
-1
] |
[
"linux",
"python"
] |
stackoverflow_0001499268_linux_python.txt
|
Q:
How do I use the wx.lib.docview package?
I'm currently working on a simple wxPython app that's essentially document based. So far I've been manually implementing the usual open/save/undo/redo etc etc stuff.
It occurred to me that wxPython must have something to help me out and after a bit of searching revealed the docview package.
At this point though I'm just not quite sure how to hook everything up and get things started. Anyone got any good links or hints about places to start?
The docs seem to be a little thin about this and Robin Dunn's wxPython book doesn't really cover this package at all.
A:
You might take a look at the docviewdemo.py from the wxPython Docs and Demos:
on my machine they are located:
C:\Program Files\wxPython2.8 Docs and Demos\samples\pydocview\
C:\Program Files\wxPython2.8 Docs and Demos\samples\docview\
A:
In addition to the ones mentioned, there is quite an extensive example docview/pydocview in the samples\ide. If you want it to run you will have to make a few code corrections (I have submitted a ticket that outlines the fixes at trac.wxwidgets.org #11237). It is pretty complex but I found it handy to figure out how to do some more complex things. For example, samples\ide\activegrid\tools\ProjectEditor.py is built from scratch and has undo support etc rather than just relying on a control that does everything for you already. That way you can see how things are supposed to be done at the detailed level. The documentation is rather useless in that regard.
If you have decided against using docview/pydocview I have a spreadsheet application built on wxPython that you may find useful as an example. While it does not implement a document view framework it does have some characteristics of it and I've implemented an undo/redo system. Check it out at http://www.missioncognition.net/pysheet/ I'm currently working on a pydocview based app so I expect that to be up on my site eventually.
|
How do I use the wx.lib.docview package?
|
I'm currently working on a simple wxPython app that's essentially document based. So far I've been manually implementing the usual open/save/undo/redo etc etc stuff.
It occurred to me that wxPython must have something to help me out and after a bit of searching revealed the docview package.
At this point though I'm just not quite sure how to hook everything up and get things started. Anyone got any good links or hints about places to start?
The docs seem to be a little thin about this and Robin Dunn's wxPython book doesn't really cover this package at all.
|
[
"You might take a look at the docviewdemo.py from the wxPython Docs and Demos:\non my machine they are located:\n\nC:\\Program Files\\wxPython2.8 Docs and Demos\\samples\\pydocview\\\nC:\\Program Files\\wxPython2.8 Docs and Demos\\samples\\docview\\\n\n",
"In addition to the ones mentioned, there is quite an extensive example docview/pydocview in the samples\\ide. If you want it to run you will have to make a few code corrections (I have submitted a ticket that outlines the fixes at trac.wxwidgets.org #11237). It is pretty complex but I found it handy to figure out how to do some more complex things. For example, samples\\ide\\activegrid\\tools\\ProjectEditor.py is built from scratch and has undo support etc rather than just relying on a control that does everything for you already. That way you can see how things are supposed to be done at the detailed level. The documentation is rather useless in that regard.\nIf you have decided against using docview/pydocview I have a spreadsheet application built on wxPython that you may find useful as an example. While it does not implement a document view framework it does have some characteristics of it and I've implemented an undo/redo system. Check it out at http://www.missioncognition.net/pysheet/ I'm currently working on a pydocview based app so I expect that to be up on my site eventually.\n"
] |
[
1,
1
] |
[] |
[] |
[
"docview",
"python",
"user_interface",
"wxpython"
] |
stackoverflow_0000751159_docview_python_user_interface_wxpython.txt
|
Q:
http checks python
Learning Python here. I want to check if anybody is running a web server on my local network using this code, but it gives me a lot of errors in the console.
#!/usr/bin/env python

import httplib
last = 1
while last <> 255:
    url = "10.1.1." + "last"
    connection = httplib.HTTPConnection("url", 80)
    connection.request("GET","/")
    response = connection.getresponse()
    print (response.status)
    last = last + 1
A:
I do suggest changing the while loop to the more idiomatic for loop, and handling exceptions:
#!/usr/bin/env python

import httplib
import socket

for i in range(1, 256):
    try:
        url = "10.1.1.%d" % i
        connection = httplib.HTTPConnection(url, 80)
        connection.request("GET","/")
        response = connection.getresponse()
        print url + ":", response.status
    except socket.error:
        print url + ":", "error!"
To see how to add a timeout to this, so it doesn't take so long to check each server, see here.
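One way to add that timeout (a sketch using the module-wide default socket timeout, so it also works on older Pythons that lack the timeout argument to HTTPConnection):
#!/usr/bin/env python

import httplib
import socket

socket.setdefaulttimeout(2)   # give up on each host after 2 seconds

for i in range(1, 256):
    url = "10.1.1.%d" % i
    try:
        connection = httplib.HTTPConnection(url, 80)
        connection.request("GET", "/")
        response = connection.getresponse()
        print url + ":", response.status
    except socket.error:
        print url + ":", "no web server"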
A:
as pointed out, you have some basic
quotation issues. but more fundamentally:
1. you're not using Pythonesque constructs to handle things but you're coding them as simple imperative code. that's fine, of course, but below are examples of funner (and better) ways to express things
2. you need to explicitly set timeouts or it'll take forever
3. you need to multithread or it'll take forever
4. you need to handle various common exception types or your code will crash: connections will fail (including time out) under numerous conditions against real web servers
5. 10.1.1.* is only one possible set of "local" servers. RFC 1918 spells out that the "local" ranges are 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, and 192.168.0.0 - 192.168.255.255. the problem of generic detection of responders in your "local" network is a hard one
6. web servers (especially local ones) often run on other ports than 80 (notably on 8000, 8001, or 8080)
7. the complexity of general web servers, dns, etc is such that you can get various timeout behaviors at different times (and affected by recent operations)
below, some sample code to get you started, that pretty much addresses all of
the above problems except (5), which i'll assume is (well) beyond
the scope of the question.
btw i'm printing the size of the returned web page, since it's a simple
"signature" of what the page is. the sample IPs return various Yahoo
assets.
import urllib
import threading
import socket

def t_run(thread_list, chunks):
    t_count = len(thread_list)
    print "Running %s jobs in groups of %s threads" % (t_count, chunks)
    for x in range(t_count / chunks + 1):
        i = x * chunks
        i_c = min(i + chunks, t_count)
        c = len([t.start() for t in thread_list[i:i_c]])
        print "Started %s threads for jobs %s...%s" % (c, i, i_c - 1)
        c = len([t.join() for t in thread_list[i:i_c]])
        print "Finished %s threads for job index %s" % (c, i)

def url_scan(ip_base, timeout=5):
    socket.setdefaulttimeout(timeout)
    def f(url):
        # print "-- Trying (%s)" % url
        try:
            # the print will only complete if there's a server there
            r = urllib.urlopen(url)
            if r:
                print "## (%s) got %s bytes" % (url, len(r.read()))
            else:
                print "## (%s) failed to connect" % url
        except IOError, msg:
            # these are just the common cases
            if str(msg)=="[Errno socket error] timed out":
                return
            if str(msg)=="[Errno socket error] (10061, 'Connection refused')":
                return
            print "## (%s) got error '%s'" % (url, msg)
    # you might want 8000 and 8001, too
    return [threading.Thread(target=f,
                             args=("http://" + ip_base + str(x) + ":" + str(p),))
            for x in range(255) for p in [80, 8080]]

# run them (increase chunk size depending on your memory)
# also, try different timeouts
t_run(url_scan("209.131.36."), 100)
t_run(url_scan("209.131.36.", 30), 100)
A:
Remove the quotes from the variable names last and url. Python is interpreting them as strings rather than variables. Try this:
#!/usr/bin/env python

import httplib
last = 1
while last <> 255:
    url = "10.1.1.%d" % last
    connection = httplib.HTTPConnection(url, 80)
    connection.request("GET","/")
    response = connection.getresponse()
    print (response.status)
    last = last + 1
A:
You're trying to connect to an url that is literally the string 'url': that's what the quotes you're using in
connection = httplib.HTTPConnection("url", 80)
mean. Once you remedy that (by removing those quotes) you'll be trying to connect to "10.1.1.last", given the quotes in the previous line. Set that line to
url = "10.1.1." + str(last)
and it could work!-)
|
http checks python
|
Learning Python here. I want to check if anybody is running a web server on my local network using this code, but it gives me a lot of errors in the console.
#!/usr/bin/env python
import httplib
last = 1
while last <> 255:
url = "10.1.1." + "last"
connection = httplib.HTTPConnection("url", 80)
connection.request("GET","/")
response = connection.getresponse()
print (response.status)
last = last + 1
|
[
"I do suggest changing the while loop to the more idiomatic for loop, and handling exceptions:\n#!/usr/bin/env python\n\nimport httplib\nimport socket\n\n\nfor i in range(1, 256):\n try:\n url = \"10.1.1.%d\" % i\n connection = httplib.HTTPConnection(url, 80)\n connection.request(\"GET\",\"/\")\n response = connection.getresponse()\n print url + \":\", response.status\n except socket.error:\n print url + \":\", \"error!\"\n\nTo see how to add a timeout to this, so it doesn't take so long to check each server, see here.\n",
"as pointed out, you have some basic\nquotation issues. but more fundamentally:\n\nyou're not using Pythonesque\nconstructs to handle things but\nyou're coding them as simple\nimperative code. that's fine, of course, but below are examples of funner (and better) ways to express things\nyou need to explicitly set timeouts or it'll\ntake forever\nyou need to multithread or it'll take forever\nyou need to handle various common exception types or your code will crash: connections will fail (including\ntime out) under numerous conditions\nagainst real web servers\n10.1.1.* is only one possible set of \"local\" servers. RFC 1918 spells out that\nthe \"local\" ranges are 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, and\n192.168.0.0 - 192.168.255.255. the problem of\ngeneric detection of responders in\nyour \"local\" network is a hard one\nweb servers (especially local\nones) often run on other ports than\n80 (notably on 8000, 8001, or 8080)\nthe complexity of general\nweb servers, dns, etc is such that\nyou can get various timeout\nbehaviors at different times (and affected by recent operations)\n\nbelow, some sample code to get you started, that pretty much addresses all of\nthe above problems except (5), which i'll assume is (well) beyond\nthe scope of the question.\nbtw i'm printing the size of the returned web page, since it's a simple\n\"signature\" of what the page is. the sample IPs return various Yahoo\nassets.\nimport urllib\nimport threading\nimport socket\n\ndef t_run(thread_list, chunks):\n t_count = len(thread_list)\n print \"Running %s jobs in groups of %s threads\" % (t_count, chunks)\n for x in range(t_count / chunks + 1):\n i = x * chunks\n i_c = min(i + chunks, t_count)\n c = len([t.start() for t in thread_list[i:i_c]])\n print \"Started %s threads for jobs %s...%s\" % (c, i, i_c - 1)\n c = len([t.join() for t in thread_list[i:i_c]])\n print \"Finished %s threads for job index %s\" % (c, i)\n\ndef url_scan(ip_base, timeout=5):\n socket.setdefaulttimeout(timeout)\n def f(url):\n # print \"-- Trying (%s)\" % url\n try:\n # the print will only complete if there's a server there\n r = urllib.urlopen(url)\n if r:\n print \"## (%s) got %s bytes\" % (url, len(r.read()))\n else:\n print \"## (%s) failed to connect\" % url\n except IOError, msg:\n # these are just the common cases\n if str(msg)==\"[Errno socket error] timed out\":\n return\n if str(msg)==\"[Errno socket error] (10061, 'Connection refused')\":\n return\n print \"## (%s) got error '%s'\" % (url, msg)\n # you might want 8000 and 8001, too\n return [threading.Thread(target=f, \n args=(\"http://\" + ip_base + str(x) + \":\" + str(p),)) \n for x in range(255) for p in [80, 8080]]\n\n# run them (increase chunk size depending on your memory)\n# also, try different timeouts\nt_run(url_scan(\"209.131.36.\"), 100)\nt_run(url_scan(\"209.131.36.\", 30), 100)\n\n",
"Remove the quotes from the variable names last and url. Python is interpreting them as strings rather than variables. Try this:\n#!/usr/bin/env python\n\nimport httplib\nlast = 1\nwhile last <> 255:\n url = \"10.1.1.%d\" % last\n connection = httplib.HTTPConnection(url, 80)\n connection.request(\"GET\",\"/\")\n response = connection.getresponse()\n print (response.status)\n last = last + 1\n\n",
"You're trying to connect to an url that is literally the string 'url': that's what the quotes you're using in \n connection = httplib.HTTPConnection(\"url\", 80)\n\nmean. Once you remedy that (by removing those quotes) you'll be trying to connect to \"10.1.1.last\", given the quotes in the previous line. Set that line to\n url = \"10.1.1.\" + str(last)\n\nand it could work!-)\n"
] |
[
5,
2,
1,
1
] |
[] |
[] |
[
"http",
"python"
] |
stackoverflow_0001495367_http_python.txt
|
Q:
Python/WebApp Google App Engine - testing for user/pass in the headers
When you call a web service like this:
username = 'test12'
password = 'test34'
client = httplib2.Http(".cache")
client.add_credentials(username,password)
URL = "http://localhost:8080/wyWebServiceTest"
response, content = client.request(URL)
How do you get the username/password into variables on the server side (i.e. in the web-service that I'm writing).
I checked the self.request.headers and self.request.environ and couldn't find them.
(I'm not using Google Login, need to bounce this userid/pass against my own database to verify security.)
I was trying to ideas from this page: http://pythonpaste.org/webob/reference.html#headers
Thanks,
Neal Walters
Slight enhancement to Peter's code below:
auth = None
if 'Authorization' in self.request.headers:
    auth = self.request.headers['Authorization']
if not auth:
A:
I haven't tested this code (insert smiley) but I think this is the sort of thing you need. Basically your credentials won't be in the header if your server hasn't bounced a 401 back to your client (the client needs to know the realm to know what credentials to provide).
class MYREALM_securepage(webapp.RequestHandler):
    def get(self):
        if not 'Authorization' in self.request.headers:
            self.response.headers['WWW-Authenticate'] = 'Basic realm="MYREALM"'
            self.response.set_status(401)
            self.response.out.write("Authorization required")
        else:
            auth = self.request.headers['Authorization']
            (username, password) = base64.b64decode(auth.split(' ')[1]).split(':')
            # Check the username and password, and proceed ...
A:
The credentials will appear in the Authorization header. The steps work like this:
1. Client makes a request to your app with no attempt at authorization
2. Server responds with a "401 Authorization Required" response, and the "WWW-Authenticate" header set to 'Basic realm="something"' (for basic auth).
3. Client responds with an Authorization header set appropriately (see below).
The exact content of the client's Authorization header in step 3 depends on the authorization method used. For HTTP Basic auth, it's the base64-encoded user credentials - see here. For HTTP digest auth, both the server's header and the response from the client are a bit more complicated - see here.
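For reference, a small sketch of what the Basic auth round trip in step 3 looks like on both sides; the credentials are the test values from the question, and everything is plain base64:
import base64

# what httplib2 sends once it has seen the 401:
header_value = 'Basic ' + base64.b64encode('test12:test34')
# header_value == 'Basic dGVzdDEyOnRlc3QzNA=='

# what your handler does to get the credentials back:
encoded = header_value.split(' ')[1]
username, password = base64.b64decode(encoded).split(':')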
A:
httplib2 will only pass the credentials after a 401 response from the web server, after which the credentials should be sent in an Authorization: header.
|
Python/WebApp Google App Engine - testing for user/pass in the headers
|
When you call a web service like this:
username = 'test12'
password = 'test34'
client = httplib2.Http(".cache")
client.add_credentials(username,password)
URL = "http://localhost:8080/wyWebServiceTest"
response, content = client.request(URL)
How do you get the username/password into variables on the server side (i.e. in the web-service that I'm writing).
I checked the self.request.headers and self.request.environ and couldn't find them.
(I'm not using Google Login, need to bounce this userid/pass against my own database to verify security.)
I was trying to ideas from this page: http://pythonpaste.org/webob/reference.html#headers
Thanks,
Neal Walters
Slight enhancement to Peter's code below:
auth = None
if 'Authorization' in self.request.headers:
auth = self.request.headers['Authorization']
if not auth:
|
[
"I haven't tested this code (insert smiley) but I think this is the sort of thing you need. Basically your credentials won't be in the header if your server hasn't bounced a 401 back to your client (the client needs to know the realm to know what credentials to provide).\nclass MYREALM_securepage(webapp.RequestHandler):\n def get(self):\n if not 'Authorization' in self.request.headers:\n self.response.headers['WWW-Authenticate'] = 'Basic realm=\"MYREALM\"'\n self.response.set_status(401)\n self.response.out.write(\"Authorization required\")\n else:\n auth = self.request.headers['Authorization']\n (username, password) = base64.b64decode(auth.split(' ')[1]).split(':')\n # Check the username and password, and proceed ...\n\n",
"The credentials will appear in the Authorization header. The steps work like this:\n\nClient makes a request to your app with no attempt at authorization\nServer responds with a \"401 Authorization Required\" response, and the \"WWW-Authenticate\" header set to 'Basic realm=\"something\"' (for basic auth).\nClient responds with an Authorization header set appropriately (see below).\n\nThe exact content of the client's Authorization header in step 3 depends on the authorization method used. For HTTP Basic auth, it's the base64-encoded user credentials - see here. For HTTP digest auth, both the server's header and the response from the client are a bit more complicated - see here.\n",
"httplib2 will only pass the credentials after a 401 response from the web server, after which the credentials should be sent in an Authorization: header.\n"
] |
[
7,
3,
1
] |
[] |
[] |
[
"google_app_engine",
"python",
"web_applications"
] |
stackoverflow_0001499832_google_app_engine_python_web_applications.txt
|
Q:
There’s PyQuery… is there one for Ruby?
You guys know PyQuery?
I was wondering if there’s an equivalent for Ruby.
A:
There's JRails, but it's outdated. If you're just looking for a way to use JQuery with Rails though: http://railscasts.com/episodes/136-jquery
A:
Hpricot - Most jQuery-like HTML parser for Ruby
|
There’s PyQuery… is there one for Ruby?
|
You guys know PyQuery?
I was wondering if there’s an equivalent for Ruby.
|
[
"There's JRails, but it's outdated. If you're just looking for a way to use JQuery with Rails though: http://railscasts.com/episodes/136-jquery\n",
"Hpricot - Most jQuery-like HTML parser for Ruby\n"
] |
[
1,
1
] |
[] |
[] |
[
"python",
"ruby"
] |
stackoverflow_0001484963_python_ruby.txt
|
Q:
Programmatically determine maximum command line length with Python
Does anyone know a portable way for Python to determine a system's maximum command line length? The program I'm working on builds a command and feeds it to subprocess. For systems with smaller command line length maximums, it is possible that the command will be too long. If I can detect that, the command can be broken up to avoid exceeding the maximum length, but I've not found a (portable) way to determine the maximum.
A:
Just ask sysconf:
os.sysconf('SC_ARG_MAX')
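A rough sketch of using that value to decide when to split a long command into batches; 'filenames' is a hypothetical list of arguments, and both the fallback limit and the headroom left for the environment are assumptions, not fixed rules:
import os
import subprocess

try:
    arg_max = os.sysconf('SC_ARG_MAX')      # POSIX systems
except (AttributeError, ValueError, OSError):
    arg_max = 32000                         # conservative guess for other platforms

base_cmd = ['grep', '-l', 'TODO']
batches, batch = [], []
for name in filenames:                      # hypothetical list of file names
    candidate = base_cmd + batch + [name]
    if sum(len(arg) + 1 for arg in candidate) > arg_max - 2048:  # leave headroom
        batches.append(batch)
        batch = []
    batch.append(name)
if batch:
    batches.append(batch)

for batch in batches:
    subprocess.call(base_cmd + batch)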
|
Programmatically determine maximum command line length with Python
|
Does anyone know a portable way for Python to determine a system's maximum command line length? The program I'm working on builds a command and feeds it to subprocess. For systems with smaller command line length maximums, it is possible that the command will be too long. If I can detect that, the command can be broken up to avoid exceeding the maximum length, but I've not found a (portable) way to determine the maximum.
|
[
"Just ask sysconf:\nos.sysconf('SC_ARG_MAX')\n\n"
] |
[
6
] |
[] |
[] |
[
"command_line",
"python",
"subprocess"
] |
stackoverflow_0001500542_command_line_python_subprocess.txt
|
Q:
Django: How can you stop long queries from killing your database?
I'm using Django 1.1 with Mysql 5.* and MyISAM tables.
Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out.
I'm going to try to track down all the corner cases, but it's nice to have a safety net so the site doesn't come down.
How do I avoid this? Can I set maximum query times?
A:
Unfortunately MySQL doesn't allow you an easy way to avoid this. A common method is basically to write a script that checks all running processes every X seconds (based on what you think is "long") and kill ones it sees are running too long. You can at least get some basic diagnostics, however, by setting log_slow_queries in MySQL which will write all queries that take longer than 10 seconds into a log. If that's too long for what you regard as "slow" for your purposes, you can set long_query_time to a value other than 10 to change the threshold.
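A minimal sketch of that watchdog script using the MySQLdb driver; the connection details are placeholders, it assumes an account with the PROCESS and SUPER privileges, and the 30-second threshold is just an example:
import time
import MySQLdb

MAX_SECONDS = 30   # what we consider "too long"

def kill_long_queries(conn):
    cursor = conn.cursor()
    cursor.execute("SHOW FULL PROCESSLIST")
    for row in cursor.fetchall():
        thread_id, command, seconds = row[0], row[4], row[5]
        if command == 'Query' and seconds > MAX_SECONDS:
            cursor.execute("KILL %d" % thread_id)

while True:
    conn = MySQLdb.connect(host='localhost', user='watchdog', passwd='secret', db='mysql')
    kill_long_queries(conn)
    conn.close()
    time.sleep(10)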
A:
I'm doing a Django DB-replication app and have the same predicament, queries across a WAN can sometimes just seem to hang if the network latency increases.
From http://code.activestate.com/recipes/576780/
Recipe 576780: Timeout for (nearly) any callable
Create a time limited version of any callable.
For example, to limit function f to t seconds,
first create a time limited version of f.
from timelimited import *
f_t = TimeLimited(f, t)
Then, instead of invoking f(...), use f_t like
try:
    r = f_t(...)
except TimeLimitExpired:
    r = ... # timed out
Use it the following way for example:
def _run_timed_query(cursor, log_msg, timeout, query_string, *query_args):
    """Run a timed query, do error handling and logging"""
    import sys
    import traceback
    from timelimited import *

    try:
        return TimeLimited(cursor.execute, timeout)(query_string, *query_args)
    except TimeLimitExpired:
        logger_ec.error('%s; Timeout error.' % log_msg)
        raise TimeLimitExpired
    except:
        (exc_type, exc_info, tb) = sys.exc_info()
        logger_ec.error('%s; %s.' % (log_msg, traceback.format_exception(exc_type, exc_info, None)[0]))
        raise exc_type
A:
It seems that the only reliable way to abort a query is the kill command. A less drastic measure is to close the connection (and reopen a new one); this will terminate queries as soon as they try to send some output to the client.
A:
Do you know what the queries are? Maybe you could optimise the SQL or put some indexes on your tables?
A:
Use InnoDB Tables, they do row-locking instead of table-locking.
A:
You shouldn't write queries like that, at least not to run against your live database. Mysql has a "slow queries" pararameter which you can use to identify the queries that are killing you. Most of the time, these slow queries are either buggy or can be speeded up by defining a new index or two.
|
Django: How can you stop long queries from killing your database?
|
I'm using Django 1.1 with Mysql 5.* and MyISAM tables.
Some of my queries can take a TON of time for outliers in my data set. These lock the tables and shut the site down. Other times it seems some users cancel the request before it is done and some queries will be stuck in the "Preparing" phase locking all other queries out.
I'm going to try to track down all the corner cases, but it's nice to have a safety net so the site doesn't come down.
How do I avoid this? Can I set maximum query times?
|
[
"Unfortunately MySQL doesn't allow you an easy way to avoid this. A common method is basically to write a script that checks all running processes every X seconds (based on what you think is \"long\") and kill ones it sees are running too long. You can at least get some basic diagnostics, however, by setting log_slow_queries in MySQL which will write all queries that take longer than 10 seconds into a log. If that's too long for what you regard as \"slow\" for your purposes, you can set long_query_time to a value other than 10 to change the threshold. \n",
"I'm doing a Django DB-replication app and have the same predicament, queries across a WAN can sometimes just seem to hang if the network latency increases. \nFrom http://code.activestate.com/recipes/576780/\nRecipe 576780: Timeout for (nearly) any callable \nCreate a time limited version of any callable.\nFor example, to limit function f to t seconds,\nfirst create a time limited version of f.\nfrom timelimited import *\n\nf_t = TimeLimited(f, t)\n\nThen, instead of invoking f(...), use f_t like\ntry:\n r = f_t(...)\nexcept TimeLimitExpired:\n r = ... # timed out\n\nUse it the following way for example:\ndef _run_timed_query(cursor, log_msg, timeout, query_string, *query_args):\n \"\"\"Run a timed query, do error handling and logging\"\"\"\n import sys\n import traceback\n from timelimited import *\n\n try:\n return TimeLimited(cursor.execute, timeout)(query_string, *query_args)\n except TimeLimitExpired:\n logger_ec.error('%s; Timeout error.' % log_msg)\n raise TimeLimitExpired\n except:\n (exc_type, exc_info, tb) = sys.exc_info()\n logger_ec.error('%s; %s.' % (log_msg, traceback.format_exception(exc_type, exc_info, None)[0]))\n raise exc_type\n\n",
"It seems that the only reliable way to abort a query is the kill command. A less drastic measure is to close the connection (and reopen a new one); this will terminate queries as soon as they try to send some output to the client.\n",
"Do you know what the queries are? Maybe you could optimise the SQL or put some indexes on your tables?\n",
"Use InnoDB Tables, they do row-locking instead of table-locking.\n",
"You shouldn't write queries like that, at least not to run against your live database. Mysql has a \"slow queries\" pararameter which you can use to identify the queries that are killing you. Most of the time, these slow queries are either buggy or can be speeded up by defining a new index or two.\n"
] |
[
1,
1,
0,
0,
0,
0
] |
[] |
[] |
[
"django",
"mysql",
"python",
"timeout"
] |
stackoverflow_0001353206_django_mysql_python_timeout.txt
|
Q:
Named and default arguments class arguments in Python
In PHP, I have to pass the arguments in the same order as the arguments are in the constructor.
Now, in Python, take
listbox = Listbox(root, yscrollcommand=scrollbar.set)
for example.
If I had passed yscrollcommand=scrollbar.set as the third argument and yscrollcommand was the second argument in the constructor, would I still be able to access scrollbar.set from yscrollcommand inside the listbox constructor?
I ask this because even though the arguments are not in the same order, with an equal sign they have a name.
A:
Named arguments don't have to be in order.
class xyz:
    def __init__ (self, a='1', b='2'):
        print a,b

xyz(b=3,a=4)
xyz(a=5,b=6)
>>4 3
>>5 6
|
Named and default arguments class arguments in Python
|
In PHP, I have to pass the arguments in the same order as the arguments are in the constructor.
Now, in Python, take
listbox = Listbox(root, yscrollcommand=scrollbar.set)
for example.
If I had passed yscrollcommand=scrollbar.set as the third argument and yscrollcommand was the second argument in the constructor, would I still be able to access scrollbar.set from yscrollcommand inside the listbox constructor?
I ask this because even though the arguments are not in the same order, with an equal sign they have a name.
|
[
"Named arguments don't have to be in order.\nclass xyz:\n def __init__ (self, a='1', b='2'):\n print a,b\n\nxyz(b=3,a=4)\nxyz(a=5,b=6)\n\n>>4 3\n>>5 6\n\n"
] |
[
4
] |
[] |
[] |
[
"arguments",
"default",
"named",
"python"
] |
stackoverflow_0001501208_arguments_default_named_python.txt
|
Q:
Parsing commandline output progression containing carriage returns in real time with python
I am able to transform carriage returns into new lines. The problem, however, is to get it running in nearly 'real time'. It will look quite silly if the progress bar's only values are 0 and 100 :-)
This code returns output at once:
import subprocess

p = subprocess.Popen(['mplayer', '/home/user/sample.mkv'], stdout=subprocess.PIPE).communicate()[0]
for line in p.splitlines():
    if line.strip():
        print line
A:
pexpect anywhere but Windows, and wexpect on Windows, are always my recommendations when you need to "defeat buffering" and read a subprocess's output "in near real-time", as you put it. Since the subprocess you're running most likely buffers its output differently when it's outputting to a terminal vs. anything else (as that's the normal behavior of the C runtime libraries), you need to trick it into believing it IS outputting to a terminal rather than to your program, and that's what pexpect achieves (by building a pseudo-terminal via the lower-level pty module). I'm actually amazed that wexpect was able to do much the same on Windows, yet, while occasionally imperfect, it does also appear to work;-).
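A minimal pexpect sketch of that idea: mplayer updates its status line by ending it with a carriage return, so waiting for '\r' yields each progress update as it arrives (the file path is just an example):
import pexpect

child = pexpect.spawn('mplayer /home/user/sample.mkv')
while True:
    try:
        child.expect('\r', timeout=60)   # wait for the next status-line update
        print child.before.strip()       # whatever was printed since the last '\r'
    except (pexpect.EOF, pexpect.TIMEOUT):
        break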
A:
You are in for a world of pain with buffering in my experience. The reason being is that the standard C library will detect stdout isn't connected to a terminal and use more buffering. There is nothing you can do about that except hack the source of mplayer.
If you use python-pexpect though, it will start your subprocess using pseudo-ttys which the C library believes to be a terminal and it won't reset the buffering.
It is very easy to make subprocess deadlock when doing this sort of thing too which is another problem python-pexpect gets over.
A:
OK, thanks! I will keep an eye on pexpect.
But I found out that there is a cross-platform way to do it: PyQt4 and QProcess. While it is certainly not a solution for every program, it fits well for a Qt4 frontend application :)
|
Parsing commandline output progression containing carriage returns in real time with python
|
I am able to transform carriage returns into new lines. The problem, however, is to get it running in nearly 'real time'. It will look quite silly if the progress bar's only values are 0 and 100 :-)
This code returns output at once:
import subprocess
p = subprocess.Popen(['mplayer', '/home/user/sample.mkv'], stdout=subprocess.PIPE).communicate()[0]
for line in p.splitlines():
if line.strip():
print line
|
[
"pexpect anywhere but Windows, and wexpect on Windows, are always my recommendations when you need to \"defeat buffering\" and read a subprocess's output \"in near real-time\", as you put it. Since the subprocess you're running most likely buffers its output differently when it's outputting to a terminal vs. anything else (as that's the normal behavior of the C runtime libraries), you need to trick it into believing it IS outputting to a terminal rather than to your program, and that's what pexpect achieves (by building a pseudo-terminal via the lower-level pty module). I'm actually amazed that wexpect was able to do much the same on Windows, yet, while occasionally imperfect, it does also appear to work;-).\n",
"You are in for a world of pain with buffering in my experience. The reason being is that the standard C library will detect stdout isn't connected to a terminal and use more buffering. There is nothing you can do about that except hack the source of mplayer.\nIf you use python-pexpect though, it will start your subprocess using pseudo-ttys which the C library believes to be a terminal and it won't reset the buffering.\nIt is very easy to make subprocess deadlock when doing this sort of thing too which is another problem python-pexpect gets over.\n",
"Ok, Thanks! I will keep on eye for pexpect. \nBut I found out that there is cosplatform way to do it: PyQt4 and QProcess. While it is not certainly solution for every program, it certainly fits for Qt4 frontend application :)\n"
] |
[
2,
1,
0
] |
[
"You need to do two things:\n\nYou must make sure that mplayer flushes the output for every line (should happen with progress output that gets printed into the same line).\nYou must read the output line by line. Instead of calling communicate(), you must close the p.stdin and then read p.stdout until EOF.\n\n"
] |
[
-1
] |
[
"carriage_return",
"parsing",
"python",
"real_time"
] |
stackoverflow_0001486305_carriage_return_parsing_python_real_time.txt
|
Q:
Python XML Serializers
Can someone recommend an XML serializer that is element or attribute centric, and that doesn't use key-value pairs?
For example, GAE db.model has a to_xml() function but it writes out like this:
<property name="firstname" type="string">John</property>
<property name="lastname" type="string">Doe</property>
<property name="city" type="string">Dallas</property>
<property name="dateTimeCreated" type="gd:when">2009-09-30 19:45:45.975270</property>
From what I remember, these are much harder to map in XSLT tools than simple elements/attributes like this:
DESIRED OUTPUT
<firstname>John</firstname>
<lastname>Doe</lastname>
<city>Dallas</city>
<dateTimeCreated type="gd:when">2009-09-30 19:45:45.975270</dateTimeCreated>
I just tried the GNOSIS lib, and my first attempt worked, but also created name value pairs something like this:
<attr name="__coredata__" type="dict" id="4760164835402068688" >
<entry>
<key type="string">firstname</key>
<val type="string">John</val>
</entry>
<entry>
<key type="string">lastname</key>
<val type="string">Doe</val>
</entry>
etc...
Thanks,
Neal Walters
A:
pyxslt.serialize looks closest to your specs but not a 100% map (for example, it doesn't record the type -- just turns everything into strings). Could still be a good basis from where to customize (maybe by copy / paste / edit, if it doesn't offer all the hooks you need for a cleaner customization).
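If none of the libraries fit, a hand-rolled sketch with the standard library's ElementTree can produce the element-centric layout from DESIRED OUTPUT; the entity tag and field values here are placeholders, and the type attributes are left out for simplicity:
from xml.etree import ElementTree as ET

def entity_to_xml(tag, fields):
    root = ET.Element(tag)
    for name, value in fields.items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root)

print entity_to_xml('person', {'firstname': 'John', 'lastname': 'Doe', 'city': 'Dallas'})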
|
Python XML Serializers
|
Can someone recommend an XML serializer that is element or attribute centric, and that doesn't use key-value pairs?
For example, GAE db.model has a to_xml() function but it writes out like this:
<property name="firstname" type="string">John</property>
<property name="lastname" type="string">Doe</property>
<property name="city" type="string">Dallas</property>
<property name="dateTimeCreated" type="gd:when">2009-09-30 19:45:45.975270</property>
From what I remember, these are much harder to map in XSLT tools than simple elements/attributes like this:
DESIRED OUTPUT
<firstname>John</firstname>
<lastname>Doe</lastname>
<city>Dallas</city>
<dateTimeCreated type="gd:when">2009-09-30 19:45:45.975270</dateTimeCreated>
I just tried the GNOSIS lib, and my first attempt worked, but also created name value pairs something like this:
<attr name="__coredata__" type="dict" id="4760164835402068688" >
<entry>
<key type="string">firstname</key>
<val type="string">John</val>
</entry>
<entry>
<key type="string">lastname</key>
<val type="string">Doe</val>
</entry>
etc...
Thanks,
Neal Walters
|
[
"pyxslt.serialize looks closest to your specs but not a 100% map (for example, it doesn't record the type -- just turns everything into strings). Could still be a good basis from where to customize (maybe by copy / paste / edit, if it doesn't offer all the hooks you need for a cleaner customization).\n"
] |
[
2
] |
[] |
[] |
[
"python",
"xml_serialization"
] |
stackoverflow_0001500575_python_xml_serialization.txt
|
Q:
Parsing out data using BeautifulSoup in Python
I am attempting to use BeautifulSoup to parse through a DOM tree and extract the names of authors. Below is a snippet of HTML to show the structure of the code I'm going to scrape.
<html>
<body>
<div class="list-authors">
<span class="descriptor">Authors:</span>
<a href="/find/astro-ph/1/au:+Lin_D/0/1/0/all/0/1">Dacheng Lin</a>,
<a href="/find/astro-ph/1/au:+Remillard_R/0/1/0/all/0/1">Ronald A. Remillard</a>,
<a href="/find/astro-ph/1/au:+Homan_J/0/1/0/all/0/1">Jeroen Homan</a>
</div>
<div class="list-authors">
<span class="descriptor">Authors:</span>
<a href="/find/astro-ph/1/au:+Kosovichev_A/0/1/0/all/0/1">A.G. Kosovichev</a>
</div>
<!--There are many other div tags with this structure-->
</body>
</html>
My point of confusion is that when I do soup.find, it finds the first occurrence of the div tag that I'm searching for. After that, I search for all 'a' link tags. At this stage, how do I extract the authors' names from each of the link tags and print them out? Is there a way to do it using BeautifulSoup or do I need to use a regex? How do I continue iterating over every other div tag and extract the authors' names?
import re
import urllib2,sys
from BeautifulSoup import BeautifulSoup, NavigableString

html = urllib2.urlopen(address).read()
soup = BeautifulSoup(html)
try:
    authordiv = soup.find('div', attrs={'class': 'list-authors'})
    links=tds.findAll('a')
    for link in links:
        print ''.join(link[0].contents)
        #Iterate through entire page and print authors
except IOError:
    print 'IO error'
A:
just use findAll for the divs link you do for the links
for authordiv in soup.findAll('div', attrs={'class': 'list-authors'}):
A:
Since link is already taken from an iterable, you don't need to subindex link -- you can just do link.contents[0].
print link.contents[0] with your new example with two separate <div class="list-authors"> yields:
Dacheng Lin
Ronald A. Remillard
Jeroen Homan
A.G. Kosovichev
So I'm not sure I understand the comment about searching other divs. If they are different classes, you will either need to do a separate soup.find and soup.findAll, or just modify your first soup.find.
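Putting the two answers together, a sketch that walks every list-authors div and prints each author ('address' is the URL variable from the question, BeautifulSoup 3 API):
import urllib2
from BeautifulSoup import BeautifulSoup

html = urllib2.urlopen(address).read()
soup = BeautifulSoup(html)

for authordiv in soup.findAll('div', attrs={'class': 'list-authors'}):
    for link in authordiv.findAll('a'):
        print link.contents[0]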
|
Parsing out data using BeautifulSoup in Python
|
I am attempting to use BeautifulSoup to parse through a DOM tree and extract the names of authors. Below is a snippet of HTML to show the structure of the code I'm going to scrape.
<html>
<body>
<div class="list-authors">
<span class="descriptor">Authors:</span>
<a href="/find/astro-ph/1/au:+Lin_D/0/1/0/all/0/1">Dacheng Lin</a>,
<a href="/find/astro-ph/1/au:+Remillard_R/0/1/0/all/0/1">Ronald A. Remillard</a>,
<a href="/find/astro-ph/1/au:+Homan_J/0/1/0/all/0/1">Jeroen Homan</a>
</div>
<div class="list-authors">
<span class="descriptor">Authors:</span>
<a href="/find/astro-ph/1/au:+Kosovichev_A/0/1/0/all/0/1">A.G. Kosovichev</a>
</div>
<!--There are many other div tags with this structure-->
</body>
</html>
My point of confusion is that when I do soup.find, it finds the first occurrence of the div tag that I'm searching for. After that, I search for all 'a' link tags. At this stage, how do I extract the authors' names from each of the link tags and print them out? Is there a way to do it using BeautifulSoup or do I need to use Regex? How do I continue iterating over every other div tag and extract the authors' names?
import re
import urllib2,sys
from BeautifulSoup import BeautifulSoup, NavigableString
html = urllib2.urlopen(address).read()
soup = BeautifulSoup(html)
try:
authordiv = soup.find('div', attrs={'class': 'list-authors'})
links=tds.findAll('a')
for link in links:
print ''.join(link[0].contents)
#Iterate through entire page and print authors
except IOError:
print 'IO error'
|
[
"just use findAll for the divs link you do for the links\nfor authordiv in soup.findAll('div', attrs={'class': 'list-authors'}):\n",
"Since link is already taken from an iterable, you don't need to subindex link -- you can just do link.contents[0].\nprint link.contents[0] with your new example with two separate <div class=\"list-authors\"> yields:\nDacheng Lin\nRonald A. Remillard\nJeroen Homan\nA.G. Kosovichev\n\nSo I'm not sure I understand the comment about searching other divs. If they are different classes, you will either need to do a separate soup.find and soup.findAll, or just modify your first soup.find.\n"
] |
[
13,
1
] |
[] |
[] |
[
"beautifulsoup",
"html",
"parsing",
"python"
] |
stackoverflow_0001501690_beautifulsoup_html_parsing_python.txt
|
Q:
How to display outgoing and incoming SOAP messages for ZSI.ServiceProxy in Python?
How to display a SOAP message generated by ZSI.ServiceProxy and a response from a Web Service when a Web Service method is invoked?
A:
Here is some documentation on the ServiceProxy class. The constructor accepts a tracefile argument which can be any object with a write method, so this looks like what you are after. Modifying the example from the documentation:
from ZSI import ServiceProxy
import BabelTypes
import sys
dbgfile = open('dbgfile', 'w') # to log trace to a file, or
dbgfile = sys.stdout # to log trace to stdout
service = ServiceProxy('http://www.xmethods.net/sd/BabelFishService.wsdl',
tracefile=dbgfile,
typesmodule=BabelTypes)
value = service.BabelFish('en_de', 'This is a test!')
dbgfile.close()
|
How to display outgoing and incoming SOAP messages for ZSI.ServiceProxy in Python?
|
How to display a SOAP message generated by ZSI.ServiceProxy and a response from a Web Service when a Web Service method is invoked?
|
[
"Here is some documentation on the ServiceProxy class. The constructor accepts a tracefile argument which can be any object with a write method, so this looks like what you are after. Modifying the example from the documentation:\nfrom ZSI import ServiceProxy\nimport BabelTypes\nimport sys\n\ndbgfile = open('dbgfile', 'w') # to log trace to a file, or\ndbgfile = sys.stdout # to log trace to stdout\nservice = ServiceProxy('http://www.xmethods.net/sd/BabelFishService.wsdl',\n tracefile=dbgfile,\n typesmodule=BabelTypes)\nvalue = service.BabelFish('en_de', 'This is a test!')\n\ndbgfile.close()\n\n"
] |
[
3
] |
[] |
[] |
[
"python",
"web_services",
"zsi"
] |
stackoverflow_0001497038_python_web_services_zsi.txt
|
Q:
Python regex question
I am wondering what would be a pythonic solution for this question.
In "aa67bc54c9", is there any way to print "aa" 67 times, "bc" 54 times and so on, using regular expressions?
A:
This would be a concise way:
import re
s = "aa67bc54c9"
print ''.join(t * int(n) for t, n in re.findall(r"([a-z]+)([0-9]+)", s))
This solution uses a regular expression to match "one or more letters followed by one or more numbers", searching for all of them in the input string. Then it uses a list comprehension to iterate through each group found, assigning the letters to t and the digits to n in turn. The list generates strings using the string * operator, which repeats a string a given number of times (int() is used to convert the digit string into an integer). Finally, ''.join() is used to paste everything together.
For the regular expression, [a-z] is a character class consisting of any single (lowercase) letter of the alphabet. [a-z]+ means one or more lowercase letters. Similarly, [0-9]+ means one or more digits. The grouping parentheses around each component "capture" the characters within them and make them available as a result of the findall() function. There are two groups of parentheses, so there are two output values, which get assigned to t and n in the list comprehension.
A:
Here is my Python solution.
import re
pat = re.compile("^(\D+)(\d+)(.*)$")
def rle_expand(s):
lst = []
while True:
m = pat.match(s)
if m:
n = int(m.group(2))
lst.append(m.group(1) * n)
else:
lst.append(s)
break
s = m.group(3)
return "".join(lst)
s = "aa03bc05d9whew"
print rle_expand(s)
# prints aaaaaabcbcbcbcbcdddddddddwhew
s = "aa67bc54c9"
print rle_expand(s)
# prints: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcccccccccc
The problem is basically to expand a run-length encoding. First you have some sort of a pattern, then some digits that specify how many times to repeat the pattern.
First we import the re module, to gain access to Python's regular expressions.
Next we compile a pattern once, so we can use it later. What will this pattern do?
The pattern uses parentheses to mark groups of letters from the string being matched. There are three pairs of parens so this will match three groups. Before the first group is a '^' character, which anchors to the start of the string, and after the last group is a '$' character, which anchors to the end of the string; these aren't strictly necessary in this case. The first group matches anything that is not a digit using the special sequence \D; the + extends it to match a run of one or more instances of not-a-digit. The second group is similar, using \d+ to match a run of one or more digits. The third group uses . to match any character, then extends it with * to match a run of 0 or more of any character. (Note that * and + are very similar; it's just that * matches 0 or more of something and + matches one or more.)
Using a standard Python idiom, we build a string using a list. We start out with an empty list (called lst). As long as the pattern keeps matching things, we append things to this list. When we are done, we use "".join() to join the list together into a string.
pat.match() returns an object called a "match object", or None if the match failed. If the match succeeded, we convert match group 2 to an integer, and use Python string repetition operator ("multiply") on match group 1 to do the run-length expansion. After this, we rebind the name s with the results of match group 3, thereby snipping off the part of the string we just processed, and loop. If the match failed, we just append all of s to the list and break out of the loop.
Building a list and then using "".join() on the list is a standard Python idiom. It will give good performance with any version of Python. Because Python strings are immutable, you can suffer from very slow performance if you build up a long dynamic string by repeatedly appending to a string; you wind up copying the early parts of the string many times as you build your final string. Python lists can be trivially appended to, and then the final join operation is quite fast. (Recent versions of Python have optimized the case where you repeatedly append to a string, and no longer suffer from the repeated copying in this case.)
Greg Hewgill's solution only recognizes lower-case letters from 'a' to 'z' for the expansion text; you could fix that by putting \D instead of [a-z]. His solution uses explicit ranges such as [0-9] where my solution uses the Python shorthand abbreviations such as \d. His solution only expands the run length encoded sequences; if there is a trailing sequence that does not have an integer on it, mine passes that sequence unchanged, while his silently ignores it. However, it must be said that his solution is brutally elegant and I wish I had thought of it. :-)
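For illustration, here is what that one-liner looks like with \D substituted for [a-z] (this variant is mine, not from either original answer):
import re
s = "aa67bc54c9"
print ''.join(t * int(n) for t, n in re.findall(r"(\D+)(\d+)", s))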
|
Python regex question
|
I am wondering what would be a pythonic solution for this question.
In "aa67bc54c9", is there any way to print "aa" 67 times, "bc" 54 times and so on, using regular expressions?
|
[
"This would be a concise way:\nimport re\n\ns = \"aa67bc54c9\"\nprint ''.join(t * int(n) for t, n in re.findall(r\"([a-z]+)([0-9]+)\", s))\n\nThis solution uses a regular expression to match \"one or more letters followed by one or more numbers\", searching for all of them in the input string. Then it uses a list comprehension to iterate through each group found, assigning the letters to t and the digits to n in turn. The list generates strings using the string * operator, which repeates a string a given number of times (int() is used to convert the digit string into an integer). Finally, ''.join() is used to paste everything together.\nFor the regular expression, [a-z] is a character class consisting of any single (lowercase) letter of the alphabet. [a-z]+ means one or more lowercase letters. Similarly, [0-9]+ means one or more digits. The grouping parentheses around each component \"capture\" the characters within them and make them available as a result of the findall() function. There are two groups of parentheses, so there are two output values, which get assigned to t and n in the list comprehension.\n",
"Here is my Python solution.\nimport re\npat = re.compile(\"^(\\D+)(\\d+)(.*)$\")\n\ndef rle_expand(s):\n lst = []\n while True:\n m = pat.match(s)\n if m:\n n = int(m.group(2))\n lst.append(m.group(1) * n)\n else:\n lst.append(s)\n break\n s = m.group(3)\n return \"\".join(lst)\n\ns = \"aa03bc05d9whew\"\n\nprint rle_expand(s)\n# prints aaaaaabcbcbcbcbcdddddddddwhew\n\ns = “aa67bc54c9”\nprint rle_expand(s)\n# prints: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcbcccccccccc \n\nThe problem is basically to expand a run-length encoding. First you have some sort of a pattern, then some digits that specify how many times to repeat the pattern.\nFirst we import the re module, to gain access to Python's regular expressions.\nNext we compile a pattern once, so we can use it later. What will this pattern do?\nThe pattern uses parentheses to mark groups of letters from the string being matched. There are three pairs of parens so this will match three groups. Before the first group is a '^' character, which anchors to the start of the string, and after the last group is a '$' character, which anchors to the end of the string; these aren't strictly necessary in this case. The first group matches anything that is not a digit using the special sequence \\D; the + extends it to match a run of one or more instances of not-a-digit. The second group is similar, using \\d+ to match a run of one or more digits. The third group uses . to match any character, then extends it with * to match a run of 0 or more of any character. (Note that * and + are very similar; it's just that * matches 0 or more of something and + matches one or more.)\nUsing a standard Python idiom, we build a string using a list. We start out with an empty list (called lst). As long as the pattern keeps matching things, we append things to this list. When we are done, we use \"\".join() to join the list together into a string.\npat.match() returns an object called a \"match object\", or None if the match failed. If the match succeeded, we convert match group 2 to an integer, and use Python string repetition operator (\"multiply\") on match group 1 to do the run-length expansion. After this, we rebind the name s with the results of match group 3, thereby snipping off the part of the string we just processed, and loop. If the match failed, we just append all of s to the list and break out of the loop.\nBuilding a list and then using \"\".join() on the list is a standard Python idiom. It will give good performance with any version of Python. Because Python strings are immutable, you can suffer from very slow performance if you build up a long dynamic string by repeatedly appending to a string; you wind up copying the early parts of the string many times as you build your final string. Python lists can be trivially appended to, and then the final join operation is quite fast. (Recent versions of Python have optimized the case where you repeatedly append to a string, and no longer suffer from the repeated copying in this case.)\nGreg Hewgill's solution only recognizes lower-case letters from 'a' to 'z' for the expansion text; you could fix that by putting \\D instead of [a-z]. His solution uses explicit ranges such as [0-9] where my solution uses the Python shorthand abbreviations such as \\d. 
His solution only expands the run length encoded sequences; if there is a trailing sequence that does not have an integer on it, mine passes that sequence unchanged, while his silently ignores it. However, it must be said that his solution is brutally elegant and I wish I had thought of it. :-)\n"
] |
[
3,
3
] |
[] |
[] |
[
"python",
"regex"
] |
stackoverflow_0001501772_python_regex.txt
|
Q:
cProfile and Python: Finding the specific line number that code spends most time on
I'm using cProfile, pstats and Gprof2dot to profile a rather long python script.
The results tell me that the most time is spent calling a method in an object I've defined. However, what I would really like is to know exactly what line number within that function is eating up the time.
Any ideas how to get this additional information?
(By the way, I'm using Python 2.6 on OSX snow leopard if that helps...)
A:
There is a line profiler in python written by Robert Kern.
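Typical usage of that line profiler looks roughly like this (a sketch; the @profile decorator is injected by kernprof when it runs your script, it is not something you import):
# myscript.py
@profile
def hot_function(data):
    total = 0.0
    for x in data:      # kernprof reports hit counts and time spent on each of these lines
        total += x * x
    return total

if __name__ == '__main__':
    hot_function(range(1000000))

# run with:  kernprof.py -l -v myscript.py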
A:
Suppose the amount of time being "eaten up" is some number, like 40%. Then if you just interrupt the program or pause it at a random time, the probability is 40% that you will see it, precisely exposed on the call stack. Do this 10 times, and you will see it on about 4 of the samples, give or take.
This tells why it works. This is an example.
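One minimal way to take such a stack sample from within the program itself (my sketch, assuming a POSIX system such as OS X; not part of the original answer): install a signal handler that prints the current call stack, then hit Ctrl-\ a few times while the script runs.
import signal, traceback

def dump_stack(signum, frame):
    # print the call stack of whatever the program was doing when interrupted
    traceback.print_stack(frame)

signal.signal(signal.SIGQUIT, dump_stack)  # Ctrl-\ sends SIGQUIT on most Unix terminals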
A:
cProfile does not track line numbers within a function; it only tracks the line number of where the function was defined.
cProfile attempts to duplicate the behavior of profile (which is pure Python). profile uses pstats to store the data from running, and pstats only stores line numbers for function definitions, not for individual Python statements.
If you need to figure out with finer granularity what is eating all your time, then you need to refactor your big function into several, smaller functions.
|
cProfile and Python: Finding the specific line number that code spends most time on
|
I'm using cProfile, pstats and Gprof2dot to profile a rather long python script.
The results tell me that the most time is spent calling a method in an object I've defined. However, what I would really like is to know exactly what line number within that function is eating up the time.
Any ideas how to get this additional information?
(By the way, I'm using Python 2.6 on OSX snow leopard if that helps...)
|
[
"There is a line profiler in python written by Robert Kern.\n",
"Suppose the amount of time being \"eaten up\" is some number, like 40%. Then if you just interrupt the program or pause it at a random time, the probability is 40% that you will see it, precisely exposed on the call stack. Do this 10 times, and on 4 samples, +/-, you will see it.\nThis tells why it works. This is an example.\n",
"cProfile does not track line numbers within a function; it only tracks the line number of where the function was defined. \ncProfile attempts to duplicate the behavior of profile (which is pure Python). profile uses pstats to store the data from running, and pstats only stores line numbers for function definitions, not for individual Python statements.\nIf you need to figure out with finer granularity what is eating all your time, then you need to refactor your big function into several, smaller functions.\n"
] |
[
3,
2,
2
] |
[] |
[] |
[
"line",
"numbers",
"profiler",
"python",
"scripting"
] |
stackoverflow_0001500564_line_numbers_profiler_python_scripting.txt
|
Q:
Python Decorator for GAE Web-Service Security Check
In this post, Nick suggested a decorator:
Python/WebApp Google App Engine - testing for user/pass in the headers
I'm writing an API to expose potentially dozens of methods as web-services, so the decorator sounds like a great idea.
I tried to start coding one based on this sample:
http://groups.google.com/group/google-appengine/browse_thread/thread/ac51cc32196d62f8/aa6ccd47f217cb9a?lnk=gst&q=timeout#aa6ccd47f217cb9a
I need it compatible with Python 2.5 to run under Google App Engine (GAE).
Here's my attempt. Please just point the way and tell me if I'm on the right track or not.
Currently getting an error "Invalid Syntax" on this line:
class WSTest(webapp.RequestHandler):
My idea is to pass an array of roles to the decorator. These are the only roles (from my db) that should have access to each web service.
def BasicAuthentication(roles=[]):
def _decorator(func):
def _wrapper(*args, **kwds):
logging.info("\n\n BasicAuthentication:START:__call__ \n\n")
auth = None
if 'Authorization' in self.request.headers:
auth = self.request.headers['Authorization']
if not auth:
self.response.headers['WWW-Authenticate'] = 'Basic realm="MYREALM"'
self.response.set_status(401)
self.response.out.write("Authorization required")
logging.info ("\n\n Authorization required \n\n")
return
(username, password) = base64.b64decode(auth.split(' ')[1]).split(':')
logging.info ("\n\n username = " + username + " password=" + password + "\n\n")
isValidUserPass = False
usersSimulatedRole = "Admin"
#check against database here...
if user == "test12" and password == "test34":
isValidUserPass = True
isValidRole = False
if usersSimulatedRole in roles:
isValidRole = True
#next check that user has one of the roles
# TODO
if not isValidUserPass:
self.response.set_status(403)
self.response.out.write("Forbidden: Userid/password combination failed")
logging.info("\n\n BasicAuthentication:END:__call__ \n\n")
return func(*args, **kwds)
return _wrapper
return _decorator
@BasicAuthentication(["Admin","Worker"]) #list of roles that can run this function
class WSTest(webapp.RequestHandler):
def get(self):
logging.info("\n\n\n WSTest \n\n")
...etc...
Thanks,
Neal Walters
A:
Class decorators were added in Python 2.6.
You'll have to manually wrap the class or think of another solution to work under 2.5. How about writing a decorator for the get method instead?
A:
You need to write a method decorator, not a class decorator: As lost-theory points out, class decorators don't exist in Python 2.5, and they wouldn't work very well in any case, because the RequestHandler class isn't initialized with request data until after it's constructed. A method decorator also gives you more control - eg, you could allow GET requests unauthenticated, but still require authentication for POST requests.
Other than that, your decorator looks fine - just apply it to the relevant methods. The only change I would really suggest is replacing the .set_status() calls with .error() calls and remove the response.write calls; this allows you to override .error() on the RequestHandler class to output a nice error page for each possible status code.
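To make that concrete, here is a rough sketch of the method-decorator shape being described (the helper name user_has_role and the role check are illustrative assumptions, not Nick's actual code):
import base64
from google.appengine.ext import webapp

def basic_authentication(roles=[]):
    def _decorator(func):
        def _wrapper(self, *args, **kwds):  # self is the RequestHandler instance
            auth = self.request.headers.get('Authorization')
            if not auth:
                self.response.headers['WWW-Authenticate'] = 'Basic realm="MYREALM"'
                self.error(401)
                return
            username, password = base64.b64decode(auth.split(' ')[1]).split(':')
            if not user_has_role(username, password, roles):  # hypothetical db lookup
                self.error(403)
                return
            return func(self, *args, **kwds)
        return _wrapper
    return _decorator

class WSTest(webapp.RequestHandler):
    @basic_authentication(["Admin", "Worker"])  # decorate the method, not the class
    def get(self):
        self.response.out.write("authorized")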
|
Python Decorator for GAE Web-Service Security Check
|
In this post, Nick suggested a decorator:
Python/WebApp Google App Engine - testing for user/pass in the headers
I'm writing an API to expose potentially dozens of methods as web-services, so the decorator sounds like a great idea.
I tried to start coding one based on this sample:
http://groups.google.com/group/google-appengine/browse_thread/thread/ac51cc32196d62f8/aa6ccd47f217cb9a?lnk=gst&q=timeout#aa6ccd47f217cb9a
I need it compatible with Python 2.5 to run under Google App Engine (GAE).
Here's my attempt. Please just point the way and tell me if I'm on the right track or not.
Currently getting an error "Invalid Syntax" on this line:
class WSTest(webapp.RequestHandler):
My idea is to pass an array of roles to the decorator. These are the only roles (from my db) that should have access to each web service.
def BasicAuthentication(roles=[]):
def _decorator(func):
def _wrapper(*args, **kwds):
logging.info("\n\n BasicAuthentication:START:__call__ \n\n")
auth = None
if 'Authorization' in self.request.headers:
auth = self.request.headers['Authorization']
if not auth:
self.response.headers['WWW-Authenticate'] = 'Basic realm="MYREALM"'
self.response.set_status(401)
self.response.out.write("Authorization required")
logging.info ("\n\n Authorization required \n\n")
return
(username, password) = base64.b64decode(auth.split(' ')[1]).split(':')
logging.info ("\n\n username = " + username + " password=" + password + "\n\n")
isValidUserPass = False
usersSimulatedRole = "Admin"
#check against database here...
if user == "test12" and password == "test34":
isValidUserPass = True
isValidRole = False
if usersSimulatedRole in roles:
isValidRole = True
#next check that user has one of the roles
# TODO
if not isValidUserPass:
self.response.set_status(403)
self.response.out.write("Forbidden: Userid/password combination failed")
logging.info("\n\n BasicAuthentication:END:__call__ \n\n")
return func(*args, **kwds)
return _wrapper
return _decorator
@BasicAuthentication(["Admin","Worker"]) #list of roles that can run this function
class WSTest(webapp.RequestHandler):
def get(self):
logging.info("\n\n\n WSTest \n\n")
...etc...
Thanks,
Neal Walters
|
[
"Class decorators were added in Python 2.6.\nYou'll have to manually wrap the class or think of another solution to work under 2.5. How about writing a decorator for the get method instead?\n",
"You need to write a method decorator, not a class decorator: As lost-theory points out, class decorators don't exist in Python 2.5, and they wouldn't work very well in any case, because the RequestHandler class isn't initialized with request data until after it's constructed. A method decorator also gives you more control - eg, you could allow GET requests unauthenticated, but still require authentication for POST requests.\nOther than that, your decorator looks fine - just apply it to the relevant methods. The only change I would really suggest is replacing the .set_status() calls with .error() calls and remove the response.write calls; this allows you to override .error() on the RequestHandler class to output a nice error page for each possible status code.\n"
] |
[
2,
2
] |
[] |
[] |
[
"decorator",
"google_app_engine",
"python",
"web_services"
] |
stackoverflow_0001500982_decorator_google_app_engine_python_web_services.txt
|
Q:
Using CherryPy as a blocking/non-threading server for easier debugging
Is it possible to use the CherryPy server as a blocking/non-threading server (for easier debugging)?
A:
No. Not only does the wsgiserver start its own set of worker threads (10 by default, but even if you only specified 1 that's still 1 thread for the listening socket and 1 worker thread). Even if that were not true, if you use the rest of CherryPy (i.e. the engine), it runs that 1 listener thread in a separate thread from the main thread.
|
Using CherryPy as a blocking/non-threading server for easier debugging
|
Is it possible to use the CherryPy server as a blocking/non-threading server (for easier debugging)?
|
[
"No. Not only does the wsgiserver start its own set of worker threads (10 by default, but even if you only specified 1 that's still 1 thread for the listening socket and 1 worker thread). Even if that were not true, if you use the rest of CherryPy (i.e. the engine), it runs that 1 listener thread in a separate thread from the main thread.\n"
] |
[
3
] |
[] |
[] |
[
"cherrypy",
"debugging",
"python"
] |
stackoverflow_0001502431_cherrypy_debugging_python.txt
|
Q:
Paster cannot stop daemon
I'm using the following command in my pylons app in an attempt to stop the daemon on the server:
paster serve --daemon dev.ini stop
This is the error I get:
No PID file exists in paster.pid
Could not stop daemon; aborting
Wondering how I can stop this daemon so I can reload dev.ini.
Thanks!
A:
kill the process.
|
Paster cannot stop daemon
|
I'm using the following command in my pylons app in an attempt to stop the daemon on the server:
paster serve --daemon dev.ini stop
This is the error I get:
No PID file exists in paster.pid
Could not stop daemon; aborting
Wondering how I can stop this daemon so I can reload dev.ini.
Thanks!
|
[
"kill the process.\n"
] |
[
0
] |
[] |
[] |
[
"command_line",
"paster",
"pylons",
"python",
"ssh"
] |
stackoverflow_0001502568_command_line_paster_pylons_python_ssh.txt
|
Q:
What language could I use for fast execution of this database summarization task?
So I wrote a Python program to handle a little data processing
task.
Here's a very brief specification in a made-up language of the computation I want:
parse "%s %lf %s" aa bb cc | group_by aa | quickselect --key=bb 0:5 | \
flatten | format "%s %lf %s" aa bb cc
That is, for each line, parse out a word, a floating-point number, and another word. Think of them as a player ID, a score, and a date. I want the top five scores and dates for each player. The data size is not trivial, but not huge; about 630 megabytes.
I want to know what real, executable language I should have written it in to
get it to be similarly short (as the Python below) but much faster.
#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
top_5 = {}
for line in sys.stdin:
aa, bb, cc = line.split()
# We want the top 5 for each distinct value of aa. There are
# hundreds of thousands of values of aa.
bb = float(bb)
if aa not in top_5: top_5[aa] = []
current = top_5[aa]
current.append((bb, cc))
# Every once in a while, we drop the values that are not in
# the top 5, to keep our memory footprint down, because some
# values of aa have thousands of (bb, cc) pairs.
if len(current) > 10:
current.sort()
current[:-5] = []
for aa in top_5:
current = top_5[aa]
current.sort()
for bb, cc in current[-5:]:
print aa, bb, cc
Here’s some sample input data:
3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g
Here’s the output I get from it:
3 1.5 a
3 1.5 e
3 1.6 b
3 1.8 f
3 1.9 g
4 1.2 q
There are seven values for 3, and so we drop the c and d values
because their bb value puts them out of the top 5. Because 4 has
only one value, its “top 5” consists of just that one value.
This runs faster than doing the same queries in MySQL (at least, the
way we’ve found to do the queries) but I’m pretty sure it's spending
most of its time in the Python bytecode interpreter. I think that in
another language, I could probably get it to process hundreds of
thousands of rows per second instead of per minute. So I’d like to
write it in a language that has a faster implementation.
But I’m not sure what language to choose.
I haven’t been able to figure out how to express this as a single query in SQL, and
actually I’m really unimpressed with MySQL’s ability even to merely
dump the input data with select * from foo into outfile 'bar';
C is an obvious choice, but things like line.split(), sorting a list
of 2-tuples, and making a hash table require writing some code that’s
not in the standard library, so I would end up with 100 lines of code
or more instead of 14.
C++ seems like it might be a better choice (it has strings, maps,
pairs, and vectors in the standard library) but it seems like the code
would be a lot messier with STL.
OCaml would be fine, but does it have an equivalent of line.split(),
and will I be sad about the performance of its map?
Common Lisp might work?
Is there some equivalent of Matlab for database computation like this
that lets me push the loops down into fast code? Has anybody tried Pig?
(Edit: responded to davethegr8's comment by providing some sample input and output data, and fixed a bug in the Python program!)
(Additional edit: Wow, this comment thread is really excellent so far. Thanks, everybody!)
Edit:
There was an eerily similar question asked on sbcl-devel in 2007 (thanks, Rainer!), and here's an awk script from Will Hartung for producing some test data (although it doesn't have the Zipfian distribution of the real data):
BEGIN {
for (i = 0; i < 27000000; i++) {
v = rand();
k = int(rand() * 100);
print k " " v " " i;
}
exit;
}
A:
I have a hard time believing that any script without any prior knowledge of the data (unlike MySQL, which has such info pre-loaded) would be faster than a SQL approach.
Aside from the time spent parsing the input, the script needs to keep re-sorting its per-key "order by" arrays, etc...
The following is a first guess at what should work decently fast in SQL, assuming an index (*) on the table's aa, bb, cc columns, in that order. (A possible alternative would be an "aa, bb DESC, cc" index.)
(*) This index could be clustered or not, not affecting the following query. Choice of clustering or not, and of needing an "aa,bb,cc" separate index depends on use case, on the size of the rows in table etc. etc.
SELECT T1.aa, T1.bb, T1.cc , COUNT(*)
FROM tblAbc T1
LEFT OUTER JOIN tblAbc T2 ON T1.aa = T2.aa AND
(T1.bb < T2.bb OR(T1.bb = T2.bb AND T1.cc < T2.cc))
GROUP BY T1.aa, T1.bb, T1.cc
HAVING COUNT(*) < 5 -- trick, remember COUNT(*) goes 1,1,2,3,...
ORDER BY T1.aa, T1.bb, T1.cc, COUNT(*) DESC
The idea is to get a count of how many records, within a given aa value are smaller than self. There is a small trick however: we need to use LEFT OUTER join, lest we discard the record with the biggest bb value or the last one (which may happen to be one of the top 5). As a result of left joining it, the COUNT(*) value counts 1, 1, 2, 3, 4 etc. and the HAVING test therefore is "<5" to effectively pick the top 5.
To emulate the sample output of the OP, the ORDER BY uses DESC on the COUNT(), which could be removed to get a more traditional top 5 type of listing. Also, the COUNT() in the select list can be removed if so desired, this doesn't impact the logic of the query and the ability to properly sort.
Also note that this query is deterministic in terms of dealing with ties, i.e., when a given set of records have the same value for bb (within an aa group); I think the Python program may provide slightly different outputs when the order of the input data is changed, because of its occasional truncating of the sorting dictionary.
Real solution: A SQL-based procedural approach
The self-join approach described above demonstrates how declarative statements can be used to express the OP's requirement. However this approach is naive in a sense that its performance is roughly bound to the sum of the squares of record counts within each aa 'category'. (not O(n^2) but roughly O((n/a)^2) where a is the number of different values for the aa column) In other words it performs well with data such that on average the number of records associated with a given aa value doesn't exceed a few dozens. If the data is such that the aa column is not selective, the following approach is much -much!- better suited. It leverages SQL's efficient sorting framework, while implementing a simple algorithm that would be hard to express in declarative fashion. This approach could further be improved for datasets with particularly huge number of records each/most aa 'categories' by introducing a simple binary search of the next aa value, by looking ahead (and sometimes back...) in the cursor. For cases where the number of aa 'categories' relative to the overall row count in tblAbc is low, see yet another approach, after this next one.
DECLARE @aa AS VARCHAR(10), @bb AS INT, @cc AS VARCHAR(10)
DECLARE @curAa AS VARCHAR(10)
DECLARE @Ctr AS INT
DROP TABLE tblResults;
CREATE TABLE tblResults
( aa VARCHAR(10),
bb INT,
cc VARCHAR(10)
);
DECLARE abcCursor CURSOR
FOR SELECT aa, bb, cc
FROM tblABC
ORDER BY aa, bb DESC, cc
FOR READ ONLY;
OPEN abcCursor;
SET @curAa = ''
FETCH NEXT FROM abcCursor INTO @aa, @bb, @cc;
WHILE @@FETCH_STATUS = 0
BEGIN
IF @curAa <> @aa
BEGIN
SET @Ctr = 0
SET @curAa = @aa
END
IF @Ctr < 5
BEGIN
SET @Ctr = @Ctr + 1;
INSERT tblResults VALUES(@aa, @bb, @cc);
END
FETCH NEXT FROM AbcCursor INTO @aa, @bb, @cc;
END;
CLOSE abcCursor;
DEALLOCATE abcCursor;
SELECT * from tblResults
ORDER BY aa, bb, cc -- OR .. bb DESC ... for a more traditional order.
Alternative to the above for cases when aa is very unselective. In other words, when we have relatively few aa 'categories'. The idea is to go through the list of distinct categories and to run a "LIMIT" (MySql) "TOP" (MSSQL) query for each of these values.
For reference purposes, the following ran in 63 seconds for tblAbc of 61 Million records divided in 45 aa values, on MSSQL 8.0, on a relatively old/weak host.
DECLARE @aa AS VARCHAR(10)
DECLARE @aaCount INT
DROP TABLE tblResults;
CREATE TABLE tblResults
( aa VARCHAR(10),
bb INT,
cc VARCHAR(10)
);
DECLARE aaCountCursor CURSOR
FOR SELECT aa, COUNT(*)
FROM tblABC
GROUP BY aa
ORDER BY aa
FOR READ ONLY;
OPEN aaCountCursor;
FETCH NEXT FROM aaCountCursor INTO @aa, @aaCount
WHILE @@FETCH_STATUS = 0
BEGIN
INSERT tblResults
SELECT TOP 5 aa, bb, cc
FROM tblAbc
WHERE aa = @aa
ORDER BY aa, bb DESC, cc
FETCH NEXT FROM aaCountCursor INTO @aa, @aaCount;
END;
CLOSE aaCountCursor
DEALLOCATE aaCountCursor
SELECT * from tblResults
ORDER BY aa, bb, cc -- OR .. bb DESC ... for a more traditional order.
On the question of needing an index or not. (cf OP's remark)
When merely running a "SELECT * FROM myTable", a table scan is effectively the fastest approach; no need to bother with indexes. However, the main reason why SQL is typically better suited for this kind of thing (aside from being the repository where the data has been accumulating in the first place, whereas any external solution needs to account for the time to export the relevant data) is that it can rely on indexes to avoid scanning. Many general-purpose languages are far better suited to handle raw processing, but they are fighting an unfair battle with SQL because they need to rebuild any prior knowledge of the data which SQL has gathered in the course of its data collection / import phase. Since sorting is typically a time- and sometimes space-consuming task, SQL, despite its relatively slower processing power, often ends up ahead of alternative solutions.
Also, even without pre-built indexes, modern query optimizers may decide on a plan that involves the creation of a temporary index. And, because sorting is an intrinsic part of a DBMS, SQL servers are generally efficient in that area.
So... Is SQL better?
This said, if we are trying to compare SQL and other languages for pure ETL jobs, i.e. for dealing with heaps (unindexed tables) as input to perform various transformations and filtering, it is likely that multi-threadable utilities written in, say, C, leveraging efficient sorting libraries, would be faster. The determining question in deciding on a SQL vs. non-SQL approach is where the data is located and where it should eventually reside. If we merely need to convert a file to be supplied down "the chain", external programs are better suited. If we have or need the data in a SQL server, there are only rare cases that make it worthwhile exporting and processing externally.
A:
You could use smarter data structures and still use python.
I ran your reference implementation and my Python implementation on my machine and even compared the outputs to be sure of the results.
This is yours:
$ time python ./ref.py < data-large.txt > ref-large.txt
real 1m57.689s
user 1m56.104s
sys 0m0.573s
This is mine:
$ time python ./my.py < data-large.txt > my-large.txt
real 1m35.132s
user 1m34.649s
sys 0m0.261s
$ diff my-large.txt ref-large.txt
$ echo $?
0
And this is the source:
#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
import heapq
top_5 = {}
for line in sys.stdin:
aa, bb, cc = line.split()
# We want the top 5 for each distinct value of aa. There are
# hundreds of thousands of values of aa.
bb = float(bb)
if aa not in top_5: top_5[aa] = []
current = top_5[aa]
if len(current) < 5:
heapq.heappush(current, (bb, cc))
else:
if current[0] < (bb, cc):
heapq.heapreplace(current, (bb, cc))
for aa in top_5:
current = top_5[aa]
while len(current) > 0:
bb, cc = heapq.heappop(current)
print aa, bb, cc
Update: Know your limits.
I've also timed a noop code, to know the fastest possible python solution with code similar to the original:
$ time python noop.py < data-large.txt > noop-large.txt
real 1m20.143s
user 1m19.846s
sys 0m0.267s
And the noop.py itself:
#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys
import heapq
top_5 = {}
for line in sys.stdin:
aa, bb, cc = line.split()
bb = float(bb)
if aa not in top_5: top_5[aa] = []
current = top_5[aa]
if len(current) < 5:
current.append((bb, cc))
for aa in top_5:
current = top_5[aa]
current.sort()
for bb, cc in current[-5:]:
print aa, bb, cc
A:
This is a sketch in Common Lisp
Note that for long files there is a penalty for using READ-LINE, because it conses a fresh string for each line. Then use one of the derivatives of READ-LINE that are floating around that are using a line buffer. Also you might check if you want the hash table be case sensitive or not.
second version
Splitting the string is no longer needed, because we do it here. It is low level code, in the hope that some speed gains will be possible. It checks for one or more spaces as field delimiter and also tabs.
(defun read-a-line (stream)
(let ((line (read-line stream nil nil)))
(flet ((delimiter-p (c)
(or (char= c #\space) (char= c #\tab))))
(when line
(let* ((s0 (position-if #'delimiter-p line))
(s1 (position-if-not #'delimiter-p line :start s0))
(s2 (position-if #'delimiter-p line :start (1+ s1)))
(s3 (position-if #'delimiter-p line :from-end t)))
(values (subseq line 0 s0)
(list (read-from-string line nil nil :start s1 :end s2)
(subseq line (1+ s3)))))))))
Above function returns two values: the key and a list of the rest.
(defun dbscan (top-5-table stream)
"get triples from each line and put them in the hash table"
(loop with aa = nil and bbcc = nil do
(multiple-value-setq (aa bbcc) (read-a-line stream))
while aa do
(setf (gethash aa top-5-table)
(let ((l (merge 'list (gethash aa top-5-table) (list bbcc)
#'> :key #'first)))
(or (and (nth 5 l) (subseq l 0 5)) l)))))
(defun dbprint (table output)
"print the hashtable contents"
(maphash (lambda (aa value)
(loop for (bb cc) in value
do (format output "~a ~a ~a~%" aa bb cc)))
table))
(defun dbsum (input &optional (output *standard-output*))
"scan and sum from a stream"
(let ((top-5-table (make-hash-table :test #'equal)))
(dbscan top-5-table input)
(dbprint top-5-table output)))
(defun fsum (infile outfile)
"scan and sum a file"
(with-open-file (input infile :direction :input)
(with-open-file (output outfile
:direction :output :if-exists :supersede)
(dbsum input output))))
some test data
(defun create-test-data (&key (file "/tmp/test.data") (n-lines 100000))
(with-open-file (stream file :direction :output :if-exists :supersede)
(loop repeat n-lines
do (format stream "~a ~a ~a~%"
(random 1000) (random 100.0) (random 10000)))))
; (create-test-data)
(defun test ()
(time (fsum "/tmp/test.data" "/tmp/result.data")))
third version, LispWorks
Uses some SPLIT-STRING and PARSE-FLOAT functions, otherwise generic CL.
(defun fsum (infile outfile)
(let ((top-5-table (make-hash-table :size 50000000 :test #'equal)))
(with-open-file (input infile :direction :input)
(loop for line = (read-line input nil nil)
while line do
(destructuring-bind (aa bb cc) (split-string '(#\space #\tab) line)
(setf bb (parse-float bb))
(let ((v (gethash aa top-5-table)))
(unless v
(setf (gethash aa top-5-table)
(setf v (make-array 6 :fill-pointer 0))))
(vector-push (cons bb cc) v)
(when (> (length v) 5)
(setf (fill-pointer (sort v #'> :key #'car)) 5))))))
(with-open-file (output outfile :direction :output :if-exists :supersede)
(maphash (lambda (aa value)
(loop for (bb . cc) across value do
(format output "~a ~f ~a~%" aa bb cc)))
top-5-table))))
A:
This took 45.7s on my machine with 27M rows of data that looked like this:
42 0.49357 0
96 0.48075 1
27 0.640761 2
8 0.389128 3
75 0.395476 4
24 0.212069 5
80 0.121367 6
81 0.271959 7
91 0.18581 8
69 0.258922 9
Your script took 1m42 on this data, the C++ example took 1m46 (g++ t.cpp -o t to compile it, I don't know anything about C++).
Java 6, not that it matters really. Output isn't perfect, but it's easy to fix.
package top5;
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;
public class Main {
public static void main(String[] args) throws Exception {
long start = System.currentTimeMillis();
Map<String, Pair[]> top5map = new TreeMap<String, Pair[]>();
BufferedReader br = new BufferedReader(new FileReader("/tmp/file.dat"));
String line = br.readLine();
while(line != null) {
String parts[] = line.split(" ");
String key = parts[0];
double score = Double.valueOf(parts[1]);
String value = parts[2];
Pair[] pairs = top5map.get(key);
boolean insert = false;
Pair p = null;
if (pairs != null) {
insert = (score > pairs[pairs.length - 1].score) || pairs.length < 5;
} else {
insert = true;
}
if (insert) {
p = new Pair(score, value);
if (pairs == null) {
pairs = new Pair[1];
pairs[0] = new Pair(score, value);
} else {
if (pairs.length < 5) {
Pair[] newpairs = new Pair[pairs.length + 1];
System.arraycopy(pairs, 0, newpairs, 0, pairs.length);
pairs = newpairs;
}
int k = 0;
for(int i = pairs.length - 2; i >= 0; i--) {
if (pairs[i].score <= p.score) {
pairs[i + 1] = pairs[i];
} else {
k = i + 1;
break;
}
}
pairs[k] = p;
}
top5map.put(key, pairs);
}
line = br.readLine();
}
for(Map.Entry<String, Pair[]> e : top5map.entrySet()) {
System.out.print(e.getKey());
System.out.print(" ");
System.out.println(Arrays.toString(e.getValue()));
}
System.out.println(System.currentTimeMillis() - start);
}
static class Pair {
double score;
String value;
public Pair(double score, String value) {
this.score = score;
this.value = value;
}
public int compareTo(Object o) {
Pair p = (Pair) o;
return (int)Math.signum(score - p.score);
}
public String toString() {
return String.valueOf(score) + ", " + value;
}
}
}
AWK script to fake the data:
BEGIN {
for (i = 0; i < 27000000; i++) {
v = rand();
k = int(rand() * 100);
print k " " v " " i;
}
exit;
}
A:
Here is one more OCaml version - targeted for speed - with custom parser on Streams. Too long, but parts of the parser are reusable. Thanks peufeu for triggering competition :)
Speed :
simple ocaml - 27 sec
ocaml with Stream parser - 15 sec
c with manual parser - 5 sec
Compile with :
ocamlopt -pp camlp4o code.ml -o caml
Code :
open Printf
let cmp x y = compare (fst x : float) (fst y)
let digit c = Char.code c - Char.code '0'
let rec parse f = parser
| [< a=int; _=spaces; b=float; _=spaces;
c=rest (Buffer.create 100); t >] -> f a b c; parse f t
| [< >] -> ()
and int = parser
| [< ''0'..'9' as c; t >] -> int_ (digit c) t
| [< ''-'; ''0'..'9' as c; t >] -> - (int_ (digit c) t)
and int_ n = parser
| [< ''0'..'9' as c; t >] -> int_ (n * 10 + digit c) t
| [< >] -> n
and float = parser
| [< n=int; t=frem; e=fexp >] -> (float_of_int n +. t) *. (10. ** e)
and frem = parser
| [< ''.'; r=frem_ 0.0 10. >] -> r
| [< >] -> 0.0
and frem_ f base = parser
| [< ''0'..'9' as c; t >] ->
frem_ (float_of_int (digit c) /. base +. f) (base *. 10.) t
| [< >] -> f
and fexp = parser
| [< ''e'; e=int >] -> float_of_int e
| [< >] -> 0.0
and spaces = parser
| [< '' '; t >] -> spaces t
| [< ''\t'; t >] -> spaces t
| [< >] -> ()
and crlf = parser
| [< ''\r'; t >] -> crlf t
| [< ''\n'; t >] -> crlf t
| [< >] -> ()
and rest b = parser
| [< ''\r'; _=crlf >] -> Buffer.contents b
| [< ''\n'; _=crlf >] -> Buffer.contents b
| [< 'c; t >] -> Buffer.add_char b c; rest b t
| [< >] -> Buffer.contents b
let () =
let all = Array.make 200 [] in
let each a b c =
assert (a >= 0 && a < 200);
match all.(a) with
| [] -> all.(a) <- [b,c]
| (bmin,_) as prev::tl -> if b > bmin then
begin
let m = List.sort cmp ((b,c)::tl) in
all.(a) <- if List.length tl < 4 then prev::m else m
end
in
parse each (Stream.of_channel stdin);
Array.iteri
(fun a -> List.iter (fun (b,c) -> printf "%i %f %s\n" a b c))
all
A:
Of all the programs in this thread that I've tested so far, the OCaml version is the fastest and also among the shortest. (Line-of-code-based measurements are a little fuzzy, but it's not clearly longer than the Python version or the C or C++ versions, and it is clearly faster.)
Note: I figured out why my earlier runtimes were so nondeterministic! My CPU heatsink was clogged with dust and my CPU was overheating as a result. Now I am getting nice deterministic benchmark times. I think I've now redone all the timing measurements in this thread now that I have a reliable way to time things.
Here are the timings for the different versions so far, running on a 27-million-row 630-megabyte input data file. I'm on Ubuntu Intrepid Ibex on a dual-core 1.6GHz Celeron, running a 32-bit version of the OS (the Ethernet driver was broken in the 64-bit version). I ran each program five times and report the range of times those five tries took. I'm using Python 2.5.2, OpenJDK 1.6.0.0, OCaml 3.10.2, GCC 4.3.2, SBCL 1.0.8.debian, and Octave 3.0.1.
SquareCog's Pig version: not yet tested (because I can't just apt-get install pig), 7 lines of code.
mjv's pure SQL version: not yet tested, but I predict a runtime of several days; 7 lines of code.
ygrek's OCaml version: 68.7 seconds ±0.9 in 15 lines of code.
My Python version: 169 seconds ±4 or 86 seconds ±2 with Psyco, in 16 lines of code.
abbot's heap-based Python version: 177 seconds ±5 in 18 lines of code, or 83 seconds ±5 with Psyco.
My C version below, composed with GNU sort -n: 90 + 5.5 seconds (±3, ±0.1), but gives the wrong answer because of a deficiency in GNU sort, in 22 lines of code (including one line of shell.)
hrnt's C++ version: 217 seconds ±3 in 25 lines of code.
mjv's alternative SQL-based procedural approach: not yet tested, 26 lines of code.
mjv's first SQL-based procedural approach: not yet tested, 29 lines of code.
peufeu's Python version with Psyco: 181 seconds ±4, somewhere around 30 lines of code.
Rainer Joswig's Common Lisp version: 478 seconds (only run once) in 42 lines of code.
abbot's noop.py, which intentionally gives incorrect results to establish a lower bound: not yet tested, 15 lines of code.
Will Hartung's Java version: 96 seconds ±10 in, according to David A. Wheeler’s SLOCCount, 74 lines of code.
Greg's Matlab version: doesn't work.
Schuyler Erle's suggestion of using Pyrex on one of the Python versions: not yet tried.
I suspect abbot's version comes out relatively worse for me than for them because the real dataset has a highly nonuniform distribution: as I said, some aa values (“players”) have thousands of lines, while others have only one.
About Psyco: I applied Psyco to my original code (and abbot's version) by putting it in a main function, which by itself cut the time down to about 140 seconds, and calling psyco.full() before calling main(). This added about four lines of code.
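Concretely, the Psyco wrapping amounts to something like this (a sketch of those few extra lines, not the exact code used):
import psyco

def main():
    # ... the original top-5 loop from the question goes here ...
    pass

if __name__ == '__main__':
    psyco.full()   # ask Psyco to compile everything it can before running
    main()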
I can almost solve the problem using GNU sort, as follows:
kragen@inexorable:~/devel$ time LANG=C sort -nr infile -o sorted
real 1m27.476s
user 0m59.472s
sys 0m8.549s
kragen@inexorable:~/devel$ time ./top5_sorted_c < sorted > outfile
real 0m5.515s
user 0m4.868s
sys 0m0.452s
Here top5_sorted_c is this short C program:
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
enum { linesize = 1024 };
char buf[linesize];
char key[linesize]; /* last key seen */
int main() {
int n = 0;
char *p;
while (fgets(buf, linesize, stdin)) {
for (p = buf; *p && !isspace(*p); p++) /* find end of key on this line */
;
if (p - buf != strlen(key) || 0 != memcmp(buf, key, p - buf))
n = 0; /* this is a new key */
n++;
if (n <= 5) /* copy up to five lines for each key */
if (fputs(buf, stdout) == EOF) abort();
if (n == 1) { /* save new key in `key` */
memcpy(key, buf, p - buf);
key[p-buf] = '\0';
}
}
return 0;
}
I first tried writing that program in C++ as follows, and I got runtimes which were substantially slower, at 33.6±2.3 seconds instead of 5.5±0.1 seconds:
#include <map>
#include <iostream>
#include <string>
int main() {
using namespace std;
int n = 0;
string prev, aa, bb, cc;
while (cin >> aa >> bb >> cc) {
if (aa != prev) n = 0;
++n;
if (n <= 5) cout << aa << " " << bb << " " << cc << endl;
prev = aa;
}
return 0;
}
I did say almost. The problem is that sort -n does okay for most of the data, but it fails when it's trying to compare 0.33 with 3.78168e-05. So to get this kind of performance and actually solve the problem, I need a better sort.
Anyway, I kind of feel like I'm whining, but the sort-and-filter approach is about 5× faster than the Python program, while the elegant STL program from hrnt is actually a little slower — there seems to be some kind of gross inefficiency in <iostream>. I don't know where the other 83% of the runtime is going in that little C++ version of the filter, but it isn't going anywhere useful, which makes me suspect I don't know where it's going in hrnt's std::map version either. Could that version be sped up 5× too? Because that would be pretty cool. Its working set might be bigger than my L2 cache, but as it happens it probably isn't.
Some investigation with callgrind says my filter program in C++ is executing 97% of its instructions inside of operator >>. I can identify at least 10 function calls per input byte, and cin.sync_with_stdio(false); doesn’t help. This probably means I could get hrnt’s C++ program to run substantially faster by parsing input lines more efficiently.
Edit: kcachegrind claims that hrnt’s program executes 62% of its instructions (on a small 157000 line input file) extracting doubles from an istream. A substantial part of this is because the istreams library apparently executes about 13 function calls per input byte when trying to parse a double. Insane. Could I be misunderstanding kcachegrind's output?
Anyway, any other suggestions?
A:
Pretty straightforward Caml (27 * 10^6 rows -- 27 sec, C++ by hrnt -- 29 sec)
open Printf
open ExtLib
let (>>) x f = f x
let cmp x y = compare (fst x : float) (fst y)
let wsp = Str.regexp "[ \t]+"
let () =
let all = Hashtbl.create 1024 in
Std.input_lines stdin >> Enum.iter (fun line ->
let [a;b;c] = Str.split wsp line in
let b = float_of_string b in
try
match Hashtbl.find all a with
| [] -> assert false
| (bmin,_) as prev::tl -> if b > bmin then
begin
let m = List.sort ~cmp ((b,c)::tl) in
Hashtbl.replace all a (if List.length tl < 4 then prev::m else m)
end
with Not_found -> Hashtbl.add all a [b,c]
);
all >> Hashtbl.iter (fun a -> List.iter (fun (b,c) -> printf "%s %f %s\n" a b c))
A:
Here is a C++ solution. I didn't have a lot of data to test it with, however, so I don't know how fast it actually is.
[edit] Thanks to the test data provided by the awk script in this thread, I
managed to clean up and speed up the code a bit. I am not trying to find out the fastest possible version - the intent is to provide a reasonably fast version that isn't as ugly as people seem to think STL solutions can be.
This version should be about twice as fast as the first version (goes through 27 million lines in about 35 seconds). Gcc users, remember to
compile this with -O2.
#include <map>
#include <iostream>
#include <functional>
#include <utility>
#include <string>
int main() {
using namespace std;
typedef std::map<string, std::multimap<double, string> > Map;
Map m;
string aa, cc;
double bb;
std::cin.sync_with_stdio(false); // Dunno if this has any effect, but anyways.
while (std::cin >> aa >> bb >> cc)
{
if (m[aa].size() == 5)
{
Map::mapped_type::iterator iter = m[aa].begin();
if (bb < iter->first)
continue;
m[aa].erase(iter);
}
m[aa].insert(make_pair(bb, cc));
}
for (Map::const_iterator iter = m.begin(); iter != m.end(); ++iter)
for (Map::mapped_type::const_iterator iter2 = iter->second.begin();
iter2 != iter->second.end();
++iter2)
std::cout << iter->first << " " << iter2->first << " " << iter2->second <<
std::endl;
}
A:
Interestingly, the original Python solution is by far the cleanest looking (although the C++ example comes close).
How about using Pyrex or Psyco on your original code?
A:
Has anybody tried doing this problem with just awk, specifically 'mawk'? It should be faster than even Java and C++, according to this blog post: http://anyall.org/blog/2009/09/dont-mawk-awk-the-fastest-and-most-elegant-big-data-munging-language/
EDIT: Just wanted to clarify that the only claim being made in that blog post is that for a certain class of problems that are specifically suited to awk-style processing, the mawk virtual machine can beat 'vanilla' implementations in Java and C++.
A:
Since you asked about Matlab, here's how I did something like what you're asking for. I tried to do it without any for loops, but I do have one because I didn't care to take a long time with it. If you were worried about memory then you could pull data from the stream in chunks with fscanf rather than reading the entire buffer.
fid = fopen('fakedata.txt','r');
tic
A=fscanf(fid,'%d %d %d\n');
A=reshape(A,3,length(A)/3)'; %Matlab reads the data into one long column'
Names = unique(A(:,1));
for i=1:length(Names)
indices = find(A(:,1)==Names(i)); %Grab all instances of key i
[Y,I] = sort(A(indices,2),1,'descend'); %sort in descending order of 2nd record
A(indices(I(1:min([5,length(indices(I))]))),:) %Print the top five
end
toc
fclose(fid)
A:
Speaking of lower bounds on compute time :
Let's analyze my algo above :
for each row (key,score,id) :
create or fetch a list of top scores for the row's key
if len( this list ) < N
append current
else if current score > minimum score in list
replace minimum of list with current row
update minimum of all lists if needed
Let N be the N in top-N
Let R be the number of rows in your data set
Let K be the number of distinct keys
What assumptions can we make ?
R * sizeof( row ) > RAM or at least it's big enough that we don't want to load it all, use a hash to group by key, and sort each bin. For the same reason we don't sort the whole stuff.
Kragen likes hashtables, so K * sizeof(per-key state) << RAM, most probably it fits in L2/3 cache
Kragen is not sorting, so K*N << R ie each key has much more than N entries
(note : A << B means A is small relative to B)
If the data has a random distribution, then
after a small number of rows, the majority of rows will be rejected by the per-key minimum condition, the cost is 1 comparison per row.
So the cost per row is 1 hash lookup + 1 comparison + epsilon * (list insertion + (N+1) comparisons for the minimum)
If the scores have a random distribution (say between 0 and 1) and the conditions above hold, both epsilons will be very small.
Experimental proof :
The 27 million rows dataset above produces 5933 insertions into the top-N lists. All other rows are rejected by a simple key lookup and comparison. epsilon = 0.0001
So roughly, the cost is 1 lookup + comparison per row, which takes a few nanoseconds.
On current hardware, there is no way this is not going to be negligible versus IO cost and especially parsing costs.
A:
Isn't this just as simple as
SELECT DISTINCT aa, bb, cc FROM tablename ORDER BY bb DESC LIMIT 5
?
Of course, it's hard to tell what would be fastest without testing it against the data. And if this is something you need to run very fast, it might make sense to optimize your database to make the query faster, rather than optimizing the query.
And, of course, if you need the flat file anyway, you might as well use that.
A:
Pick "top 5" would look something like this. Note that there's no sorting. Nor does any list in the top_5 dictionary ever grow beyond 5 elements.
from collections import defaultdict
import sys
def keep_5( aList, aPair ):
minbb= min( bb for bb,cc in aList )
bb, cc = aPair
if bb < minbb: return aList
aList.append( aPair )
min_i= 0
for i in xrange(1,6):
if aList[i][0] < aList[min_i][0]:
min_i= i
aList.pop(min_i)
return aList
top_5= defaultdict(list)
for row in sys.stdin:
aa, bb, cc = row.split()
bb = float(bb)
if len(top_5[aa]) < 5:
top_5[aa].append( (bb,cc) )
else:
top_5[aa]= keep_5( top_5[aa], (bb,cc) )
A:
The Pig version would go something like this (untested):
Data = LOAD '/my/data' using PigStorage() as (aa:int, bb:float, cc:chararray);
grp = GROUP Data by aa;
topK = FOREACH grp (
    sorted = ORDER Data by bb DESC;
    lim = LIMIT sorted 5;
    GENERATE group as aa, lim;
)
STORE topK INTO '/my/output' using PigStorage();
Pig isn't optimized for performance; its goal is to enable processing of multi-terabyte datasets using parallel execution frameworks. It does have a local mode, so you can try it, but I doubt it will beat your script.
A:
That was a nice lunch break challenge, he, he.
Top-N is a well-known database killer. As shown by the post above, there is no way to efficiently express it in common SQL.
As for the various implementations, you have to keep in mind that the slow part in this is not the sorting or the top-N, it's the parsing of text. Have you looked at the source code for glibc's strtod() lately?
For instance, I get, using Python :
Read data : 80.5 s
My TopN : 34.41 s
HeapTopN : 30.34 s
It is quite likely that you'll never get very fast timings, no matter what language you use, unless your data is in some format that is a lot faster to parse than text. For instance, loading the test data into postgres takes 70 s, and the majority of that is text parsing, too.
If the N in your topN is small, like 5, a C implementation of my algorithm below would probably be the fastest. If N can be larger, heaps are a much better option.
So, since your data is probably in a database, and your problem is getting at the data, not the actual processing, if you're really in need of a super fast TopN engine, what you should do is write a C module for your database of choice. Since postgres is fast at just about everything, I suggest using postgres; plus it isn't difficult to write a C module for it.
Here's my Python code :
import random, sys, time, heapq

ROWS = 27000000

def make_data( fname ):
    f = open( fname, "w" )
    r = random.Random()
    for i in xrange( 0, ROWS, 10000 ):
        for j in xrange( i,i+10000 ):
            f.write( "%d %f %d\n" % (r.randint(0,100), r.uniform(0,1000), j))
        print ("write: %d\r" % i),
        sys.stdout.flush()
    print

def read_data( fname ):
    for n, line in enumerate( open( fname ) ):
        r = line.strip().split()
        yield int(r[0]),float(r[1]),r[2]
        if not (n % 10000 ):
            print ("read: %d\r" % n),
            sys.stdout.flush()
    print

def topn( ntop, data ):
    ntop -= 1
    assert ntop > 0
    min_by_key = {}
    top_by_key = {}
    for key,value,label in data:
        tup = (value,label)
        if key not in top_by_key:
            # initialize
            top_by_key[key] = [ tup ]
        else:
            top = top_by_key[ key ]
            l = len( top )
            if l > ntop:
                # replace minimum value in top if it is lower than current value
                idx = min_by_key[ key ]
                if top[idx] < tup:
                    top[idx] = tup
                    min_by_key[ key ] = top.index( min( top ) )
            elif l < ntop:
                # fill until we have ntop entries
                top.append( tup )
            else:
                # we have ntop entries in list, we'll have ntop+1
                top.append( tup )
                # initialize minimum to keep
                min_by_key[ key ] = top.index( min( top ) )

    # finalize:
    return dict( (key, sorted( values, reverse=True )) for key,values in top_by_key.iteritems() )

def grouptopn( ntop, data ):
    top_by_key = {}
    for key,value,label in data:
        if key in top_by_key:
            top_by_key[ key ].append( (value,label) )
        else:
            top_by_key[ key ] = [ (value,label) ]

    return dict( (key, sorted( values, reverse=True )[:ntop]) for key,values in top_by_key.iteritems() )

def heaptopn( ntop, data ):
    top_by_key = {}
    for key,value,label in data:
        tup = (value,label)
        if key not in top_by_key:
            top_by_key[ key ] = [ tup ]
        else:
            top = top_by_key[ key ]
            if len(top) < ntop:
                heapq.heappush(top, tup)
            else:
                if top[0] < tup:
                    heapq.heapreplace(top, tup)

    return dict( (key, sorted( values, reverse=True )) for key,values in top_by_key.iteritems() )

def dummy( data ):
    for row in data:
        pass

make_data( "data.txt" )

t = time.clock()
dummy( read_data( "data.txt" ) )
t_read = time.clock() - t

t = time.clock()
top_result = topn( 5, read_data( "data.txt" ) )
t_topn = time.clock() - t

t = time.clock()
htop_result = heaptopn( 5, read_data( "data.txt" ) )
t_htopn = time.clock() - t

# correctness checking :
for key in top_result:
    print key, " : ", " ".join (("%f:%s"%(value,label)) for (value,label) in top_result[key])
    print key, " : ", " ".join (("%f:%s"%(value,label)) for (value,label) in htop_result[key])

print
print "Read data :", t_read
print "TopN : ", t_topn - t_read
print "HeapTopN : ", t_htopn - t_read

for key in top_result:
    assert top_result[key] == htop_result[key]
A:
Well, please grab a coffee and read the source code for strtod -- it's mindboggling, but needed, if you want float -> text -> float to give back the same float you started with.... really...
Parsing integers is a lot faster (not so much in python, though, but in C, yes).
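A rough way to see that difference from Python itself (an illustrative sketch, not from the original answer; absolute numbers depend on your machine and interpreter):
from timeit import Timer

print "int   :", Timer("int('123456')").timeit(1000000), "s per million parses"
print "float :", Timer("float('0.493571')").timeit(1000000), "s per million parses"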
Anyway, putting the data in a Postgres table :
SELECT count( key ) FROM the dataset in the above program
=> 7 s (so it takes 7 s to read the 27M records)
CREATE INDEX topn_key_value ON topn( key, value );
191 s
CREATE TEMPORARY TABLE topkeys AS SELECT key FROM topn GROUP BY key;
12 s
(You can use the index to get distinct values of 'key' faster too but it requires some light plpgsql hacking)
CREATE TEMPORARY TABLE top AS SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1) AS r FROM topkeys a) foo;
Temps : 15,310 ms
INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 1) AS r FROM topkeys a) foo;
Temps : 17,853 ms
INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 2) AS r FROM topkeys a) foo;
Temps : 13,983 ms
INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 3) AS r FROM topkeys a) foo;
Temps : 16,860 ms
INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 4) AS r FROM topkeys a) foo;
Temps : 17,651 ms
INSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 5) AS r FROM topkeys a) foo;
Temps : 19,216 ms
SELECT * FROM top ORDER BY key,value;
As you can see computing the top-n is extremely fast (provided n is small) but creating the (mandatory) index is extremely slow because it involves a full sort.
Your best bet is to use a format that is fast to parse (either binary, or write a custom C aggregate for your database, which would be the best choice IMHO). The runtime in the C program shouldn't be more than 1s if python can do it in 1 s.
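As a sketch of the "binary format" idea (added for illustration; the fixed 16-byte row layout below is an assumption, and the label is stored as an integer row id), the struct module is enough to read rows with no text parsing at all:
import struct

ROW = struct.Struct("=idi")            # int key, double value, int label; 16 bytes per row

def write_binary(rows, fname):
    f = open(fname, "wb")
    for key, value, label in rows:
        f.write(ROW.pack(key, value, label))
    f.close()

def read_binary(fname):
    f = open(fname, "rb")
    while True:
        chunk = f.read(ROW.size)
        if not chunk:
            break
        yield ROW.unpack(chunk)        # yields (key, value, label) tuples like read_data()
    f.close()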
A:
I love lunch break challenges. Here's a 1 hour implementation.
OK, when you don't want to do some extremely exotic crap like additions, nothing stops you from using a custom base-10 floating point format whose only implemented operator is comparison, right? lol.
I had some fast-atoi code lying around from a previous project, so I just imported that.
http://www.copypastecode.com/11541/
This C source code takes about 6.6 seconds to parse the 580MB of input text (27 million lines), half of that time is fgets, lol. Then it takes approximately 0.05 seconds to compute the top-n, but I don't know for sure, since the time it takes for the top-n is less than the timer noise.
You'll be the one to test it for correctness though XDDDDDDDDDDD
Interesting huh ?
|
What language could I use for fast execution of this database summarization task?
|
So I wrote a Python program to handle a little data processing
task.
Here's a very brief specification in a made-up language of the computation I want:
parse "%s %lf %s" aa bb cc | group_by aa | quickselect --key=bb 0:5 | \
flatten | format "%s %lf %s" aa bb cc
That is, for each line, parse out a word, a floating-point number, and another word. Think of them as a player ID, a score, and a date. I want the top five scores and dates for each player. The data size is not trivial, but not huge; about 630 megabytes.
I want to know what real, executable language I should have written it in to
get it to be similarly short (as the Python below) but much faster.
#!/usr/bin/python
# -*- coding: utf-8; -*-
import sys

top_5 = {}

for line in sys.stdin:
    aa, bb, cc = line.split()

    # We want the top 5 for each distinct value of aa. There are
    # hundreds of thousands of values of aa.
    bb = float(bb)
    if aa not in top_5: top_5[aa] = []
    current = top_5[aa]
    current.append((bb, cc))

    # Every once in a while, we drop the values that are not in
    # the top 5, to keep our memory footprint down, because some
    # values of aa have thousands of (bb, cc) pairs.
    if len(current) > 10:
        current.sort()
        current[:-5] = []

for aa in top_5:
    current = top_5[aa]
    current.sort()
    for bb, cc in current[-5:]:
        print aa, bb, cc
Here’s some sample input data:
3 1.5 a
3 1.6 b
3 0.8 c
3 0.9 d
4 1.2 q
3 1.5 e
3 1.8 f
3 1.9 g
Here’s the output I get from it:
3 1.5 a
3 1.5 e
3 1.6 b
3 1.8 f
3 1.9 g
4 1.2 q
There are seven values for 3, and so we drop the c and d values
because their bb value puts them out of the top 5. Because 4 has
only one value, its “top 5” consists of just that one value.
This runs faster than doing the same queries in MySQL (at least, the
way we’ve found to do the queries) but I’m pretty sure it's spending
most of its time in the Python bytecode interpreter. I think that in
another language, I could probably get it to process hundreds of
thousands of rows per second instead of per minute. So I’d like to
write it in a language that has a faster implementation.
But I’m not sure what language to choose.
I haven’t been able to figure out how to express this as a single query in SQL, and
actually I’m really unimpressed with MySQL’s ability even to merely
select * from foo into outfile 'bar'; the input data.
C is an obvious choice, but things like line.split(), sorting a list
of 2-tuples, and making a hash table require writing some code that’s
not in the standard library, so I would end up with 100 lines of code
or more instead of 14.
C++ seems like it might be a better choice (it has strings, maps,
pairs, and vectors in the standard library) but it seems like the code
would be a lot messier with STL.
OCaml would be fine, but does it have an equivalent of line.split(),
and will I be sad about the performance of its map?
Common Lisp might work?
Is there some equivalent of Matlab for database computation like this
that lets me push the loops down into fast code? Has anybody tried Pig?
(Edit: responded to davethegr8's comment by providing some sample input and output data, and fixed a bug in the Python program!)
(Additional edit: Wow, this comment thread is really excellent so far. Thanks, everybody!)
Edit:
There was an eerily similar question asked on sbcl-devel in 2007 (thanks, Rainer!), and here's an awk script from Will Hartung for producing some test data (although it doesn't have the Zipfian distribution of the real data):
BEGIN {
    for (i = 0; i < 27000000; i++) {
        v = rand();
        k = int(rand() * 100);
        print k " " v " " i;
    }
    exit;
}
|
[
"I have a hard time believing that any script without any prior knowledge of the data (unlike MySql which has such info pre-loaded), would be faster than a SQL approach.\nAside from the time spent parsing the input, the script needs to \"keep\" sorting the order by array etc...\nThe following is a first guess at what should work decently fast in SQL, assuming a index (*) on the table's aa, bb, cc columns, in that order. (A possible alternative would be an \"aa, bb DESC, cc\" index\n(*) This index could be clustered or not, not affecting the following query. Choice of clustering or not, and of needing an \"aa,bb,cc\" separate index depends on use case, on the size of the rows in table etc. etc.\nSELECT T1.aa, T1.bb, T1.cc , COUNT(*)\nFROM tblAbc T1\nLEFT OUTER JOIN tblAbc T2 ON T1.aa = T2.aa AND \n (T1.bb < T2.bb OR(T1.bb = T2.bb AND T1.cc < T2.cc))\nGROUP BY T1.aa, T1.bb, T1.cc\nHAVING COUNT(*) < 5 -- trick, remember COUNT(*) goes 1,1,2,3,...\nORDER BY T1.aa, T1.bb, T1.cc, COUNT(*) DESC\n\nThe idea is to get a count of how many records, within a given aa value are smaller than self. There is a small trick however: we need to use LEFT OUTER join, lest we discard the record with the biggest bb value or the last one (which may happen to be one of the top 5). As a result of left joining it, the COUNT(*) value counts 1, 1, 2, 3, 4 etc. and the HAVING test therefore is \"<5\" to effectively pick the top 5.\nTo emulate the sample output of the OP, the ORDER BY uses DESC on the COUNT(), which could be removed to get a more traditional top 5 type of listing. Also, the COUNT() in the select list can be removed if so desired, this doesn't impact the logic of the query and the ability to properly sort.\nAlso note that this query is deterministic in terms of the dealing with ties, i,e, when a given set of records have a same value for bb (within an aa group); I think the Python program may provide slightly different outputs when the order of the input data is changed, that is because of its occasional truncating of the sorting dictionary.\nReal solution: A SQL-based procedural approach\nThe self-join approach described above demonstrates how declarative statements can be used to express the OP's requirement. However this approach is naive in a sense that its performance is roughly bound to the sum of the squares of record counts within each aa 'category'. (not O(n^2) but roughly O((n/a)^2) where a is the number of different values for the aa column) In other words it performs well with data such that on average the number of records associated with a given aa value doesn't exceed a few dozens. If the data is such that the aa column is not selective, the following approach is much -much!- better suited. It leverages SQL's efficient sorting framework, while implementing a simple algorithm that would be hard to express in declarative fashion. This approach could further be improved for datasets with particularly huge number of records each/most aa 'categories' by introducing a simple binary search of the next aa value, by looking ahead (and sometimes back...) in the cursor. 
For cases where the number of aa 'categories' relative to the overall row count in tblAbc is low, see yet another approach, after this next one.\nDECLARE @aa AS VARCHAR(10), @bb AS INT, @cc AS VARCHAR(10)\nDECLARE @curAa AS VARCHAR(10)\nDECLARE @Ctr AS INT\n\nDROP TABLE tblResults;\nCREATE TABLE tblResults\n( aa VARCHAR(10),\n bb INT,\n cc VARCHAR(10)\n);\n\nDECLARE abcCursor CURSOR \n FOR SELECT aa, bb, cc\n FROM tblABC\n ORDER BY aa, bb DESC, cc\n FOR READ ONLY;\n\nOPEN abcCursor;\n\nSET @curAa = ''\n\nFETCH NEXT FROM abcCursor INTO @aa, @bb, @cc;\nWHILE @@FETCH_STATUS = 0\nBEGIN\n IF @curAa <> @aa\n BEGIN\n SET @Ctr = 0\n SET @curAa = @aa\n END\n IF @Ctr < 5\n BEGIN\n SET @Ctr = @Ctr + 1;\n INSERT tblResults VALUES(@aa, @bb, @cc);\n END\n FETCH NEXT FROM AbcCursor INTO @aa, @bb, @cc;\nEND;\n\nCLOSE abcCursor;\nDEALLOCATE abcCursor;\n\nSELECT * from tblResults\nORDER BY aa, bb, cc -- OR .. bb DESC ... for a more traditional order.\n\nAlternative to the above for cases when aa is very unselective. In other words, when we have relatively few aa 'categories'. The idea is to go through the list of distinct categories and to run a \"LIMIT\" (MySql) \"TOP\" (MSSQL) query for each of these values.\nFor reference purposes, the following ran in 63 seconds for tblAbc of 61 Million records divided in 45 aa values, on MSSQL 8.0, on a relatively old/weak host.\nDECLARE @aa AS VARCHAR(10)\nDECLARE @aaCount INT\n\nDROP TABLE tblResults;\nCREATE TABLE tblResults\n( aa VARCHAR(10),\n bb INT,\n cc VARCHAR(10)\n);\n\nDECLARE aaCountCursor CURSOR \n FOR SELECT aa, COUNT(*)\n FROM tblABC\n GROUP BY aa\n ORDER BY aa\n FOR READ ONLY;\nOPEN aaCountCursor;\n\n\nFETCH NEXT FROM aaCountCursor INTO @aa, @aaCount\nWHILE @@FETCH_STATUS = 0\nBEGIN\n INSERT tblResults \n SELECT TOP 5 aa, bb, cc\n FROM tblproh\n WHERE aa = @aa\n ORDER BY aa, bb DESC, cc\n\n FETCH NEXT FROM aaCountCursor INTO @aa, @aaCount;\nEND;\n\nCLOSE aaCountCursor\nDEALLOCATE aaCountCursor\n\nSELECT * from tblResults\nORDER BY aa, bb, cc -- OR .. bb DESC ... for a more traditional order.\n\nOn the question of needing an index or not. (cf OP's remark)\nWhen merely running a \"SELECT * FROM myTable\", a table scan is effectively the fastest appraoch, no need to bother with indexes. However, the main reason why SQL is typically better suited for this kind of things (aside from being the repository where the data has been accumulating in the first place, whereas any external solution needs to account for the time to export the relevant data), is that it can rely on indexes to avoid scanning. Many general purpose languages are far better suited to handle raw processing, but they are fighting an unfair battle with SQL because they need to rebuilt any prior knowledge of the data which SQL has gathered in the course of its data collection / import phase. Since sorting is a typically a time and sometimes space consuming task, SQL and its relatively slower processing power often ends up ahead of alternative solutions.\nAlso, even without pre-built indexes, modern query optimizers may decide on a plan that involves the creation of a temporary index. And, because sorting is an intrinsic part of DDMS, the SQL servers are generally efficient in that area.\nSo... Is SQL better?\nThis said, if we are trying to compare SQL and other languages for pure ETL jobs, i.e. 
for dealing with heaps (unindexed tables) as its input to perform various transformations and filtering, it is likely that multi-thread-able utilities written in say C, and leveraging efficient sorting libaries, would likely be faster. The determining question to decide on a SQL vs. Non-SQL approach is where the data is located and where should it eventually reside. If we merely to convert a file to be supplied down \"the chain\" external programs are better suited. If we have or need the data in a SQL server, there are only rare cases that make it worthwhile exporting and processing externally.\n",
"You could use smarter data structures and still use python.\nI've ran your reference implementation and my python implementation on my machine and even compared the output to be sure in results.\nThis is yours:\n$ time python ./ref.py < data-large.txt > ref-large.txt\n\nreal 1m57.689s\nuser 1m56.104s\nsys 0m0.573s\n\nThis is mine:\n$ time python ./my.py < data-large.txt > my-large.txt\n\nreal 1m35.132s\nuser 1m34.649s\nsys 0m0.261s\n$ diff my-large.txt ref-large.txt \n$ echo $?\n0\n\nAnd this is the source:\n#!/usr/bin/python\n# -*- coding: utf-8; -*-\nimport sys\nimport heapq\n\ntop_5 = {}\n\nfor line in sys.stdin:\n aa, bb, cc = line.split()\n\n # We want the top 5 for each distinct value of aa. There are\n # hundreds of thousands of values of aa.\n bb = float(bb)\n if aa not in top_5: top_5[aa] = []\n current = top_5[aa]\n if len(current) < 5:\n heapq.heappush(current, (bb, cc))\n else:\n if current[0] < (bb, cc):\n heapq.heapreplace(current, (bb, cc))\n\nfor aa in top_5:\n current = top_5[aa]\n while len(current) > 0:\n bb, cc = heapq.heappop(current)\n print aa, bb, cc\n\nUpdate: Know your limits.\nI've also timed a noop code, to know the fastest possible python solution with code similar to the original:\n$ time python noop.py < data-large.txt > noop-large.txt\n\nreal 1m20.143s\nuser 1m19.846s\nsys 0m0.267s\n\nAnd the noop.py itself:\n#!/usr/bin/python\n# -*- coding: utf-8; -*-\nimport sys\nimport heapq\n\ntop_5 = {}\n\nfor line in sys.stdin:\n aa, bb, cc = line.split()\n\n bb = float(bb)\n if aa not in top_5: top_5[aa] = []\n current = top_5[aa]\n if len(current) < 5:\n current.append((bb, cc))\n\nfor aa in top_5:\n current = top_5[aa]\n current.sort()\n for bb, cc in current[-5:]:\n print aa, bb, cc\n\n",
"This is a sketch in Common Lisp\nNote that for long files there is a penalty for using READ-LINE, because it conses a fresh string for each line. Then use one of the derivatives of READ-LINE that are floating around that are using a line buffer. Also you might check if you want the hash table be case sensitive or not.\nsecond version\nSplitting the string is no longer needed, because we do it here. It is low level code, in the hope that some speed gains will be possible. It checks for one or more spaces as field delimiter and also tabs.\n(defun read-a-line (stream)\n (let ((line (read-line stream nil nil)))\n (flet ((delimiter-p (c)\n (or (char= c #\\space) (char= c #\\tab))))\n (when line\n (let* ((s0 (position-if #'delimiter-p line))\n (s1 (position-if-not #'delimiter-p line :start s0))\n (s2 (position-if #'delimiter-p line :start (1+ s1)))\n (s3 (position-if #'delimiter-p line :from-end t)))\n (values (subseq line 0 s0)\n (list (read-from-string line nil nil :start s1 :end s2)\n (subseq line (1+ s3)))))))))\n\nAbove function returns two values: the key and a list of the rest.\n(defun dbscan (top-5-table stream)\n \"get triples from each line and put them in the hash table\"\n (loop with aa = nil and bbcc = nil do\n (multiple-value-setq (aa bbcc) (read-a-line stream))\n while aa do\n (setf (gethash aa top-5-table)\n (let ((l (merge 'list (gethash aa top-5-table) (list bbcc)\n #'> :key #'first)))\n (or (and (nth 5 l) (subseq l 0 5)) l)))))\n\n\n(defun dbprint (table output)\n \"print the hashtable contents\"\n (maphash (lambda (aa value)\n (loop for (bb cc) in value\n do (format output \"~a ~a ~a~%\" aa bb cc)))\n table))\n\n(defun dbsum (input &optional (output *standard-output*))\n \"scan and sum from a stream\"\n (let ((top-5-table (make-hash-table :test #'equal)))\n (dbscan top-5-table input)\n (dbprint top-5-table output)))\n\n\n(defun fsum (infile outfile)\n \"scan and sum a file\"\n (with-open-file (input infile :direction :input)\n (with-open-file (output outfile\n :direction :output :if-exists :supersede)\n (dbsum input output))))\n\nsome test data\n(defun create-test-data (&key (file \"/tmp/test.data\") (n-lines 100000))\n (with-open-file (stream file :direction :output :if-exists :supersede)\n (loop repeat n-lines\n do (format stream \"~a ~a ~a~%\"\n (random 1000) (random 100.0) (random 10000)))))\n\n; (create-test-data)\n(defun test ()\n (time (fsum \"/tmp/test.data\" \"/tmp/result.data\")))\n\nthird version, LispWorks\nUses some SPLIT-STRING and PARSE-FLOAT functions, otherwise generic CL.\n(defun fsum (infile outfile)\n (let ((top-5-table (make-hash-table :size 50000000 :test #'equal)))\n (with-open-file (input infile :direction :input)\n (loop for line = (read-line input nil nil)\n while line do\n (destructuring-bind (aa bb cc) (split-string '(#\\space #\\tab) line)\n (setf bb (parse-float bb))\n (let ((v (gethash aa top-5-table)))\n (unless v\n (setf (gethash aa top-5-table)\n (setf v (make-array 6 :fill-pointer 0))))\n (vector-push (cons bb cc) v)\n (when (> (length v) 5)\n (setf (fill-pointer (sort v #'> :key #'car)) 5))))))\n (with-open-file (output outfile :direction :output :if-exists :supersede)\n (maphash (lambda (aa value)\n (loop for (bb . cc) across value do\n (format output \"~a ~f ~a~%\" aa bb cc)))\n top-5-table)))) \n\n",
"This took 45.7s on my machine with 27M rows of data that looked like this:\n42 0.49357 0\n96 0.48075 1\n27 0.640761 2\n8 0.389128 3\n75 0.395476 4\n24 0.212069 5\n80 0.121367 6\n81 0.271959 7\n91 0.18581 8\n69 0.258922 9\n\nYour script took 1m42 on this data, the c++ example too 1m46 (g++ t.cpp -o t to compile it, I don't know anything about c++).\nJava 6, not that it matters really. Output isn't perfect, but it's easy to fix.\npackage top5;\n\nimport java.io.BufferedReader;\nimport java.io.FileReader;\nimport java.util.Arrays;\nimport java.util.Map;\nimport java.util.TreeMap;\n\npublic class Main {\n\n public static void main(String[] args) throws Exception {\n long start = System.currentTimeMillis();\n Map<String, Pair[]> top5map = new TreeMap<String, Pair[]>();\n BufferedReader br = new BufferedReader(new FileReader(\"/tmp/file.dat\"));\n\n String line = br.readLine();\n while(line != null) {\n String parts[] = line.split(\" \");\n\n String key = parts[0];\n double score = Double.valueOf(parts[1]);\n String value = parts[2];\n Pair[] pairs = top5map.get(key);\n\n boolean insert = false;\n Pair p = null;\n if (pairs != null) {\n insert = (score > pairs[pairs.length - 1].score) || pairs.length < 5;\n } else {\n insert = true;\n }\n if (insert) {\n p = new Pair(score, value);\n if (pairs == null) {\n pairs = new Pair[1];\n pairs[0] = new Pair(score, value);\n } else {\n if (pairs.length < 5) {\n Pair[] newpairs = new Pair[pairs.length + 1];\n System.arraycopy(pairs, 0, newpairs, 0, pairs.length);\n pairs = newpairs;\n }\n int k = 0;\n for(int i = pairs.length - 2; i >= 0; i--) {\n if (pairs[i].score <= p.score) {\n pairs[i + 1] = pairs[i];\n } else {\n k = i + 1;\n break;\n }\n }\n pairs[k] = p;\n }\n top5map.put(key, pairs);\n }\n line = br.readLine();\n }\n for(Map.Entry<String, Pair[]> e : top5map.entrySet()) {\n System.out.print(e.getKey());\n System.out.print(\" \");\n System.out.println(Arrays.toString(e.getValue()));\n }\n System.out.println(System.currentTimeMillis() - start);\n }\n\n static class Pair {\n double score;\n String value;\n\n public Pair(double score, String value) {\n this.score = score;\n this.value = value;\n }\n\n public int compareTo(Object o) {\n Pair p = (Pair) o;\n return (int)Math.signum(score - p.score);\n }\n\n public String toString() {\n return String.valueOf(score) + \", \" + value;\n }\n }\n}\n\nAWK script to fake the data:\nBEGIN {\n for (i = 0; i < 27000000; i++) {\n v = rand();\n k = int(rand() * 100);\n print k \" \" v \" \" i;\n }\n exit;\n}\n\n",
"Here is one more OCaml version - targeted for speed - with custom parser on Streams. Too long, but parts of the parser are reusable. Thanks peufeu for triggering competition :)\nSpeed :\n\nsimple ocaml - 27 sec\nocaml with Stream parser - 15 sec\nc with manual parser - 5 sec\n\nCompile with :\nocamlopt -pp camlp4o code.ml -o caml\n\nCode :\nopen Printf\n\nlet cmp x y = compare (fst x : float) (fst y)\nlet digit c = Char.code c - Char.code '0'\n\nlet rec parse f = parser\n | [< a=int; _=spaces; b=float; _=spaces; \n c=rest (Buffer.create 100); t >] -> f a b c; parse f t\n | [< >] -> ()\nand int = parser\n | [< ''0'..'9' as c; t >] -> int_ (digit c) t\n | [< ''-'; ''0'..'9' as c; t >] -> - (int_ (digit c) t)\nand int_ n = parser\n | [< ''0'..'9' as c; t >] -> int_ (n * 10 + digit c) t\n | [< >] -> n\nand float = parser\n | [< n=int; t=frem; e=fexp >] -> (float_of_int n +. t) *. (10. ** e)\nand frem = parser\n | [< ''.'; r=frem_ 0.0 10. >] -> r\n | [< >] -> 0.0\nand frem_ f base = parser\n | [< ''0'..'9' as c; t >] -> \n frem_ (float_of_int (digit c) /. base +. f) (base *. 10.) t\n | [< >] -> f\nand fexp = parser\n | [< ''e'; e=int >] -> float_of_int e\n | [< >] -> 0.0\nand spaces = parser\n | [< '' '; t >] -> spaces t\n | [< ''\\t'; t >] -> spaces t\n | [< >] -> ()\nand crlf = parser\n | [< ''\\r'; t >] -> crlf t\n | [< ''\\n'; t >] -> crlf t\n | [< >] -> ()\nand rest b = parser\n | [< ''\\r'; _=crlf >] -> Buffer.contents b\n | [< ''\\n'; _=crlf >] -> Buffer.contents b\n | [< 'c; t >] -> Buffer.add_char b c; rest b t\n | [< >] -> Buffer.contents b\n\nlet () =\n let all = Array.make 200 [] in\n let each a b c =\n assert (a >= 0 && a < 200);\n match all.(a) with\n | [] -> all.(a) <- [b,c]\n | (bmin,_) as prev::tl -> if b > bmin then\n begin\n let m = List.sort cmp ((b,c)::tl) in\n all.(a) <- if List.length tl < 4 then prev::m else m\n end\n in\n parse each (Stream.of_channel stdin);\n Array.iteri \n (fun a -> List.iter (fun (b,c) -> printf \"%i %f %s\\n\" a b c))\n all\n\n",
"Of all the programs in this thread that I've tested so far, the OCaml version is the fastest and also among the shortest. (Line-of-code-based measurements are a little fuzzy, but it's not clearly longer than the Python version or the C or C++ versions, and it is clearly faster.)\n\nNote: I figured out why my earlier runtimes were so nondeterministic! My CPU heatsink was clogged with dust and my CPU was overheating as a result. Now I am getting nice deterministic benchmark times. I think I've now redone all the timing measurements in this thread now that I have a reliable way to time things.\n\nHere are the timings for the different versions so far, running on a 27-million-row 630-megabyte input data file. I'm on Ubuntu Intrepid Ibex on a dual-core 1.6GHz Celeron, running a 32-bit version of the OS (the Ethernet driver was broken in the 64-bit version). I ran each program five times and report the range of times those five tries took. I'm using Python 2.5.2, OpenJDK 1.6.0.0, OCaml 3.10.2, GCC 4.3.2, SBCL 1.0.8.debian, and Octave 3.0.1.\n\nSquareCog's Pig version: not yet tested (because I can't just apt-get install pig), 7 lines of code.\nmjv's pure SQL version: not yet tested, but I predict a runtime of several days; 7 lines of code.\nygrek's OCaml version: 68.7 seconds ±0.9 in 15 lines of code.\nMy Python version: 169 seconds ±4 or 86 seconds ±2 with Psyco, in 16 lines of code.\nabbot's heap-based Python version: 177 seconds ±5 in 18 lines of code, or 83 seconds ±5 with Psyco.\nMy C version below, composed with GNU sort -n: 90 + 5.5 seconds (±3, ±0.1), but gives the wrong answer because of a deficiency in GNU sort, in 22 lines of code (including one line of shell.)\nhrnt's C++ version: 217 seconds ±3 in 25 lines of code.\nmjv's alternative SQL-based procedural approach: not yet tested, 26 lines of code.\nmjv's first SQL-based procedural approach: not yet tested, 29 lines of code.\npeufeu's Python version with Psyco: 181 seconds ±4, somewhere around 30 lines of code.\nRainer Joswig's Common Lisp version: 478 seconds (only run once) in 42 lines of code.\nabbot's noop.py, which intentionally gives incorrect results to establish a lower bound: not yet tested, 15 lines of code.\nWill Hartung's Java version: 96 seconds ±10 in, according to David A. Wheeler’s SLOCCount, 74 lines of code.\nGreg's Matlab version: doesn't work.\nSchuyler Erle's suggestion of using Pyrex on one of the Python versions: not yet tried.\n\nI supect abbot's version comes out relatively worse for me than for them because the real dataset has a highly nonuniform distribution: as I said, some aa values (“players”) have thousands of lines, while others only have one.\nAbout Psyco: I applied Psyco to my original code (and abbot's version) by putting it in a main function, which by itself cut the time down to about 140 seconds, and calling psyco.full() before calling main(). 
This added about four lines of code.\nI can almost solve the problem using GNU sort, as follows:\nkragen@inexorable:~/devel$ time LANG=C sort -nr infile -o sorted\n\nreal 1m27.476s\nuser 0m59.472s\nsys 0m8.549s\nkragen@inexorable:~/devel$ time ./top5_sorted_c < sorted > outfile\n\nreal 0m5.515s\nuser 0m4.868s\nsys 0m0.452s\n\nHere top5_sorted_c is this short C program:\n#include <ctype.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n\nenum { linesize = 1024 };\n\nchar buf[linesize];\nchar key[linesize]; /* last key seen */\n\nint main() {\n int n = 0;\n char *p;\n\n while (fgets(buf, linesize, stdin)) {\n for (p = buf; *p && !isspace(*p); p++) /* find end of key on this line */\n ;\n if (p - buf != strlen(key) || 0 != memcmp(buf, key, p - buf)) \n n = 0; /* this is a new key */\n n++;\n\n if (n <= 5) /* copy up to five lines for each key */\n if (fputs(buf, stdout) == EOF) abort();\n\n if (n == 1) { /* save new key in `key` */\n memcpy(key, buf, p - buf);\n key[p-buf] = '\\0';\n }\n }\n return 0;\n}\n\nI first tried writing that program in C++ as follows, and I got runtimes which were substantially slower, at 33.6±2.3 seconds instead of 5.5±0.1 seconds:\n#include <map>\n#include <iostream>\n#include <string>\n\nint main() {\n using namespace std;\n int n = 0;\n string prev, aa, bb, cc;\n\n while (cin >> aa >> bb >> cc) {\n if (aa != prev) n = 0;\n ++n;\n if (n <= 5) cout << aa << \" \" << bb << \" \" << cc << endl;\n prev = aa;\n }\n return 0;\n}\n\nI did say almost. The problem is that sort -n does okay for most of the data, but it fails when it's trying to compare 0.33 with 3.78168e-05. So to get this kind of performance and actually solve the problem, I need a better sort.\nAnyway, I kind of feel like I'm whining, but the sort-and-filter approach is about 5× faster than the Python program, while the elegant STL program from hrnt is actually a little slower — there seems to be some kind of gross inefficiency in <iostream>. I don't know where the other 83% of the runtime is going in that little C++ version of the filter, but it isn't going anywhere useful, which makes me suspect I don't know where it's going in hrnt's std::map version either. Could that version be sped up 5× too? Because that would be pretty cool. Its working set might be bigger than my L2 cache, but as it happens it probably isn't.\nSome investigation with callgrind says my filter program in C++ is executing 97% of its instructions inside of operator >>. I can identify at least 10 function calls per input byte, and cin.sync_with_stdio(false); doesn’t help. This probably means I could get hrnt’s C program to run substantially faster by parsing input lines more efficiently.\nEdit: kcachegrind claims that hrnt’s program executes 62% of its instructions (on a small 157000 line input file) extracting doubles from an istream. A substantial part of this is because the istreams library apparently executes about 13 function calls per input byte when trying to parse a double. Insane. Could I be misunderstanding kcachegrind's output?\nAnyway, any other suggestions?\n",
"Pretty straightforward Caml (27 * 10^6 rows -- 27 sec, C++ by hrnt -- 29 sec)\nopen Printf\nopen ExtLib\n\nlet (>>) x f = f x\nlet cmp x y = compare (fst x : float) (fst y)\nlet wsp = Str.regexp \"[ \\t]+\"\n\nlet () =\n let all = Hashtbl.create 1024 in\n Std.input_lines stdin >> Enum.iter (fun line ->\n let [a;b;c] = Str.split wsp line in\n let b = float_of_string b in\n try\n match Hashtbl.find all a with\n | [] -> assert false\n | (bmin,_) as prev::tl -> if b > bmin then\n begin\n let m = List.sort ~cmp ((b,c)::tl) in\n Hashtbl.replace all a (if List.length tl < 4 then prev::m else m)\n end\n with Not_found -> Hashtbl.add all a [b,c]\n );\n all >> Hashtbl.iter (fun a -> List.iter (fun (b,c) -> printf \"%s %f %s\\n\" a b c))\n\n",
"Here is a C++ solution. I didn't have a lot of data to test it with, however, so I don't know how fast it actually is.\n[edit] Thanks to the test data provided by the awk script in this thread, I\nmanaged to clean up and speed up the code a bit. I am not trying to find out the fastest possible version - the intent is to provide a reasonably fast version that isn't as ugly as people seem to think STL solutions can be. \nThis version should be about twice as fast as the first version (goes through 27 million lines in about 35 seconds). Gcc users, remember to\ncompile this with -O2.\n#include <map>\n#include <iostream>\n#include <functional>\n#include <utility>\n#include <string>\nint main() {\n using namespace std;\n typedef std::map<string, std::multimap<double, string> > Map;\n Map m;\n string aa, cc;\n double bb;\n std::cin.sync_with_stdio(false); // Dunno if this has any effect, but anyways.\n\n while (std::cin >> aa >> bb >> cc)\n {\n if (m[aa].size() == 5)\n {\n Map::mapped_type::iterator iter = m[aa].begin();\n if (bb < iter->first)\n continue;\n m[aa].erase(iter);\n }\n m[aa].insert(make_pair(bb, cc));\n }\n for (Map::const_iterator iter = m.begin(); iter != m.end(); ++iter)\n for (Map::mapped_type::const_iterator iter2 = iter->second.begin();\n iter2 != iter->second.end();\n ++iter2)\n std::cout << iter->first << \" \" << iter2->first << \" \" << iter2->second <<\n std::endl;\n\n}\n\n",
"Interestingly, the original Python solution is by far the cleanest looking (although the C++ example comes close). \nHow about using Pyrex or Psyco on your original code?\n",
"Has anybody tried doing this problem with just awk. Specifically 'mawk'? It should be faster than even Java and C++, according to this blog post: http://anyall.org/blog/2009/09/dont-mawk-awk-the-fastest-and-most-elegant-big-data-munging-language/ \nEDIT: Just wanted to clarify that the only claim being made in that blog post is that for a certain class of problems that are specifically suited to awk-style processing, the mawk virtual machine can beat 'vanilla' implementations in Java and C++. \n",
"Since you asked about Matlab, here's how I did something like what you're asking for. I tried to do it without any for loops, but I do have one because I didn't care to take a long time with it. If you were worried about memory then you could pull data from the stream in chunks with fscanf rather than reading the entire buffer.\nfid = fopen('fakedata.txt','r');\ntic\nA=fscanf(fid,'%d %d %d\\n');\nA=reshape(A,3,length(A)/3)'; %Matlab reads the data into one long column'\nNames = unique(A(:,1));\nfor i=1:length(Names)\n indices = find(A(:,1)==Names(i)); %Grab all instances of key i\n [Y,I] = sort(A(indices,2),1,'descend'); %sort in descending order of 2nd record\n A(indices(I(1:min([5,length(indices(I))]))),:) %Print the top five\nend\ntoc\nfclose(fid)\n\n",
"Speaking of lower bounds on compute time :\nLet's analyze my algo above :\nfor each row (key,score,id) :\n create or fetch a list of top scores for the row's key\n if len( this list ) < N\n append current\n else if current score > minimum score in list\n replace minimum of list with current row\n update minimum of all lists if needed\n\nLet N be the N in top-N\nLet R be the number of rows in your data set\nLet K be the number of distinct keys\nWhat assumptions can we make ?\nR * sizeof( row ) > RAM or at least it's big enough that we don't want to load it all, use a hash to group by key, and sort each bin. For the same reason we don't sort the whole stuff.\nKragen likes hashtables, so K * sizeof(per-key state) << RAM, most probably it fits in L2/3 cache\nKragen is not sorting, so K*N << R ie each key has much more than N entries\n(note : A << B means A is small relative to B)\nIf the data has a random distribution, then \nafter a small number of rows, the majority of rows will be rejected by the per-key minimum condition, the cost is 1 comparison per row.\nSo the cost per row is 1 hash lookup + 1 comparison + epsilon * (list insertion + (N+1) comparisons for the minimum)\nIf the scores have a random distribution (say between 0 and 1) and the conditions above hold, both epsilons will be very small.\nExperimental proof :\nThe 27 million rows dataset above produces 5933 insertions into the top-N lists. All other rows are rejected by a simple key lookup and comparison. epsilon = 0.0001\nSo roughly, the cost is 1 lookup + coparison per row, which takes a few nanoseconds.\nOn current hardware, there is no way this is not going to be negligible versus IO cost and especially parsing costs.\n",
"Isn't this just as simple as \n SELECT DISTINCT aa, bb, cc FROM tablename ORDER BY bb DESC LIMIT 5\n\n?\nOf course, it's hard to tell what would be fastest without testing it against the data. And if this is something you need to run very fast, it might make sense to optimize your database to make the query faster, rather than optimizing the query.\nAnd, of course, if you need the flat file anyway, you might as well use that.\n",
"Pick \"top 5\" would look something like this. Note that there's no sorting. Nor does any list in the top_5 dictionary ever grow beyond 5 elements.\nfrom collections import defaultdict\nimport sys\n\ndef keep_5( aList, aPair ):\n minbb= min( bb for bb,cc in aList )\n bb, cc = aPair\n if bb < minbb: return aList\n aList.append( aPair )\n min_i= 0\n for i in xrange(1,6):\n if aList[i][0] < aList[min_i][0]\n min_i= i\n aList.pop(min_i)\n return aList\n\n\ntop_5= defaultdict(list)\nfor row in sys.stdin:\n aa, bb, cc = row.split()\n bb = float(bb)\n if len(top_5[aa]) < 5:\n top_5[aa].append( (bb,cc) )\n else:\n top_5[aa]= keep_5( top_5[aa], (bb,cc) )\n\n",
"The Pig version would go something like this (untested):\n Data = LOAD '/my/data' using PigStorage() as (aa:int, bb:float, cc:chararray);\n grp = GROUP Data by aa;\n topK = FOREACH grp (\n sorted = ORDER Data by bb DESC;\n lim = LIMIT sorted 5;\n GENERATE group as aa, lim;\n)\nSTORE topK INTO '/my/output' using PigStorage();\n\nPig isn't optimized for performance; it's goal is to enable processing of multi-terabyte datasets using parallel execution frameworks. It does have a local mode, so you can try it, but I doubt it will beat your script.\n",
"That was a nice lunch break challenge, he, he.\nTop-N is a well-known database killer. As shown by the post above, there is no way to efficiently express it in common SQL.\nAs for the various implementations, you got to keep in mind that the slow part in this is not the sorting or the top-N, it's the parsing of text. Have you looked at the source code for glibc's strtod() lately ?\nFor instance, I get, using Python :\nRead data : 80.5 s\nMy TopN : 34.41 s\nHeapTopN : 30.34 s\n\nIt is quite likely that you'll never get very fast timings, no matter what language you use, unless your data is in some format that is a lot faster to parse than text. For instance, loading the test data into postgres takes 70 s, and the majority of that is text parsing, too.\nIf the N in your topN is small, like 5, a C implementation of my algorithm below would probably be the fastest. If N can be larger, heaps are a much better option.\nSo, since your data is probably in a database, and your problem is getting at the data, not the actual processing, if you're really in need of a super fast TopN engine, what you should do is write a C module for your database of choice. Since postgres is faster for about anything, I suggest using postgres, plus it isn't difficult to write a C module for it.\nHere's my Python code :\nimport random, sys, time, heapq\n\nROWS = 27000000\n\ndef make_data( fname ):\n f = open( fname, \"w\" )\n r = random.Random()\n for i in xrange( 0, ROWS, 10000 ):\n for j in xrange( i,i+10000 ):\n f.write( \"%d %f %d\\n\" % (r.randint(0,100), r.uniform(0,1000), j))\n print (\"write: %d\\r\" % i),\n sys.stdout.flush()\n print\n\ndef read_data( fname ):\n for n, line in enumerate( open( fname ) ):\n r = line.strip().split()\n yield int(r[0]),float(r[1]),r[2]\n if not (n % 10000 ):\n print (\"read: %d\\r\" % n),\n sys.stdout.flush()\n print\n\ndef topn( ntop, data ):\n ntop -= 1\n assert ntop > 0\n min_by_key = {}\n top_by_key = {}\n for key,value,label in data:\n tup = (value,label)\n if key not in top_by_key:\n # initialize\n top_by_key[key] = [ tup ]\n else:\n top = top_by_key[ key ]\n l = len( top )\n if l > ntop:\n # replace minimum value in top if it is lower than current value\n idx = min_by_key[ key ]\n if top[idx] < tup:\n top[idx] = tup\n min_by_key[ key ] = top.index( min( top ) )\n elif l < ntop:\n # fill until we have ntop entries\n top.append( tup )\n else:\n # we have ntop entries in list, we'll have ntop+1\n top.append( tup )\n # initialize minimum to keep\n min_by_key[ key ] = top.index( min( top ) )\n\n # finalize:\n return dict( (key, sorted( values, reverse=True )) for key,values in top_by_key.iteritems() )\n\ndef grouptopn( ntop, data ):\n top_by_key = {}\n for key,value,label in data:\n if key in top_by_key:\n top_by_key[ key ].append( (value,label) )\n else:\n top_by_key[ key ] = [ (value,label) ]\n\n return dict( (key, sorted( values, reverse=True )[:ntop]) for key,values in top_by_key.iteritems() )\n\ndef heaptopn( ntop, data ):\n top_by_key = {}\n for key,value,label in data:\n tup = (value,label)\n if key not in top_by_key:\n top_by_key[ key ] = [ tup ]\n else:\n top = top_by_key[ key ]\n if len(top) < ntop:\n heapq.heappush(top, tup)\n else:\n if top[0] < tup:\n heapq.heapreplace(top, tup)\n\n return dict( (key, sorted( values, reverse=True )) for key,values in top_by_key.iteritems() )\n\ndef dummy( data ):\n for row in data:\n pass\n\nmake_data( \"data.txt\" )\n\nt = time.clock()\ndummy( read_data( \"data.txt\" ) )\nt_read = time.clock() - t\n\nt = time.clock()\ntop_result = 
topn( 5, read_data( \"data.txt\" ) )\nt_topn = time.clock() - t\n\nt = time.clock()\nhtop_result = heaptopn( 5, read_data( \"data.txt\" ) )\nt_htopn = time.clock() - t\n\n# correctness checking :\nfor key in top_result:\n print key, \" : \", \" \".join ((\"%f:%s\"%(value,label)) for (value,label) in top_result[key])\n print key, \" : \", \" \".join ((\"%f:%s\"%(value,label)) for (value,label) in htop_result[key])\n\nprint\nprint \"Read data :\", t_read\nprint \"TopN : \", t_topn - t_read\nprint \"HeapTopN : \", t_htopn - t_read\n\nfor key in top_result:\n assert top_result[key] == htop_result[key]\n\n",
"Well, please grab a coffee and read the source code for strtod -- it's mindboggling, but needed, if you want to float -> text -> float to give back the same float you started with.... really...\nParsing integers is a lot faster (not so much in python, though, but in C, yes).\nAnyway, putting the data in a Postgres table :\nSELECT count( key ) FROM the dataset in the above program\n\n=> 7 s (so it takes 7 s to read the 27M records)\nCREATE INDEX topn_key_value ON topn( key, value );\n\n191 s\nCREATE TEMPORARY TABLE topkeys AS SELECT key FROM topn GROUP BY key;\n\n12 s\n(You can use the index to get distinct values of 'key' faster too but it requires some light plpgsql hacking)\nCREATE TEMPORARY TABLE top AS SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1) AS r FROM topkeys a) foo;\n\nTemps : 15,310 ms\nINSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 1) AS r FROM topkeys a) foo;\n\nTemps : 17,853 ms\nINSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 2) AS r FROM topkeys a) foo;\n\nTemps : 13,983 ms\nINSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 3) AS r FROM topkeys a) foo;\n\nTemps : 16,860 ms\nINSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 4) AS r FROM topkeys a) foo;\n\nTemps : 17,651 ms\nINSERT INTO top SELECT (r).* FROM (SELECT (SELECT b AS r FROM topn b WHERE b.key=a.key ORDER BY value DESC LIMIT 1 OFFSET 5) AS r FROM topkeys a) foo;\n\nTemps : 19,216 ms\nSELECT * FROM top ORDER BY key,value;\n\nAs you can see computing the top-n is extremely fast (provided n is small) but creating the (mandatory) index is extremely slow because it involves a full sort.\nYour best bet is to use a format that is fast to parse (either binary, or write a custom C aggregate for your database, which would be the best choice IMHO). The runtime in the C program shouldn't be more than 1s if python can do it in 1 s.\n",
"I love lunch break challenges. Here's a 1 hour implementation.\nOK, when you don't want do some extremely exotic crap like additions, nothing stops you from using a custom base-10 floating point format whose only implemented operator is comparison, right ? lol.\nI had some fast-atoi code lying around from a previous project, so I just imported that.\nhttp://www.copypastecode.com/11541/\nThis C source code takes about 6.6 seconds to parse the 580MB of input text (27 million lines), half of that time is fgets, lol. Then it takes approximately 0.05 seconds to compute the top-n, but I don't know for sure, since the time it takes for the top-n is less than the timer noise.\nYou'll be the one to test it for correctness though XDDDDDDDDDDD\nInteresting huh ?\n"
] |
[
9,
6,
3,
3,
3,
2,
2,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"apache_pig",
"lisp",
"ocaml",
"python",
"sql"
] |
stackoverflow_0001467898_apache_pig_lisp_ocaml_python_sql.txt
|
Q:
basic unique ModelForm field for Google App Engine
I do not care about concurrency issues.
It is relatively easy to build unique form field:
from django import forms

class UniqueUserEmailField(forms.CharField):
    def clean(self, value):
        value = super(UniqueUserEmailField, self).clean(value)
        self.check_uniqueness(value)
        return value    # a Field's clean() must return the cleaned value
    def check_uniqueness(self, value):
        same_user = users.User.all().filter('email', value).get()
        if same_user:
            raise forms.ValidationError('%s already_registered' % value)
so one could add users on-the-fly. Editing an existing user is trickier: the field correctly rejects another user's email, but it also rejects the user's own, unchanged email, because that address is already in the datastore. What code do you use to put a field with a uniqueness check into a ModelForm?
A:
quick and dirty way would be:
make check_uniqueness classmethod
use custom field check in ModelForm, like this:
class User(forms.ModelForm):
    email = forms.EmailField()

    def clean_email(self):
        data = self.cleaned_data['email']
        original = self.instance.email
        if original == data:
            return data
        UniqueUserEmailField.check_uniqueness(data)
        return data
better options?
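Another possible sketch (my addition, and it assumes users.User here is the question's own datastore model with key() and is_saved(), not the google.appengine.api users API): do the lookup inside clean_email and only complain when the matching entity is a different one from the instance being edited, so an unchanged email still validates.
class UserForm(forms.ModelForm):
    email = forms.EmailField()

    def clean_email(self):
        email = self.cleaned_data['email']
        existing = users.User.all().filter('email', email).get()
        if existing is not None:
            # allow the entity being edited to keep its own address
            if not self.instance.is_saved() or existing.key() != self.instance.key():
                raise forms.ValidationError('%s already registered' % email)
        return email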
|
basic unique ModelForm field for Google App Engine
|
I do not care about concurrency issues.
It is relatively easy to build unique form field:
from django import forms
class UniqueUserEmailField(forms.CharField):
def clean(self, value):
self.check_uniqueness(super(UniqueUserEmailField, self).clean(value))
def check_uniqueness(self, value):
same_user = users.User.all().filter('email', value).get()
if same_user:
raise forms.ValidationError('%s already_registered' % value)
so one could add users on-the-fly. Editing existing user is tricky. This field would not allow to save user having other user email. At the same time it would not allow to save a user with the same email. What code do you use to put a field with uniqueness check into ModelForm?
|
[
"quick and dirty way would be:\n\nmake check_uniqueness classmethod\nuse custom field check in ModelForm, like this:\nclass User(forms.ModelForm): \n email = forms.EmailField() \ndef clean_email(self):\n data = self.cleaned_data['email']\n original = self.instance.email\n if original == data:\n return data\n UniqueUserEmailField.check_uniqueness(data)\n return data\n\n\nbetter options?\n"
] |
[
1
] |
[] |
[] |
[
"app_engine_patch",
"django",
"google_app_engine",
"python"
] |
stackoverflow_0001502818_app_engine_patch_django_google_app_engine_python.txt
|
Q:
Problem loading a specific website through Qt Webkit
I am currently using the following PyQt code to create a simple browser:
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *
app = QApplication(sys.argv)
web = QWebView()
web.load(QUrl("http://www.robeez.com"))
web.show()
sys.exit(app.exec_())
Websites like google.com or stackoverflow.com work fine but robeez.com doesn't. Does anyone with Webkit experience know what might be wrong? robeez.com works fine in a regular browser like Chrome or Firefox.
A:
try arora (a very simple wrapping on top of QtWebKit); if it works, its your code. if it doesn't, its the website.
A:
For some reason http://www.robeeez.com, which I think redirects to rebeez.com, DOES work. In some cases rebeez.com sends out a blank index.html page; dillo and wget also receive nothing, as does the qt45 demo browser. So is it the browser, or the way the site is set up?
A:
Try sending the Accept-Language header also, it then works for me.
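For reference, a minimal way to do that with the code from the question (a sketch; it assumes PyQt4's QtNetwork module and the QWebView.load overload that takes a QNetworkRequest):
from PyQt4.QtNetwork import QNetworkRequest

request = QNetworkRequest(QUrl("http://www.robeez.com"))
request.setRawHeader("Accept-Language", "en-US,en;q=0.8")
web.load(request)   # instead of web.load(QUrl(...))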
|
Problem loading a specific website through Qt Webkit
|
I am currently using the following PyQt code to create a simple browser:
import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *
app = QApplication(sys.argv)
web = QWebView()
web.load(QUrl("http://www.robeez.com"))
web.show()
sys.exit(app.exec_())
Websites like google.com or stackoverflow.com work fine but robeez.com doesn't. Does anyone with Webkit experience know what might be wrong? robeez.com works fine in a regular browser like Chrome or Firefox.
|
[
"try arora (a very simple wrapping on top of QtWebKit); if it works, its your code. if it doesn't, its the website.\n",
"For some reason http://www.robeeez.com which I think redirects to rebeez.com DOES work.\nIn some cases rebeez.com sends out a blank index.html page, dillo and wget also receive \nnothing as does the qt45 demo browser. So is it the browser or the way the site is set up??\n",
"Try sending the Accept-Language header also, it then works for me.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"pyqt",
"python",
"qtwebkit",
"webkit"
] |
stackoverflow_0001111267_pyqt_python_qtwebkit_webkit.txt
|
Q:
What's the most pythonic way to ensure that all elements of a list are different?
I have a list in Python that I generate as part of the program. I have a strong assumption that these are all different, and I check this with an assertion.
This is the way I do it now:
If there are two elements:
try:
    assert(x[0] != x[1])
except:
    print debug_info
    raise Exception("throw to caller")
If there are three:
try:
    assert(x[0] != x[1])
    assert(x[0] != x[2])
    assert(x[1] != x[2])
except:
    print debug_info
    raise Exception("throw to caller")
And if I ever have to do this with four elements I'll go crazy.
Is there a better way to ensure that all the elements of the list are unique?
A:
Maybe something like this:
if len(x) == len(set(x)):
    print "all elements are unique"
else:
    print "elements are not unique"
A:
The most popular answers are O(N) (good!-) but, as @Paul and @Mark point out, they require the list's items to be hashable. Both @Paul and @Mark's proposed approaches for unhashable items are general but take O(N squared) -- i.e., a lot.
If your list's items are not hashable but are comparable, you can do better... here's an approach that always works as fast as feasible given the nature of the list's items.
import itertools

def allunique(L):
    # first try sets -- fastest, if all items are hashable
    try:
        return len(L) == len(set(L))
    except TypeError:
        pass
    # next, try sort -- second fastest, if items are comparable
    try:
        L1 = sorted(L)
    except TypeError:
        pass
    else:
        return all(len(list(g))==1 for k, g in itertools.groupby(L1))
    # fall back to the slowest but most general approach
    return all(v not in L[i+1:] for i, v in enumerate(L))
This is O(N) where feasible (all items hashable), O(N log N) as the most frequent fallback (some items unhashable, but all comparable), O(N squared) where inevitable (some items unhashable, e.g. dicts, and some non-comparable, e.g. complex numbers).
Inspiration for this code comes from an old recipe by the great Tim Peters, which differed by actually producing a list of unique items (and also was so far ago that set was not around -- it had to use a dict...!-), but basically faced identical issues.
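A quick usage sketch (added for illustration) exercising the first two code paths:
print allunique([1, 2, 3, 2])        # hashable items: set-based check, prints False
print allunique([[1], [2], [3]])     # lists are unhashable but sortable: sort path, prints True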
A:
How about this:
if len(x) != len(set(x)):
    raise Exception("throw to caller")
This assumes that elements in x are hashable.
A:
Hopefully all the items in your sequence are immutable -- if not, you will not be able to call set on the sequence.
>>> set( ([1,2], [3,4]) )
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
If you do have mutable items, you can't hash the items and you will pretty much have to repeatedly check through the list:
def isUnique(lst):
    for i,v in enumerate(lst):
        if v in lst[i+1:]:
            return False
    return True

>>> isUnique( ([1,2], [3,4]) )
True
>>> isUnique( ([1,2], [3,4], [1,2]) )
False
A:
As you build the list you can check to see if the value already exists, e.g:
if x in y:
    raise Exception("Value %s already in y" % x)
else:
    y.append(x)
the benefit of this is that the clashing variable will be reported.
A:
You could process the list to create a known-to-be-unique copy:
def make_unique(seq):
    t = type(seq)
    seen = set()
    return t(c for c in seq if not (c in seen or seen.add(c)))

Or if the seq elements are not hashable:
def unique1(seq):
    t = type(seq)
    seen = []
    return t(c for c in seq if not (c in seen or seen.append(c)))
And this will keep the items in order (omitting duplicates, of course).
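For example (added for illustration), applied to a list and a tuple:
print make_unique([3, 1, 3, 2, 1])   # -> [3, 1, 2]
print make_unique((1, 1, 2))         # -> (1, 2)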
A:
I would use this:
mylist = [1,2,3,4]
is_unique = all(mylist.count(x) == 1 for x in mylist)
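Note that mylist.count(x) inside the generator makes this O(n^2). If you also want to know which elements repeat, a single counting pass does it (a sketch; collections.Counter needs Python 2.7+, on older versions a plain dict of counts works the same way):
from collections import Counter

counts = Counter(mylist)
duplicates = [value for value, n in counts.items() if n > 1]
is_unique = not duplicates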
|
What's the most pythonic way to ensure that all elements of a list are different?
|
I have a list in Python that I generate as part of the program. I have a strong assumption that these are all different, and I check this with an assertion.
This is the way I do it now:
If there are two elements:
try:
assert(x[0] != x[1])
except:
print debug_info
raise Exception("throw to caller")
If there are three:
try:
assert(x[0] != x[1])
assert(x[0] != x[2])
assert(x[1] != x[2])
except:
print debug_info
raise Exception("throw to caller")
And if I ever have to do this with four elements I'll go crazy.
Is there a better way to ensure that all the elements of the list are unique?
|
[
"Maybe something like this:\nif len(x) == len(set(x)):\n print \"all elements are unique\"\nelse:\n print \"elements are not unique\"\n\n",
"The most popular answers are O(N) (good!-) but, as @Paul and @Mark point out, they require the list's items to be hashable. Both @Paul and @Mark's proposed approaches for unhashable items are general but take O(N squared) -- i.e., a lot.\nIf your list's items are not hashable but are comparable, you can do better... here's an approach that always work as fast as feasible given the nature of the list's items.\nimport itertools\n\ndef allunique(L):\n # first try sets -- fastest, if all items are hashable\n try:\n return len(L) == len(set(L))\n except TypeError:\n pass\n # next, try sort -- second fastest, if items are comparable\n try:\n L1 = sorted(L)\n except TypeError:\n pass\n else:\n return all(len(list(g))==1 for k, g in itertools.groupby(L1))\n # fall back to the slowest but most general approach\n return all(v not in L[i+1:] for i, L in enumerate(L))\n\nThis is O(N) where feasible (all items hashable), O(N log N) as the most frequent fallback (some items unhashable, but all comparable), O(N squared) where inevitable (some items unhashable, e.g. dicts, and some non-comparable, e.g. complex numbers).\nInspiration for this code comes from an old recipe by the great Tim Peters, which differed by actually producing a list of unique items (and also was so far ago that set was not around -- it had to use a dict...!-), but basically faced identical issues.\n",
"How about this:\nif len(x) != len(set(x)):\n raise Exception(\"throw to caller\")\n\nThis assumes that elements in x are hashable.\n",
"Hopefully all the items in your sequence are immutable -- if not, you will not be able to call set on the sequence.\n>>> set( ([1,2], [3,4]) )\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: unhashable type: 'list'\n\nIf you do have mutable items, you can't hash the items and you will pretty much have to repeatedly check through the list:\ndef isUnique(lst):\n for i,v in enumerate(lst):\n if v in lst[i+1:]:\n return False\n return True\n\n\n>>> isUnique( ([1,2], [3,4]) )\nTrue\n>>> isUnique( ([1,2], [3,4], [1,2]) )\nFalse\n\n",
"As you build the list you can check to see if the value already exists, e.g:\nif x in y:\n raise Exception(\"Value %s already in y\" % x)\nelse:\n y.append(x)\n\nthe benefit of this is that the clashing variable will be reported.\n",
"You could process the list to create a known-to-be-unique copy:\ndef make_unique(seq): \n t = type(seq) \n seen = set()\n return t(c for c in seq if not (c in seen or seen.add(c)))\n\nOr if the seq elements are not hashable:\ndef unique1(seq):\n t = type(seq) \n seen = [] \n return t(c for c in seq if not (c in seen or seen.append(c)))\n\nAnd this will keep the items in order (omitting duplicates, of course).\n",
"I would use this:\nmylist = [1,2,3,4]\nis_unique = all(mylist.count(x) == 1 for x in mylist)\n\n"
] |
[
26,
18,
7,
2,
1,
0,
0
] |
[] |
[] |
[
"list",
"python",
"unique"
] |
stackoverflow_0001501118_list_python_unique.txt
|
Q:
Sanitising user input using Python
What is the best way to sanitize user input for a Python-based web application? Is there a single function to remove HTML characters and any other necessary character combinations to prevent an XSS or SQL injection attack?
A:
Here is a snippet that will remove all tags not on the whitelist, and all tag attributes not on the attributes whitelist (so you can't use onclick).
It is a modified version of http://www.djangosnippets.org/snippets/205/, with the regex on the attribute values to prevent people from using href="javascript:...", and other cases described at http://ha.ckers.org/xss.html.
(e.g. <a href="ja	vascript:alert('hi')"> or <a href="ja vascript:alert('hi')">, etc.)
As you can see, it uses the (awesome) BeautifulSoup library.
import re
from urlparse import urljoin
from BeautifulSoup import BeautifulSoup, Comment
def sanitizeHtml(value, base_url=None):
rjs = r'[\s]*(&#x.{1,7})?'.join(list('javascript:'))
rvb = r'[\s]*(&#x.{1,7})?'.join(list('vbscript:'))
re_scripts = re.compile('(%s)|(%s)' % (rjs, rvb), re.IGNORECASE)
validTags = 'p i strong b u a h1 h2 h3 pre br img'.split()
validAttrs = 'href src width height'.split()
urlAttrs = 'href src'.split() # Attributes which should have a URL
soup = BeautifulSoup(value)
for comment in soup.findAll(text=lambda text: isinstance(text, Comment)):
# Get rid of comments
comment.extract()
for tag in soup.findAll(True):
if tag.name not in validTags:
tag.hidden = True
attrs = tag.attrs
tag.attrs = []
for attr, val in attrs:
if attr in validAttrs:
val = re_scripts.sub('', val) # Remove scripts (vbs & js)
if attr in urlAttrs:
val = urljoin(base_url, val) # Calculate the absolute url
tag.attrs.append((attr, val))
return soup.renderContents().decode('utf8')
As the other posters have said, pretty much all Python db libraries take care of SQL injection, so this should pretty much cover you.
A:
Edit: bleach is a wrapper around html5lib which makes it even easier to use as a whitelist-based sanitiser.
html5lib comes with a whitelist-based HTML sanitiser - it's easy to subclass it to restrict the tags and attributes users are allowed to use on your site, and it even attempts to sanitise CSS if you're allowing use of the style attribute.
Here's how I'm using it in my Stack Overflow clone's sanitize_html utility function:
http://code.google.com/p/soclone/source/browse/trunk/soclone/utils/html.py
I've thrown all the attacks listed in ha.ckers.org's XSS Cheatsheet (which are handily available in XML format) at it after performing Markdown to HTML conversion using python-markdown2, and it seems to have held up ok.
The WMD editor component which Stackoverflow currently uses is a problem, though - I actually had to disable JavaScript in order to test the XSS Cheatsheet attacks, as pasting them all into WMD ended up giving me alert boxes and blanking out the page.
A:
The best way to prevent XSS is not to try and filter everything, but rather to simply do HTML entity encoding. For example, automatically turn < into <. This is the ideal solution assuming you don't need to accept any HTML input (outside of forum/comment areas where it is used as markup, it should be pretty rare to need to accept HTML); there are so many permutations via alternate encodings that anything but an ultra-restrictive whitelist (a-z, A-Z, 0-9 for example) is going to let something through.
SQL Injection, contrary to other opinion, is still possible, if you are just building out a query string. For example, if you are just concatenating an incoming parameter onto a query string, you will have SQL Injection. The best way to protect against this is also not filtering, but rather to religiously use parameterized queries and NEVER concatenate user input.
This is not to say that filtering isn't still a best practice, but in terms of SQL injection and XSS, you will be far more protected if you religiously use parameterized queries and HTML entity encoding.
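To make that concrete, here is a minimal sketch of a parameterized query with the standard sqlite3 module (the database file and the users table are hypothetical; other DB-API drivers use %s instead of ? as the placeholder):
import sqlite3

conn = sqlite3.connect('app.db')          # hypothetical database file
name = raw_input('name: ')                # untrusted user input

# Bad: concatenation -- a quote inside `name` can break out of the statement
# conn.execute("SELECT * FROM users WHERE name = '" + name + "'")

# Good: parameterized -- the driver quotes the value itself
for row in conn.execute("SELECT * FROM users WHERE name = ?", (name,)):
    print row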
A:
Jeff Atwood himself described how StackOverflow.com sanitizes user input (in non-language-specific terms) on the Stack Overflow blog: https://blog.stackoverflow.com/2008/06/safe-html-and-xss/
However, as Justin points out, if you use Django templates or something similar then they probably sanitize your HTML output anyway.
SQL injection also shouldn't be a concern. All of Python's database libraries (MySQLdb, cx_Oracle, etc) always sanitize the parameters you pass. These libraries are used by all of Python's object-relational mappers (such as Django models), so you don't need to worry about sanitation there either.
A:
I don't do web development much any longer, but when I did, I did something like so:
When no parsing is supposed to happen, I usually just escape the data to not interfere with the database when I store it, and escape everything I read up from the database to not interfere with html when I display it (cgi.escape() in python).
Chances are, if someone tried to input html characters or stuff, they actually wanted that to be displayed as text anyway. If they didn't, well tough :)
In short always escape what can affect the current target for the data.
When I did need some parsing (markup or whatever) I usually tried to keep that language in a non-intersecting set with html so I could still just store it suitably escaped (after validating for syntax errors) and parse it to html when displaying without having to worry about the data the user put in there interfering with your html.
See also Escaping HTML
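For the escaping step, a minimal example with cgi.escape from the Python 2 standard library (quote=True also encodes double quotes, so the result is safe inside attribute values):
import cgi

user_text = '<script>alert("hi")</script>'
print cgi.escape(user_text, quote=True)
# <script>alert("hi")</script>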
A:
If you are using a framework like django, the framework can easily do this for you using standard filters. In fact, I'm pretty sure django automatically does it unless you tell it not to.
Otherwise, I would recommend using some sort of regex validation before accepting inputs from forms. I don't think there's a silver bullet for your problem, but using the re module, you should be able to construct what you need.
A:
To sanitize a string input which you want to store to the database (for example a customer name) you need either to escape it or plainly remove any quotes (', ") from it. This effectively prevents classical SQL injection which can happen if you are assembling an SQL query from strings passed by the user.
For example (if it is acceptable to remove quotes completely):
datasetName = datasetName.replace("'","").replace('"',"")
|
Sanitising user input using Python
|
What is the best way to sanitize user input for a Python-based web application? Is there a single function to remove HTML characters and any other necessary character combinations to prevent an XSS or SQL injection attack?
|
[
"Here is a snippet that will remove all tags not on the white list, and all tag attributes not on the attribues whitelist (so you can't use onclick).\nIt is a modified version of http://www.djangosnippets.org/snippets/205/, with the regex on the attribute values to prevent people from using href=\"javascript:...\", and other cases described at http://ha.ckers.org/xss.html.\n(e.g. <a href=\"ja	vascript:alert('hi')\"> or <a href=\"ja vascript:alert('hi')\">, etc.)\nAs you can see, it uses the (awesome) BeautifulSoup library.\nimport re\nfrom urlparse import urljoin\nfrom BeautifulSoup import BeautifulSoup, Comment\n\ndef sanitizeHtml(value, base_url=None):\n rjs = r'[\\s]*(&#x.{1,7})?'.join(list('javascript:'))\n rvb = r'[\\s]*(&#x.{1,7})?'.join(list('vbscript:'))\n re_scripts = re.compile('(%s)|(%s)' % (rjs, rvb), re.IGNORECASE)\n validTags = 'p i strong b u a h1 h2 h3 pre br img'.split()\n validAttrs = 'href src width height'.split()\n urlAttrs = 'href src'.split() # Attributes which should have a URL\n soup = BeautifulSoup(value)\n for comment in soup.findAll(text=lambda text: isinstance(text, Comment)):\n # Get rid of comments\n comment.extract()\n for tag in soup.findAll(True):\n if tag.name not in validTags:\n tag.hidden = True\n attrs = tag.attrs\n tag.attrs = []\n for attr, val in attrs:\n if attr in validAttrs:\n val = re_scripts.sub('', val) # Remove scripts (vbs & js)\n if attr in urlAttrs:\n val = urljoin(base_url, val) # Calculate the absolute url\n tag.attrs.append((attr, val))\n\n return soup.renderContents().decode('utf8')\n\nAs the other posters have said, pretty much all Python db libraries take care of SQL injection, so this should pretty much cover you.\n",
"Edit: bleach is a wrapper around html5lib which makes it even easier to use as a whitelist-based sanitiser.\nhtml5lib comes with a whitelist-based HTML sanitiser - it's easy to subclass it to restrict the tags and attributes users are allowed to use on your site, and it even attempts to sanitise CSS if you're allowing use of the style attribute.\nHere's now I'm using it in my Stack Overflow clone's sanitize_html utility function:\nhttp://code.google.com/p/soclone/source/browse/trunk/soclone/utils/html.py\nI've thrown all the attacks listed in ha.ckers.org's XSS Cheatsheet (which are handily available in XML format at it after performing Markdown to HTML conversion using python-markdown2 and it seems to have held up ok.\nThe WMD editor component which Stackoverflow currently uses is a problem, though - I actually had to disable JavaScript in order to test the XSS Cheatsheet attacks, as pasting them all into WMD ended up giving me alert boxes and blanking out the page.\n",
"The best way to prevent XSS is not to try and filter everything, but rather to simply do HTML Entity encoding. For example, automatically turn < into <. This is the ideal solution assuming you don't need to accept any html input (outside of forum/comment areas where it is used as markup, it should be pretty rare to need to accept HTML); there are so many permutations via alternate encodings that anything but an ultra-restrictive whitelist (a-z,A-Z,0-9 for example) is going to let something through.\nSQL Injection, contrary to other opinion, is still possible, if you are just building out a query string. For example, if you are just concatenating an incoming parameter onto a query string, you will have SQL Injection. The best way to protect against this is also not filtering, but rather to religiously use parameterized queries and NEVER concatenate user input.\nThis is not to say that filtering isn't still a best practice, but in terms of SQL Injection and XSS, you will be far more protected if you religiously use Parameterize Queries and HTML Entity Encoding.\n",
"Jeff Atwood himself described how StackOverflow.com sanitizes user input (in non-language-specific terms) on the Stack Overflow blog: https://blog.stackoverflow.com/2008/06/safe-html-and-xss/\nHowever, as Justin points out, if you use Django templates or something similar then they probably sanitize your HTML output anyway.\nSQL injection also shouldn't be a concern. All of Python's database libraries (MySQLdb, cx_Oracle, etc) always sanitize the parameters you pass. These libraries are used by all of Python's object-relational mappers (such as Django models), so you don't need to worry about sanitation there either.\n",
"I don't do web development much any longer, but when I did, I did something like so:\nWhen no parsing is supposed to happen, I usually just escape the data to not interfere with the database when I store it, and escape everything I read up from the database to not interfere with html when I display it (cgi.escape() in python).\nChances are, if someone tried to input html characters or stuff, they actually wanted that to be displayed as text anyway. If they didn't, well tough :)\nIn short always escape what can affect the current target for the data.\nWhen I did need some parsing (markup or whatever) I usually tried to keep that language in a non-intersecting set with html so I could still just store it suitably escaped (after validating for syntax errors) and parse it to html when displaying without having to worry about the data the user put in there interfering with your html.\nSee also Escaping HTML\n",
"If you are using a framework like django, the framework can easily do this for you using standard filters. In fact, I'm pretty sure django automatically does it unless you tell it not to.\nOtherwise, I would recommend using some sort of regex validation before accepting inputs from forms. I don't think there's a silver bullet for your problem, but using the re module, you should be able to construct what you need.\n",
"To sanitize a string input which you want to store to the database (for example a customer name) you need either to escape it or plainly remove any quotes (', \") from it. This effectively prevents classical SQL injection which can happen if you are assembling an SQL query from strings passed by the user.\nFor example (if it is acceptable to remove quotes completely):\ndatasetName = datasetName.replace(\"'\",\"\").replace('\"',\"\")\n\n"
] |
[
29,
23,
13,
6,
4,
0,
0
] |
[] |
[] |
[
"python",
"xss"
] |
stackoverflow_0000016861_python_xss.txt
|
Q:
Dynamic column formatting in SQL - and a backend to store the formatting
I'm trying to create a system in Python in which one can select a number of rows from a set of tables, which are to be formatted in a user-defined way. Let's say a table has a set of columns, some of which include a date or timestamp value. The user-defined format for each column should be stored in another table, and queried and applied on the main query at runtime.
Let me give you an example: There are different ways of formatting a date column, e.g. using
SELECT to_char(column, 'YYYY-MM-DD') FROM table;
in PostgreSQL.
For example, I'd like the second parameter of the to_char() builtin to be queried dynamically from another table at runtime, and then applied if it has a value.
Reading the definition from a table as such is not that much of a problem; the harder part is creating a database schema which would receive data from a user interface, from which a user can select which formatting instructions to apply to the different columns. The user should be able to pick the set of columns to be included in the query, as well as a user-defined format for each column.
I've been thinking about doing this in an elegant and efficient way for some days now, but to no avail. Having the user put their desired definition in a text field and including it in a query would pretty much generate an invitation for SQL injection attacks (although I could use escape() functions), and storing every possible combination doesn't seem feasible to me either.
A:
It seems to me a stored procedure or a sub-select would work well here, though I haven't tested it. Let's say you store a date_format for each user in the users table.
SELECT to_char((SELECT date_format FROM users WHERE users.id=123), column) FROM table;
Your mileage may vary.
A:
Pull the dates out as Unix timestamps and format them in Python:
SELECT DATE_PART('epoch', TIMESTAMP(my_col)) FROM my_table;
my_date = datetime.datetime.fromtimestamp(row[0]) # Or equivalent for your toolkit
I've found a couple of advantages to this approach: unix timestamps are the most space-efficient common format (this approach is effectively language neutral) and because the language you're querying the database in is richer than the underlying database, giving you plenty of options if you start wanting to do friendlier formatting like "today", "yesterday", "last week", "June 23rd".
I don't know what sort of application you're developing but if it's something like a web app which will be used by multiple people I'd also consider storing your database values in UTC so you can apply user-specific timezone settings when formatting without having to consider them for all of your database operations.
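A minimal sketch of the Python side, assuming row[0] holds the epoch value returned by the query and the per-user format string has been read from your formats table:
import datetime

epoch = row[0]                        # from the DATE_PART('epoch', ...) query
user_format = '%Y-%m-%d %H:%M'        # hypothetical per-user format read from the DB

dt = datetime.datetime.utcfromtimestamp(epoch)   # keep the value in UTC
print dt.strftime(user_format)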
|
Dynamic column formatting in SQL - and a backend to store the formatting
|
I'm trying to create a system in Python in which one can select a number of rows from a set of tables, which are to be formatted in a user-defined way. Let's say the table a has a set of columns, some of which include a date or timestamp value. The user-defined format for each column should be stored in another table, and queried and applied on the main query at runtime.
Let me give you an example: There are different ways of formatting a date column, e.g. using
SELECT to_char(column, 'YYYY-MM-DD') FROM table;
in PostgreSQL.
For example, I'd like the second parameter of the to_char() builtin to be queried dynamically from another table at runtime, and then applied if it has a value.
Reading the definition from a table as such is not that much of a problem, rather than creating a database scheme which would receive data from a user interface from which a user can select which formatting instructions to apply to the different columns. The user should be able to pick the user's set of columns to be included in the user's query, as well as the user's user defined formatting for each column.
I've been thinking about doing this in an elegant and efficient way for some days now, but to no avail. Having the user put in the user's desired definition in a text field and including it in a query would pretty much generate an invitation for SQL injection attacks (although I could use escape() functions), and storing every possible combination doesn't seem feasible to me either.
|
[
"It seems to me a stored procedure or a sub-select would work well here, though I haven't tested it. Let's say you store a date_format for each user in the users table.\nSELECT to_char((SELECT date_format FROM users WHERE users.id=123), column) FROM table;\n\nYour mileage may vary.\n",
"Pull the dates out as Unix timestamps and format them in Python:\nSELECT DATE_PART('epoch', TIMESTAMP(my_col)) FROM my_table;\n\nmy_date = datetime.datetime.fromtimestamp(row[0]) # Or equivalent for your toolkit\n\nI've found a couple of advantages to this approach: unix timestamps are the most space-efficient common format (this approach is effectively language neutral) and because the language you're querying the database in is richer than the underlying database, giving you plenty of options if you start wanting to do friendlier formatting like \"today\", \"yesterday\", \"last week\", \"June 23rd\".\nI don't know what sort of application you're developing but if it's something like a web app which will be used by multiple people I'd also consider storing your database values in UTC so you can apply user-specific timezone settings when formatting without having to consider them for all of your database operations.\n"
] |
[
0,
0
] |
[] |
[] |
[
"database",
"database_design",
"postgresql",
"python",
"sql"
] |
stackoverflow_0001503621_database_database_design_postgresql_python_sql.txt
|
Q:
Fixture loading works with loaddata but fails silently in unit test in Django
I can load the fixture file in my django application by using loaddata:
manage.py loaddata palamut
The fixture palamut.yaml is in the directory palamut/fixtures/
I have a unit test module service_tests.py in palamut/tests/. Its content is here:
import unittest
from palamut.models import *
from palamut.service import *
from palamut.pforms import *
class ServiceTest(unittest.TestCase):
fixtures = ['palamut.yaml']
def test_convert_vensim(self):
game_definition = GameDefinition.objects.get(pk=1)
This unit test gives the following error:
DoesNotExist: GameDefinition matching query does not exist.
I debugged the script, and found out that the fixture is not loaded in the unit test module.
Do you have any suggestions about the cause of this behavior?
By the way, test logs don't contain anything related to fixture loading.
A:
Your TestCase should be an instance of django.test.TestCase, not unittest.TestCase
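For reference, a minimal sketch of the corrected test module -- only the import and the base class change, and the fixtures attribute is then honoured by the test runner (a trivial assertion is added here just for illustration; note that .yaml fixtures also require PyYAML to be installed):
from django.test import TestCase

from palamut.models import GameDefinition

class ServiceTest(TestCase):
    fixtures = ['palamut.yaml']

    def test_convert_vensim(self):
        game_definition = GameDefinition.objects.get(pk=1)
        self.assertEqual(game_definition.pk, 1)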
|
Fixture loading works with loaddata but fails silently in unit test in Django
|
I can load the fixture file in my django application by using loaddata:
manage.py loaddata palamut
The fixture palamut.yaml is in the directory palamut/fixtures/
I have a unit test module service_tests.py in palamut/tests/. Its content is here:
import unittest
from palamut.models import *
from palamut.service import *
from palamut.pforms import *
class ServiceTest(unittest.TestCase):
fixtures = ['palamut.yaml']
def test_convert_vensim(self):
game_definition = GameDefinition.objects.get(pk=1)
This unit test gives the following error:
DoesNotExist: GameDefinition matching query does not exist.
I debugged the script, and found out that the fixture is not loaded in the unit test module.
Do you have any suggestions about the cause of this behavior?
By the way, test logs don't contain anything related to fixture loading.
|
[
"Your TestCase should be an instance of django.test.TestCase, not unittest.TestCase\n"
] |
[
9
] |
[] |
[] |
[
"django",
"fixtures",
"python",
"unit_testing"
] |
stackoverflow_0001504255_django_fixtures_python_unit_testing.txt
|
Q:
Fitting a bimodal distribution to a set of values
Given a 1D array of values, what is the simplest way to figure out what the best fit bimodal distribution to it is, where each 'mode' is a normal distribution? Or in other words, how can you find the combination of two normal distributions that bests reproduces the 1D array of values?
Specifically, I'm interested in implementing this in python, but answers don't have to be language specific.
Thanks!
A:
What you are trying to do is called a Gaussian Mixture model. The standard approach to solving this is using Expectation Maximization; scipy svn includes a section on machine learning and EM called scikits. I use it a fair bit.
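That scikits code has since grown into scikit-learn, so as a rough sketch of the same EM idea with the current API (assuming values is a 1D NumPy array of your data):
import numpy as np
from sklearn.mixture import GaussianMixture

X = values.reshape(-1, 1)                  # the estimator expects a 2D array
gmm = GaussianMixture(n_components=2).fit(X)

print(gmm.means_.ravel())                  # the two fitted means
print(np.sqrt(gmm.covariances_.ravel()))   # the two fitted standard deviations
print(gmm.weights_)                        # mixing proportions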
A:
I suggest using the awesome scipy package.
It provides a few methods for optimisation.
There's a big fat caveat with simply applying a pre-defined least square fit or something along those lines.
Here are a few problems you will run into:
Noise larger than second/both peaks.
Partial peak - your data is cut off at one of the borders.
Sampling - width of peaks are smaller than your sampled data.
It isn't normal - you'll get some result ...
Overlap - If peaks overlap you'll find that often one peak is fitted correctly but the second will approach zero...
|
Fitting a bimodal distribution to a set of values
|
Given a 1D array of values, what is the simplest way to figure out what the best fit bimodal distribution to it is, where each 'mode' is a normal distribution? Or in other words, how can you find the combination of two normal distributions that bests reproduces the 1D array of values?
Specifically, I'm interested in implementing this in python, but answers don't have to be language specific.
Thanks!
|
[
"What you are trying to do is called a Gaussian Mixture model. The standard approach to solving this is using Expectation Maximization, scipy svn includes a section on machine learning and em called scikits. I use it a a fair bit.\n",
"I suggest using the awesome scipy package.\nIt provides a few methods for optimisation.\nThere's a big fat caveat with simply applying a pre-defined least square fit or something along those lines. \nHere are a few problems you will run into:\n\nNoise larger than second/both peaks.\nPartial peak - your data is cut of at one of the borders.\nSampling - width of peaks are smaller than your sampled data.\nIt isn't normal - you'll get some result ...\nOverlap - If peaks overlap you'll find that often one peak is fitted correctly but the second will apporach zero...\n\n"
] |
[
4,
0
] |
[] |
[] |
[
"algorithm",
"python"
] |
stackoverflow_0001504378_algorithm_python.txt
|
Q:
Python as a Windows Watchdog
Hi I'm considering using Python to make a watchdog app on Windows XP that will perform the following actions:
Restart Windows at a given time.
Start an exe application.
Run a timer to check: is an application still running
I know of the existence of PyWin32, but I hear that the API is not complete. So my question is can Python perform these actions on Windows?
A:
Since you only want this to work on Windows, the easiest way to do that is to use os.system and make system-specific calls from within a Python program.
Use the built in Windows tool to run programs at a particular time.
Use shutdown -r to reboot Windows.
Use tasklist to list all processes, then search that list. If you need to manipulate a process as well, the best way I know of is the COM method described here.
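A rough sketch of all three pieces, with a hypothetical myapp.exe; the shutdown flags below are the Windows XP syntax and may need adjusting for your setup:
import os
import subprocess
import time

def reboot():
    # Restart Windows now (XP: -r = reboot, -t 0 = no delay)
    os.system('shutdown -r -t 0')

def start_app():
    # Start the exe (path is hypothetical)
    return subprocess.Popen([r'C:\path\to\myapp.exe'])

def is_running(image_name='myapp.exe'):
    # Look for the process name in the tasklist output
    output = subprocess.Popen(['tasklist'], stdout=subprocess.PIPE).communicate()[0]
    return image_name.lower() in output.lower()

while True:
    if not is_running():
        start_app()
    time.sleep(60)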
|
Python as a Windows Watchdog
|
Hi I'm considering using Python to make a watchdog app on Windows XP that will perform the following actions:
Restart Windows at a given time.
Start an exe application.
Run a timer to check: is an application still running
I know of the existence of PyWin32, but I hear that the API is not complete. So my question is can Python perform these actions on Windows?
|
[
"Since you only want this to work on Windows, the easiest way to do that is to use os.system and make system-specific calls from within a Python program. \nUse the built in Windows tool to run programs at a particular time.\nUse shutdown -r to reboot Windows. \nUse tasklist to list all processes, then search that list. If you need to manipulate a process as well, the best way I know of is the COM method described here.\n"
] |
[
3
] |
[] |
[] |
[
"python",
"winapi"
] |
stackoverflow_0001504794_python_winapi.txt
|
Q:
Communicate with backend job from web server or web page
I have an "appliance" (for lack of better description) running linux.
Currently I ssh into the box to launch jobs. This isn't friendly enough for my users, so I'm putting together a simple web UI to launch the script. A job runs for anywhere from 10 seconds to several hours. The web UI needs to reflect the status of the job.
I've solved similar problems in the past by running a daemon on the server that watches a spool directory (or db table) for new job requests, spawns a process, monitors the process, and provides info for the web UI in a db table or status file. The web UI then drops job requests into a spool dir (db) and occasionally checks the status file (db). This might be overkill for this task.
For the current task, I am considering spawning the job from the cgi and occasionally checking a status file that the job writes as it progresses or exits.
My question: is there a better (simpler/faster-to-write/more robust) way to do this? Are there existing patterns or tools that I should know about?
(Python solutions are ideal.)
Thanks.
A:
I do this in a number of projects. A web-app (mostly Python/CGI) that spawns a separate python script (using subprocess) which instantly daemonizes itself to do the work. The web-app then continues to issue AJAX requests to check on the daemon process progress (I use simple txt files for communication, database would probably be better). One nice touch is to have the daemon email the end user once it finishes (with a link to retrieve results). This way the user can close their web browser on those jobs that take hours.
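A stripped-down sketch of the launch and poll scripts (all paths and file names are hypothetical, the worker is expected to write its progress to /tmp/long_job.status as it goes, and daemonization, locking and error handling are left out):
# launch.cgi -- spawns the worker, then returns immediately
import subprocess
subprocess.Popen(['python', '/opt/jobs/long_job.py'])   # the worker daemonizes itself
print "Content-Type: text/plain\n"
print "started"

# status.cgi -- polled by the page via AJAX; echoes whatever the worker last wrote
print "Content-Type: text/plain\n"
print open('/tmp/long_job.status').read()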
A:
I'm not sure if I understood your problem correctly, but I assume you have multiple "jobs" that can run simultaneously and want a web page to show whether they are complete or not?
When launching a job, the web page (Python & mod_wsgi, for example) would launch a Python script that enters the job into, say, an sqlite database and runs the job; once the job completes, the script updates the entry for the job so that it is marked as complete.
The status page would just show the stuff from the sqlite.
What you want to put in to the DB in addition to the job ID and perhaps start/end times depends on what you want to show on your job status web page
On a sidenote - if the "jobs" are compilations, meet Hudson
|
Communicate with backend job from web server or web page
|
I have an "appliance" (for lack of better description) running linux.
Currently I ssh into the box to launch jobs. This isn't friendly enough for my users, so I'm putting together a simple web UI to launch the script. A job runs for anywhere from 10 seconds to several hours. The web UI needs to reflect the status of the job.
I've solved similar problems in the past by running a daemon on the server that watches a spool directory (or db table) for new job requests, spawns a process, monitors the process, and provides info for the web UI in a db table or status file. The web UI then drops job requests into a spool dir (db) and occasionally checks the status file (db). This might be overkill for this task.
For the current task, I am considering spawning the job from the cgi and occasionally checking a status file that the job writes as it progresses or exits.
My question: is there a better (simpler/faster-to-write/more robust) way to do this? Are there existing patterns or tools that I should know about?
(Python solutions are ideal.)
Thanks.
|
[
"I do this in a number of projects. A web-app (mostly Python/CGI) that spawns a separate python script (using subprocess) which instantly daemonizes itself to do the work. The web-app then continues to issue AJAX requests to check on the daemon process progress (I use simple txt files for communication, database would probably be better). One nice touch is to have the daemon email the end user once it finishes (with a link to retrieve results). This way the user can close their web browser on those jobs that take hours.\n",
"I'm not sure if I understood your problem correctly, but I assume you have multiple \"jobs\" that can run simultaneously and want them to show on web page whether they are complete or not?\nWhen launching a job the web page (python & mod_wsgi for example) would launch a python script that would enter the job into, let's say sqlite database and run the job, once the job completes, the script updates the entry for the job so that it is marked as complete\nThe status page would just show the stuff from the sqlite.\nWhat you want to put in to the DB in addition to the job ID and perhaps start/end times depends on what you want to show on your job status web page\nOn a sidenote - if the \"jobs\" are compilations, meet Hudson\n"
] |
[
2,
0
] |
[] |
[] |
[
"backend",
"linux",
"python"
] |
stackoverflow_0001504729_backend_linux_python.txt
|
Q:
How to markup form fields with in Django
I wasn't able to find a way to identify the type of a field in a django template. My solution was to create a simple filter to access the field and widget class names. I've included the code below in case it's helpful for someone else.
Is there a better approach?
## agency/tagutils/templatetags/fieldtags.py
###############################################################
from django import template
register = template.Library()
@register.filter(name='field_type')
def field_type(value):
return value.field.__class__.__name__
@register.filter(name='widget_type')
def widget_type(value):
return value.field.widget.__class__.__name__
## client/project/settings.py
###############################################################
INSTALLED_APPS = (
# ...
'agency.tagutils',
)
## client/project/templates/project/field_snippet.html
###############################################################
{% load fieldtags %}
<div class="field {{ field|field_type }} {{ field|widget_type }} {{ field.name }}">
{{ field.errors }}
<div class="form_label">
{{ field.label_tag }}
</div>
<div class="form_field">
{{ field }}
</div>
</div>
## sample output html
###############################################################
<div class="field CharField TextInput family_name">
<div class="form_label">
<label for="id_family_name">Family name</label>
</div>
<div class="form_field">
<input id="id_family_name" type="text" name="family_name" maxlength="64" />
</div>
</div>
A:
class MyForm(forms.Form):
myfield = forms.CharField(widget=forms.TextInput(attrs={'class' : 'myfieldclass'}))
or, with a ModelForm
class MyForm(forms.ModelForm):
class Meta:
model = MyModel
widgets = {
'myfield': forms.TextInput(attrs={'class': 'myfieldclass'}),
}
or, when you don't want to redefine the widget
class MyForm(forms.ModelForm):
class Meta:
model = MyModel
def __init__(self, *args, **kwargs):
super(MyForm, self).__init__(*args, **kwargs)
self.fields['myfield'].widget.attrs.update({'class' : 'myfieldclass'})
render normally with {{ form }}
|
How to markup form fields with in Django
|
I wasn't able to find a way to identify the type of a field in a django template. My solution was to create a simple filter to access the field and widget class names. I've included the code below in case it's helpful for someone else.
Is there a better approach?
## agency/tagutils/templatetags/fieldtags.py
###############################################################
from django import template
register = template.Library()
@register.filter(name='field_type')
def field_type(value):
return value.field.__class__.__name__
@register.filter(name='widget_type')
def widget_type(value):
return value.field.widget.__class__.__name__
## client/project/settings.py
###############################################################
INSTALLED_APPS = (
# ...
'agency.tagutils',
)
## client/project/templates/project/field_snippet.html
###############################################################
{% load fieldtags %}
<div class="field {{ field|field_type }} {{ field|widget_type }} {{ field.name }}">
{{ field.errors }}
<div class="form_label">
{{ field.label_tag }}
</div>
<div class="form_field">
{{ field }}
</div>
</div>
## sample output html
###############################################################
<div class="field CharField TextInput family_name">
<div class="form_label">
<label for="id_family_name">Family name</label>
</div>
<div class="form_field">
<input id="id_family_name" type="text" name="family_name" maxlength="64" />
</div>
</div>
|
[
"class MyForm(forms.Form):\n myfield = forms.CharField(widget=forms.TextInput(attrs={'class' : 'myfieldclass'}))\n\nor, with a ModelForm\nclass MyForm(forms.ModelForm):\n class Meta:\n model = MyModel\n widgets = {\n 'myfield': forms.TextInput(attrs={'class': 'myfieldclass'}),\n }\n\nor, when you don't want to redefine the widget\nclass MyForm(forms.ModelForm):\n class Meta:\n model = MyModel\n\n def __init__(self, *args, **kwargs):\n super(MyForm, self).__init__(*args, **kwargs)\n self.fields['myfield'].widget.attrs.update({'class' : 'myfieldclass'})\n\nrender normally with {{ form }}\n"
] |
[
31
] |
[] |
[] |
[
"django",
"django_forms",
"django_models",
"django_templates",
"python"
] |
stackoverflow_0001453488_django_django_forms_django_models_django_templates_python.txt
|
Q:
PHP Import Foreign Class' Method into MyClass
Wondering if this is possible in PHP Land:
Let's say I have a class as follows:
class myClass{
var $myVar;
...
myMethod(){
$this->myVar = 10;
}
}
and another class:
class anotherClass {
...
addFive(){
$this->myVar += 5;
}
}
The 'anotherClass' is 3500 lines long and I just want the single 'addFive' method to use in myClass.
Is there a way I can import the function and be able to call it in my class and the $this would reference the myClass object?
Is this good/bad practice?
(optional) How does this work in Python? (Just curious as I'm starting to learn Python)
A:
A better approach would be to move the complex method into its own class. Then both of your classes can instantiate it, pass any necessary data, and call the method.
A:
The easiest way to do this is have one class extend the other
class myClass extends anotherClass {
}
The myClass class now has access to all the methods of anotherClass that are public or protected.
If you only want the class to have one method of the other, or it's not practical to have one class extends from the other, there's nothing built into PHP that will allow you to do this. The concept you're looking to Google for is "Mixin", as in Mix In one class's functionality with another. There's an article or two on some patterns you could try to achieve this functionality in PHP, but I've never tried it myself.
Good idea/Bad Idea? Once you have the technique down it's convenient and useful, but costs you in performance and makes it harder for a newcomer to grok what you're doing with your code, especially (but not limited to) someone less familiar with OO concepts.
A:
If your myClass extends anotherClass it inherits all the methods and properties of anotherClass (except those marked private).
class AnotherClass {
protected $myVar;
public function addFive(){
$this->myVar += 5;
}
}
class MyClass extends AnotherClass {
public function __construct() {
$this->myVar = 0;
}
public function myMethod(){
$this->myVar = 10;
}
}
$m = new MyClass;
$m->myMethod();
$m->addFive();
var_dump($m);
prints
object(MyClass)#1 (1) {
["myVar":protected]=>
int(15)
}
|
PHP Import Foreign Class' Method into MyClass
|
Wondering if this is possible in PHP Land:
Let's say I have a class as follows:
class myClass{
var $myVar;
...
myMethod(){
$this->myVar = 10;
}
}
and another class:
class anotherClass {
...
addFive(){
$this->myVar += 5;
}
}
The 'anotherClass' is 3500 lines long and I just want the single 'addFive' method to use in myClass.
Is there a way I can import the function and be able to call it in my class and the $this would reference the myClass object?
Is this good/bad practice?
(optional) How does this work in Python? (Just curious as I'm starting to learn Python)
|
[
"A better approach would be to move the complex method into its own class. Then both of your classes can instantiate it, pass any necessary data, and call the method.\n",
"The easiest way to do this is have one class extend the other\nclass myClass extends anotherClass {\n}\n\nThe myClass class now has access to all the methods of anotherClass that are public or protected.\nIf you only want the class to have one method of the other, or it's not practical to have one class extends from the other, there's nothing built into PHP that will allow you to do this. The concept you're looking to Google for is \"Mixin\", as in Mix In one class's functionality with another. There's an article or two on some patterns you could try to achieve this functionality in PHP, but I've never tried it myself.\nGood idea/Bad Idea? Once you have the technique down it's convenient and useful, but costs you in performance and makes it harder for a newcomer to grok what you're doing with your code, especially (but not limited to) someone less familiar with OO concepts. \n",
"If your myClass extends anotherClass it inherits all the methods and properties of anotherClass (except those marked private).\nclass AnotherClass {\n protected $myVar;\n public function addFive(){\n $this->myVar += 5;\n }\n}\n\nclass MyClass extends AnotherClass {\n public function __construct() {\n $this->myVar = 0;\n }\n\n public function myMethod(){\n $this->myVar = 10;\n }\n}\n\n$m = new MyClass;\n$m->myMethod();\n$m->addFive();\nvar_dump($m);\n\nprints\nobject(MyClass)#1 (1) {\n [\"myVar\":protected]=>\n int(15)\n}\n\n"
] |
[
2,
2,
0
] |
[] |
[] |
[
"anonymous_methods",
"closures",
"oop",
"php",
"python"
] |
stackoverflow_0001505621_anonymous_methods_closures_oop_php_python.txt
|
Q:
Visual representation of nodes in Python
I have data that I want to represent visually. The actual data is a tree made up of nodes. Each node has a bunch of data associated with it, but as far as this question goes, I just want a way to represent a tree visually using Python. Any ideas?
The different solutions that popped in my head were to use a GUI library like WxPython or PyQT, or maybe even a PDF generator like ReportLab. I'm hoping there's a library out there that deals closer with data so that I don't have to think out the plotting locations of all the nodes.
A:
Not sure if this is applicable to your situation, but have you looked at graphviz?
It has decent python bindings for it and I've used it for visualizing dependencies which sometimes end up looking like trees.
A:
Instead of using graphviz directly, consider using the visualization tools included in NetworkX. The graph objects there are excellent for many purposes.
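A minimal NetworkX sketch (node names are placeholders; matplotlib does the actual drawing, and graphviz_layout via pygraphviz can be swapped in for a stricter tree layout):
import networkx as nx
import matplotlib.pyplot as plt

G = nx.DiGraph()
G.add_edges_from([('root', 'child1'), ('root', 'child2'),
                  ('child2', 'child3'), ('child2', 'child4')])

nx.draw(G, with_labels=True, node_color='lightblue')
plt.show()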
A:
Consider using a textual representation of the tree. Otherwise, I'd go with graphviz (dotty, actually).
[root]
+------child1
+------child2
+-------child3
+-------child4
To show the same tree in graphviz, put this in a text file:
digraph graphname {
root -> child1;
root -> child2;
child2 -> child3;
child2 -> child4;
}
Then run dotty on it, or your tool of choice.
|
Visual representation of nodes in Python
|
I have data that I want to represent visually. The actual data is a tree made up of nodes. Each node has a bunch of data associated with it, but as far as this question goes, I just want a way to represent a tree visually using Python. Any ideas?
The different solutions that popped in my head were to use a GUI library like WxPython or PyQT, or maybe even a PDF generator like ReportLab. I'm hoping there's a library out there that deals closer with data so that I don't have to think out the plotting locations of all the nodes.
|
[
"Not sure if this is applicable to your situation, but have you looked at graphviz? \nIt has decent python bindings for it and I've used it for visualizing dependencies which sometimes end up looking like trees.\n",
"Instead of using graphviz directly, consider using the visualization tools included in NetworkX. The graph objects there are excellent for many purposes.\n",
"Consider using a textual representation of the tree. Otherwise, I'd go with graphviz (dotty, actually).\n[root]\n+------child1\n+------child2\n +-------child3\n +-------child4\n\nTo show the same tree in graphviz, put this in a text file:\ndigraph graphname {\n root -> child1;\n root -> child2;\n child2 -> child3;\n child2 -> child4;\n}\n\nThen run dotty on it, or your tool or choice.\n"
] |
[
6,
2,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001505131_python.txt
|
Q:
Making a portable (exe) with Python 3.1?
Is there a way to make a portable/stand-alone exe for python apps?
I've read about py2exe but it doesn't support the latest version. I'm reluctant to go to 2.6 because I'd rather stay with the latest version instead of worrying about incompatibilities as soon as 2.6 becomes too outdated.
A:
cx_freeze has worked for me. Here's a link. The page claims to support 3.1. Good luck!
http://cx-freeze.sourceforge.net/
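A minimal setup.py sketch for cx_Freeze (the script name is hypothetical); run it with python setup.py build to get a build/ directory containing the exe:
from cx_Freeze import setup, Executable

setup(
    name='myapp',
    version='0.1',
    description='Stand-alone build of myapp',
    executables=[Executable('myapp.py')],
)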
|
Making a portable (exe) with Python 3.1?
|
Is there a way to make a portable/stand-alone exe for python apps?
I've read about py2exe but it doesn't support the latest version. I'm reluctant to go to 2.6 because I'd rather stay with the latest version instead of worrying about incompatibilities as soon as 2.6 becomes too outdated.
|
[
"cx_freeze has worked for me. Here's a link. The page claims to support 3.1. Good luck!\nhttp://cx-freeze.sourceforge.net/\n"
] |
[
13
] |
[] |
[] |
[
"py2exe",
"python"
] |
stackoverflow_0001505783_py2exe_python.txt
|
Q:
PyQt4 Highlighting
I'm trying to add some syntax highlighting to a text editor in PyQt4. I've found an example in the documentation which works fine when compiled from C++, but when I convert it to Python/PyQt it no longer works.
The part of the code that fails (no longer highlights anything) is:
def highlightCurrentLine(self):
extraSelections = []
if not self.isReadOnly():
selection = QTextEdit.ExtraSelection()
lineColor = QColor(Qt.yellow).lighter(160)
selection.format.setBackground(lineColor)
selection.format.setProperty(QTextFormat.FullWidthSelection, QVariant(True))
selection.cursor = self.textCursor()
selection.cursor.clearSelection()
extraSelections.append(selection)
self.setExtraSelections(extraSelections)
which is called by:
self.connect(self, SIGNAL('cursorPositionChanged()'), self.highlightCurrentLine)
Anyone have any idea why this doesn't work?
The versions I am using are: Python 2.6.2, PyQt 4.4.4
A:
Ok... turns out i wasn't going mad, i was just using an out of date version of PyQt4.
For information, the version of PyQt4 that ships with Ubuntu 9.04 is 4.4.4, but this functionality seems to require 4.5+.
I've upgraded to PyQt4 4.6 and it works fine (plus 4.6 seems to have some nice new functionality too).
A:
Save lineColor somewhere (like self.lineColor). Otherwise, Python will discard the object when the method returns and the format will use an illegal pointer.
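In other words, something along these lines inside highlightCurrentLine (a two-line sketch of that suggestion):
self.lineColor = QColor(Qt.yellow).lighter(160)   # keep a reference on the instance
selection.format.setBackground(self.lineColor)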
|
PyQt4 Highlighting
|
I'm trying to add some syntax highlighting to a text editor in PyQt4. I've found an example in the documentation which works fine when compiled from C++, but when I convert it to Python/PyQt it no longer works.
The part of the code that fails (no longer highlights anything) is:
def highlightCurrentLine(self):
extraSelections = []
if not self.isReadOnly():
selection = QTextEdit.ExtraSelection()
lineColor = QColor(Qt.yellow).lighter(160)
selection.format.setBackground(lineColor)
selection.format.setProperty(QTextFormat.FullWidthSelection, QVariant(True))
selection.cursor = self.textCursor()
selection.cursor.clearSelection()
extraSelections.append(selection)
self.setExtraSelections(extraSelections)
which is called by:
self.connect(self, SIGNAL('cursorPositionChanged()'), self.highlightCurrentLine)
Anyone have any idea why this doesn't work?
The versions I am using are: Python 2.6.2, PyQt 4.4.4
|
[
"Ok... turns out i wasn't going mad, i was just using an out of date version of PyQt4.\nFor information the version of PyQt4 that ships with Ubuntu 9.04 is 4.4.4 but this functionality seems to require 4.5+.\nI've upgraded to PyQt4 4.6 and it works fine (plus 4.6 seems to have some nice new functionality too).\n",
"Save lineColor somewhere (like self.lineColor). Otherwise, Python will discard the object when the method returns and the format will use an illegal pointer.\n"
] |
[
1,
0
] |
[] |
[] |
[
"pyqt4",
"python",
"qt4"
] |
stackoverflow_0001472044_pyqt4_python_qt4.txt
|
Q:
How to pick certain elements of x-tuple returned by a function?
I am a newbie to Python. Consider the function str.partition() which returns a 3-tuple. If I am interested in only elements 0 and 2 of this tuple, what is the best way to pick only certain elements out of such a tuple?
I can currently do either:
# Introduces "part1" variable, which is useless
(part0, part1, part2) = str.partition(' ')
Or:
# Multiple calls and statements, again redundancy
part0 = str.partition(' ')[0]
part2 = str.partition(' ')[2]
I would like to be able to do something like this, but cannot:
(part0, , part2) = str.partition(' ')
# Or:
(part0, part2) = str.partition(' ')[0, 2]
A:
Underscore is often used as a name for stuff you do not need, so something like this would work:
part0, _, part2 = str.partition(' ')
In this particular case, you could do this, but it isn't a pretty solution:
part0, part2 = str.partition(' ')[::2]
A more esoteric solution:
from operator import itemgetter
part0, part2 = itemgetter(0, 2)(str.partition(' '))
A:
Correct, you can not take several ad hoc elements out of a list of tuple in one go.
part0, part1, part2 = str.partition(' ')
Is the way to go. Don't worry about part1, if you don't need it, you don't need it. It's common to call it "dummy" or "unused" to show that it's not used.
You CAN be ugly with:
part0, part2 = str.partition(' ')[::2]
In this specific case, but that's obfuscating and not nice towards others. ;)
A:
I think a question I asked some time ago could help you:
Pythonic way to get some rows of a matrix
NumPy gives you the slice syntax you want to extract various elements using a tuple or list of indices, but I don't think you'd like to convert you list of strings to a numpy.array just to extract a few elements, so maybe you could write a helper:
def extract(lst, *indices):
return [lst[i] for i in indices]
item0, item2 = extract(str.partition(' '), 0, 2)
A:
you could also use str.split(' ', 1) instead of str.partition(' ')
you'll get back a list instead of a tuple, but you won't get the separator back
A:
This is how I would do it:
all_parts = str.partition(' ')
part0, part2 = all_parts[0], all_parts[2]
|
How to pick certain elements of x-tuple returned by a function?
|
I am a newbie to Python. Consider the function str.partition() which returns a 3-tuple. If I am interested in only elements 0 and 2 of this tuple, what is the best way to pick only certain elements out of such a tuple?
I can currently do either:
# Introduces "part1" variable, which is useless
(part0, part1, part2) = str.partition(' ')
Or:
# Multiple calls and statements, again redundancy
part0 = str.partition(' ')[0]
part2 = str.partition(' ')[2]
I would like to be able to do something like this, but cannot:
(part0, , part2) = str.partition(' ')
# Or:
(part0, part2) = str.partition(' ')[0, 2]
|
[
"Underscore is often used as a name for stuff you do not need, so something like this would work:\npart0, _, part2 = str.partition(' ')\n\nIn this particular case, you could do this, but it isn't a pretty solution:\npart0, part2 = str.partition(' ')[::2]\n\nA more esoteric solution:\nfrom operator import itemgetter\npart0, part2 = itemgetter(0, 2)(str.partition(' '))\n\n",
"Correct, you can not take several ad hoc elements out of a list of tuple in one go.\npart0, part1, part2 = str.partition(' ')\n\nIs the way to go. Don't worry about part1, if you don't need it, you don't need it. It's common to call it \"dummy\" or \"unused\" to show that it's not used.\nYou CAN be ugly with:\npart0, part2 = str.partition(' ')[::2]\n\nIn this specific case, but that's obfuscating and not nice towards others. ;)\n",
"I think a question I asked some time ago could help you:\nPythonic way to get some rows of a matrix\nNumPy gives you the slice syntax you want to extract various elements using a tuple or list of indices, but I don't think you'd like to convert you list of strings to a numpy.array just to extract a few elements, so maybe you could write a helper:\ndef extract(lst, *indices):\n return [lst[i] for i in indices]\n\nitem0, item2 = extract(str.partition(' '), 0, 2)\n\n",
"you could also use str.split(' ', 1) instead of str.partition(' ')\nyou'll get back a list instead of a tuple, but you won't get the separator back\n",
"This is how I would do it:\nall_parts = str.partition(' ')\npart0, part2 = all_parts[0], all_parts[2]\n\n"
] |
[
13,
4,
2,
1,
0
] |
[] |
[] |
[
"iterable_unpacking",
"list",
"python",
"tuples"
] |
stackoverflow_0001502470_iterable_unpacking_list_python_tuples.txt
|
Q:
blackle.com queries
I'm trying to query blackle.com for searches, but I get an 403 HTTP error. Can somebody point out what is wrong here?
#!/usr/bin/env python
import urllib2
ss = raw_input('Please enter search string: ')
response = "http://www.google.com/cse?cx=013269018370076798483:gg7jrrhpsy4&cof=FORID:1&q=" + ss + "&sa=Search"
urllib2.urlopen(response)
html = response.read()
print html
A:
HTTP 403 means "forbidden" (see here for a good explanation): google.com doesn't want to let you access that resource. Since it does let browsers access it, presumably it's identifying you as a robot (automated code, not interactive user browser), through user agent checking and the like. Have you checked robots.txt to see if you SHOULD be allowed to access such URLs? In http://www.google.com/robots.txt I see one line:
Disallow: /cse?
which means robots are NOT allowed here. See here for explanations of robots.txt, here for the standard Python library module robotparser that makes it easy for a Python program to understand a robots.txt file.
You could try fooling google's detection of "robots" vs humans, e.g. by falsifying your user agent header and so on, and maybe you'd get away with it for a while, but do you really want to deliberately violate the terms of use and get into a fight about it with google...?
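For completeness, a minimal robots.txt check with the standard-library module mentioned above (Python 2 spelling; in Python 3 it lives at urllib.robotparser, and the query string here is just an example):
import robotparser

rp = robotparser.RobotFileParser()
rp.set_url('http://www.google.com/robots.txt')
rp.read()

print rp.can_fetch('*', 'http://www.google.com/cse?q=test')   # False: /cse? is disallowed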
|
blackle.com queries
|
I'm trying to query blackle.com for searches, but I get an 403 HTTP error. Can somebody point out what is wrong here?
#!/usr/bin/env python
import urllib2
ss = raw_input('Please enter search string: ')
response = "http://www.google.com/cse?cx=013269018370076798483:gg7jrrhpsy4&cof=FORID:1&q=" + ss + "&sa=Search"
urllib2.urlopen(response)
html = response.read()
print html
|
[
"HTTP 403 means \"forbidden\" (see here for a good explanation): google.com doesn't want to let you access that resource. Since it does let browsers access it, presumably it's identifying you as a robot (automated code, not interactive user browser), through user agent checking and the like. Have you checked robots.txt to see if you SHOULD be allowed to access such URLs? In http://www.google.com/robots.txt I see one line:\nDisallow: /cse?\n\nwhich means robots are NOT allowed here. See here for explanations of robots.txt, here for the standard Python library module roboparser that makes it easy for a Python program to understand a robots.txt file.\nYou could try fooling google's detection of \"robots\" vs humans, e.g. by falsifying your user agent header and so on, and maybe you'd get away with it for a while, but do you really want to deliberately violate the terms of use and get into a fight about it with google...?\n"
] |
[
2
] |
[] |
[] |
[
"python",
"search"
] |
stackoverflow_0001505958_python_search.txt
|
Q:
Need a HTTPS-capable Python XML-RPC server
I already have a very simple threading XML-RPC server in Python:
from SocketServer import ThreadingMixIn
class AsyncXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
pass
server = AsyncXMLRPCServer(('localhost', 9999))
server.register_instance(some_object())
server.serve_forever()
Now I want to make it accessible exclusively over https. What do I do?
A:
The standard library doesn't support HTTPS servers. There is a Cookbook Recipe using an OpenSSL module. There is also a Twisted solution.
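If you would rather avoid the extra dependency, one rough sketch is to wrap the server's listening socket with the ssl module that ships with Python 2.6 (cert.pem and key.pem are assumed to exist, and some_object is the placeholder from the question; the Cookbook recipe linked above is the more complete route):
import ssl
from SocketServer import ThreadingMixIn
from SimpleXMLRPCServer import SimpleXMLRPCServer

class AsyncXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
    pass

server = AsyncXMLRPCServer(('localhost', 9999))
# Wrap the already-listening socket; accept() then hands out SSL connections
server.socket = ssl.wrap_socket(server.socket, certfile='cert.pem',
                                keyfile='key.pem', server_side=True)
server.register_instance(some_object())
server.serve_forever()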
|
Need a HTTPS-capable Python XML-RPC server
|
I already have a very simple threading XML-RPC server in Python:
from SocketServer import ThreadingMixIn
class AsyncXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer):
pass
server = AsyncXMLRPCServer(('localhost', 9999))
server.register_instance(some_object())
server.serve_forever()
Now I want to make it accessible exclusively over https. What do I do?
|
[
"The standard library doesn't support HTTPS servers. There is a Cookbook Recipe using an OpenSSL module. There is also a Twisted solution.\n"
] |
[
5
] |
[] |
[] |
[
"https",
"python",
"ssl",
"xml_rpc"
] |
stackoverflow_0001506379_https_python_ssl_xml_rpc.txt
|
Q:
Duplicate Insertions in Database using sqlite, sqlalchemy, python
I am learning Python and, through the help of online resources and people on this site, am getting the hang of it. In this first script of mine, in which I'm parsing Twitter RSS feed entries and inserting the results into a database, there is one remaining problem that I cannot fix. Namely, duplicate entries are being inserted into one of the tables.
As a bit of background, I originally found a base script on HalOtis.com for downloading RSS feeds and then modified it in several ways: 1) modified to account for idiosyncracies in Twitter RSS feeds (it's not separated into content, title, URL, etc.); 2) added tables for "hashtags" and for the many-to-many relationship (entry_tag table); 3) changed table set-up to sqlalchemy; 4) made some ad hoc changes to account for weird unicode problems that were occurring. As a result, the code is ugly in places, but it has been a good learning experience and now works--except that it keeps inserting duplicates in the "entries" table.
Since I'm not sure what would be most helpful to people, I've pasted in the entire code below, with some comments in a few places to point out what I think is most important.
I would really appreciate any help with this. Thanks!
Edit: Somebody suggested I provide a schema for the database. I've never done this before, so if I'm not doing it right, bear with me. I am setting up four tables:
RSSFeeds, which contains a list of Twitter RSS feeds
RSSEntries, which contains a list of individual entries downloaded (after parsing) from each of the feeds (with columns for content, hashtags, date, url)
Tags, which contains a list of all the hashtags that are found in individual entries (Tweets)
entry_tag, which contains columns allowing me to map tags to entries.
In short, the script below grabs the five test RSS feeds from the RSS Feeds table, downloads the 20 latest entries / tweets from each feed, parses the entries, and puts the information into the RSS Entries, Tags, and entry_tag tables.
#!/usr/local/bin/python
import sqlite3
import threading
import time
import Queue
from time import strftime
import re
from string import split
import feedparser
from django.utils.encoding import smart_str, smart_unicode
from sqlalchemy import schema, types, ForeignKey, select, orm
from sqlalchemy import create_engine
engine = create_engine('sqlite:///test98.sqlite', echo=True)
metadata = schema.MetaData(engine)
metadata.bind = engine
def now():
return datetime.datetime.now()
#set up four tables, with many-to-many relationship
RSSFeeds = schema.Table('feeds', metadata,
schema.Column('id', types.Integer,
schema.Sequence('feeds_seq_id', optional=True), primary_key=True),
schema.Column('url', types.VARCHAR(1000), default=u''),
)
RSSEntries = schema.Table('entries', metadata,
schema.Column('id', types.Integer,
schema.Sequence('entries_seq_id', optional=True), primary_key=True),
schema.Column('feed_id', types.Integer, schema.ForeignKey('feeds.id')),
schema.Column('short_url', types.VARCHAR(1000), default=u''),
schema.Column('content', types.Text(), nullable=False),
schema.Column('hashtags', types.Unicode(255)),
schema.Column('date', types.String()),
)
tag_table = schema.Table('tag', metadata,
schema.Column('id', types.Integer,
schema.Sequence('tag_seq_id', optional=True), primary_key=True),
schema.Column('tagname', types.Unicode(20), nullable=False, unique=True),
)
entrytag_table = schema.Table('entrytag', metadata,
schema.Column('id', types.Integer,
schema.Sequence('entrytag_seq_id', optional=True), primary_key=True),
schema.Column('entryid', types.Integer, schema.ForeignKey('entries.id')),
schema.Column('tagid', types.Integer, schema.ForeignKey('tag.id')),
)
metadata.create_all(bind=engine, checkfirst=True)
# Insert test set of Twitter RSS feeds
stmt = RSSFeeds.insert()
stmt.execute(
{'url': 'http://twitter.com/statuses/user_timeline/14908909.rss'},
{'url': 'http://twitter.com/statuses/user_timeline/52903246.rss'},
{'url': 'http://twitter.com/statuses/user_timeline/41902319.rss'},
{'url': 'http://twitter.com/statuses/user_timeline/29950404.rss'},
{'url': 'http://twitter.com/statuses/user_timeline/35699859.rss'},
)
#These 3 lines for threading process (see HalOtis.com for example)
THREAD_LIMIT = 20
jobs = Queue.Queue(0)
rss_to_process = Queue.Queue(THREAD_LIMIT)
#connect to sqlite database and grab the 5 test RSS feeds
conn = engine.connect()
feeds = conn.execute('SELECT id, url FROM feeds').fetchall()
#This block contains all the parsing and DB insertion
def store_feed_items(id, items):
""" Takes a feed_id and a list of items and stores them in the DB """
for entry in items:
conn.execute('SELECT id from entries WHERE short_url=?', (entry.link,))
#note: entry.summary contains entire feed entry for Twitter,
#i.e., not separated into content, etc.
s = unicode(entry.summary)
test = s.split()
tinyurl2 = [i for i in test if i.startswith('http://')]
hashtags2 = [i for i in s.split() if i.startswith('#')]
content2 = ' '.join(i for i in s.split() if i not in tinyurl2+hashtags2)
content = unicode(content2)
tinyurl = unicode(tinyurl2)
hashtags = unicode (hashtags2)
print hashtags
date = strftime("%Y-%m-%d %H:%M:%S",entry.updated_parsed)
#Insert parsed feed data into entries table
#THIS IS WHERE DUPLICATES OCCUR
result = conn.execute(RSSEntries.insert(), {'feed_id': id, 'short_url': tinyurl,
'content': content, 'hashtags': hashtags, 'date': date})
entry_id = result.last_inserted_ids()[0]
#Look up tag identifiers and create any that don't exist:
tags = tag_table
tag_id_query = select([tags.c.tagname, tags.c.id], tags.c.tagname.in_(hashtags2))
tag_ids = dict(conn.execute(tag_id_query).fetchall())
for tag in hashtags2:
if tag not in tag_ids:
result = conn.execute(tags.insert(), {'tagname': tag})
tag_ids[tag] = result.last_inserted_ids()[0]
#insert data into entrytag table
if hashtags2: conn.execute(entrytag_table.insert(),
[{'entryid': entry_id, 'tagid': tag_ids[tag]} for tag in hashtags2])
#Rest of file completes the threading process
def thread():
while True:
try:
id, feed_url = jobs.get(False) # False = Don't wait
except Queue.Empty:
return
entries = feedparser.parse(feed_url).entries
rss_to_process.put((id, entries), True) # This will block if full
for info in feeds: # Queue them up
jobs.put([info['id'], info['url']])
for n in xrange(THREAD_LIMIT):
t = threading.Thread(target=thread)
t.start()
while threading.activeCount() > 1 or not rss_to_process.empty():
# That condition means we want to do this loop if there are threads
# running OR there's stuff to process
try:
id, entries = rss_to_process.get(False, 1) # Wait for up to a second
except Queue.Empty:
continue
store_feed_items(id, entries)
A:
It looks like you included SQLAlchemy into a previously existing script that didn't use SQLAlchemy. There are too many moving parts here that none of us apparently understand well enough.
I would recommend starting from scratch. Don't use threading. Don't use sqlalchemy. To start maybe don't even use an SQL database. Write a script that collects the information you want in the simplest possible way into a simple data structure using simple loops and maybe a time.sleep(). Then when that works you can add in storage to an SQL database, and I really don't think writing SQL statements directly is much harder than using an ORM and it's easier to debug IMHO. There is a good chance you will never need to add threading.
"If you think you are smart enough to write multi-threaded programs, you're not." -- James Ahlstrom.
|
Duplicate Insertions in Database using sqlite, sqlalchemy, python
|
I am learning Python and, through the help of online resources and people on this site, am getting the hang of it. In this first script of mine, in which I'm parsing Twitter RSS feed entries and inserting the results into a database, there is one remaining problem that I cannot fix. Namely, duplicate entries are being inserted into one of the tables.
As a bit of background, I originally found a base script on HalOtis.com for downloading RSS feeds and then modified it in several ways: 1) modified to account for idiosyncrasies in Twitter RSS feeds (it's not separated into content, title, URL, etc.); 2) added tables for "hashtags" and for the many-to-many relationship (entry_tag table); 3) changed table set-up to sqlalchemy; 4) made some ad hoc changes to account for weird unicode problems that were occurring. As a result, the code is ugly in places, but it has been a good learning experience and now works--except that it keeps inserting duplicates in the "entries" table.
Since I'm not sure what would be most helpful to people, I've pasted in the entire code below, with some comments in a few places to point out what I think is most important.
I would really appreciate any help with this. Thanks!
Edit: Somebody suggested I provide a schema for the database. I've never done this before, so if I'm not doing it right, bear with me. I am setting up four tables:
RSSFeeds, which contains a list of Twitter RSS feeds
RSSEntries, which contains a list of individual entries downloaded (after parsing) from each of the feeds (with columns for content, hashtags, date, url)
Tags, which contains a list of all the hashtags that are found in individual entries (Tweets)
entry_tag, which contains columns allowing me to map tags to entries.
In short, the script below grabs the five test RSS feeds from the RSS Feeds table, downloads the 20 latest entries / tweets from each feed, parses the entries, and puts the information into the RSS Entries, Tags, and entry_tag tables.
#!/usr/local/bin/python
import sqlite3
import threading
import time
import Queue
from time import strftime
import re
from string import split
import feedparser
from django.utils.encoding import smart_str, smart_unicode
from sqlalchemy import schema, types, ForeignKey, select, orm
from sqlalchemy import create_engine
engine = create_engine('sqlite:///test98.sqlite', echo=True)
metadata = schema.MetaData(engine)
metadata.bind = engine
def now():
return datetime.datetime.now()
#set up four tables, with many-to-many relationship
RSSFeeds = schema.Table('feeds', metadata,
schema.Column('id', types.Integer,
schema.Sequence('feeds_seq_id', optional=True), primary_key=True),
schema.Column('url', types.VARCHAR(1000), default=u''),
)
RSSEntries = schema.Table('entries', metadata,
schema.Column('id', types.Integer,
schema.Sequence('entries_seq_id', optional=True), primary_key=True),
schema.Column('feed_id', types.Integer, schema.ForeignKey('feeds.id')),
schema.Column('short_url', types.VARCHAR(1000), default=u''),
schema.Column('content', types.Text(), nullable=False),
schema.Column('hashtags', types.Unicode(255)),
schema.Column('date', types.String()),
)
tag_table = schema.Table('tag', metadata,
schema.Column('id', types.Integer,
schema.Sequence('tag_seq_id', optional=True), primary_key=True),
schema.Column('tagname', types.Unicode(20), nullable=False, unique=True),
)
entrytag_table = schema.Table('entrytag', metadata,
schema.Column('id', types.Integer,
schema.Sequence('entrytag_seq_id', optional=True), primary_key=True),
schema.Column('entryid', types.Integer, schema.ForeignKey('entries.id')),
schema.Column('tagid', types.Integer, schema.ForeignKey('tag.id')),
)
metadata.create_all(bind=engine, checkfirst=True)
# Insert test set of Twitter RSS feeds
stmt = RSSFeeds.insert()
stmt.execute(
{'url': 'http://twitter.com/statuses/user_timeline/14908909.rss'},
{'url': 'http://twitter.com/statuses/user_timeline/52903246.rss'},
{'url': 'http://twitter.com/statuses/user_timeline/41902319.rss'},
{'url': 'http://twitter.com/statuses/user_timeline/29950404.rss'},
{'url': 'http://twitter.com/statuses/user_timeline/35699859.rss'},
)
#These 3 lines for threading process (see HalOtis.com for example)
THREAD_LIMIT = 20
jobs = Queue.Queue(0)
rss_to_process = Queue.Queue(THREAD_LIMIT)
#connect to sqlite database and grab the 5 test RSS feeds
conn = engine.connect()
feeds = conn.execute('SELECT id, url FROM feeds').fetchall()
#This block contains all the parsing and DB insertion
def store_feed_items(id, items):
""" Takes a feed_id and a list of items and stores them in the DB """
for entry in items:
conn.execute('SELECT id from entries WHERE short_url=?', (entry.link,))
#note: entry.summary contains entire feed entry for Twitter,
#i.e., not separated into content, etc.
s = unicode(entry.summary)
test = s.split()
tinyurl2 = [i for i in test if i.startswith('http://')]
hashtags2 = [i for i in s.split() if i.startswith('#')]
content2 = ' '.join(i for i in s.split() if i not in tinyurl2+hashtags2)
content = unicode(content2)
tinyurl = unicode(tinyurl2)
hashtags = unicode (hashtags2)
print hashtags
date = strftime("%Y-%m-%d %H:%M:%S",entry.updated_parsed)
#Insert parsed feed data into entries table
#THIS IS WHERE DUPLICATES OCCUR
result = conn.execute(RSSEntries.insert(), {'feed_id': id, 'short_url': tinyurl,
'content': content, 'hashtags': hashtags, 'date': date})
entry_id = result.last_inserted_ids()[0]
#Look up tag identifiers and create any that don't exist:
tags = tag_table
tag_id_query = select([tags.c.tagname, tags.c.id], tags.c.tagname.in_(hashtags2))
tag_ids = dict(conn.execute(tag_id_query).fetchall())
for tag in hashtags2:
if tag not in tag_ids:
result = conn.execute(tags.insert(), {'tagname': tag})
tag_ids[tag] = result.last_inserted_ids()[0]
#insert data into entrytag table
if hashtags2: conn.execute(entrytag_table.insert(),
[{'entryid': entry_id, 'tagid': tag_ids[tag]} for tag in hashtags2])
#Rest of file completes the threading process
def thread():
while True:
try:
id, feed_url = jobs.get(False) # False = Don't wait
except Queue.Empty:
return
entries = feedparser.parse(feed_url).entries
rss_to_process.put((id, entries), True) # This will block if full
for info in feeds: # Queue them up
jobs.put([info['id'], info['url']])
for n in xrange(THREAD_LIMIT):
t = threading.Thread(target=thread)
t.start()
while threading.activeCount() > 1 or not rss_to_process.empty():
# That condition means we want to do this loop if there are threads
# running OR there's stuff to process
try:
id, entries = rss_to_process.get(False, 1) # Wait for up to a second
except Queue.Empty:
continue
store_feed_items(id, entries)
|
[
"It looks like you included SQLAlchemy into a previously existing script that didn't use SQLAlchemy. There are too many moving parts here that none of us apparently understand well enough.\nI would recommend starting from scratch. Don't use threading. Don't use sqlalchemy. To start maybe don't even use an SQL database. Write a script that collects the information you want in the simplist possible way into a simple data structure using simple loops and maybe a time.sleep(). Then when that works you can add in storage to an SQL database, and I really don't think writing SQL statements directly is much harder than using an ORM and it's easier to debug IMHO. There is a good chance you will never need to add threading.\n\"If you think you are smart enough to write multi-threaded programs, you're not.\" -- James Ahlstrom.\n"
] |
[
2
] |
[] |
[] |
[
"duplicates",
"python",
"sqlalchemy",
"sqlite",
"twitter"
] |
stackoverflow_0001506023_duplicates_python_sqlalchemy_sqlite_twitter.txt
|
Q:
SQLAlchemy - MapperExtension.before_delete not called
I have a question regarding SQLAlchemy. I have a database which contains Items; every Item has multiple Records assigned to it (1:n). Each Record is partially stored in the database, but it also has an assigned file (1:1) on the filesystem.
What I want to do is to delete the assigned file when the Record is removed from the database. So I wrote the following MapperExtension:
class _StoredRecordEraser(MapperExtension):
def before_delete(self, mapper, connection, instance):
instance.erase()
The following code creates an experimental setup (full code is here: test.py):
session = Session()
i1 = Item(id='item1')
r11 = Record(id='record11', attr='1')
i1.records.append(r11)
r12 = Record(id='record12', attr='2')
i1.records.append(r12)
session.add(i1)
session.commit()
And finally, my problem... The following code works O.k. and the old.erase() method is called:
session = Session()
i1 = session.query(Item).get('item1')
old = i1.records[0]
new = Record(id='record13', attr='3')
i1.records.remove(old)
i1.records.append(new)
session.commit()
But when I change the id of the new Record to record11, which is already in the database (although it is not the same record, since attr='3'), old.erase() is not called. Does anybody know why?
Thanks
A:
A delete + insert of two records that ultimately have the same primary key within a single flush are converted into a single update right now. this is not the best behavior - it really should delete then insert, so that the various events assigned to those activities are triggered as expected (not just mapper extension methods, but database level defaults too). But the flush() process is hardwired to perform inserts/updates first, then deletes. As a workaround, you can issue a flush() after the remove/delete operation, then a second for the add/insert.
As far as flushes' current behavior, I've looked into trying to break this out but it gets very complicated - inserts which depend on deletes would have to execute after the deletes, but updates which depend on inserts would have to execute beforehand. Ultimately, the unitofwork module would be rewritten (big time) to consider all insert/update/deletes in a single stream of dependent actions that would be topologically sorted against each other. This would simplify the methods used to execute statements in the correct order, although all new systems for synchronizing data between rows based on server-level defaults would have to be devised, and its possible that complexity would be re-introduced if it turned out the "simpler" method spent too much time naively sorting insert statements that are known at the ORM level to not require any sorting against each other. The topological sort works at a more coarse grained level than that right now.
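A short sketch of that workaround, using the names from the question's setup (Session, Item and Record come from the linked test.py):
session = Session()
i1 = session.query(Item).get('item1')
old = i1.records[0]
i1.records.remove(old)
session.flush()                        # the DELETE is emitted here, so before_delete fires
new = Record(id='record11', attr='3')  # reuses the primary key of the removed row
i1.records.append(new)
session.commit()                       # the second flush performs the INSERT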
|
SQLAlchemy - MapperExtension.before_delete not called
|
I have a question regarding SQLAlchemy. I have a database which contains Items; every Item has multiple Records assigned to it (1:n). Each Record is partially stored in the database, but it also has an assigned file (1:1) on the filesystem.
What I want to do is to delete the assigned file when the Record is removed from the database. So I wrote the following MapperExtension:
class _StoredRecordEraser(MapperExtension):
def before_delete(self, mapper, connection, instance):
instance.erase()
The following code creates an experimental setup (full code is here: test.py):
session = Session()
i1 = Item(id='item1')
r11 = Record(id='record11', attr='1')
i1.records.append(r11)
r12 = Record(id='record12', attr='2')
i1.records.append(r12)
session.add(i1)
session.commit()
And finally, my problem... The following code works O.k. and the old.erase() method is called:
session = Session()
i1 = session.query(Item).get('item1')
old = i1.records[0]
new = Record(id='record13', attr='3')
i1.records.remove(old)
i1.records.append(new)
session.commit()
But when I change the id of the new Record to record11, which is already in the database (although it is not the same record, since attr='3'), old.erase() is not called. Does anybody know why?
Thanks
|
[
"A delete + insert of two records that ultimately have the same primary key within a single flush are converted into a single update right now. this is not the best behavior - it really should delete then insert, so that the various events assigned to those activities are triggered as expected (not just mapper extension methods, but database level defaults too). But the flush() process is hardwired to perform inserts/updates first, then deletes. As a workaround, you can issue a flush() after the remove/delete operation, then a second for the add/insert.\nAs far as flushes' current behavior, I've looked into trying to break this out but it gets very complicated - inserts which depend on deletes would have to execute after the deletes, but updates which depend on inserts would have to execute beforehand. Ultimately, the unitofwork module would be rewritten (big time) to consider all insert/update/deletes in a single stream of dependent actions that would be topologically sorted against each other. This would simplify the methods used to execute statements in the correct order, although all new systems for synchronizing data between rows based on server-level defaults would have to be devised, and its possible that complexity would be re-introduced if it turned out the \"simpler\" method spent too much time naively sorting insert statements that are known at the ORM level to not require any sorting against each other. The topological sort works at a more coarse grained level than that right now.\n"
] |
[
4
] |
[] |
[] |
[
"python",
"sqlalchemy"
] |
stackoverflow_0001496429_python_sqlalchemy.txt
|
Q:
Why doesn't os.system('set foo=bar') work?
Possibly a stupid question: Why can't I set an environment variable with this?
os.system('set foo=bar') # on windows
I'm aware of os.environ, and that works for me. I'm just confused about why the former doesn't work.
A:
See the discussion here -- export and set are both shell commands, and whether on Windows or Unix, they're still inevitably being addressed to a child process running the shell (be it bash, cmd.exe, whatever) and so bereft of any further action when that child process terminates (i.e., when os.system returns to the caller).
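A small illustration of the difference (Python 2 syntax, to match the question):
import os

os.system('set foo=bar')      # runs in a child shell; its environment dies with it
print os.environ.get('foo')   # None in this (the parent) process

os.environ['foo'] = 'bar'     # changes this process's environment directly
print os.environ['foo']       # 'bar', and future child processes inherit it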
|
Why doesn't os.system('set foo=bar') work?
|
Possibly a stupid question: Why can't I set an environment variable with this?
os.system('set foo=bar') # on windows
I'm aware of os.environ, and that works for me. I'm just confused about why the former doesn't work.
|
[
"See the discussion here -- export and set are both shell commands, and whether on Windows or Unix, they're still inevitably being addressed to a child process running the shell (be it bash, cmd.exe, whatever) and so bereft of any further action when that child process terminates (i.e., when os.system returns to the caller).\n"
] |
[
11
] |
[] |
[] |
[
"environment",
"environment_variables",
"python"
] |
stackoverflow_0001506579_environment_environment_variables_python.txt
|
Q:
Network programming abstraction, decomposition
I have a problem as follows:
Server process 1
Constantly sends updates that occur to a datastore
Server process 2
Clients contact the server, which queries the datastore, and returns a result
The thing is, the results that process 1 and process 2 are sending back to the client are totally different and unrelated.
How does one decompose this?
Do you just have one process constantly sending data, and define the protocol to have a bit which corresponds to whether the return type is 1 or 2?
Do you have two processes? How do they share the datastore then (it is just a structure not a database)?
Thanks!
A:
If you can restrict yourself to Twisted, I recommend to use Perspective Broker. It's essentially an RPC system, and doesn't care much about the notion of "client" and "server" - either the initiator of a TCP connection or the responder can start RPC calls in PB.
So server 1 would accept registration calls with a callback object, and call the callback whenever it has new data available. Server 2 provides various RPC operations as clients require them. If they operate on the very same data, I would put both servers into a single process.
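A rough, untested sketch of that layout with Perspective Broker (all names here are invented for illustration): one process holds the shared structure, exposes a query call, and keeps the remote callback references clients register so it can push updates to them.
from twisted.internet import reactor
from twisted.spread import pb

class DataService(pb.Root):
    def __init__(self):
        self.store = {}        # the shared "just a structure"
        self.listeners = []    # RemoteReferences handed over by clients

    def remote_register(self, listener):
        self.listeners.append(listener)   # client passed a pb.Referenceable

    def remote_query(self, key):
        return self.store.get(key)

    def push_update(self, key, value):
        self.store[key] = value
        for listener in self.listeners:
            listener.callRemote('update', key, value)

reactor.listenTCP(8800, pb.PBServerFactory(DataService()))
reactor.run()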
A:
It sounds like you want to stream your series of ints "somewhere" and also collect them in a datastore. In my system I am streaming sensor readings into a database and also allowing them to go directly to web clients, giving them live power readings. I've written a blog entry on why a database is not suitable for live data - though it is perfect for saving the data for later analysis.
I'd have the first server process be a twisted server that uses txamp to stream the ints to RabbitMQ. Any clients that want live data can subscribe to the stream in RabbitMQ, also using txamp. Web browser clients can use Orbited; here is a worked example.
In your design server 1 saves to the database. You could instead have a third server collect data from RabbitMQ and stream it to the database. I plan to have a server that collects chunks of data and renders graphs to store on a central fileshare.
Don't create your own messaging system, RabbitMQ is well tested, scalable, and can persist your "messages" (raw data) if something goes wrong.
A:
Why not use a database instead of "just a structure"? Both relational and non-relational DBs offer many practical advantages (separate processes using them, take care of replication [[and/or snapshots, backups, ...]], rich functionality if you need it for the "queries", and so on, and so forth).
Worst case, the "just a structure" can be handled by a third process that's entirely dedicated to it (basically mimicking what any DB engine would offer -- though the engine would probably do it better and faster;-), allowing you to at least keep a good decomposition (with the two server processes both interacting with the "datastore process").
|
Network programming abstraction, decomposition
|
I have a problem as follows:
Server process 1
Constantly sends updates that occur to a datastore
Server process 2
Clients contact the server, which queries the datastore, and returns a result
The thing is, the results that process 1 and process 2 are sending back to the client are totally different and unrelated.
How does one decompose this?
Do you just have one process constantly sending data, and define the protocol to have a bit which corresponds to whether the return type is 1 or 2?
Do you have two processes? How do they share the datastore then (it is just a structure not a database)?
Thanks!
|
[
"If you can restrict yourself to Twisted, I recommend to use Perspective Broker. It's essentially an RPC system, and doesn't care much about the notion of \"client\" and \"server\" - either the initiator of a TCP connection or the responder can start RPC calls in PB. \nSo server 1 would accept registration calls with a callback object, and call the callback whenever it has new data available. Server 2 provides various RPC operations as clients require them. If they operate on the very same data, I would put both servers into a single process.\n",
"It sounds like you want to stream your series of ints \"somewhere\" and also collect them in a datastore. In my system I am streaming sensor readings into a database and also allowing them to go directly to web clients, giving them live power readings. I've written a blog entry on why a database is not suitable for live data - though it is perfect for saving the data for later analysis.\nI'd have the first server process be a twisted server that uses txamp to stream the ints to RabbitMQ. Any clients that want live data can subscribe to the stream in RabbitMQ, also using Txamp. Web browser clients can use Orbited here is a worked example.\nIn your design server 1 saves to the database. You could instead have server3 collect data from RabbitMQ and stream it to the database. I plan to have a server that collects chunks of data and render graphs to store to a central fileshare. \nDon't create your own messaging system, RabbitMQ is well tested, scalable, and can persist your \"messages\" (raw data) if something goes wrong.\n",
"Why not use a database instead of \"just a structure\"? Both relational and non-relational DBs offer many practical advantages (separate processes using them, take care of replication [[and/or snapshots, backups, ...]], rich functionality if you need it for the \"queries\", and so on, and so forth).\nWorst case, the \"just a structure\" can be handled by a third process that's entirely dedicated to it (basically mimicking what any DB engine would offer -- though the engine would probably do it better and faster;-), allowing you to at least keep a good decomposition (with the two server processes both interacting with the \"datastore process\").\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"network_programming",
"networking",
"python",
"twisted"
] |
stackoverflow_0001505744_network_programming_networking_python_twisted.txt
|
Q:
What are the advantages of faster server side scripting languages?
Python scripts typically run about 5 times faster than PHP. What are the advantages of using faster server-side scripting languages? Will the speed ever be felt by website visitors? Can PHP's performance be compensated for by faster server processors?
A:
According to Andy B. King, in Website Optimization:
For Google an increase in page load time from 0.4 seconds to 0.9 seconds decreased traffic and ad revenues by 20%. For Amazon every 100 ms increase in load times decreased sales by 1%.
http://www.svennerberg.com/2008/12/page-load-times-vs-conversion-rates/
But even though Python is ~4 times faster, by and large it's the architecture of the software that makes the biggest difference. If your queries and disk access are unoptimized, then you have a massive bottleneck—even when you're just including some 20 different files (5ms seek time equals 100ms). I'm certain that even Python can be slowed by inefficiencies and database queries, just as badly as PHP can.
That said, unfriendly interfaces will cost you more, in the long run, than the decrease in speed will. Just make a registration form with no explanation and fifteen required fields, with strict validation, and you'll scare tons of people away. (In my own opinion.)
A:
How about when you get charged for CPU time?
I just saved a ton of money by switching to Python!
A:
For typical web apps, the difference in speed won't usually be felt within the request itself (the network latency dwarfs your compute time for a typical script that runs inside an HTTP request).
There are plenty of other things that affect scalability, but the processing needs to handle an average request does factor in. So, a lonely user will not feel the difference. A user who is one of many might feel the difference, as the server infrastructure struggles with load.
Of course, throwing more processor will mitigate the issue, but as jrockway says, maintaining two servers is significantly more than twice as hard as maintaining one.
All of that said, in the vast majority of cases, your bottlenecks will not be processor. You'll be running out of memory, or you'll find that your database interaction is the real bottleneck.
A:
Increasing the execution speed of web request handlers usually translates to handling more requests/second with the same hardware. This is valuable in a number of cases; maintaining one server is much easier than maintaining two.
BTW, why Python and not Haskell? Haskell is 100x faster than PHP in some benchmarks.
|
What are the advantages of faster server side scripting languages?
|
Python scripts typically run about 5 times faster than PHP. What are the advantages of using faster server-side scripting languages? Will the speed ever be felt by website visitors? Can PHP's performance be compensated for by faster server processors?
|
[
"According to Andy B. King, in Website Optimization:\n\nFor Google an increase in page load time from 0.4 second to 0.9 seconds decreased traffic and ad revenues by 20%. For Amazon every 100 ms increase in load times decreased sales with 1%.\n\nhttp://www.svennerberg.com/2008/12/page-load-times-vs-conversion-rates/\nBut even though Python is ~4 times faster, by and large it's the architecture of the software that makes the biggest difference. If your queries and disk access are unoptimized, then you have a massive bottleneck—even when you're just including some 20 different files (5ms seek time equals 100ms). I'm certain that even Python can be slowed by inefficiencies and database queries, just as badly as PHP can.\nThat said, unfriendly interfaces will cost you more, in the long run, than the decrease in speed will. Just make a registration form with no explanation and fifteen required fields, with strict validation, and you'll scare tons of people away. (In my own opinion.)\n",
"How about when you get charged for CPU time? \n\nI just saved a ton of money by switching to Python!\n\n",
"For typical web apps, the difference in speed won't usually be felt within the request itself (the network latency dwarfs your compute time for a typical script that runs inside an HTTP request).\nThere are plenty of other things that affect scalability, but the processing needs to handle an average request does factor in. So, a lonely user will not feel the difference. A user who is one of many might feel the difference, as the server infrastructure struggles with load.\nOf course, throwing more processor will mitigate the issue, but as jrockway says, maintaining two servers is significantly more than twice as hard as maintaining one.\nAll of that said, in the vast majority of cases, your bottlenecks will not be processor. You'll be running out of memory, or you'll find that your database interaction is the real bottleneck.\n",
"Increasing the execution speed of web request handlers usually translates to handling more requests/second with the same hardware. This is valuable in a number of cases; maintaining one server is much easier than maintaining two.\nBTW, why Python and not Haskell? Haskell is 100x faster than PHP in some benchmarks.\n"
] |
[
13,
10,
4,
1
] |
[] |
[] |
[
"performance",
"php",
"python",
"scripting",
"webserver"
] |
stackoverflow_0001507175_performance_php_python_scripting_webserver.txt
|
Q:
urllib2: submitting a form and then redirecting
My goal is to come up with a portable urllib2 solution that would POST a form and then redirect the user to what comes out.
The POSTing part is simple:
request = urllib2.Request('https://some.site/page', data=urllib.urlencode({'key':'value'}))
response = urllib2.urlopen(request)
Providing data sets the request type to POST. Now, I suspect all the data I should care about comes from response.info() and response.geturl(). I should do a self.redirect(response.geturl()) inside a get(self) method of webapp.RequestHandler.
But what should I do with headers? Anything else I've overlooked? Code snippets are highly appreciated. :)
TIA.
EDIT: Here's a naive solution I came up with. Redirects but the remote server shows an error indicating that there's no match to the previously POSTed form:
info = response.info()
for key in info:
self.response.headers[key] = info[key]
self.response.headers['Location'] = response.geturl()
self.response.set_status(302)
self.response.clear()
A:
The standard way to follow redirects with urllib2 is to use the HTTPRedirectHandler.
(Not sure what you mean by 'what comes out' but I'm assuming a standard http redirect here, javascript redirect is a different beast)
# 1. Create the redirect handler
redirectionHandler = urllib2.HTTPRedirectHandler()
# 2. Apply the handler to an opener
opener = urllib2.build_opener(redirectionHandler)
# 3. Install the opener
urllib2.install_opener(opener)
request = urllib2.Request('https://some.site/page', data=urllib.urlencode({'key':'value'}))
response = urllib2.urlopen(request)
See urllib2.HTTPRedirectHandler for details on the handler.
A:
I suspect this will almost always fail. When you POST a form, the URL you end up at is just the URL you posted to. Sending someone else to this URL, or even visiting it again with the same browser that just POSTed, is going to do a GET and the page won't have the form data that was POSTed. The only way this would work is if the site redirected after the POST to a URL containing some sort of session info.
A:
You will find using mechanize much easier than using urllib2 directly
http://wwwsearch.sourceforge.net/mechanize/
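For comparison, a minimal mechanize sketch (the URL and field name come from the question; the form index is a guess and would need adjusting for the real page):
import mechanize

br = mechanize.Browser()
br.open('https://some.site/page')
br.select_form(nr=0)          # assume the form we want is the first on the page
br['key'] = 'value'
response = br.submit()        # the POST is sent and any redirect is followed
print response.geturl()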
|
urllib2: submitting a form and then redirecting
|
My goal is to come up with a portable urllib2 solution that would POST a form and then redirect the user to what comes out.
The POSTing part is simple:
request = urllib2.Request('https://some.site/page', data=urllib.urlencode({'key':'value'}))
response = urllib2.urlopen(request)
Providing data sets the request type to POST. Now, I suspect all the data I should care about comes from response.info() and response.geturl(). I should do a self.redirect(response.geturl()) inside a get(self) method of webapp.RequestHandler.
But what should I do with headers? Anything else I've overlooked? Code snippets are highly appreciated. :)
TIA.
EDIT: Here's a naive solution I came up with. Redirects but the remote server shows an error indicating that there's no match to the previously POSTed form:
info = response.info()
for key in info:
self.response.headers[key] = info[key]
self.response.headers['Location'] = response.geturl()
self.response.set_status(302)
self.response.clear()
|
[
"The standard way to follow redirects with urllib2 is to use the HTTPRedirectHandler.\n(Not sure what you mean by 'what comes out' but I'm assuming a standard http redirect here, javascript redirect is a different beast)\n# Created handler\nredirectionHandler = urllib2.HTTPRedirectHandler() \n\n# 2 apply the handler to an opener\nopener = urllib2.build_opener(redirectionHandler)\n\n# 3. Install the openers\nurllib2.install_opener(opener)\n\n\nrequest = urllib2.Request('https://some.site/page', data=urllib.urlencode({'key':'value'}))\nresponse = urllib2.urlopen(request)\n\nSee urllib2.HTTPRedirectHandler for details on the handler.\n",
"I suspect this will almost always fail. When you POST a form, the URL you end up at is just the URL you posted to. Sending someone else to this URL, or even visiting it again with the same browser that just POSTed, is going to do a GET and the page won't have the form data that was POSTed. The only way this would work is if the site redirected after the POST to a URL containing some sort of session info.\n",
"You will find using mechanize much easier than using urllib2 directly\nhttp://wwwsearch.sourceforge.net/mechanize/\n"
] |
[
3,
2,
0
] |
[] |
[] |
[
"google_app_engine",
"python",
"urllib2"
] |
stackoverflow_0001507170_google_app_engine_python_urllib2.txt
|
Q:
remove inner border on gtk.Button
I would like to remove border on a gtk.Button and also set its size fixed. How can I accomplish that?
My button looks like that:
b = gtk.Button()
b.set_relief(gtk.RELIEF_NONE)
Thanks!
p.d: I'm using pygtk
A:
You can use gtk.Widget.set_size_request to set a fixed size for the widget. Note that this is generally a bad idea, since the user may want a larger font size than you had planned for, etc.
As for removing the border, gtk.Button can often take up more space than you wish it would. You can try to set some of the style properties such as "inner-border" to 0 and see if that gets the result you want. If you need the smallest button possible, you should just put a gtk.Label inside a gtk.EventBox and handle clicks on the event box. That will use the least amount of pixels.
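A small pygtk sketch of both suggestions (the size and the label text are arbitrary examples):
import gtk

b = gtk.Button()
b.set_relief(gtk.RELIEF_NONE)
b.set_size_request(24, 24)    # fixed size in pixels, overriding the natural request

# Smallest possible clickable widget: a label in an event box, clicks handled by hand.
eb = gtk.EventBox()
eb.add(gtk.Label("x"))
eb.connect("button-press-event", lambda widget, event: gtk.main_quit())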
|
remove inner border on gtk.Button
|
I would like to remove border on a gtk.Button and also set its size fixed. How can I accomplish that?
My button looks like that:
b = gtk.Button()
b.set_relief(gtk.RELIEF_NONE)
Thanks!
p.d: I'm using pygtk
|
[
"You can use gtk.Widget.set_size_request to set a fixed size for the widget. Note that this is generally a bad idea, since the user may want a larger font size than you had planned for, etc.\nAs for removing the border, gtk.Button can often take up more space than you wish it would. You can try to set some of the style properties such as \"inner-border\" to 0 and see if that gets the result you want. If you need the smallest button possible, you should just gtk.Label inside a gtk.EventBox and handle clicks on the event box. That will use the least amount of pixels.\n"
] |
[
2
] |
[
"I don't think the EventBox can send button events like \"activated\" and \"clicked\".\n"
] |
[
-1
] |
[
"gtk",
"pygtk",
"python"
] |
stackoverflow_0001383663_gtk_pygtk_python.txt
|
Q:
JavaFX or RIA desktop app (on dvd) also available on the web?
Is it possible to develop an application easily available on the web that also can be distributed on DVD (installer or started from the dvd)?
For the moment, we use static html (frameset!) pages (generated by xml files), with one difference: pdf's are only on the DVD version, the web version only shows a preview of these files.
Can this be done with JavaFX, OpenLaszlo or are there better options?
(for example: turbogears, and using tg2exe for DVD version)
A:
I think if you design it correctly to begin with, a JavaFX app can be interchanged between web-app and desktop-app relatively easily. However, I've only done this with very simple apps (specifically, Tic-Tac-Toe!), so I'm sure there might exist some caveats that I am unaware of (thus the "design it correctly" catch-all). ;)
Why don't you just provide the PDFs in your current web version, rather than redeveloping everything? I'm not aware of any browsers that don't support in-browser PDF reading anymore.
A:
Yes JavaFX or Flash applications can be used to develop applications that run in different contexts.
However, it's not clear from your question why these would be preferable over your current solution.
If the information you're sharing is primarily text and you're using DVD because your audience is primarily located in areas with bad Internet connectivity, then your current approach probably makes more sense. JavaFX or Flash might be more fun for developers to write, but maybe doesn't serve your audience.
I would suggest that if you are shipping DVD and are looking for ways to make the DVD more useful than as a PDF delivery system would be to add video to the DVDs. And then maybe it would make more sense to use JavaFX or Flash to drive the UI.
A:
Yes, it is possible. If you use JavaFX you will be able to use multiple deployments. For example, NetBeans 6.7.1 with JavaFX creates several possible deployments from one project. Then you can publish this application on the web, DVD, etc. You will need to slightly customize the standalone deployment for DVD, e.g. to start it as autorun if necessary. JavaFX is a good choice.
A:
This seems like a job for Flex; however, I know too little about it to give a better answer.
|
JavaFX or RIA desktop app (on dvd) also available on the web?
|
Is it possible to develop an application easily available on the web that also can be distributed on DVD (installer or started from the dvd)?
For the moment, we use static html (frameset!) pages (generated by xml files), with one difference: pdf's are only on the DVD version, the web version only shows a preview of these files.
Can this be done with JavaFX, OpenLaszlo or are there better options?
(for example: turbogears, and using tg2exe for DVD version)
|
[
"I think if you design it correctly to begin with, a JavaFX app can be interchanged between web-app and desktop-app relatively easily. However, I've only done this with very simple apps (specifically, Tic-Tac-Toe!), so I'm sure there might exist some caveats that I am unaware of (thus the \"design it correctly\" catch-all). ;)\nWhy don't you just provide the PDFs in your current web version, rather than redeveloping everything? I'm not aware of any browsers that don't support in-browser PDF reading anymore.\n",
"Yes JavaFX or Flash applications can be used to develop applications that run in different contexts. \nHowever, it's not clear from your question why these would be preferable over your current solution. \nIf the information your sharing is primarily text and you're using DVD because your audience is primarily located in area with bad Internet connectivity, then you're current approach probably makes more sense. JavaFX or Flash might be more fun to write for developers but maybe doesn't serve your audience. \nI would suggest that if you are shipping DVD and are looking for ways to make the DVD more useful than as a PDF delivery system would be to add video to the DVDs. And then maybe it would make more sense to use JavaFX or Flash to drive the UI. \n",
"Yes, it is possible. If you use JavaFX you will be allowed use multiple deployments. For example, NetBeans 6.7.1 with JavaFX creates several possible deployments from one project. Then you can publish this application on web, DVD, etc. You will need to slightly customize standalone deployment for DVD to be able e.g. start it as autorun if necessary. JavaFX is good choice. \n",
"This seems like a job for flex, however I know better little about it to give a better answer.\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"java",
"javafx",
"python",
"turbogears",
"web_applications"
] |
stackoverflow_0001346723_java_javafx_python_turbogears_web_applications.txt
|
Q:
Doing CRUD in Turbogears
Are there any good packages or methods for doing extensive CRUD (create-retrieve-update-delete) interfaces in the Turbogears framework. The FastDataGrid widget is too much of a black box to be useful and CRUDTemplate looks like more trouble than rolling my own. Ideas? Suggestions?
A:
You should really take a look at sprox ( http://sprox.org/ ).
It builds on RESTController, is very straightforward, well documented (imo), generates forms and validation "magically" from your database, and leaves you with a minimum of code to write. I really enjoy working with it.
Hope that helps you :)
A:
So you need CRUD. The best way to accomplish this is with a tool that takes all the lame code away. This tool is called tgext.admin. However you can use it at several levels.
Catwalk2, a default config of tgext.admin that is aware of your quickstarted model.
AdminController, this will take all your models (or a list of them) and create CRUD for all of them.
CrudRestController, will take one object and create CRUD for it.
RestController, will take one object and give you only the REST API, that is no forms or data display.
plain Sprox: you give it an object and, depending on the base class you use, you get the new form, the edit form, the table view, or the single-record view.
A:
While CRUDTemplate looks mildly complex, I'd say that you can implement CRUD/ABCD using just about any ORM that you choose. It just depends on how much of it you wish to automate (which generally means defining models/schemas ahead of time). You may learn more and have better control if you put together your own using SQLAlchemy or SQLObject, both of which work great with TurboGears.
A:
After doing some more digging and hacking, it turns out not to be terribly hard to drop the Catwalk interface into an application. It's not pretty without a lot of work, but it works right away.
|
Doing CRUD in Turbogears
|
Are there any good packages or methods for doing extensive CRUD (create-retrieve-update-delete) interfaces in the Turbogears framework? The FastDataGrid widget is too much of a black box to be useful and CRUDTemplate looks like more trouble than rolling my own. Ideas? Suggestions?
|
[
"You should really take a look at sprox ( http://sprox.org/ ).\nIt builds on RESTController, is very straight forward, well documented (imo), generates forms and validation \"magically\" from your database and leaves you with a minimum of code to write. I really enjoy working with it.\nHope that helps you :)\n",
"So you need CRUD. The best way to accomplish this is with a tool that takes all the lame code away. This tool is called tgext.admin. However you can use it at several levels.\n\nCatwalk2, a default config of tgext.admin that is aware of your quickstarted model.\nAdminController, this will take all your models (or a list of them) and create CRUD for all of them.\nCrudRestController, will take one object and create CRUD for it.\nRestController, will take one object and give you only the REST API, that is no forms or data display.\nplain Sprox, you will give it an object and and depending on the base class you use you will get the neww form or the edit for or the table view or the single record view.\n\n",
"While CRUDTemplate looks mildly complex, I'd say that you can implement CRUD/ABCD using just about any ORM that you choose. It just depends on how much of it you with to automate (which generally means defining models/schemas ahead of time). You may learn more and have better control if you put together your own using SQLAlchemy or SQLObject, woth of which work great with TurboGears.\n",
"After doing some more digging and hacking it turns out to not be terribly hard to drop the Cakewalk interface into an application. It's not pretty without a lot of work, but it works right away.\n"
] |
[
3,
2,
0,
0
] |
[] |
[] |
[
"crud",
"python",
"turbogears"
] |
stackoverflow_0000128689_crud_python_turbogears.txt
|
Q:
Why is it that my thumbnail PIL function won't work the 2nd time?
def create_thumbnail(f, width=200, height=100):
im = Image.open(f)
im.thumbnail((width, height), Image.ANTIALIAS)
thumbnail_file = StringIO()
im.save(thumbnail_file, 'JPEG')
thumbnail_file.seek(0)
return thumbnail_file
It seems that my error is "IOError: cannot identify image file"...based on my traceback log.
A:
The only thing I can think of is that you are running on Windows, in which case Image.open() will open a file handle but does not close it. (That behaviour does not occur on Linux/Unix - the file is closed by the end of your code, and it doesn't matter if it isn't anyway).
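For what it's worth, here is a hedged variation of the function that rewinds a file-like argument and forces PIL to read the data up front; the rewind also covers the common case where f was already read once before the second call (an assumption, since the full traceback isn't shown):
from StringIO import StringIO
import Image

def create_thumbnail(f, width=200, height=100):
    if hasattr(f, 'seek'):
        f.seek(0)             # rewind in case the file object was read before
    im = Image.open(f)
    im.load()                 # force PIL to read the whole image now
    im.thumbnail((width, height), Image.ANTIALIAS)
    thumbnail_file = StringIO()
    im.save(thumbnail_file, 'JPEG')
    thumbnail_file.seek(0)
    return thumbnail_file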
|
Why is it that my thumbnail PIL function won't work the 2nd time?
|
def create_thumbnail(f, width=200, height=100):
im = Image.open(f)
im.thumbnail((width, height), Image.ANTIALIAS)
thumbnail_file = StringIO()
im.save(thumbnail_file, 'JPEG')
thumbnail_file.seek(0)
return thumbnail_file
It seems that my error is "IOError: cannot identify image file"...based on my traceback log.
|
[
"The only thing I can think of is that you are running on Windows, in which case Image.open() will open a file handler but does not close it. (That behaviour does not occur on Linux/Unix - the file is closed by the end of your code, and it doesn't matter if it isn't anyway).\n"
] |
[
2
] |
[] |
[] |
[
"python",
"python_imaging_library"
] |
stackoverflow_0001508355_python_python_imaging_library.txt
|
Q:
distutils setup.py and %post %postun
I am a newbie.
I am building an rpm package for my own app and decided to use distutils to achieve it. I managed to create a substitute for %post by using advice from this website, which I am really thankful for, but I am having problems with %postun.
Let me describe what I have done. In setup.py I run a command that creates a symbolic link which is needed to run the application. It works fine, but the problem is that when I want to remove the rpm, the link stays there. So I figured that I should use %postun in the spec file. My question is: is there a way to do this in setup.py, or do I have to manually edit the spec file?
Please advise or point me to some manuals or anything.
Thank you
A:
Yes, you can specify a post install script, all you need is to declare in the bdist_rpm in the options arg the file you want to use:
setup(
...
options = {'bdist_rpm':{'post_install' : 'post_install',
'post_uninstall' : 'post_uninstall'}},
...)
In the post_uninstall file, put the code you need to remove the link, something like:
rm -f /var/lib/mylink
A:
Neither distutils nor setuptools have uninstall functionality.
At some point, the python community agreed that uninstall should be handled by the packaging system. In this case you want to use rpm, so there is probably a way inside of rpm system to remove packages, but you will not find that in distutils or setuptools.
@ pycon2009, there was a presentation on distutils and setuptools. You can find all of the videos here
Eggs and Buildout Deployment in Python - Part 1
Eggs and Buildout Deployment in Python - Part 2
Eggs and Buildout Deployment in Python - Part 3
There is a video called How to Build Applications Linux Distributions will Package. I have not seen it, but it seems to be appropriate.
|
distutils setup.py and %post %postun
|
I am a newbie.
I am building an rpm package for my own app and decided to use distutils to achieve it. I managed to create a substitute for %post by using advice from this website, which I am really thankful for, but I am having problems with %postun.
Let me describe what I have done. In setup.py I run a command that creates a symbolic link which is needed to run the application. It works fine, but the problem is that when I want to remove the rpm, the link stays there. So I figured that I should use %postun in the spec file. My question is: is there a way to do this in setup.py, or do I have to manually edit the spec file?
Please advise or point me to some manuals or anything.
Thank you
|
[
"Yes, you can specify a post install script, all you need is to declare in the bdist_rpm in the options arg the file you want to use:\nsetup(\n...\noptions = {'bdist_rpm':{'post_install' : 'post_install',\n 'post_uninstall' : 'post_uninstall'}},\n...)\n\nIn the post_uninstall file, put he code you need to remove the link, somethink like:\nrm -f /var/lib/mylink\n\n",
"Neither distutils nor setuptools have uninstall functionality.\nAt some point, the python community agreed that uninstall should be handled by the packaging system. In this case you want to use rpm, so there is probably a way inside of rpm system to remove packages, but you will not find that in distutils or setuptools.\n@ pycon2009, there was a presentation on distutils and setuptools. You can find all of the videos here\nEggs and Buildout Deployment in Python - Part 1\nEggs and Buildout Deployment in Python - Part 2 \nEggs and Buildout Deployment in Python - Part 3\nThere is a video called How to Build Applications Linux Distributions will Package. I have not seen it, but it seems to be appropriate.\n"
] |
[
2,
0
] |
[] |
[] |
[
"distutils",
"python",
"rpm",
"specifications"
] |
stackoverflow_0001407021_distutils_python_rpm_specifications.txt
|
Q:
KeyError with dict.fromkeys() and dict-like object
In Python, you can use a dictionary as the first argument to dict.fromkeys(), e.g.:
In [1]: d = {'a': 1, 'b': 2}
In [2]: dict.fromkeys(d)
Out[2]: {'a': None, 'b': None}
I tried to do the same with a dict-like object, but that always raises a KeyError, e.g.:
In [1]: class SemiDict:
...: def __init__(self):
...: self.d = {}
...:
...: def __getitem__(self, key):
...: return self.d[key]
...:
...: def __setitem__(self, key, value):
...: self.d[key] = value
...:
...:
In [2]: sd = SemiDict()
In [3]: sd['a'] = 1
In [4]: dict.fromkeys(sd)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
C:\bin\Console2\<ipython console> in <module>()
C:\bin\Console2\<ipython console> in __getitem__(self, key)
KeyError: 0
What exactly is happening here? And can it be resolved, other than using something like dict.fromkeys(sd.d)?
A:
To create the dict, fromkeys iterates over its argument, so the argument must be iterable. One way to make it work is to add an __iter__ method to your dict-like class:
def __iter__(self):
return iter(self.d)
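With that method added, the original example should behave as intended:
class SemiDict(object):
    def __init__(self):
        self.d = {}
    def __getitem__(self, key):
        return self.d[key]
    def __setitem__(self, key, value):
        self.d[key] = value
    def __iter__(self):
        return iter(self.d)

sd = SemiDict()
sd['a'] = 1
print dict.fromkeys(sd)   # {'a': None}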
A:
An instance of SemiDict is not iterable. I'd imagine the most obvious solution would be to inherit from dict; why don't you do that?
|
KeyError with dict.fromkeys() and dict-like object
|
In Python, you can use a dictionary as the first argument to dict.fromkeys(), e.g.:
In [1]: d = {'a': 1, 'b': 2}
In [2]: dict.fromkeys(d)
Out[2]: {'a': None, 'b': None}
I tried to do the same with a dict-like object, but that always raises a KeyError, e.g.:
In [1]: class SemiDict:
...: def __init__(self):
...: self.d = {}
...:
...: def __getitem__(self, key):
...: return self.d[key]
...:
...: def __setitem__(self, key, value):
...: self.d[key] = value
...:
...:
In [2]: sd = SemiDict()
In [3]: sd['a'] = 1
In [4]: dict.fromkeys(sd)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
C:\bin\Console2\<ipython console> in <module>()
C:\bin\Console2\<ipython console> in __getitem__(self, key)
KeyError: 0
What exactly is happening here? And can it be resolved, other than using something like dict.fromkeys(sd.d)?
|
[
"To create the dict, fromkeys iterates over its argument. So it must be an iterator. One way to make it work is to add an __iter__ method to your dict-like:\ndef __iter__(self):\n return iter(self.d)\n\n",
"instance of SemiDict is not a sequence. I'd imagine the most obvious solution would be to inherit from dict, why don't you do it?\n"
] |
[
6,
1
] |
[] |
[] |
[
"dictionary",
"python"
] |
stackoverflow_0001509153_dictionary_python.txt
|
Q:
How to search XPath inside Python ClientForm object?
I've got a form returned by the Python mechanize Browser, obtained via the forms() method. How can I perform an XPath search inside the form node, that is, among descendant nodes of the HTML form node? TIA
Upd:
How do I save the HTML code of the form?
A:
By parsing the browser contents with lxml, which has xpath support.
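A minimal sketch of that idea (the URL and the form name are made-up examples), which also covers the update about saving the form's HTML:
import mechanize
from lxml import html

br = mechanize.Browser()
response = br.open('http://example.com/page-with-form')   # placeholder URL
tree = html.fromstring(response.read())

# XPath among the descendants of a particular form node:
inputs = tree.xpath('//form[@name="login"]//input')

# The update: serialize just the form node back to HTML.
form_html = html.tostring(tree.xpath('//form')[0])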
|
How to search XPath inside Python ClientForm object?
|
I've got a form returned by the Python mechanize Browser, obtained via the forms() method. How can I perform an XPath search inside the form node, that is, among descendant nodes of the HTML form node? TIA
Upd:
How do I save the HTML code of the form?
|
[
"By parsing the browser contents with lxml, which has xpath support.\n"
] |
[
1
] |
[] |
[] |
[
"mechanize",
"python"
] |
stackoverflow_0001509404_mechanize_python.txt
|
Q:
How do I get fluent in Python?
Once you have learned the basic commands in Python, you are often able to solve most programming problems you face. But the way in which this is done is not really Pythonic.
What is common is to use the classical C++ or Java mentality to solve problems. But Python is more than that. It incorporates functional programming, has many libraries available, is object oriented, and has its own idioms. In short, there are often better, shorter, faster, more elegant ways to do the same thing.
It is a little bit like learning a new language. First you learn the words and the grammar, but then you need to get fluent.
Once you have learned the language, how do you get fluent in Python? How have you done it? What books mostly helped?
A:
Read other people's code. Write some of your own code. Repeat for a year or two.
Study the Python documentation and learn the built-in modules.
Read Python in a Nutshell.
Subscribe your RSS reader to the Python tag on Stack Overflow.
A:
Have you read the Python Cookbook? It's a pretty good source for Pythonic code.
Plus you'll find much more from Alex Martelli on Stack Overflow.
A:
I can tell you what I've done.
Idiomatic Python
Bookmark SO with the python keyword.
Read other's good python code.
The Python Challenge
That order is probably good, too. This is where things get fun.
A:
More Pythonic? Start with a simple import.
import this
And add practice.
A:
The same way you get fluent in any language - program a lot.
I'd recommend working on a project (hopefully something you'll actually use later). While working on the project, every time you need some basic piece of functionality, try writing it yourself, and then checking online how other people did it.
This both lets you learn how to actually get stuff done in Python, but will also allow you to see what are the "Pythonic" counterparts to common coding cases.
A:
There are some Python textbooks that not only teach you the language, they teach you the philosophy of the language (why it is the way it is) and teach you common idioms. I learned from the book Learning Python by Mark Lutz and I recommend it.
If you already know the basics of the language, you can Google search for "Python idioms" and you will find some gems. Here are a few:
http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html
http://docs.python.org/dev/howto/doanddont.html
http://jaynes.colorado.edu/PythonIdioms.html
If you read some good Python code and get a feel for why it was written the way it was, you can learn some cool things. Here is a recent discussion of modules worth reading to improve your Pythonic coding skills.
Good luck!
EDIT: Oh, I should add: +1 for Python Cookbook and Alex Martelli. I didn't mention these because Jon-Eric already did.
A:
I guess becoming fluent in any programming language is the same as becoming fluent in a spoken/written language. You do that by speaking and listening to the language, a lot.
So my advice is to do some projects using python, and you will soon become fluent in it. You can complement this by reading other people's code who are more experienced in the language to see how they solve certain problems.
A:
Read existing projects known for technical excellence.
Some of the ones I'd recommend are:
Django
SQLAlchemy
Python's /lib/json
Python's /lib/test
Visit http://pythonsource.com/ for many other modules written in Python.
|
How do I get fluent in Python?
|
Once you have learned the basic commands in Python, you are often able to solve most programming problems you face. But the way in which this is done is not really Pythonic.
What is common is to use the classical C++ or Java mentality to solve problems. But Python is more than that. It has functional programming incorporated, many libraries available, object orientation, and its own ways. In short, there are often better, shorter, faster, more elegant ways to do the same thing.
It is a little bit like learning a new language. First you learn the words and the grammar, but then you need to get fluent.
Once you have learned the language, how do you get fluent in Python? How have you done it? What books mostly helped?
|
[
"Read other people's code. Write some of your own code. Repeat for a year or two.\nStudy the Python documentation and learn the built-in modules.\nRead Python in a Nutshell.\nSubscribe your RSS reader to the Python tag on Stack Overflow.\n",
"Have you read the Python Cookbook? It's a pretty good source for Pythonic.\nPlus you'll find much more from Alex Martelli on Stack Overflow.\n",
"I can tell you what I've done.\n\nIdiomatic Python\nBookmark SO with the python keyword.\nRead other's good python code.\nThe Python Challenge\n\nThat order is probably good, too. This is where things get fun.\n",
"More Pythonic? Start with a simple import.\nimport this\n\nAnd add practice.\n",
"The same way you get fluent in any language - program a lot.\nI'd recommend working on a project (hopefully something you'll actually use later). While working on the project, every time you need some basic piece of functionality, try writing it yourself, and then checking online how other people did it.\nThis both lets you learn how to actually get stuff done in Python, but will also allow you to see what are the \"Pythonic\" counterparts to common coding cases.\n",
"There are some Python textbooks that not only teach you the language, they teach you the philosophy of the language (why it is the way it is) and teach you common idioms. I learned from the book Learning Python by Mark Lutz and I recommend it.\nIf you already know the basics of the language, you can Google search for \"Python idioms\" and you will find some gems. Here are a few:\nhttp://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html\nhttp://docs.python.org/dev/howto/doanddont.html\nhttp://jaynes.colorado.edu/PythonIdioms.html\nIf you read some good Python code and get a feel for why it was written the way it was, you can learn some cool things. Here is a recent discussion of modules worth reading to improve your Pythonic coding skills.\nGood luck!\nEDIT: Oh, I should add: +1 for Python Cookbook and Alex Martelli. I didn't mention these because Jon-Eric already did.\n",
"I guess becoming fluent in any programming language is the same as becoming fluent in a spoken/written language. You do that by speaking and listening to the language, a lot.\nSo my advice is to do some projects using python, and you will soon become fluent in it. You can complement this by reading other people's code who are more experienced in the language to see how they solve certain problems. \n",
"Read existing projects known for technical excelence.\nSome of the ones I'd recommend are:\n\nDjango\nSQLAlchemy\nPython's /lib/json\nPython's /lib/test\nVisit http://pythonsource.com/ for many other modules written in Python.\n\n"
] |
[
11,
8,
7,
5,
3,
3,
2,
2
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001507041_python.txt
|
Q:
How to arrange the source code of an application made with SQLAlchemy and a graphic interface?
I'm developing an application using SQLAlchemy and wxPython that I'm trying to keep distributed in separated modules consisting of Business logic, ORM and GUI.
I'm not completely sure how to do this in a pythonic way.
Given that mapping() has to be called in order for the objects to be used, I thought of putting it on the __init__.py of the business logic, but keeping all the table definitions within a separate orm.py module.
Should I keep something like:
/Business
/__init__.py
| mapping (module1.Class1, orm.table1)
|
/module1.py
Class1
/orm.py
import
table1 = Table()
/GUI
/main.py
| import business
/crud.py
or something like
/Business
/__init__.py
| import
|
/module1.py
Class1
table1 = Table()
mapping (module1.Class1, orm.table1)
/GUI
/main.py
| import business
/crud.py
Is the first approach recommended? Is there any other option? I've seen the second way, but I don't like putting the database handling code and the business logic code within the same module. Am I overthinking it? Is it really not that big a problem?
A:
I find this document by Jp Calderone to be a great tip on how to (not) structure your python project. Following it you won't have issues. I'll reproduce the entire text here:
Filesystem structure of a Python project
Do:
name the directory something
related to your project. For example,
if your project is named "Twisted",
name the top-level directory for its
source files Twisted. When you do
releases, you should include a version
number suffix: Twisted-2.5.
create a directory Twisted/bin and
put your executables there, if you
have any. Don't give them a .py
extension, even if they are Python
source files. Don't put any code in
them except an import of and call to a
main function defined somewhere else
in your projects.
If your project
is expressable as a single Python
source file, then put it into the
directory and name it something
related to your project. For example,
Twisted/twisted.py. If you need
multiple source files, create a
package instead (Twisted/twisted/,
with an empty
Twisted/twisted/__init__.py) and
place your source files in it. For
example,
Twisted/twisted/internet.py.
put
your unit tests in a sub-package of
your package (note - this means that
the single Python source file option
above was a trick - you always need at
least one other file for your unit
tests). For example,
Twisted/twisted/test/. Of course,
make it a package with
Twisted/twisted/test/__init__.py.
Place tests in files like
Twisted/twisted/test/test_internet.py.
add Twisted/README and Twisted/setup.py to explain and
install your software, respectively,
if you're feeling nice.
Don't:
put your source in a directory
called src or lib. This makes it
hard to run without installing.
put
your tests outside of your Python
package. This makes it hard to run the
tests against an installed version.
create a package that only has a
__init__.py and then put all your
code into __init__.py. Just make a
module instead of a package, it's
simpler.
try to come up with
magical hacks to make Python able to
import your module or package without
having the user add the directory
containing it to their import path
(either via PYTHONPATH or some other
mechanism). You will not correctly
handle all cases and users will get
angry at you when your software
doesn't work in their environment.
|
How to arrange the source code of an application made with SQLAlchemy and a graphic interface?
|
I'm developing an application using SQLAlchemy and wxPython that I'm trying to keep distributed in separated modules consisting of Business logic, ORM and GUI.
I'm not completely sure how to do this in a pythonic way.
Given that mapping() has to be called in order for the objects to be used, I thought of putting it on the __init__.py of the business logic, but keeping all the table definitions within a separate orm.py module.
Should I keep something like:
/Business
/__init__.py
| mapping (module1.Class1, orm.table1)
|
/module1.py
Class1
/orm.py
import
table1 = Table()
/GUI
/main.py
| import business
/crud.py
or something like
/Business
/__init__.py
| import
|
/module1.py
Class1
table1 = Table()
mapping (module1.Class1, orm.table1)
/GUI
/main.py
| import business
/crud.py
Is the first approach recommended? Is there any other option? I've seen the second way, but I don't like putting the database handling code and the business logic code within the same module. Am I overthinking it? Is it really not that big a problem?
|
[
"I find this document by Jp Calderone to be a great tip on how to (not) structure your python project. Following it you won't have issues. I'll reproduce the entire text here:\n\nFilesystem structure of a Python project\nDo:\n\nname the directory something\n related to your project. For example,\n if your project is named \"Twisted\",\n name the top-level directory for its\n source files Twisted. When you do\n releases, you should include a version\n number suffix: Twisted-2.5. \ncreate a directory Twisted/bin and\n put your executables there, if you\n have any. Don't give them a .py\n extension, even if they are Python\n source files. Don't put any code in\n them except an import of and call to a\n main function defined somewhere else\n in your projects. \nIf your project\n is expressable as a single Python\n source file, then put it into the\n directory and name it something\n related to your project. For example,\n Twisted/twisted.py. If you need\n multiple source files, create a\n package instead (Twisted/twisted/,\n with an empty\n Twisted/twisted/__init__.py) and\n place your source files in it. For\n example,\n Twisted/twisted/internet.py. \nput\n your unit tests in a sub-package of\n your package (note - this means that\n the single Python source file option\n above was a trick - you always need at\n least one other file for your unit\n tests). For example,\n Twisted/twisted/test/. Of course,\n make it a package with\n Twisted/twisted/test/__init__.py.\n Place tests in files like\n Twisted/twisted/test/test_internet.py.\nadd Twisted/README and Twisted/setup.py to explain and\n install your software, respectively,\n if you're feeling nice.\n\nDon't:\n\nput your source in a directory\n called src or lib. This makes it\n hard to run without installing. \nput\n your tests outside of your Python\n package. This makes it hard to run the\n tests against an installed version. \ncreate a package that only has a\n __init__.py and then put all your\n code into __init__.py. Just make a\n module instead of a package, it's\n simpler. \ntry to come up with\n magical hacks to make Python able to\n import your module or package without\n having the user add the directory\n containing it to their import path\n (either via PYTHONPATH or some other\n mechanism). You will not correctly\n handle all cases and users will get\n angry at you when your software\n doesn't work in their environment.\n\n\n"
] |
[
6
] |
[] |
[] |
[
"directory_structure",
"python",
"sqlalchemy",
"version_control"
] |
stackoverflow_0001506887_directory_structure_python_sqlalchemy_version_control.txt
|
Q:
Is there anything like Project Sprouts but implemented in Python?
Since our entire build system is written in Python, I'm wondering if there is anything like Sprouts that I could leverage to integrate Flex builds / development into our codebase?
Sprouts looks nice and all, but I don't want to introduce another build-time dependency to our projects (namely Ruby).
Thanks
A:
No, I don't believe there is anything similar for Python.
It looks like Sprouts is made up of a lot of Rake build files, so you could try to roll your own build system in Python using tools like Paver, fabric, Buildout recipes, paster templates, etc. That would give you the ability to generate new projects like Sprouts does.
Overall I wouldn't worry too much about adding Ruby as a dependency. If it does what you need then you'll save a lot of time by using it instead of hunting for something written in Python (or worse yet rewriting its functionality).
|
Is there anything like Project Sprouts but implemented in Python?
|
Since our entire build system is written in Python, I'm wondering if there is anything like Sprouts that I could leverage to integrate Flex builds / development into our codebase?
Sprouts looks nice and all, but I don't want to introduce another build-time dependency to our projects (namely Ruby).
Thanks
|
[
"No, I don't believe there is anything similar for Python.\nIt looks like Sprouts is made up of a lot of Rake build files, so you could try to roll your own build system in Python using tools like Paver, fabric, Buildout recipes, paster templates, etc. That would give you the ability to generate new projects like Sprouts does.\nOverall I wouldn't worry too much about adding Ruby as a dependency. If it does what you need then you'll save a lot of time by using it instead of hunting for something written in Python (or worse yet rewriting its functionality).\n"
] |
[
3
] |
[] |
[] |
[
"apache_flex",
"python",
"ruby"
] |
stackoverflow_0001508743_apache_flex_python_ruby.txt
|
Q:
How to tell when nosetest is running programmatically
nosetest is the default test framework in TurboGears 2.0. The application has a websetup.py module that initialises the database. I use MySQL for my development and production environment and websetup works fine, but nosetest uses SQLite in memory, and when it tries to initialise the DB it sends an error:
TypeError: SQLite Date, Time, and
DateTime types only accept Python
datetime objects as input.
I've detected when this happens, and it is in the import phase:
csvreader = csv.reader(open('res/products.csv'), delimiter=",", quotechar="'")
for row in csvreader:
    p = model.Product(row[1], row[2], row[3], row[4] + ".jpg")
    # Even tried to convert the date to a sqlalchemy type
    # need to put a conditional here, when testing I don't care this date
    import sqlalchemy
    dateadded = sqlalchemy.types.DateTime(row[5])
    p.dateAdded = dateadded
    p.brand_id = row[6]
    p.code = row[3]
    ccat = model.DBSession.query(model.Category)\
        .filter(model.Category.id==int(row[8]) + 3).one()
    p.categories.append(ccat)
    p.normalPrice = row[9]
    p.specialPrice = row[10]
    p.discountPrice = row[11]
    model.DBSession.add(p)
How can I tell when nosetest is running? I've tried:
if globals().has_key('ModelTest'):
and
if vars().has_key('ModelTest'):
The first one gives no results and the second one gives an error.
A:
I don't use TurboGears, but is there not a setting or global somewhere that indicates that the tests are running? In large systems, there are often small changes that need to be made when running tests. Switching between SQLite and MySQL is just one example.
A:
Presumably if nose is running, the 'nose' top-level module will have been imported. You should be able to test that with
if 'nose' in sys.modules:
print "Nose is running, or at least has been imported!"
#or whatever you need to do if nose is running
Of course this is not a foolproof test; it assumes that there's no other reason why nose would have been imported.
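As a minimal sketch of how that check might guard the date handling inside the loop from the question (p and row are the loop variables there; the CSV date format is only an assumption):
import sys
import datetime

TESTING = 'nose' in sys.modules

if TESTING:
    p.dateAdded = datetime.datetime.now()   # under tests, the real date doesn't matter
else:
    # assumed CSV format; adjust to whatever res/products.csv actually contains
    p.dateAdded = datetime.datetime.strptime(row[5], '%Y-%m-%d %H:%M:%S')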
|
How to tell when nosetest is running programmatically
|
nosetest is the default test framework in TurboGears 2.0. The application has a websetup.py module that initialises the database. I use MySQL for my development and production environment and websetup works fine, but nosetest uses SQLite in memory, and when it tries to initialise the DB it sends an error:
TypeError: SQLite Date, Time, and
DateTime types only accept Python
datetime objects as input.
I've detected when this happens, and it is in the import phase:
csvreader = csv.reader(open('res/products.csv'), delimiter=",", quotechar="'")
for row in csvreader:
    p = model.Product(row[1], row[2], row[3], row[4] + ".jpg")
    # Even tried to convert the date to a sqlalchemy type
    # need to put a conditional here, when testing I don't care this date
    import sqlalchemy
    dateadded = sqlalchemy.types.DateTime(row[5])
    p.dateAdded = dateadded
    p.brand_id = row[6]
    p.code = row[3]
    ccat = model.DBSession.query(model.Category)\
        .filter(model.Category.id==int(row[8]) + 3).one()
    p.categories.append(ccat)
    p.normalPrice = row[9]
    p.specialPrice = row[10]
    p.discountPrice = row[11]
    model.DBSession.add(p)
How can I tell when nosetest is running? I've tried:
if globals().has_key('ModelTest'):
and
if vars().has_key('ModelTest'):
The first one gives no results and the second one gives an error.
|
[
"I don't use TurboGears, but is there not a setting or global somewhere that indicates that the tests are running? In large systems, there are often small changes that need to be made when running tests. Switching between SQLite and MySQL is just one example.\n",
"Presumably if nose is running, the 'nose' top-level module will have been imported. You should be able to test that with\nif 'nose' in sys.modules:\n print \"Nose is running, or at least has been imported!\"\n #or whatever you need to do if nose is running\n\nOf course this is not a foolproof test, which assumes that there's no other reason why nose would be imported.\n"
] |
[
0,
0
] |
[] |
[] |
[
"nosetests",
"python",
"sqlalchemy"
] |
stackoverflow_0001500214_nosetests_python_sqlalchemy.txt
|
Q:
Catch contents of PHP session under Apache with Python (mod_wsgi)?
Is there a way to catch the contents of the PHP session variable $_SESSION['user_id'] with a mod_wsgi Python script? I'm running a script in the background that will decide whether or not the user may proceed to view the document.
I would like to do something like this:
def allow_access(environ, host):
    allow_access = False
    if environ['SCRIPT_NAME'] == 'forbidden_dir':
        if session['user_id'] == '1':
            allow_access = True
    if allow_access:
        return True
    else:
        return False
Is it possible?
A:
If it's possible, it's not easy; PHP stores session variables in files on the server in its own serialized format.
Your best option might be to write a php page that prints all session variables. (Hard-code it to only serve to localhost.) Open the url to that page from within your python script. Add a header to the url request with the session info. Then, once the php page is loaded in Python, parse the input.
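A rough sketch of that idea, assuming a hypothetical session_dump.php that is served only to localhost and simply does echo json_encode($_SESSION):
import urllib2
import json   # stdlib from Python 2.6; use simplejson on older versions

def php_session(environ):
    req = urllib2.Request('http://127.0.0.1/session_dump.php')
    req.add_header('Cookie', environ.get('HTTP_COOKIE', ''))   # forward PHPSESSID
    return json.loads(urllib2.urlopen(req).read())

def allow_access(environ, host):
    if 'forbidden_dir' in environ.get('SCRIPT_NAME', ''):
        return php_session(environ).get('user_id') == '1'
    return True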
A:
please don't do this:
if allow_access:
    return True
else:
    return False
when you can do: return allow_access.
|
Catch contents of PHP session under Apache with Python (mod_wsgi)?
|
Is there a way to catch the contents of the PHP session variable $_SESSION['user_id'] with a mod_wsgi Python script? I'm running a script in the background that will decide whether or not the user may proceed to view the document.
I would like to do something like this:
def allow_access(environ, host):
    allow_access = False
    if environ['SCRIPT_NAME'] == 'forbidden_dir':
        if session['user_id'] == '1':
            allow_access = True
    if allow_access:
        return True
    else:
        return False
Is it possible?
|
[
"If it's possible, it's not easy; apache stores session variables in files in a special format.\nYour best option might be to write a php page that prints all session variables. (Hard-code it to only serve to localhost.) Open the url to that page from within your python script. Add a header to the url request with the session info. Then, once the php page is loaded in Python, parse the input.\n",
"please don't do this:\nif allow_access:\n return True\nelse:\n return False\n\nwhen you can do: return allow_access.\n"
] |
[
3,
3
] |
[] |
[] |
[
"apache",
"mod_wsgi",
"python"
] |
stackoverflow_0001509641_apache_mod_wsgi_python.txt
|
Q:
match an alternative url - regular expression django urls
I want a Django URL with just 2 alternatives /module/in/ or /module/out/
I'm using
url(r'^(?P<status>\w+[in|out])/$',
    'by_status',
    name='module_by-status'),
But it matches other patterns like /module/i/, /module/n/ and /module/ou/.
Any hint is appreciated :)
A:
Try r'^(?P<status>in|out)/$'
You need to remove \w+, which matches one or more alphanumeric characters or underscores. The regular expression suggested in bstpierre's answer, '^(?P<status>\w+(in|out))/$' will match helloin, good_byeout and so on.
Note that if you use the vertical bar (pipe) character | in your url patterns, Django cannot reverse the regular expression. If you need to use the url tag in your templates, you would need to write two url patterns, one for in and one for out.
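A sketch of that two-pattern variant, in case you need reversing (the view module path and the -in/-out names are placeholders):
from django.conf.urls.defaults import patterns, url   # Django 1.x-era imports

urlpatterns = patterns('myapp.views',
    url(r'^(?P<status>in)/$',  'by_status', name='module_by-status-in'),
    url(r'^(?P<status>out)/$', 'by_status', name='module_by-status-out'),
)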
A:
You want (in|out), the [] you are using indicate a character class containing the characters 'i', 'n', '|', 'o', 'u', 't'.
|
match an alternative url - regular expression django urls
|
I want a Django URL with just 2 alternatives /module/in/ or /module/out/
I'm using
url(r'^(?P<status>\w+[in|out])/$',
    'by_status',
    name='module_by-status'),
But it matches other patterns like /module/i/, /module/n/ and /module/ou/.
Any hint is appreciated :)
|
[
"Try r'^(?P<status>in|out)/$'\nYou need to remove \\w+, which matches one or more alphanumeric characters or underscores. The regular expression suggested in bstpierre's answer, '^(?P<status>\\w+(in|out))/$' will match helloin, good_byeout and so on.\nNote that if you use the vertical bar (pipe) character | in your url patterns, Django cannot reverse the regular expression. If you need to use the url tag in your templates, you would need to write two url patterns, one for in and one for out.\n",
"You want (in|out), the [] you are using indicate a character class containing the characters 'i', 'n', '|', 'o', 'u', 't'.\n"
] |
[
10,
1
] |
[] |
[] |
[
"django",
"django_urls",
"python",
"regex"
] |
stackoverflow_0001506646_django_django_urls_python_regex.txt
|
Q:
Can seek and tell work with UTF-8 encoded documents in Python?
I have an application that generates some large log files > 500MB.
I have written some utilities in Python that allow me to quickly browse the log file and find data of interest. But I now get some datasets where the file is too big to load it all into memory.
I thus want to scan the document once, build an index and then only load the section of the document into memory that I want to look at at a time.
This works for me when I open a 'file', read it one line at a time and store the offset from file.tell().
I can then come back to that section of the file later with file.seek( offset, 0 ).
My problem is however that I may have UTF-8 in the log files so I need to open them with the codecs module (codecs.open(<filename>, 'r', 'utf-8')). With the resulting object I can call seek and tell but they do not match up.
I assume that codecs needs to do some buffering or maybe it returns character counts instead of bytes from tell?
Is there a way around this?
A:
If true, this sounds like a bug or limitation of the codecs module, as it's probably confusing byte and character offsets.
I would use the regular open() function for opening the file, then seek()/tell() will give you byte offsets that are always consistent. Whenever you want to read, use f.readline().decode('utf-8').
Beware though, that using the f.read() function can land you in the middle of a multi-byte character, thus producing an UTF-8 decode error. readline() will always work.
This doesn't transparently handle the byte-order mark for you, but chances are your log files do not have BOMs anyway.
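A minimal sketch of that approach (the file name is a placeholder): build the index of byte offsets with a plain file object, then seek back and decode only the line you need.
offsets = []
f = open('big.log', 'rb')
while True:
    offset = f.tell()
    line = f.readline()
    if not line:
        break
    offsets.append(offset)

# later: jump straight to, say, the 43rd line and decode just that line
f.seek(offsets[42], 0)
print f.readline().decode('utf-8')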
A:
For UTF-8, you don't actually need to open the file with codecs.open. Instead, it is reliable to read the file as a byte string first, and only then decode an individual section (invoking the .decode method on the string). Breaking the file at line boundaries is safe; the only unsafe way to split it would be in the middle of a multi-byte character (which you can recognize from its byte value > 128).
A:
Much of what goes on with UTF8 in python makes sense if you look at how it was done in Python 3. In your case, it'll make quite a bit more sense if you read the Files chapter in Dive into Python 3: http://diveintopython3.org/files.html
The short of it, though, is that file.seek and file.tell work with byte positions, whereas unicode characters can take up multiple bytes. Thus, if you do:
f.seek(10)
f.read(1)
f.tell()
You can easily get something other than 11, depending on what length the one character you read was.
A:
Update: You can't do seek/tell on the object returned by codecs.open(). You need to use a normal file, and decode the strings to unicode after reading.
I do not know why it doesn't work, but I can't make it work. The seek seems to only work once, for example. Then you need to close and reopen the file, which is of course not useful.
The tell does not use character positions, but it also doesn't show you where your position in the stream is (probably only where the underlying file object is in its reading from disk).
So probably because of some sort of underlying buffering, you can't do it. But decoding after reading works just fine, so go for that.
|
Can seek and tell work with UTF-8 encoded documents in Python?
|
I have an application that generates some large log files > 500MB.
I have written some utilities in Python that allow me to quickly browse the log file and find data of interest. But I now get some datasets where the file is too big to load it all into memory.
I thus want to scan the document once, build an index and then only load the section of the document into memory that I want to look at at a time.
This works for me when I open a 'file', read it one line at a time and store the offset from file.tell().
I can then come back to that section of the file later with file.seek( offset, 0 ).
My problem is however that I may have UTF-8 in the log files so I need to open them with the codecs module (codecs.open(<filename>, 'r', 'utf-8')). With the resulting object I can call seek and tell but they do not match up.
I assume that codecs needs to do some buffering or maybe it returns character counts instead of bytes from tell?
Is there a way around this?
|
[
"If true, this sounds like a bug or limitation of the codecs module, as it's probably confusing byte and character offsets.\nI would use the regular open() function for opening the file, then seek()/tell() will give you byte offsets that are always consistent. Whenever you want to read, use f.readline().decode('utf-8').\nBeware though, that using the f.read() function can land you in the middle of a multi-byte character, thus producing an UTF-8 decode error. readline() will always work.\nThis doesn't transparently handle the byte-order mark for you, but chances are your log files do not have BOMs anyway.\n",
"For UTF-8, you don't actually need to open the file with codecs.open. Instead, it is reliable to read the file as a byte string first, and only then decode an individual section (invoking the .decode method on the string). Breaking the file at line boundaries is safe; the only unsafe way to split it would be in the middle of a multi-byte character (which you can recognize from its byte value > 128).\n",
"Much of what goes on with UTF8 in python makes sense if you look at how it was done in Python 3. In your case, it'll make quite a bit more sense if you read the Files chapter in Dive into Python 3: http://diveintopython3.org/files.html\nThe short of it, though, is that file.seek and file.tell work with byte positions, whereas unicode characters can take up multiple bytes. Thus, if you do:\nf.seek(10)\nf.read(1)\nf.tell()\n\nYou can easily get something other than 17, depending on what length the one character you read was.\n",
"Update: You can't do seek/tell on the object returned by codec.open(). You need to use a normal file, and decode the strings to unicode after reading.\nI do not know why it doesn't work but I can't make it work. The seek seems to only work once, for example. Then you need to close and reopen the file, which is of course not useful.\nThe tell does not use character positions, but doesn't show you where your position in the stream is (but probably where the underlying file object is in reading from disk).\nSo probably because of some sort of underlying buffering, you can't do it. But deocding after reading works just fine, so go for that.\n"
] |
[
3,
2,
1,
0
] |
[] |
[] |
[
"codec",
"python",
"seek",
"utf_8"
] |
stackoverflow_0001510188_codec_python_seek_utf_8.txt
|
Q:
How to use Sphinx auto-documentation when Python file won't compile
This question is even harder today because I haven't had any luck using the search function on the Sphinx homepage.
I have a group of modules that I want to be documented from the docstrings. However, these are not pure Python scripts. They won't compile as is, because they are run from a C# application that creates a new variable in the executing scope.
To the Python compiler, it looks like I have an undefined method (which, technically I do, until C# creates the IronPython script engine and creates the method).
When I run:
sphinx-build -b html output/html
I get:
NameError: name 'injected_method' is not defined
How do I get Sphinx to ignore compilation errors and just generate my documentation?
EDIT:
If anybody knows of an alternative to Sphinx (like Epydoc) that does not have to compile the Python script to get the function signatures and docstrings, that would be helpful as well. Sphinx is the best-looking documentation generator, but I'll abandon it if I have to.
A:
Well, you could try:
Wrapping the usage of injected_method in a try/except.
Writing a script that filters out all python-code that is run on import time, and feeds the result into Sphinx.
You could....ok, I have no more ideas. :)
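For the first suggestion (wrapping the use of injected_method in a try/except), a minimal sketch; injected_method is the name from the question and the stub body is purely illustrative:
# At the top of the module being documented: fall back to a stub when the
# C#/IronPython host has not injected the real method (e.g. during a Sphinx build).
try:
    injected_method
except NameError:
    def injected_method(*args, **kwargs):
        pass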
A:
Perhaps you could define injected_method as a empty function so that the documentation will work. You'll need to make sure that the definition of injected_method that you're injecting happens after the new injected_method stub.
#By empty function I mean a function that looks like this
def injected_method():
    pass
A:
Okay, I found a way to get around the Errors.
When setting up the embedded scripting environment, instead of using:
ScriptScope.SetVariable("injected_method", myMethod);
I am now using:
ScriptRuntime.Globals.SetVariable("injected_method", myMethod);
And then, in the script:
import injected_method
Then I created a dummy injected_method.py file in my search path, which is blank. I delete the dummy file during the build of my C# project to avoid any conflicts.
|
How to use Sphinx auto-documentation when Python file won't compile
|
This question is even harder today because I haven't had any luck using the search function on the Sphinx homepage.
I have a group of modules that I want to be documented from the docstrings. However, these are not pure Python scripts. They won't compile as is, because they are run from a C# application that creates a new variable in the executing scope.
To the Python compiler, it looks like I have an undefined method (which, technically I do, until C# creates the IronPython script engine and creates the method).
When I run:
sphinx-build -b html output/html
I get:
NameError: name 'injected_method' is not defined
How do I get Sphinx to ignore compilation errors and just generate my documentation?
EDIT:
If anybody knows of an alternative to Sphinx (like Epydoc) that does not have to compile the Python script to get the function signatures and docstrings, that would be helpful as well. Sphinx is the best-looking documentation generator, but I'll abandon it if I have to.
|
[
"Well, you could try:\n\nWrapping the usage of injected_method in a try/except.\nWriting a script that filters out all python-code that is run on import time, and feeds the result into Sphinx.\nYou could....ok, I have no more ideas. :)\n\n",
"Perhaps you could define injected_method as a empty function so that the documentation will work. You'll need to make sure that the definition of injected_method that you're injecting happens after the new injected_method stub.\n#By empty function I mean a function that looks like this\ndef injected_method():\n pass\n\n",
"Okay, I found a way to get around the Errors.\nWhen setting up the embedded scripting environment, instead of using:\nScriptScope.SetVariable(\"injected_method\", myMethod);\n\nI am now using:\nScriptRuntime.Globals.SetVariable(\"injected_method\", myMethod);\n\nAnd then, in the script:\nimport injected_method\n\nThen I created a dummy injected_method.py file in my search path, which is blank. I delete the dummy file during the build of my C# project to avoid any conflicts.\n"
] |
[
3,
0,
0
] |
[] |
[] |
[
"ironpython",
"python",
"python_sphinx"
] |
stackoverflow_0001506673_ironpython_python_python_sphinx.txt
|
Q:
fast string modification in python
This is partially a theoretical question:
I have a string (say UTF-8), and I need to modify it so that each character (not byte) becomes 2 characters, for instance:
"Nissim" becomes "N-i-s-s-i-m-"
"01234" becomes "0a1b2c3d4e"
and so on.
I would suspect that naive concatenation in a loop would be too expensive (it IS the bottleneck, this is supposed to happen all the time).
I would either use an array (pre-allocated) or try to make my own C module to handle this.
Anyone has better ideas for this kind of thing?
(Note that the problem is always about multibyte encodings, and must be solved for UTF-8 as well),
Oh, and it's Python 2.5, so no shiny Python 3 thingies are available here.
Thanks
A:
@gnosis, beware of all the well-intentioned responders saying you should measure the times: yes, you should (because programmers' instincts are often off-base about performance), but measuring a single case, as in all the timeit examples proffered so far, misses a crucial consideration -- big-O.
Your instincts are correct: in general (with a very few special cases where recent Python releases can optimize things a bit, but they don't stretch very far), building a string by a loop of += over the pieces (or a reduce and so on) must be O(N**2) due to the many intermediate object allocations and the inevitable repeated copying of those object's content; joining, regular expressions, and the third option that was not mentioned in the above answers (write method of cStringIO.StringIO instances) are the O(N) solutions and therefore the only ones worth considering unless you happen to know for sure that the strings you'll be operating on have modest upper bounds on their length.
So what, if any, are the upper bounds in length on the strings you're processing? If you can give us an idea, benchmarks can be run on representative ranges of lengths of interest (for example, say, "most often less than 100 characters but some % of the time maybe a couple thousand characters" would be an excellent spec for this performance evaluation: IOW, it doesn't need to be extremely precise, just indicative of your problem space).
I also notice that nobody seems to follow one crucial and difficult point in your specs: that the strings are Python 2.5 multibyte, UTF-8 encoded, strs, and the insertions must happen only after each "complete character", not after each byte. Everybody seems to be "looping on the str", which gives each byte, not each character as you so clearly specify.
There's really no good, fast way to "loop over characters" in a multibyte-encoded byte str; the best one can do is to .decode('utf-8'), giving a unicode object -- process the unicode object (where loops do correctly go over characters!), then .encode it back at the end. By far the best approach in general is to only, exclusively use unicode objects, not encoded strs, throughout the heart of your code; encode and decode to/from byte strings only upon I/O (if and when you must because you need to communicate with subsystems that only support byte strings and not proper Unicode).
So I would strongly suggest that you consider this "best approach" and restructure your code accordingly: unicode everywhere, except at the boundaries where it may be encoded/decoded if and when necessary only. For the "processing" part, you'll be MUCH happier with unicode objects than you would be lugging around balky multibyte-encoded strings!-)
Edit: forgot to comment on a possible approach you mention -- array.array. That's indeed O(N) if you are only appending to the end of the new array you're constructing (some appends will make the array grow beyond previously allocated capacity and therefore require a reallocation and copying of data, but, just like for list, a mildly exponential overallocation strategy allows append to be amortized O(1), and therefore N appends to be O(N)).
However, to build an array (again, just like a list) by repeated insert operations in the middle of it is O(N**2), because each of the O(N) insertions must shift all the O(N) following items (assuming the number of previously existing items and the number of newly inserted ones are proportional to each other, as seems to be the case for your specific requirements).
So, an array.array('u'), with repeated appends to it (not inserts!-), is a fourth O(N) approach that can solve your problem (in addition to the three I already mentioned: join, re, and cStringIO) -- those are the ones worth benchmarking once you clarify the ranges of lengths that are of interest, as I mentioned above.
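As a small sketch of the cStringIO variant mentioned above, assuming UTF-8 byte strings in and out (the separator is just the example's dash):
import cStringIO

def interleave(utf8_str, sep=u'-'):
    u = utf8_str.decode('utf-8')            # loop over characters, not bytes
    buf = cStringIO.StringIO()
    for ch in u:
        buf.write(ch.encode('utf-8'))
        buf.write(sep.encode('utf-8'))
    return buf.getvalue()                   # back to a UTF-8 byte string

print interleave("Nissim")                  # N-i-s-s-i-m-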
A:
Try to build the result with the re module. It will do the nasty concatenation under the hood, so performance should be OK. Example:
import re
re.sub(r'(.)', r'\1-', u'Nissim')

count = 1
def repl(m):
    global count
    s = m.group(1) + unicode(count)
    count += 1
    return s
re.sub(r'(.)', repl, u'Nissim')
A:
this might be an effective Python solution:
s1="Nissim"
s2="------"
s3=''.join([''.join(list(x)) for x in zip(s1,s2)])
A:
Have you tested how slow it is or how fast you need it to be? I think something like this will be fast enough:
s = u"\u0960\u0961"
ss = ''.join(sum(map(list,zip(s,"anurag")),[]))
So try the simplest approach first, and if it doesn't suffice then try to improve upon it; a C module should be the last option.
Edit: This is also the fastest
import timeit
s1="Nissim"
s2="------"
timeit.f1=lambda s1,s2:''.join(sum(map(list,zip(s1,s2)),[]))
timeit.f2=lambda s1,s2:''.join([''.join(list(x)) for x in zip(s1,s2)])
timeit.f3=lambda s1,s2:''.join(i+j for i, j in zip(s1, s2))
N=100000
print "anurag",timeit.Timer("timeit.f1('Nissim', '------')","import timeit").timeit(N)
print "dweeves",timeit.Timer("timeit.f2('Nissim', '------')","import timeit").timeit(N)
print "SilentGhost",timeit.Timer("timeit.f3('Nissim', '------')","import timeit").timeit(N)
output is
anurag 1.95547590546
dweeves 2.36131184271
SilentGhost 3.10855625505
A:
here are my timings. Note, it's py3.1
>>> s1
'Nissim'
>>> s2 = '-' * len(s1)
>>> timeit.timeit("''.join(i+j for i, j in zip(s1, s2))", "from __main__ import s1, s2")
3.5249209707199043
>>> timeit.timeit("''.join(sum(map(list,zip(s1,s2)),[]))", "from __main__ import s1, s2")
5.903614027402
>>> timeit.timeit("''.join([''.join(list(x)) for x in zip(s1,s2)])", "from __main__ import s1, s2")
6.04072124013328
>>> timeit.timeit("''.join(i+'-' for i in s1)", "from __main__ import s1, s2")
2.484378367653335
>>> timeit.timeit("reduce(lambda x, y : x+y+'-', s1, '')", "from __main__ import s1; from functools import reduce")
2.290644129319844
A:
Use Reduce.
>>> str = "Nissim"
>>> reduce(lambda x, y : x+y+'-', str, '')
'N-i-s-s-i-m-'
The same with numbers too as long as you know which char maps to which. [dict can be handy]
>>> mapper = dict([(repr(i), chr(i+ord('a'))) for i in range(9)])
>>> str1 = '0123'
>>> reduce(lambda x, y : x+y+mapper[y], str1, '')
'0a1b2c3d'
A:
string = "™¡™©€"
s1 = unicode(string, "utf-8")
s2 = '-' * len(s1)
print ''.join(sum(map(list, zip(s1, s2)), [])).encode("utf-8")
|
fast string modification in python
|
This is partially a theoretical question:
I have a string (say UTF-8), and I need to modify it so that each character (not byte) becomes 2 characters, for instance:
"Nissim" becomes "N-i-s-s-i-m-"
"01234" becomes "0a1b2c3d4e"
and so on.
I would suspect that naive concatenation in a loop would be too expensive (it IS the bottleneck, this is supposed to happen all the time).
I would either use an array (pre-allocated) or try to make my own C module to handle this.
Anyone has better ideas for this kind of thing?
(Note that the problem is always about multibyte encodings, and must be solved for UTF-8 as well),
Oh, and it's Python 2.5, so no shiny Python 3 thingies are available here.
Thanks
|
[
"@gnosis, beware of all the well-intentioned responders saying you should measure the times: yes, you should (because programmers' instincts are often off-base about performance), but measuring a single case, as in all the timeit examples proffered so far, misses a crucial consideration -- big-O.\nYour instincts are correct: in general (with a very few special cases where recent Python releases can optimize things a bit, but they don't stretch very far), building a string by a loop of += over the pieces (or a reduce and so on) must be O(N**2) due to the many intermediate object allocations and the inevitable repeated copying of those object's content; joining, regular expressions, and the third option that was not mentioned in the above answers (write method of cStringIO.StringIO instances) are the O(N) solutions and therefore the only ones worth considering unless you happen to know for sure that the strings you'll be operating on have modest upper bounds on their length.\nSo what, if any, are the upper bounds in length on the strings you're processing? If you can give us an idea, benchmarks can be run on representative ranges of lengths of interest (for example, say, \"most often less than 100 characters but some % of the time maybe a couple thousand characters\" would be an excellent spec for this performance evaluation: IOW, it doesn't need to be extremely precise, just indicative of your problem space).\nI also notice that nobody seems to follow one crucial and difficult point in your specs: that the strings are Python 2.5 multibyte, UTF-8 encoded, strs, and the insertions must happen only after each \"complete character\", not after each byte. Everybody seems to be \"looping on the str\", which give each byte, not each character as you so clearly specify.\nThere's really no good, fast way to \"loop over characters\" in a multibyte-encoded byte str; the best one can do is to .decode('utf-8'), giving a unicode object -- process the unicode object (where loops do correctly go over characters!), then .encode it back at the end. By far the best approach in general is to only, exclusively use unicode objects, not encoded strs, throughout the heart of your code; encode and decode to/from byte strings only upon I/O (if and when you must because you need to communicate with subsystems that only support byte strings and not proper Unicode).\nSo I would strongly suggest that you consider this \"best approach\" and restructure your code accordingly: unicode everywhere, except at the boundaries where it may be encoded/decoded if and when necessary only. For the \"processing\" part, you'll be MUCH happier with unicode objects than you would be lugging around balky multibyte-encoded strings!-)\nEdit: forgot to comment on a possible approach you mention -- array.array. 
That's indeed O(N) if you are only appending to the end of the new array you're constructing (some appends will make the array grow beyond previously allocated capacity and therefore require a reallocation and copying of data, but, just like for list, a midly exponential overallocation strategy allows append to be amortized O(1), and therefore N appends to be O(N)).\nHowever, to build an array (again, just like a list) by repeated insert operations in the middle of it is O(N**2), because each of the O(N) insertions must shift all the O(N) following items (assuming the number of previously existing items and the number of newly inserted ones are proportional to each other, as seems to be the case for your specific requirements).\nSo, an array.array('u'), with repeated appends to it (not inserts!-), is a fourth O(N) approach that can solve your problem (in addition to the three I already mentioned: join, re, and cStringIO) -- those are the ones worth benchmarking once you clarify the ranges of lengths that are of interest, as I mentioned above.\n",
"Try to build the result with the re module. It will do the nasty concatenation under the hood, so performance should be OK. Example:\n import re\n re.sub(r'(.)', r'\\1-', u'Nissim')\n\n count = 1\n def repl(m):\n global count\n s = m.group(1) + unicode(count)\n count += 1\n return s\n re.sub(r'(.)', repl, u'Nissim')\n\n",
"this might be a python effective solution:\ns1=\"Nissim\"\ns2=\"------\"\ns3=''.join([''.join(list(x)) for x in zip(s1,s2)])\n\n",
"have you tested how slow it is or how fast you need, i think something like this will be fast enough\ns = u\"\\u0960\\u0961\"\nss = ''.join(sum(map(list,zip(s,\"anurag\")),[]))\n\nSo try with simplest and if it doesn't suffice then try to improve upon it, C module should be last option\nEdit: This is also the fastest\nimport timeit\n\ns1=\"Nissim\"\ns2=\"------\"\n\ntimeit.f1=lambda s1,s2:''.join(sum(map(list,zip(s1,s2)),[]))\ntimeit.f2=lambda s1,s2:''.join([''.join(list(x)) for x in zip(s1,s2)])\ntimeit.f3=lambda s1,s2:''.join(i+j for i, j in zip(s1, s2))\n\nN=100000\n\nprint \"anurag\",timeit.Timer(\"timeit.f1('Nissim', '------')\",\"import timeit\").timeit(N)\nprint \"dweeves\",timeit.Timer(\"timeit.f2('Nissim', '------')\",\"import timeit\").timeit(N)\nprint \"SilentGhost\",timeit.Timer(\"timeit.f3('Nissim', '------')\",\"import timeit\").timeit(N)\n\noutput is\nanurag 1.95547590546\ndweeves 2.36131184271\nSilentGhost 3.10855625505\n\n",
"here are my timings. Note, it's py3.1\n>>> s1\n'Nissim'\n>>> s2 = '-' * len(s1)\n>>> timeit.timeit(\"''.join(i+j for i, j in zip(s1, s2))\", \"from __main__ import s1, s2\")\n3.5249209707199043\n>>> timeit.timeit(\"''.join(sum(map(list,zip(s1,s2)),[]))\", \"from __main__ import s1, s2\")\n5.903614027402\n>>> timeit.timeit(\"''.join([''.join(list(x)) for x in zip(s1,s2)])\", \"from __main__ import s1, s2\")\n6.04072124013328\n>>> timeit.timeit(\"''.join(i+'-' for i in s1)\", \"from __main__ import s1, s2\")\n2.484378367653335\n>>> timeit.timeit(\"reduce(lambda x, y : x+y+'-', s1, '')\", \"from __main__ import s1; from functools import reduce\")\n2.290644129319844\n\n",
"Use Reduce.\n>>> str = \"Nissim\"\n>>> reduce(lambda x, y : x+y+'-', str, '')\n'N-i-s-s-i-m-'\n\nThe same with numbers too as long as you know which char maps to which. [dict can be handy]\n>>> mapper = dict([(repr(i), chr(i+ord('a'))) for i in range(9)])\n>>> str1 = '0123'\n>>> reduce(lambda x, y : x+y+mapper[y], str1, '')\n'0a1b2c3d'\n\n",
"string=\"™¡™©€\"\nunicode(string,\"utf-8\")\ns2='-'*len(s1)\n''.join(sum(map(list,zip(s1,s2)),[])).encode(\"utf-8\")\n\n"
] |
[
12,
2,
2,
1,
1,
0,
0
] |
[] |
[] |
[
"python",
"string"
] |
stackoverflow_0001508464_python_string.txt
|
Q:
RESTful APIs in Django for GETTING information from a server
Any ideas for a RESTful API in Django for GETting information from a server?
What I want to do is fetch the errors from the server into a database.
For example:
The live server hosts examplewebsite.com; anything that goes wrong with that website should POST the error, and the Django app GETs the errors and inserts them into the database.
A:
Take a look at Piston, which calls itself a 'mini-framework for Django for creating RESTful APIs.'
A:
Are you just trying to fetch data over HTTP? If so, what about urllib2?
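For example, a rough sketch with urllib2, assuming a hypothetical endpoint that returns the errors as JSON:
import urllib2
import json   # stdlib from Python 2.6; use simplejson on older versions

def fetch_errors(url='http://examplewebsite.com/api/errors/'):   # placeholder URL
    response = urllib2.urlopen(url)
    return json.loads(response.read())

for err in fetch_errors():
    print err   # insert into the database here, e.g. via a Django model's objects.create()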
|
RESTful APIs in Django for GETTING information from a server
|
Any ideas for a RESTful API in Django for GETting information from a server?
What I want to do is fetch the errors from the server into a database.
For example:
The live server hosts examplewebsite.com; anything that goes wrong with that website should POST the error, and the Django app GETs the errors and inserts them into the database.
|
[
"Take a look at Piston, which calls itself a 'mini-framework for Django for creating RESTful APIs.'\n",
"Are you just trying to fetch data over HTTP? If so, what about urllib2?\n"
] |
[
2,
0
] |
[] |
[] |
[
"api",
"django",
"python",
"rest"
] |
stackoverflow_0001509621_api_django_python_rest.txt
|
Q:
How to make this Python program compile?
I have this Python code:
import re
s = "aa67bc54c9"
for t, n in re.findall(r"([a-z]+)([0-9]+)", s)
And I get this error message when I try to run it:
File "<stdin>", line 1
for t, n in re.findall(r"([a-z]+)([0-9]+)", s)
^
SyntaxError: invalid syntax
How can I solve this? I am new to Python.
A:
for starts a loop, so you need to end the line with a :, and put the loop body, indented, on the following lines.
EDIT:
For further information I suggest you go to the main documentation.
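For illustration, here is the snippet from the question with the colon and an indented body added (the print is just a placeholder body):
import re

s = "aa67bc54c9"
for t, n in re.findall(r"([a-z]+)([0-9]+)", s):   # note the trailing colon
    print t, n                                     # indented loop body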
A:
You need a colon (:) on the end of the line.
And after that line, you will need an indented statement(s) of what to actually do in the loop. If you don't want to do anything in the loop (perhaps until you get more code written) you can use the statement pass to indicate basically a no-op.
In Python, you need a colon at the end of
for statements
while statements
if/elif/else statements
try/except statements
class statements
def (function) statements
|
How to make this Python program compile?
|
I have this Python code:
import re
s = "aa67bc54c9"
for t, n in re.findall(r"([a-z]+)([0-9]+)", s)
And I get this error message when I try to run it:
File "<stdin>", line 1
for t, n in re.findall(r"([a-z]+)([0-9]+)", s)
^
SyntaxError: invalid syntax
How can I solve this? I am new to Python.
|
[
"for starts a loop, so you need to end the line with a :, and put the loop body, indented, on the following lines.\nEDIT:\nFor further information I suggest you go to the main documentation.\n",
"You need a colon (:) on the end of the line.\nAnd after that line, you will need an indented statement(s) of what to actually do in the loop. If you don't want to do anything in the loop (perhaps until you get more code written) you can use the statement pass to indicate basically a no-op.\nIn Python, you need a colon at the end of \n\nfor statements\nwhile statements\nif/elif/else statements\ntry/except statements\nclass statements\ndef (function) statements\n\n"
] |
[
7,
4
] |
[] |
[] |
[
"python",
"syntax_error"
] |
stackoverflow_0001510609_python_syntax_error.txt
|
Q:
Follow-up on iterating over a graph using XML minidom
This is a follow-up to the question (Link)
What I intend on doing is using the XML to create a graph using NetworkX. Looking at the DOM structure below, all author nodes within the same paper node should have an edge between them, and all authors that have attended the same conference should have an edge to that conference's node. To summarize, all authors that worked together on a paper should be connected to each other, and all authors who have attended a particular conference should be connected to that conference.
<conference name="CONF 2009">
<paper>
<author>Yih-Chun Hu(UIUC)</author>
<author>David McGrew(Cisco Systems)</author>
<author>Adrian Perrig(CMU)</author>
<author>Brian Weis(Cisco Systems)</author>
<author>Dan Wendlandt(CMU)</author>
</paper>
<paper>
<author>Dan Wendlandt(CMU)</author>
<author>Ioannis Avramopoulos(Princeton)</author>
<author>David G. Andersen(CMU)</author>
<author>Jennifer Rexford(Princeton)</author>
</paper>
</conference>
I've figured out how to connect authors to conferences, but I'm unsure about how to connect authors to each other. The thing that I'm having difficulty with is how to iterate over the authors that have worked on the same paper and connect them together.
dom = parse(filepath)
conference=dom.getElementsByTagName('conference')
for node in conference:
conf_name=node.getAttribute('name')
print conf_name
G.add_node(conf_name)
#The nodeValue is split in order to get the name of the author
#and to exclude the university they are part of
plist=node.getElementsByTagName('paper')
for p in plist:
author=str(p.childNodes[0].nodeValue)
author= author.split("(")
#Figure out a way to create edges between authors in the same <paper> </paper>
alist=node.getElementsByTagName('author')
for a in alist:
authortext= str(a.childNodes[0].nodeValue).split("(")
if authortext[0] in dict:
edgeQuantity=dict[authortext[0]]
edgeQuantity+=1
dict[authortext[0]]=edgeQuantity
G.add_edge(authortext[0],conf_name)
#Otherwise, add it to the dictionary and create an edge to the conference.
else:
dict[authortext[0]]= 1
G.add_node(authortext[0])
G.add_edge(authortext[0],conf_name)
i+=1
A:
I'm unsure about how to connect authors to each other.
You need to generate (author, otherauthor) pairs so you can add them as edges. The typical way to do that would be a nested iteration:
for thing in things:
    for otherthing in things:
        add_edge(thing, otherthing)
This is a naïve implementation that includes self-loops (giving an author an edge connecting himself to himself), which you may or may not want; it also includes both (1,2) and (2,1), which if you're doing an undirected graph is redundant. (In Python 2.6, the built-in permutations generator also does this.) Here's a generator that fixes these things:
def pairs(l):
    for i in range(len(l)-1):
        for j in range(i+1, len(l)):
            yield l[i], l[j]
I've not used NetworkX, but looking at the doc it seems to say you can call add_node on the same node twice (with nothing happening the second time). If so, you can discard the dict you were using to try to keep track of what nodes you'd inserted. Also, it seems to say that if you add an edge to an unknown node, it'll add that node for you automatically. So it should be possible to make the code much shorter:
for conference in dom.getElementsByTagName('conference'):
    conf_name = conference.getAttribute('name')
    for paper in conference.getElementsByTagName('paper'):
        authors = paper.getElementsByTagName('author')
        auth_names = [author.firstChild.data.split('(')[0] for author in authors]

        # Note author's conference attendance
        #
        for auth_name in auth_names:
            G.add_edge(auth_name, conf_name)

        # Note combinations of authors working on same paper
        #
        for auth_name, other_name in pairs(auth_names):
            G.add_edge(auth_name, other_name)
A:
I'm not entirely sure what you're looking for, but based on your description I threw together a graph which I think encapsulates the relationships you describe.
http://imgur.com/o2HvT.png
I used OpenFst to do this. I find it much easier to clearly lay out the graphical relationships before plunging into the code for something like this.
Also, do you actually need to generate an explicit edge between authors? This seems like a traversal issue.
|
Follow-up on iterating over a graph using XML minidom
|
This is a follow-up to the question (Link)
What I intend on doing is using the XML to create a graph using NetworkX. Looking at the DOM structure below, all author nodes within the same paper node should have an edge between them, and all authors that have attended the same conference should have an edge to that conference's node. To summarize, all authors that worked together on a paper should be connected to each other, and all authors who have attended a particular conference should be connected to that conference.
<conference name="CONF 2009">
<paper>
<author>Yih-Chun Hu(UIUC)</author>
<author>David McGrew(Cisco Systems)</author>
<author>Adrian Perrig(CMU)</author>
<author>Brian Weis(Cisco Systems)</author>
<author>Dan Wendlandt(CMU)</author>
</paper>
<paper>
<author>Dan Wendlandt(CMU)</author>
<author>Ioannis Avramopoulos(Princeton)</author>
<author>David G. Andersen(CMU)</author>
<author>Jennifer Rexford(Princeton)</author>
</paper>
</conference>
I've figured out how to connect authors to conferences, but I'm unsure about how to connect authors to each other. The thing that I'm having difficulty with is how to iterate over the authors that have worked on the same paper and connect them together.
dom = parse(filepath)
conference=dom.getElementsByTagName('conference')
for node in conference:
conf_name=node.getAttribute('name')
print conf_name
G.add_node(conf_name)
#The nodeValue is split in order to get the name of the author
#and to exclude the university they are part of
plist=node.getElementsByTagName('paper')
for p in plist:
author=str(p.childNodes[0].nodeValue)
author= author.split("(")
#Figure out a way to create edges between authors in the same <paper> </paper>
alist=node.getElementsByTagName('author')
for a in alist:
authortext= str(a.childNodes[0].nodeValue).split("(")
if authortext[0] in dict:
edgeQuantity=dict[authortext[0]]
edgeQuantity+=1
dict[authortext[0]]=edgeQuantity
G.add_edge(authortext[0],conf_name)
#Otherwise, add it to the dictionary and create an edge to the conference.
else:
dict[authortext[0]]= 1
G.add_node(authortext[0])
G.add_edge(authortext[0],conf_name)
i+=1
|
[
"\nI'm unsure about how to connect authors to each other.\n\nYou need to generate (author, otherauthor) pairs so you can add them as edges. The typical way to do that would be a nested iteration:\nfor thing in things:\n for otherthing in things:\n add_edge(thing, otherthing)\n\nThis is a naïve implementation that includes self-loops (giving an author an edge connecting himself to himself), which you may or may not want; it also includes both (1,2) and (2,1), which if you're doing an undirected graph is redundant. (In Python 2.6, the built-in permutations generator also does this.) Here's a generator that fixes these things:\ndef pairs(l):\n for i in range(len(l)-1):\n for j in range(i+1, len(l)):\n yield l[i], l[j]\n\nI've not used NetworkX, but looking at the doc it seems to say you can call add_node on the same node twice (with nothing happening the second time). If so, you can discard the dict you were using to try to keep track of what nodes you'd inserted. Also, it seems to say that if you add an edge to an unknown node, it'll add that node for you automatically. So it should be possible to make the code much shorter:\nfor conference in dom.getElementsByTagName('conference'):\n var conf_name= node.getAttribute('name')\n for paper in conference.getElementsByTagName('paper'):\n authors= paper.getElementsByTagName('author')\n auth_names= [author.firstChild.data.split('(')[0] for author in authors]\n\n # Note author's conference attendance\n #\n for auth_name in auth_names:\n G.add_edge(auth_name, conf_name)\n\n # Note combinations of authors working on same paper\n #\n for auth_name, other_name in pairs(auth_names):\n G.add_edge(auth_name, otherauth_name)\n\n",
"im not entirely sure what you're looking for, but based on your description i threw together a graph which I think encapsulates the relationships you describe.\nhttp://imgur.com/o2HvT.png\ni used openfst to do this. i find it much easier to clearly layout the graphical relationships before plunging into the code for something like this.\nalso, do you actually need to generate an explicit edge between authors? this seems like a traversal issue.\n"
] |
[
0,
0
] |
[] |
[] |
[
"parsing",
"python",
"xmldom"
] |
stackoverflow_0001510447_parsing_python_xmldom.txt
|
Q:
Django model refactoring and migration
I'd like to refactor a number of django apps in a way which involves moving Models from one app into another where they can be more readily reused.
A number of these models have either ForeignKey relationships or M2M relationships to other models (such as User). For example:
class Department(models.Model):
name = models.CharField(max_length=255)
reviewers = models.ManyToManyField(User)
In most cases, the models do not change, so I've currently just redefined them in the new app. This of course causes problems with related_name, since I have the same model defined in two separate apps, and manage.py syncdb gives the following error:
new_app.department: Accessor for m2m field 'reviewers' clashes with related m2m field 'User.department_set'. Add a related_name argument to the definition for 'reviewers'.
old_app.department: Accessor for m2m field 'reviewers' clashes with related m2m field 'User.department_set'. Add a related_name argument to the definition for 'reviewers'.
When doing this, I also need to migrate the data keeping any automatically generated database ids. I'd planned on using the ORM to do the migration, thinking something like the following would work:
from newapp.models import Department
import oldapp.models as old
for obj in old.Department.objects.all():
new_obj = Department(id=obj.id, name=obj.name)
new_obj.save()
for r in obj.reviewers.all():
new_obj.reviewers.add(r)
new_obj.save()
Of course, the related_name problem prevents me from doing this.
How have others made this sort of code refactor and migration possible? Thanks!
A:
Have you looked at using a migration tool such as South or django-evolution?
A:
You can very easily solve the immediate problem by just providing a related_name argument to the ManyToManyField in either the new or old model, exactly as the error message tells you to. Not confident that will solve all your problems with this migration, but it will take you one step forward.
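As a minimal sketch (assuming the Department model from the question; 'new_departments' is an arbitrary name), the new app's copy only needs a distinct related_name so the reverse accessors stop clashing:
from django.db import models
from django.contrib.auth.models import User

class Department(models.Model):
    name = models.CharField(max_length=255)
    reviewers = models.ManyToManyField(User, related_name='new_departments')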
|
Django model refactoring and migration
|
I'd like to refactor a number of django apps in a way which involves moving Models from one app into another where they can be more readily reused.
A number of these models have either ForeignKey relationships or M2M relationships to other models (such as User). For example:
class Department(models.Model):
name = models.CharField(max_length=255)
reviewers = models.ManyToManyField(User)
In most cases, the models do not change, so I've currently just redefined them in the new app. This of course causes problems with related_name, since I have the same model defined in two separate apps, and manage.py syncdb gives the following error:
new_app.department: Accessor for m2m field 'reviewers' clashes with related m2m field 'User.department_set'. Add a related_name argument to the definition for 'reviewers'.
old_app.department: Accessor for m2m field 'reviewers' clashes with related m2m field 'User.department_set'. Add a related_name argument to the definition for 'reviewers'.
When doing this, I also need to migrate the data keeping any automatically generated database ids. I'd planned on using the ORM to do the migration, thinking something like the following would work:
from newapp.models import Department
import oldapp.models as old
for obj in old.Department.objects.all():
new_obj = Department(id=obj.id, name=obj.name)
new_obj.save()
for r in obj.reviewers.all():
new_obj.reviewers.add(r)
new_obj.save()
Of course, the related_name problem prevents me from doing this.
How have others made this sort of code refactor and migration possible? Thanks!
|
[
"Have you looked at using a migration tool such as South or django-evolution?\n",
"You can very easily solve the immediate problem by just providing a related_name argument to the ForeignKey in either the new or old model, exactly as the error message tells you to. Not confident that will solve all your problems with this migration, but it will take you one step forward.\n"
] |
[
6,
0
] |
[] |
[] |
[
"django",
"django_models",
"python"
] |
stackoverflow_0001510215_django_django_models_python.txt
|
Q:
Retrieving tags for a specific queryset with django-tagging
I'm using django-tagging, and am trying to retrieve a list of tags for a specific queryset. Here's what I've got:
tag = Tag.objects.get(name='tag_name')
queryset = TaggedItem.objects.get_by_model(Article, tag)
tags = Tag.objects.usage_for_queryset(queryset, counts=True)
"queryset" appropriately returns a number of articles that have been tagged with the tag 'tag_name', but when I attempt to retrieve all of the tags for that queryset, "tags" returns a complete list of all tags for that model.
Anyone else run into this before, or is this a bug in django-tagging?
A:
This appears to be a bug in django-tagging. A patch has been written, but it has not yet been committed to trunk. Find the patch here:
http://code.google.com/p/django-tagging/issues/detail?id=44
|
Retrieving tags for a specific queryset with django-tagging
|
I'm using django-tagging, and am trying to retrieve a list of tags for a specific queryset. Here's what I've got:
tag = Tag.objects.get(name='tag_name')
queryset = TaggedItem.objects.get_by_model(Article, tag)
tags = Tag.objects.usage_for_queryset(queryset, counts=True)
"queryset" appropriately returns a number of articles that have been tagged with the tag 'tag_name', but when I attempt to retrieve all of the tags for that queryset, "tags" returns a complete list of all tags for that model.
Anyone else run into this before, or is this a bug in django-tagging?
|
[
"This appears to be a bug in django-tagging. A patch has been written, but it has not yet been committed to trunk. Find the patch here:\nhttp://code.google.com/p/django-tagging/issues/detail?id=44\n"
] |
[
1
] |
[] |
[] |
[
"django",
"django_tagging",
"python"
] |
stackoverflow_0001510936_django_django_tagging_python.txt
|
Q:
Parsing large pseudo-xml files in python
I'm trying to parse* a large file (> 5GB) of structured markup data. The data format is essentially XML but there is no explicit root element. What's the most efficient way to do that?
The problem with SAX parsers is that they require a root element, so either I have to add a pseudo element to the data stream (is there an equivalent to Java's SequenceInputStream in Python?) or I have to switch to a non-SAX-conformant event-based parser (is there a successor to sgmllib?)
The structure of the data is quite simple. Basically a listing of elements:
<Document>
<docid>1</docid>
<text>foo</text>
</Document>
<Document>
<docid>2</docid>
<text>bar</text>
</Document>
*actually to iterate
A:
http://docs.python.org/library/xml.sax.html
Note that you can pass a 'stream' object to xml.sax.parse. This means you can probably pass any object that has file-like methods (like read) to the parse call... Make your own object, which will first return your virtual root start-tag, then the contents of the file, then the virtual root end-tag. I guess that you only need to implement the read method... but this might depend on the SAX parser you'll use.
Example that works for me:
import xml.sax
import xml.sax.handler
class PseudoStream(object):
def read_iterator(self):
yield '<foo>'
yield '<bar>'
for line in open('test.xml'):
yield line
yield '</bar>'
yield '</foo>'
def __init__(self):
self.ri = self.read_iterator()
def read(self, *foo):
try:
return self.ri.next()
except StopIteration:
return ''
class SAXHandler(xml.sax.handler.ContentHandler):
def startElement(self, name, attrs):
print name, attrs
d = xml.sax.parse(PseudoStream(), SAXHandler())
A:
The quick and dirty answer would be adding a root element (as a string) so it would be valid XML.
Regards.
A:
Add a root element and use SAX, StAX or VTD-XML.
A:
xml.parsers.expat -- Fast XML parsing using Expat
The xml.parsers.expat module is a Python interface to the Expat non-validating XML parser. The module provides a single extension type, xmlparser, that represents the current state of an XML parser. After an xmlparser object has been created, various attributes of the object can be set to handler functions. When an XML document is then fed to the parser, the handler functions are called for the character data and markup in the XML document.
More info : http://www.python.org/doc/2.5/lib/module-xml.parsers.expat.html
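A rough sketch of that approach, combined with the virtual-root trick suggested in the other answers (the file name documents.xml is made up); StartElementHandler, CharacterDataHandler and Parse are part of the expat API:
import xml.parsers.expat

def start_element(name, attrs):
    print 'start', name, attrs

def char_data(data):
    print 'text', repr(data)

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start_element
p.CharacterDataHandler = char_data

p.Parse('<root>', 0)               # virtual root start-tag
for line in open('documents.xml'):
    p.Parse(line, 0)               # feed the real data incrementally
p.Parse('</root>', 1)              # virtual root end-tag, final chunk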
|
Parsing large pseudo-xml files in python
|
I'm trying to parse* a large file (> 5GB) of structured markup data. The data format is essentially XML but there is no explicit root element. What's the most efficient way to do that?
The problem with SAX parsers is that they require a root element, so either I've to add a pseudo element to the data stream (is there an equivalent to Java's SequenceInputStream in Python?) or I've to switch to a non-SAX conform event-based parser (is there a successor of sgmllib?)
The structure of the data is quite simple. Basically a listing of elements:
<Document>
<docid>1</docid>
<text>foo</text>
</Document>
<Document>
<docid>2</docid>
<text>bar</text>
</Document>
*actually to iterate
|
[
"http://docs.python.org/library/xml.sax.html\nNote, that you can pass a 'stream' object to xml.sax.parse. This means you can probably pass any object that has file-like methods (like read) to the parse call... Make your own object, which will firstly put your virtual root start-tag, then the contents of file, then virtual root end-tag. I guess that you only need to implement read method... but this might depend on the sax parser you'll use.\nExample that works for me:\nimport xml.sax\nimport xml.sax.handler\n\nclass PseudoStream(object):\n def read_iterator(self):\n yield '<foo>'\n yield '<bar>'\n for line in open('test.xml'):\n yield line\n yield '</bar>'\n yield '</foo>'\n\n def __init__(self):\n self.ri = self.read_iterator()\n\n def read(self, *foo):\n try:\n return self.ri.next()\n except StopIteration:\n return ''\n\nclass SAXHandler(xml.sax.handler.ContentHandler):\n def startElement(self, name, attrs):\n print name, attrs\n\nd = xml.sax.parse(PseudoStream(), SAXHandler())\n\n",
"The quick and dirty answer would be adding a root element (as String) so it would be a valid XML.\nRegards.\n",
"Add root element and use SAX, STax or VTD-XML ..\n",
"xml.parsers.expat -- Fast XML parsing using Expat\nThe xml.parsers.expat module is a Python interface to the Expat non-validating XML parser. The module provides a single extension type, xmlparser, that represents the current state of an XML parser. After an xmlparser object has been created, various attributes of the object can be set to handler functions. When an XML document is then fed to the parser, the handler functions are called for the character data and markup in the XML document.\nMore info : http://www.python.org/doc/2.5/lib/module-xml.parsers.expat.html\n"
] |
[
11,
1,
1,
1
] |
[] |
[] |
[
"python",
"xml"
] |
stackoverflow_0001508938_python_xml.txt
|
Q:
Can Django/Javascript handle conditional "Ajax" responses to HTTP POST requests?
How do I design a Django/Javascript application to provide for conditional Ajax responses to conventional HTTP requests?
On the server, I have a custom-built Form object. When the browser POSTS the form's data, the server checks the submitted data against existing data and rules (eg, if the form adds some entity to a database, does that entity already exist in the database?). If the data passes, the server saves, generates an ID number and adds it to the form's data, and passes the form and data back to the browser.
if request.method == 'POST':
formClass = form_code.getCustomForm()
thisForm = formClass(data=request.POST)
if thisForm.isvalid():
saveCheck = thisForm.saveData()
t = loader.get_template("CustomerForm.html")
c = Context({ 'expectedFormObj': thisForm })
(Note that my custom logic checking is in saveData() and is separate from the html validation done by isvalid().)
So far, standard Django (I hope). But if the data doesn't pass, I want to send a message to the browser. I suppose saveData() could put the message in an attribute of the form, and the template could check for that attribute, embed its data as a javascript variable and include a javascript function to display the message. But passing all that form html back, just to add one message, seems inelegant (as does the standard Django form submission process, but never mind). In that case I'd like to just pass back the message.
Now I suppose I could tie a Javascript function to the html form's onsubmit event, and have that issue an XMLHttpRequest, and have the server respond to that based on the output of the saveData() call. But then the browser has two requests to the server outstanding (POST and XHR). Maybe a successful saveData() would rewrite the whole page and erase any potential for conflict. But I'd also have to get the server to sequence its response to the XHR to follow the response to the POST, and figure out how to communicate the saveData outcome to the response to the XHR. I suppose that is doable, even without the thread programming I don't know, but it seems messy.
I speculate that I might use javascript to make the browser's response conditional to something in the response to the POST request (either rewrite the whole page, or just display a message). But I suspect that the page's javascript hands control over the browser with the POST request, and that any response to the POST would just rewrite the page.
So can I design a process to pass back the whole form only if the server-side saveData() works, and a message that is displayed without rewriting the entire form if saveData() doesn't? If so, how?
A:
Although you can arrange for your views to examine the request data to decide if the response should be an AJAXish or plain HTML, I don't really recommend it. Put AJAX request handlers in a separate URL structure, for instance all your regular html views have urls like /foo/bar and a corresponding api call for the same info would be /ajax/foo/bar.
Since most views will examine the request data, then do some processing, then create a python dictionary and pass that to the template engine, you can factor out the common parts to make this a little easier. The first few steps could be a generic sort of function that just returns the python dictionary, and then actual responses are composed by wrapping the handler functions in a template renderer or JSON encoder.
My usual workflow is to initially assume that the client has no javascript, (which is still a valid assumption; many mobile browsers have no JS) and implement the app as static GET and POST handlers. From there I start looking for the places where my app can benefit from a little client side scripting. For instance I'll usually redesign the forms to submit via AJAX type calls without reloading a page. These will not send their requests to the same URL/django view as the plain html form version would, since the response needs to be a simple success message in plain text or html fragment.
Similarly, getting data from the server is also redesigned to respond with a concise JSON document to be processed into the page on the client. This also would be a separate URL/django view from the corresponding plain html view for that resource.
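A rough sketch of that factoring (the view and template names are only illustrative): a shared function builds the dictionary, and a plain-HTML view and an /ajax/ view each wrap it differently:
from django.http import HttpResponse
from django.shortcuts import render_to_response
from django.utils import simplejson

def entity_data(request):
    # examine the request, do the processing, return a plain dictionary
    return {'id': 42, 'name': 'example'}

def entity_html(request):
    # regular view at /foo/bar: render the dictionary through a template
    return render_to_response('entity.html', entity_data(request))

def entity_ajax(request):
    # API view at /ajax/foo/bar: return the same dictionary as JSON
    return HttpResponse(simplejson.dumps(entity_data(request)),
                        mimetype='application/json')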
A:
When dealing with AJAX, I use this:
from django.utils import simplejson
...
status = simplejson.dumps({'status': "success"})
return HttpResponse(status, mimetype="application/json")
Then, AJAX (jQuery) can do what it wants based on the return value of 'status'.
I'm not sure exactly what you want with regards to forms. If you want an easier, and better form experience, I suggest checking out uni-form. Pinax has a good implementation of this in their voting app.
A:
FYI, this isn't an answer...but it might help you think about it a different way
Here's the problem I'm running into...Google App Engine + jQuery Ajax = 405 Method Not Allowed.
So basically I get the thing to work using the outlined code, then I can't make the AJAX request :(.
|
Can Django/Javascript handle conditional "Ajax" responses to HTTP POST requests?
|
How do I design a Django/Javascript application to provide for conditional Ajax responses to conventional HTTP requests?
On the server, I have a custom-built Form object. When the browser POSTS the form's data, the server checks the submitted data against existing data and rules (eg, if the form adds some entity to a database, does that entity already exist in the database?). If the data passes, the server saves, generates an ID number and adds it to the form's data, and passes the form and data back to the browser.
if request.method == 'POST':
formClass = form_code.getCustomForm()
thisForm = formClass(data=request.POST)
if thisForm.isvalid():
saveCheck = thisForm.saveData()
t = loader.get_template("CustomerForm.html")
c = Context({ 'expectedFormObj': thisForm })
(Note that my custom logic checking is in saveData() and is separate from the html validation done by isvalid().)
So far, standard Django (I hope). But if the data doesn't pass, I want to send a message to the browser. I suppose saveData() could put the message in an attribute of the form, and the template could check for that attribute, embed its data as javascript variable and include a javascript function to display the message. But passing all that form html back, just to add one message, seems inelegant (as does the standard Django form submission process, but never mind). In that case I'd like to just pass back the message.
Now I suppose I could tie a Javascript function to the html form's onsubmit event, and have that issue an XMLHttpRequest, and have the server respond to that based on the output of the saveData() call. But then the browser has two requests to the server outstanding (POST and XHR). Maybe a successful saveData() would rewrite the whole page and erase any potential for conflict. But I'd also have to get the server to sequence its response to the XHR to follow the response to the POST, and figure out how to communicate the saveData outcome to the response to the XHR. I suppose that is doable, even without the thread programming I don't know, but it seems messy.
I speculate that I might use javascript to make the browser's response conditional to something in the response to the POST request (either rewrite the whole page, or just display a message). But I suspect that the page's javascript hands control over the browser with the POST request, and that any response to the POST would just rewrite the page.
So can I design a process to pass back the whole form only if the server-side saveData() works, and a message that is displayed without rewriting the entire form if saveData() doesn't? If so, how?
|
[
"Although you can arrange for your views to examine the request data to decide if the response should be an AJAXish or plain HTML, I don't really recommend it. Put AJAX request handlers in a separate URL structure, for instance all your regular html views have urls like /foo/bar and a corresponding api call for the same info would be /ajax/foo/bar.\nSince most views will examine the request data, then do some processing, then create a python dictionary and pass that to the template engine, you can factor out the common parts to make this a little easier. the first few steps could be a generic sort of function that just returns the python dictionary, and then actual responses are composed by wrapping the handler functions in a template renderer or json encoder.\nMy usual workflow is to initially assume that the client has no javascript, (which is still a valid assumption; many mobile browsers have no JS) and implement the app as static GET and POST handlers. From there I start looking for the places where my app can benefit from a little client side scripting. For instance I'll usually redesign the forms to submit via AJAX type calls without reloading a page. These will not send their requests to the same URL/django view as the plain html form version would, since the response needs to be a simple success message in plain text or html fragment. \nSimilarly, getting data from the server is also redesigned to respond with a concise JSoN document to be processed into the page on the client. This also would be a separate URL/django view as the corresponding plain html for that resource.\n",
"When dealing with AJAX, I use this:\nfrom django.utils import simplejson\n...\nstatus = simplejson.dumps({'status': \"success\"})\nreturn HttpResponse(status, mimetype=\"application/json\")\n\nThen, AJAX (jQuery) can do what it wants based on the return value of 'status'.\nI'm not sure exactly what you want with regards to forms. If you want an easier, and better form experience, I suggest checking out uni-form. Pinax has a good implementation of this in their voting app.\n",
"FYI, this isn't an answer...but it might help you think about it a different way\nHere's the problem I'm running into...Google App Engine + jQuery Ajax = 405 Method Not Allowed.\nSo basically I get the thing to work using the outlined code, then I can't make the AJAX request :(. \n"
] |
[
3,
3,
0
] |
[] |
[] |
[
"ajax",
"django",
"javascript",
"python"
] |
stackoverflow_0001511049_ajax_django_javascript_python.txt
|
Q:
Venn Diagram from a list of sentences
I have a list of many sentences in Excel, one on each row of a column. I have 3 or more columns with such sentences. There are some common sentences in these. Is it possible to create a script to create a Venn diagram and get the common ones between all?
Example: These are sentences in a column. Similarly there are different columns.
Blood lymphocytes from cancer
Blood lymphocytes from patients
Ovarian tumor_Grade III
Peritoneum tumor_Grade IV
Hormone resistant PCA
Is it possible to write a script in python?
A:
This is my interpretation of the question...
Given the data file z.csv (export your data from Excel into a csv file)
"Blood lymphocytes from cancer","Blood lymphocytes from sausages","Ovarian tumor_Grade III"
"Blood lymphocytes from patients","Ovarian tumor_Grade III","Peritoneum tumor_Grade IV"
"Ovarian tumor_Grade III","Peritoneum tumor_Grade IV","Hormone resistant PCA"
"Peritoneum tumor_Grade XV","Hormone resistant PCA","Blood lymphocytes from cancer"
"Hormone resistant PCA",,"Blood lymphocytes from patients"
This program finds the sentences common to all the columns
import csv
# Open the csv file
rows = csv.reader(open("z.csv"))
# A list of 3 sets of sentences
results = [set(), set(), set()]
# Read the csv file into the 3 sets
for row in rows:
for i, data in enumerate(row):
results[i].add(data)
# Work out the sentences common to all rows
intersection = results[0]
for result in results[1:]:
intersection = intersection.intersection(result)
print "Common to all rows :-"
for data in intersection:
print data
And it prints this answer
Common to all rows :-
Hormone resistant PCA
Ovarian tumor_Grade III
Not 100% sure that is what you are looking for but hopefully it gets you started!
It could be generalised easily to as many columns as you like, but I didn't want to make it more complicated
A:
Your question is not fully clear, so I might be misunderstanding what you're looking for.
A Venn diagram is just a few simple Set operations. Python has this stuff built into the Set datatype. Basically, take your two groups of items and use set operations (e.g. use intersection to find the common items).
To read in the data, your best bet is probably to save the file in CSV format and just parse it with the string split method.
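A tiny sketch of those set operations on two made-up columns:
col_a = set(["Blood lymphocytes from cancer", "Ovarian tumor_Grade III", "Hormone resistant PCA"])
col_b = set(["Ovarian tumor_Grade III", "Peritoneum tumor_Grade IV", "Hormone resistant PCA"])

print col_a & col_b    # intersection: common to both columns (the Venn overlap)
print col_a - col_b    # only in the first column
print col_a | col_b    # union: everything that appears anywhere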
|
Venn Diagram from a list of sentences
|
I have a list of many sentences in Excel on each row in a column. I have like 3 or more columns with such sentences. There are some common sentences in these. Is it possible to create a script to create a Venn diagram and get the common ones between all.
Example: These are sentences in a column. Similarly there are different columns.
Blood lymphocytes from cancer
Blood lymphocytes from patients
Ovarian tumor_Grade III
Peritoneum tumor_Grade IV
Hormone resistant PCA
Is it possible to write a script in python?
|
[
"This is my interpretation of the question...\nGive the data file z.csv (export your data from excel into a csv file)\n\"Blood lymphocytes from cancer\",\"Blood lymphocytes from sausages\",\"Ovarian tumor_Grade III\"\n\"Blood lymphocytes from patients\",\"Ovarian tumor_Grade III\",\"Peritoneum tumor_Grade IV\"\n\"Ovarian tumor_Grade III\",\"Peritoneum tumor_Grade IV\",\"Hormone resistant PCA\"\n\"Peritoneum tumor_Grade XV\",\"Hormone resistant PCA\",\"Blood lymphocytes from cancer\"\n\"Hormone resistant PCA\",,\"Blood lymphocytes from patients\"\n\nThis program finds the sentences common to all the columns\nimport csv\n\n# Open the csv file\nrows = csv.reader(open(\"z.csv\"))\n\n# A list of 3 sets of sentences\nresults = [set(), set(), set()]\n\n# Read the csv file into the 3 sets\nfor row in rows:\n for i, data in enumerate(row):\n results[i].add(data)\n\n# Work out the sentences common to all rows\nintersection = results[0]\nfor result in results[1:]:\n intersection = intersection.intersection(result)\n\nprint \"Common to all rows :-\"\nfor data in intersection:\n print data\n\nAnd it prints this answer\nCommon to all rows :-\nHormone resistant PCA\nOvarian tumor_Grade III\n\nNot 100% sure that is what you are looking for but hopefully it gets you started!\nIt could be generalised easily to as many columns as you like, but I didn't want to make it more complicated\n",
"Your question is not fully clear, so I might be misunderstanding what you're looking for.\nA Venn diagram is just a few simple Set operations. Python has this stuff built into the Set datatype. Basically, take your two groups of items and use set operations (e.g. use intersection to find the common items).\nTo read in the data, your best bet is probably to save the file in CSV format and just parse it with the string split method.\n"
] |
[
2,
0
] |
[] |
[] |
[
"python",
"venn_diagram"
] |
stackoverflow_0001510972_python_venn_diagram.txt
|
Q:
Pretty graphs and charts in Python
What are the available libraries for creating pretty charts and graphs in a Python application?
A:
I'm the one supporting CairoPlot and I'm very proud it came up here.
Surely matplotlib is great, but I believe CairoPlot is better looking.
So, for presentations and websites, it's a very good choice.
Today I released version 1.1. If interested, check it out at CairoPlot v1.1
EDIT: After a long and cold winter, CairoPlot is being developed again. Check out the new version on GitHub.
A:
For interactive work, Matplotlib is the mature standard. It provides an OO-style API as well as a Matlab-style interactive API.
Chaco is a more modern plotting library from the folks at Enthought. It uses Enthought's Kiva vector drawing library and currently works only with Wx and Qt with OpenGL on the way (Matplotlib has backends for Tk, Qt, Wx, Cocoa, and many image types such as PDF, EPS, PNG, etc.). The main advantages of Chaco are its speed relative to Matplotlib and its integration with Enthought's Traits API for interactive applications.
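To give a flavour of the Matplotlib API mentioned above, here is a minimal sketch using the pylab interface:
import pylab

pylab.plot([1, 2, 3, 4], [1, 4, 9, 16], 'ro-', label='squares')
pylab.xlabel('x')
pylab.ylabel('y')
pylab.legend()
pylab.savefig('squares.png')   # or pylab.show() for an interactive window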
A:
You can also use pygooglechart, which uses the Google Chart API. This isn't something you'd always want to use, but if you want a small number of good, simple, charts, and are always online, and especially if you're displaying in a browser anyway, it's a good choice.
A:
You didn't mention what output format you need but reportlab is good at creating charts both in pdf and bitmap (e.g. png) format.
Here is a simple example of a barchart in png and pdf format:
from reportlab.graphics.shapes import Drawing
from reportlab.graphics.charts.barcharts import VerticalBarChart
d = Drawing(300, 200)
chart = VerticalBarChart()
chart.width = 260
chart.height = 160
chart.x = 20
chart.y = 20
chart.data = [[1,2], [3,4]]
chart.categoryAxis.categoryNames = ['foo', 'bar']
chart.valueAxis.valueMin = 0
d.add(chart)
d.save(fnRoot='test', formats=['png', 'pdf'])
(bar chart image: http://i40.tinypic.com/2j677tl.jpg)
Note: the image has been converted to jpg by the image host.
A:
CairoPlot
A:
I used pychart and thought it was very straightforward.
http://home.gna.org/pychart/
It's all native python and does not have a busload of dependencies. I'm sure matplotlib is lovely but I'd be downloading and installing for days and I just want one measly bar chart!
It doesn't seem to have been updated in a few years but hey it works!
A:
Have you looked into ChartDirector for Python?
I can't speak about this one, but I've used ChartDirector for PHP and it's pretty good.
A:
NodeBox is awesome for raw graphics creation.
A:
If you like to use gnuplot for plotting, you should consider Gnuplot.py. It provides an object-oriented interface to gnuplot, and also allows you to pass commands directly to gnuplot. Unfortunately, it is no longer being actively developed.
A:
Chaco from enthought is another option
A:
You should also consider PyCha
http://www.lorenzogil.com/projects/pycha/
A:
I am a fan on PyOFC2 : http://btbytes.github.com/pyofc2/
It is just a package that makes it easy to generate the JSON data needed for Open Flash Charts 2, which are very beautiful. Check out the examples on the link above.
A:
Please look at the Open Flash Chart embedding for WHIFF
http://aaron.oirt.rutgers.edu/myapp/docs/W1100_1600.openFlashCharts
and the amCharts embedding for WHIFF too http://aaron.oirt.rutgers.edu/myapp/amcharts/doc. Thanks.
A:
You could also consider google charts.
Not technically a python API, but you can use it from python, it's reasonably fast to code for, and the results tend to look nice. If you happen to be using your plots online, then this would be an even better solution.
A:
PLplot is a cross-platform software package for creating scientific plots. They aren't very pretty (eye catching), but they look good enough. Have a look at some examples (both source code and pictures).
The PLplot core library can be used to create standard x-y plots, semi-log plots, log-log plots, contour plots, 3D surface plots, mesh plots, bar charts and pie charts. It runs on Windows (2000, XP and Vista), Linux, Mac OS X, and other Unices.
|
Pretty graphs and charts in Python
|
What are the available libraries for creating pretty charts and graphs in a Python application?
|
[
"I'm the one supporting CairoPlot and I'm very proud it came up here.\nSurely matplotlib is great, but I believe CairoPlot is better looking.\nSo, for presentations and websites, it's a very good choice.\nToday I released version 1.1. If interested, check it out at CairoPlot v1.1\nEDIT: After a long and cold winter, CairoPlot is being developed again. Check out the new version on GitHub.\n",
"For interactive work, Matplotlib is the mature standard. It provides an OO-style API as well as a Matlab-style interactive API. \nChaco is a more modern plotting library from the folks at Enthought. It uses Enthought's Kiva vector drawing library and currently works only with Wx and Qt with OpenGL on the way (Matplotlib has backends for Tk, Qt, Wx, Cocoa, and many image types such as PDF, EPS, PNG, etc.). The main advantages of Chaco are its speed relative to Matplotlib and its integration with Enthought's Traits API for interactive applications.\n",
"You can also use pygooglechart, which uses the Google Chart API. This isn't something you'd always want to use, but if you want a small number of good, simple, charts, and are always online, and especially if you're displaying in a browser anyway, it's a good choice.\n",
"You didn't mention what output format you need but reportlab is good at creating charts both in pdf and bitmap (e.g. png) format.\nHere is a simple example of a barchart in png and pdf format:\nfrom reportlab.graphics.shapes import Drawing\nfrom reportlab.graphics.charts.barcharts import VerticalBarChart\n\nd = Drawing(300, 200)\n\nchart = VerticalBarChart()\nchart.width = 260\nchart.height = 160\nchart.x = 20\nchart.y = 20\nchart.data = [[1,2], [3,4]]\nchart.categoryAxis.categoryNames = ['foo', 'bar']\nchart.valueAxis.valueMin = 0\n\nd.add(chart)\nd.save(fnRoot='test', formats=['png', 'pdf'])\n\nalt text http://i40.tinypic.com/2j677tl.jpg\nNote: the image has been converted to jpg by the image host.\n",
"CairoPlot\n",
"I used pychart and thought it was very straightforward.\nhttp://home.gna.org/pychart/\nIt's all native python and does not have a busload of dependencies. I'm sure matplotlib is lovely but I'd be downloading and installing for days and I just want one measley bar chart!\nIt doesn't seem to have been updated in a few years but hey it works!\n",
"Have you looked into ChartDirector for Python?\nI can't speak about this one, but I've used ChartDirector for PHP and it's pretty good.\n",
"NodeBox is awesome for raw graphics creation.\n",
"If you like to use gnuplot for plotting, you should consider Gnuplot.py. It provides an object-oriented interface to gnuplot, and also allows you to pass commands directly to gnuplot. Unfortunately, it is no longer being actively developed.\n",
"Chaco from enthought is another option\n",
"You should also consider PyCha\nhttp://www.lorenzogil.com/projects/pycha/\n",
"I am a fan on PyOFC2 : http://btbytes.github.com/pyofc2/\nIt just just a package that makes it easy to generate the JSON data needed for Open Flash Charts 2, which are very beautiful. Check out the examples on the link above.\n",
"Please look at the Open Flash Chart embedding for WHIFF\nhttp://aaron.oirt.rutgers.edu/myapp/docs/W1100_1600.openFlashCharts\nand the amCharts embedding for WHIFF too http://aaron.oirt.rutgers.edu/myapp/amcharts/doc. Thanks.\n",
"You could also consider google charts.\nNot technically a python API, but you can use it from python, it's reasonably fast to code for, and the results tend to look nice. If you happen to be using your plots online, then this would be an even better solution.\n",
"PLplot is a cross-platform software package for creating scientific plots. They aren't very pretty (eye catching), but they look good enough. Have a look at some examples (both source code and pictures).\nThe PLplot core library can be used to create standard x-y plots, semi-log plots, log-log plots, contour plots, 3D surface plots, mesh plots, bar charts and pie charts. It runs on Windows (2000, XP and Vista), Linux, Mac OS X, and other Unices.\n"
] |
[
50,
38,
18,
15,
6,
6,
4,
4,
4,
3,
3,
3,
1,
0,
0
] |
[] |
[] |
[
"graphics",
"python"
] |
stackoverflow_0000052652_graphics_python.txt
|
Q:
Queue for producers and consumers in a tree
I am reading up on how to utilize Python Queues to send and receive short messages between nodes. I am simulating a set of nodes that are in a nice tree structure. I want some of these nodes to send fixed-size data to their parent. Once this parent receives data from some of its child-nodes, it will "process" it and send an "aggregate" packet to its parent...and so on.
To do this, I was told that queues would be useful to pass messages and a quick readup on it tells me that it will suit my needs. However, I am finding it a bit difficult to implement a basic setup and test my understanding -- 1 producer (that generates a message packet) and 1 consumer (the worker thread that dequeues the task and processes it).
I have searched and read many posts here and elsewhere...and I understand all the queue methods. What I still do not understand is how to associate or bind a queue to 2 given nodes.
I want node-1 and node-2 to send messages to node-3. For this basic scenario, I have to somehow create one (or 2) queues and "associate" it with node-1 and node-2 which uses it to place messages into and node-3. And, node-3 must also be "listening"/"associated" with this queue to "get" or dequeue a task.
If node-1 and node-2 are to be 'producers', I should have them as 2 separate threads and node-3 as the 3rd thread. Then, I must create one queue Q. Then, I should have node-1 and node-2 create messages and 'put' them into the queue. Node-3 will have to be 'somehow' notified/woken up to 'get' these messages from Q and process them.
I have seen
http://docs.python.org/library/queue.html#module-Queue
Here is the (untested) code I have so far:
=================================================================
import threading
import Queue
q = Queue.Queue(2) # create a queue of size 2.
# worker is node-3 which received msgs from 1 and 2 and fuses them.
def worker():
while True:
msg = q.get()
fuse_msgs(msg)
q.task_done()
# generate 3 worker threads. Node-3 could be both a producer and consumer. Each
# thread represents each node that will be a potential producer/consumer or both.
# need something like t1 - thread-1 for node-1 ...
for i in range(3):
    t = threading.Thread(target=worker)
t.setDaemon(True)
t.start()
# How can I make node-1 and node-2 to put items into a specified queue???
for msg in source():
    q.put(msg)
q.join()
=========================================
Am I going in the right direction? Please let me know where I am wrong and what I am misunderstanding...
Any help is appreciated. I have hundreds of nodes and links. If I get these fundamentals straight, I will be able to move on smoothly...
Thanks,
B.R.Srini.
A:
I'm not commenting on the python code in particular, but as far as your queues are designed, it seems you just need one queue in the node 1,2,3 scenario you were describing. Basically, you have one queue, where you have node 1 and node 2 putting messages to, and node 3 reading from.
You should be able to tell node-3 to do a "blocking" get on the queue, so it will just wait until it sees a message for it to process, and leave nodes 1 and 2 to produce their output as fast as possible.
Depending on the processing speed of each node, and the traffic patterns you expect, you will probably want a queue deeper than 2 messages, so that the producing nodes don't have to wait for the queue to be drained.
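A rough sketch of that single-queue design (Python 2; the made-up producer messages stand in for real node data, and the print stands in for fuse_msgs from the question): nodes 1 and 2 put onto one shared queue, node 3 does a blocking get:
import threading
import Queue

q = Queue.Queue()            # unbounded, so producers never block on a full queue

def node_3():
    while True:
        msg = q.get()        # blocking get: sleeps until a message arrives
        print 'fusing', msg  # stand-in for fuse_msgs(msg)
        q.task_done()

def producer(name):
    for i in range(3):
        q.put('%s-msg-%d' % (name, i))

consumer = threading.Thread(target=node_3)
consumer.setDaemon(True)     # daemon thread, so the program can exit when done
consumer.start()

producers = [threading.Thread(target=producer, args=(name,))
             for name in ('node-1', 'node-2')]
for t in producers:
    t.start()
for t in producers:
    t.join()                 # wait until both producers have queued everything
q.join()                     # then wait until node-3 has processed it all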
|
Queue for producers and consumers in a tree
|
I am reading up on how to utilize Python Queues to send and receive short messages between nodes. I am simulating a set of nodes that are in a nice tree structure. I want some of these nodes to send a fixed-size data to its parent. Once this parent receives data from some of its child-nodes, it will "process" it and send a "aggregate" packet to its parent...and so on.
To do this, I was told that queues would be useful to pass messages and a quick readup on it tells me that it will suit my needs. However, I am finding it a bit difficult to implement a basic setup and test my understanding -- 1 producer (that generates a message packet) and 1 consumer (the worker thread that dequeues the task and processes it).
I have searched and read many posts here and elsewhere...and I understand all the queue methods. What I still do not understand is how to associate or bind a queue to 2 given nodes.
I want node-1 and node-2 to send messages to node-3. For this basic scenario, I have to somehow create one (or 2) queues and "associate" it with node-1 and node-2 which uses it to place messages into and node-3. And, node-3 must also be "listening"/"associated" with this queue to "get" or dequeue a task.
If node-1 and node-2 are to be 'producers', I should have them as 2 separate threads and node-3 being the 3rd thread. Then, I must create one queue Q. Then, I should have node-1 and node-2 create messages, 'put' them into the queue. Node-3 will have to be 'somehow' notified /waken-up to 'get' these messages from Q and process it.
I have seen
http://docs.python.org/library/queue.html#module-Queue
Here is the (untested) code I have so far:
=================================================================
import threading
import queue
q = Queue.Queue(2) # create a queue of size 2.
# worker is node-3 which received msgs from 1 and 2 and fuses them.
def worker():
while True:
msg = q.get()
fuse_msgs(msg)
q.task_done()
# generate 3 worker threads. Node-3 could be both a producer and consumer. Each
# thread represents each node that will be a potential producer/consumer or both.
# need something like t1 - thread-1 for node-1 ...
for i in range(3):
t = Thread(target=worker)
t.setDaemon(True)
t.start()
# How can I make node-1 and node-2 to put items into a specified queue???
for msg in source():
q.put(item)
q.join()
=========================================
Am I going in the right direction? Please let me know where I am wrong and what I am misunderstanding...
Any help is appreciated. I have hundreds of nodes and links. If I get this fundamentals straight, I will be able to move on smoothly...
Thanks,
B.R.Srini.
|
[
"I'm not commenting on the python code in particular, but as far as your queues are designed, it seems you just need one queue in the node 1,2,3 scenario you were describing. Basically, you have one queue, where you have node 1 and node 2 putting messages to, and node 3 reading from.\nYou should be able to tell node-3 to do a \"blocking\" get on the queue, so it will just wait until it sees a message for it to process, and leave nodes 1 and 2 to produce their output as fast as possible.\nDepending on the processing speed of each node, and the traffic patterns you expect, you will probably want a queue deeper than 2 messages, so that the producing nodes don't have to wait for the queue to be drained.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"queue"
] |
stackoverflow_0001511359_python_queue.txt
|
Q:
Using lambda for a constraint function
import numpy
from numpy import asarray
Initial = numpy.asarray([2.0, 4.0, 5.0, 3.0, 5.0, 6.0]) # Initial values to start with
bounds = [(1, 5000), (1, 6000), (2, 100000), (1, 50000), (1.0, 5000), (2, 1000000)]
# actual passed bounds
b1 = lambda x: numpy.asarray([1.4*x[0] - x[0]])
b2 = lambda x: numpy.asarray([1.4*x[1] - x[1]])
b3 = lambda x: numpy.asarray([x[2] - x[3]])
constraints = numpy.asarray([b1, b2, b3])
opt= optimize.fmin_slsqp(func,Initial,ieqcons=constraints,bounds=bounds, full_output=True,iter=200,iprint=2, acc=0.01)
Problem:
I want to pass in inequality constraints. Consider that I have 6 parameters
[ a, b, c, d, e, f]
in the Initial values, and my constraints are:
a<=e<=1.4*a ('e' varies from a to 1.4*a)
b<=f<=1.4*b ('f' varies from b to 1.4*b)
c>d ('c' must always be greater than d)
But this is not working properly. I don't know what the mistake is.
Is there any better way to pass my constraints as a function?
Please help me.
A:
Based on the comment from Robert Kern, I have removed my previous answer. Here are the constraints as continuous functions:
b1 = lambda x: x[4]-x[0] if x[4]<1.2*x[0] else 1.4*x[0]-x[4]
b2 = lambda x: x[5]-x[1] if x[5]<1.2*x[1] else 1.4*x[1]-x[5]
b3 = lambda x: x[2]-x[3]
Note: Python 2.5 or greater is required for this syntax.1
To get the constraint a<=e<=1.4*a, note that 1.2*a is the halfway point between a and 1.4*a.
Below this point, that is, all e<1.2*a, we use the continuous function e-a. Thus the overall constraint function is negative when e<a, handling the lower out-of-bounds condition, zero on the lower boundary e==a, and then positive for e>a up to the halfway point.
Above the halfway point, that is, all e>1.2*a, we use instead the continuous function 1.4*a-e. This means the overall constraint function is negative when e>1.4*a, handling the upper out-of-bounds condition, zero on the upper boundary e==1.4*a, and then positive when e<1.4*a, down to the halfway point.
At the halfway point, where e==1.2*a, both functions have the same value. This means that the overall function is continuous.
Reference: documentation for ieqcons.
1 - Here is pre-Python 2.5 syntax: b1 = lambda x: (1.4*x[0]-x[4], x[4]-x[0])[x[4]<1.2*x[0]]
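A quick sanity check of b1 at the boundaries discussed above, with a = 10, so e = x[4] should lie between 10 and 14:
check = lambda e: b1([10, 0, 0, 0, e, 0])
print check(9)    # negative: e < a, out of bounds
print check(10)   # zero: lower boundary e == a
print check(12)   # positive: inside the range, at the halfway point
print check(14)   # zero: upper boundary e == 1.4*a
print check(15)   # negative: e > 1.4*a, out of bounds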
|
Using lambda for a constraint function
|
import numpy
from numpy import asarray
Initial = numpy.asarray [2.0, 4.0, 5.0, 3.0, 5.0, 6.0] # Initial values to start with
bounds = [(1, 5000), (1, 6000), (2, 100000), (1, 50000), (1.0, 5000), (2, 1000000)]
# actual passed bounds
b1 = lambda x: numpy.asarray([1.4*x[0] - x[0]])
b2 = lambda x: numpy.asarray([1.4*x[1] - x[1]])
b3 = lambda x: numpy.asarray([x[2] - x[3]])
constraints = numpy.asarray([b1, b2, b3])
opt= optimize.fmin_slsqp(func,Initial,ieqcons=constraints,bounds=bounds, full_output=True,iter=200,iprint=2, acc=0.01)
Problem:
I want to pass in inequality constraints. Consider that I have 6 parameters
[ a, b, c, d, e, f]
in the Initial values, and my constraints are:
a<=e<=1.4*a ('e' varies from a to 1.4*a)
b<=f<=1.4*b ('f' varies from b to 1.4*b)
c>d ('c' must always be greater than d)
But this is not working properly. I don't know what the mistake is.
Is there any better way to pass my constraints as a function?
Please help me.
|
[
"Based on the comment from Robert Kern, I have removed my previous answer. Here are the constraints as continuous functions:\nb1 = lambda x: x[4]-x[0] if x[4]<1.2*x[0] else 1.4*x[0]-x[4]\nb2 = lambda x: x[5]-x[1] if x[5]<1.2*x[1] else 1.4*x[1]-x[5]\nb3 = lambda x: x[2]-x[3]\n\nNote: Python 2.5 or greater is required for this syntax.1\nTo get the constraint a<=e<=1.4*a, note that 1.2*a is the halfway point between a and 1.4*a.\nBelow this point, that is, all e<1.2*a, we use the continuous function e-a. Thus the overall constraint function is negative when e<a, handling the lower out-of-bounds condition, zero on the lower boundary e==a, and then positive for e>a up to the halfway point.\nAbove the halfway point, that is, all e>1.2*a, we use instead the continuous function 1.4*a-e. This means the overall constraint function is is negative when e>1.4*a, handling the upper out-of-bounds condition, zero on the upper boundary e==1.4*a, and then positive when e<1.4*a, down to the halfway point.\nAt the halfway point, where e==1.2*a, both functions have the same value. This means that the overall function is continuous.\nReference: documentation for ieqcons.\n1 - Here is pre-Python 2.5 syntax: b1 = lambda x: (1.4*x[0]-x[4], x[4]-x[0])[x[4]<1.2*x[0]]\n"
] |
[
1
] |
[] |
[] |
[
"lambda",
"numpy",
"python",
"scipy"
] |
stackoverflow_0001511354_lambda_numpy_python_scipy.txt
|
Q:
Link Checker (Spider Crawler)
I am looking for a link checker to spider my website and log invalid links; the problem is that I have a login page at the start which is required. What I want is a link checker that can be run from the command line, POST the login details, and then spider the rest of the website.
Any ideas guys will be appreciated.
A:
I've just recently solved a similar problem like this:
import urllib
import urllib2
import cookielib
login = '[email protected]'
password = 'secret'
cookiejar = cookielib.CookieJar()
urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
# adjust this to match the form's field names
values = {'username': login, 'password': password}
data = urllib.urlencode(values)
request = urllib2.Request('http://target.of.POST-method', data)
url = urlOpener.open(request)
# from now on, we're authenticated and we can access the rest of the site
url = urlOpener.open('http://rest.of.user.area')
A:
You want to look at the cookielib module: http://docs.python.org/library/cookielib.html. It provides a full implementation of cookies, which will let you store login details. Once you're using a CookieJar, you just have to get login details from the user (say, from the console) and submit a proper POST request.
|
Link Checker (Spider Crawler)
|
I am looking for a link checker to spider my website and log invalid links, the problem is that I have a Login page at the start which is required. What i want is a link checker to run through the command post login details then spider the rest of the website.
Any ideas guys will be appreciated.
|
[
"I've just recently solved a similar problem like this:\nimport urllib\nimport urllib2\nimport cookielib\n\nlogin = '[email protected]'\npassword = 'secret'\n\ncookiejar = cookielib.CookieJar()\nurlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))\n\n# adjust this to match the form's field names\nvalues = {'username': login, 'password': password}\ndata = urllib.urlencode(values)\nrequest = urllib2.Request('http://target.of.POST-method', data)\nurl = urlOpener.open(request)\n# from now on, we're authenticated and we can access the rest of the site\nurl = urlOpener.open('http://rest.of.user.area')\n\n",
"You want to look at the cookielib module: http://docs.python.org/library/cookielib.html. It implements a full implementation of cookies, which will let you store login details. Once you're using a CookieJar, you just have to get login details from the user (say, from the console) and submit a proper POST request.\n"
] |
[
3,
2
] |
[] |
[] |
[
"hyperlink",
"python",
"web_crawler"
] |
stackoverflow_0001510211_hyperlink_python_web_crawler.txt
|
Q:
Please help me with this program to parse a file into an XML file
To parse an input text file and generate a) an XML file and b) an SVG (also XML) file.
The input text file (input.txt) contains the description of a number of produce distribution centers and storage centers around the country. Each line describes either a single distribution center (dcenter) or a storage center, each with a number of properties; each property name (code, for example) is separated from its value by a =.
Example (input.txt)
dcenter: code=d1, loc=San Jose, x=100, y=100, ctype=ct1
dcenter: code=d2, loc=San Ramon, x=300, y=200, ctype=ct2
storage: code=s1, locFrom=d1, x=50, y=50, rtype=rt1
storage: code=s2, locFrom=d1, x=-50,y=100, rtype=rt1
The desired Output of the program:
Output 1
<?xml version="1.0"?>
<dcenters>
<dcenter code="d1">
<loc> San Jose </loc>
<x> 100 </x>
<y> 100 </y>
<ctype> ct1 </ctype>
</dcenter>
<storage code="S1">
<locFrom> d1 </locFrom>
<x> 150 </x>
<y> 150 </y>
<rtype> rt1 </rtype>
</storage>
<storage code="S2">
<locFrom> d1 </locFrom>
<x> 50 </x>
<y> 200 </y>
<rtype> rt1 </rtype>
</storage>
Please help me with the program. I would really appreciate it.
A:
Suppose the input is in string s; either from direct assignment or from file.read:
s="""dcenter: code=d1, loc=San Jose, x=100, y=100, ctype=ct1
dcenter: code=d2, loc=San Ramon, x=300, y=200, ctype=ct2
storage: code=s1, locFrom=d1, x=50, y=50, rtype=rt1
storage: code=s2, locFrom=d1, x=-50,y=100, rtype=rt1"""
Then you can do this:
print '<?xml version="1.0"?>'
print "<dcenters>"
for line in s.splitlines():
type, fields = line.split(":")
params = fields.split(",")
code = params[0].split("=")[1].strip()
print '<%s code="%s">' % (type, code)
for p in params[1:]:
ptype, pvalue = p.strip().split("=")
print '<%s> %s </%s>' % (ptype, pvalue, ptype)
print '</%s>' % type
print "</dcenters>"
Not sure why d2 is missing from your sample output; I assume that's by mistake.
|
Please help me with this program to parse a file into an XML file
|
To parse an input text file and generate a) an XML file and b) an SVG (also XML) file.
The input text file (input.txt) contains the description of a number of produce distribution centers and storage centers around the country. Each line describes either a single distribution center (dcenter) or a storage center, each with a number of properties; each property name (code for example) is separated by its value with a =.
Example (input.txt)
dcenter: code=d1, loc=San Jose, x=100, y=100, ctype=ct1
dcenter: code=d2, loc=San Ramon, x=300, y=200, ctype=ct2
storage: code=s1, locFrom=d1, x=50, y=50, rtype=rt1
storage: code=s2, locFrom=d1, x=-50,y=100, rtype=rt1
The desired Output of the program:
Output 1
<?xml version="1.0"?>
<dcenters>
<dcenter code="d1">
<loc> San Jose </loc>
<x> 100 </x>
<y> 100 </y>
<ctype> ct1 </ctype>
</dcenter>
<storage code="S1">
<locFrom> d1 </locFrom>
<x> 150 </x>
<y> 150 </y>
<rtype> rt1 </rtype>
</storage>
<storage code="S2">
<locFrom> d1 </locFrom>
<x> 50 </x>
<y> 200 </y>
<rtype> rt1 </rtype>
</storage>
Please help me with the program. I will really appreciate.
|
[
"Suppose the input is in string s; either from direct assignment or from file.read:\ns=\"\"\"dcenter: code=d1, loc=San Jose, x=100, y=100, ctype=ct1\ndcenter: code=d2, loc=San Ramon, x=300, y=200, ctype=ct2\nstorage: code=s1, locFrom=d1, x=50, y=50, rtype=rt1\nstorage: code=s2, locFrom=d1, x=-50,y=100, rtype=rt1\"\"\"\n\nThen you can this:\nprint '<?xml version=\"1.0\"?>'\nprint \"<dcenters>\"\nfor line in s.splitlines():\n type, fields = line.split(\":\")\n params = fields.split(\",\")\n code = params[0].split(\"=\")[1].strip()\n print '<%s code=\"%s\">' % (type, code)\n for p in params[1:]:\n ptype, pvalue = p.strip().split(\"=\")\n print '<%s> %s </%s>' % (ptype, pvalue, ptype)\n print '</%s>' % type\nprint \"</dcenters>\"\n\nNot sure why d2 is missing from your sample output; I assume that's by mistake.\n"
] |
[
0
] |
[] |
[] |
[
"fileparsing",
"python",
"xml"
] |
stackoverflow_0001511950_fileparsing_python_xml.txt
|
Q:
Python/Ruby as mobile OS
I was wondering why smartphone/mobile device OSs are not written to allow dynamic languages as the language of choice? iPhone uses Objective-C, Google Android uses Java, Windows Mobile uses any manner of .NET language.
What would be the reasoning behind a mobile OS being written in Python, Ruby, or any other dynamic language? I understand that at a low level they would not cut it, but C or C++ would be fine for that, and Python, for example, could be the layer on top to interact with it. I mean, there is Jython or CPython.
I was just wondering why we do not see more dynamic language support in today's mobile OS's.
A:
In general it's all of these things. Memory, speed, and probably most importantly programmer familiarity. Apple has a huge investment in Objective C, Java is known by basically everyone, and C# is very popular as well. If you're trying for mass programmer appeal it makes sense to start with something popular, even if it's sort of boring.
There aren't really any technical requirements stopping it. We could write a whole Ruby stack and let the programmer re-implement the slow bits in C and it wouldn't be that big of a deal. It would be an investment for whatever company is making the mobile OS, and at the end of the day I'm not sure they gain as much from this.
Finally, it's the very beginning of mobile devices. In 5 years I wouldn't be at all surprised to see a much wider mobile stack.
A:
Contrary to the premise of the question: One of the first mainstream mobile devices was the Newton, which was designed to use a specialized dynamic language called NewtonScript for application development. The Newton development environment and language made it especially easy for applications to work together and share information - almost the polar opposite of the current iPhone experience. Although many developers writing new Newton applications from scratch liked it a lot - NewtonScript "feels" a lot like Ruby - the Newton had some performance issues and porting of existing code was not easy, even after Apple later added the ability to incorporate C code into a NewtonScript program. Also, it was very hard to protect one's intellectual property on the Newton - other developers could in most cases look inside your code and even override bits of it at a whim - a security nightmare.
The Newton was a commercial failure.
Palm took a few of Apple's best ideas - and improved upon them - but tossed dynamic language support as part of an overall simplification that eventually led to PalmOS gaining a majority of the mobile market share (for many years) as independent mobile software developers flocked to the new platform.
There were many reasons why the Newton was a failure, but some probably blame NewtonScript. Apple is "thinking different" with the iPhone, and one of the early decisions they seem to have made is to leverage as much as possible off their existing core developer base and make it easy for people to develop in Objective C. If iPhone gets official support for dynamic languages, that will be a later addition after long and careful consideration about how best to do it while still providing a secure and high-performance platform.
And 5 minutes after they do, others will follow. :-)
A:
The situation for multiple languages on mobile devices is better than the question implies. Java (in its J2ME incarnation) is available these days even in fairly cheap phones. Symbian S60 officially supports Python, and Javascript for widgets, and there's a Ruby port although it's still fairly experimental. Charles Nutter has experimented with getting JRuby running on Android. rhomobile claims to allow developing an app in Ruby which will then run on all the major smartphone OSes, although that kind of portability claim implies restrictions on what those apps can achieve.
It's important to distinguish between the mobile OS (which does operating system stuff like sharing and protecting resources) and the runtime platform (which provides a working environment and a set of APIs to user-written applications). An OS can support multiple runtimes, such as how you can run both C++ and Java apps in Windows, even though Windows itself is written in C++.
Runtimes will have different performance characteristics, and expose the capabilities of the OS and hardware to a greater or lesser degree. For example, J2ME is available on tons of devices, but on many devices the J2ME runtime doesn't provide access to the camera or the ability to make calls. The "native" runtime (i.e. the one where apps are written in the same language as the OS) is no different in this respect: what "native" apps can do depends on what the runtime allows.
A:
Jailbroken iPhones can have python installed, and I actually use python very frequently on mine.
A:
I think that performance concerns may be part of, but not all of, the reason. Mobile devices do not have very powerful hardware to work with.
I am partly unsure about this, though.
A:
One of the most pressing matters is garbage collection. Garbage collection often introduces unpredictable pauses on embedded machines, which sometimes need real-time performance.
This is why there is a Java Micro Edition which has a different garbage collector which reduces pauses in exchange for a slower program.
Refcounting garbage collectors (like the one in CPython) are also less prone to pauses but can explode when data with many nested pointers (like a linked list) get deleted.
A:
I suspect the basic reason is a combination of security and reliability. You don't want someone to be easily able to hack the phone, and you want to have some control over what's being installed.
A:
Memory is also a significant factor. It's easy to eat memory in Python, unfortunately.
A:
There are many reasons. Among them:
business reasons, such as software lock-in strategies,
efficiency: dynamic languages are usually perceived to be slower (and in some cases really are slower, or at least provide a limit to the amount of optimisation you can do. On a mobile device, optimising code is necessary much more often than on a PC), and tend to use more memory, which is a significant issue on portable devices with limited memory and little cache,
keeping development simple: a platform that supports say Python and Ruby and Java out of the box:
means thrice the work to write documentation and provide support,
divides development effort into three; it takes longer for helpful material to appear on the web and there are fewer developers who use the same language as you on your platform,
requires more storage on the device to support all these languages,
management need to be convinced. I've always felt that the merits of Java are easily explained to a non-technical audience. .Net and Obj-C also seem a very natural choice for a Microsoft and Apple platform, respectively.
A:
webOS -- the new OS from Palm, which will debut on the Pre -- has you write apps against a webkit runtime in JavaScript. Time will tell how successful it is, but I suspect it will not be the first to go down this path. As mobile devices become more powerful, you'll see dynamic languages become more prevalent.
A:
My Palm has a Lua implementation that allows you to do reasonable GUIs, a fairly useless old Python 1.5, a superb Forth (which allows you to produce compiled apps) and a Scheme that allows for complete GUI dev.
At the recent Apple WWDC 2009, the Symbian alliance hosted an event on the first day in an adjacent building, with the teaser of a free Nokia 5800 - a US$350 phone - for everyone who came, even just for the lunch and marketing pitch. The event was to pitch developing for the Ovi Store, and they had developers there and a programming competition in the afternoon.
The three languages they were emphasizing for development for Symbian were Java, Flash (lite) and Python. Python is the only option that allows you to work on the device or a PC and includes samples with OpenGL ES and other phone features.
With a utility to bundle Python apps into standalones that can be hosted on the store, I'd say Python on S60 is right up there as a contender for serious dynamic language on the (still) dominant platform.
A:
There is a linux distribution for OpenMoko Freerunner called SHR. Most of its settings and framework code is written in python and... well, it isn't very fast. It is bearable, but it was planned from the beginning to rewrite it in Vala.
On the other hand, my few smallish apps work fast enough (the only drawback being a long startup time) to consider Python for developing user applications.
For the record: Freerunner has ARM-something 400MHz and 128MB of RAM. I guess that once mobile devices cross 1GHz, languages like Python will be fast enough for middle-level stuff too (the low-level being the kernel).
A:
Rhomobile's open source Rhodes framework offers this today. The world's first Ruby implementations for all smartphones.
|
Python/Ruby as mobile OS
|
I was wondering why smartphone/mobile device OSs are not written to allow dynamic languages as the language of choice? iPhone uses Objective-C, Google Android uses Java, Windows Mobile uses any manner of .NET language.
What would be the reasoning behind a mobile OS being written in Python, Ruby, or any other dynamic language? I understand that at a low level they would not cut it but C or C++ would be fine for that and Python, for example, could be the layer on top to interact with it. I mean, there is Jython or CPython.
I was just wondering why we do not see more dynamic language support in today's mobile OS's.
|
[
"In general it's all of these things. Memory, speed, and probably most importantly programmer familiarity. Apple has a huge investment in Objective C, Java is known by basically everyone, and C# is very popular as well. If you're trying for mass programmer appeal it makes sense to start with something popular, even if it's sort of boring.\nThere aren't really any technical requirements stopping it. We could write a whole Ruby stack and let the programmer re-implement the slow bits in C and it wouldn't be that big of a deal. It would be an investment for whatever company is making the mobile OS, and at the end of the day I'm not sure they gain as much from this.\nFinally, it's the very beginning of mobile devices. In 5 years I wouldn't be at all surprised to see a much wider mobile stack.\n",
"Contrary to the premise of the question: One of the first mainstream mobile devices was the Newton, which was designed to use a specialized dynamic language called NewtonScript for application development. The Newton development environment and language made it especially easy for applications to work together and share information - almost the polar opposite of the current iPhone experience. Although many developers writing new Newton applications from scratch liked it a lot - NewtonScript \"feels\" a lot like Ruby - the Newton had some performance issues and porting of existing code was not easy, even after Apple later added the ability to incorporate C code into a NewtonScript program. Also, it was very hard to protect one's intellectual property on the Newton - other developers could in most cases look inside your code and even override bits of it at a whim - a security nightmare.\nThe Newton was a commercial failure.\nPalm took a few of Apple's best ideas - and improved upon them - but tossed dynamic language support as part of an overall simplification that eventually led to PalmOS gaining a majority of the mobile market share (for many years) as independent mobile software developers flocked to the new platform.\nThere were many reasons why the Newton was a failure, but some probably blame NewtonScript. Apple is \"thinking different\" with the iPhone, and one of the early decisions they seem to have made is to leverage as much as possible off their existing core developer base and make it easy for people to develop in Objective C. If iPhone gets official support for dynamic languages, that will be a later addition after long and careful consideration about how best to do it while still providing a secure and high-performance platform.\nAnd 5 minutes after they do, others will follow. :-)\n",
"The situation for multiple languages on mobile devices is better than the question implies. Java (in its J2ME incarnation) is available these days even in fairly cheap phones. Symbian S60 officially supports Python, and Javascript for widgets, and there's a Ruby port although it's still fairly experimental. Charles Nutter has experimented with getting JRuby running on Android. rhomobile claims to allow developing an app in Ruby which will then run on all the major smartphone OSes, although that kind of portability claim implies restrictions on what those apps can achieve.\nIt's important to distinguish between the mobile OS (which does operating system stuff like sharing and protecting resources) and the runtime platform (which provides a working environment and a set of APIs to user-written applications). An OS can support multiple runtimes, such as how you can run both C++ and Java apps in Windows, even though Windows itself is written in C++.\nRuntimes will have different performance characteristics, and expose the capabilities of the OS and hardware to a greater or lesser degree. For example, J2ME is available on tons of devices, but on many devices the J2ME runtime doesn't provide access to the camera or the ability to make calls. The \"native\" runtime (i.e. the one where apps are written in the same language as the OS) is no different in this respect: what \"native\" apps can do depends on what the runtime allows.\n",
"Jailbroken iPhones can have python installed, and I actually use python very frequently on mine. \n",
"I think that performance concerns may be part of, but not all of, the reason. Mobile devices do not have very powerful hardware to work with.\nI am partly unsure about this, though.\n",
"One of the most pressing matters is garbage collection. Garbage collection often times introduce unpredictable pauses in embedded machines which sometimes need real time performance.\nThis is why there is a Java Micro Edition which has a different garbage collector which reduces pauses in exchange for a slower program.\nRefcounting garbage collectors (like the one in CPython) are also less prone to pauses but can explode when data with many nested pointers (like a linked list) get deleted.\n",
"I suspect the basic reason is a combination of security and reliability. You don't want someone to be easily able to hack the phone, and you want to have some control over what's being installed.\n",
"Memory is also a significant factor. It's easy to eat memory in Python, unfortunately.\n",
"There are many reasons. Among them:\n\nbusiness reasons, such as software lock-in strategies,\nefficiency: dynamic languages are usually perceived to be slower (and in some cases really are slower, or at least provide a limit to the amount of optimsation you can do. On a mobile device, optimising code is necessary much more often than on a PC), and tend to use more memory, which is a significant issue on portable devices with limited memory and little cache,,\nkeeping development simple: a platform that supports say Python and Ruby and Java out of the box:\n\n\nmeans thrice the work to write documentation and provide support,\ndivides development effort into three; it takes longer for helpful material to appear on the web and there are less developers who use the same language as you on your platform,\nrequires more storage on the device to support all these languages,\n\nmanagement need to be convinced. I've always felt that the merits of Java are easily explained to a non-technical audience. .Net and Obj-C also seem a very natural choice for a Microsoft and Apple platform, respectively.\n\n",
"webOS -- the new OS from Palm, which will debut on the Pre -- has you write apps against a webkit runtime in JavaScript. Time will tell how successful it is, but I suspect it will not be the first to go down this path. As mobile devices become more powerful, you'll see dynamic languages become more prevalent.\n",
"My Palm has a Lua implementation that allows you to do reasonable GUIs, a fairly useless old Python 1.5, a superb Forth (which allows you to produce compiled apps) and a Scheme that allows for copmlete GUI dev.\nAt the recent Apple WWDC 2009, the Symbian alliance hosted an event the first day in an adjacent building with the teaser of a free Nokia 5800 for everyone coming even just for the lunch with marketing pitch - a US$350 phone. The event was to pitch developing for the Ovi Store and they had developers there and a programming competition on the afternoon.\nThe three languages they were emphasizing for development for Symbian were Java, Flash (lite) and Python. Python is the only option that allows you to work on the device or a PC and includes samples with OpenGL ES and other phone features.\nWith a utility to bundle Python apps into standalones that can be hosted on the store, I'd say Python on S60 is right up there as a contender for serious dynamic language on the (still) dominant platform.\n",
"There is a linux distribution for OpenMoko Freerunner called SHR. Most of its settings and framework code is written in python and... well, it isn't very fast. It is bearable, but it was planned from the beginning to rewrite it in Vala.\nOn the other side, my few smallish apps work fast enough (with the only drawback having big startup time) to consider python to develop user applications.\nFor the record: Freerunner has ARM-something 400MHz and 128MB of RAM. I guess that once mobile devices cross 1GHz, languages like Python will be fast enough for middle-level stuff too (the low-level being the kernel).\n",
"Rhomobile's open source Rhodes framework offers this today. The world's first Ruby implementations for all smartphones.\n"
] |
[
14,
2,
2,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"dynamic_languages",
"mobile",
"operating_system",
"python",
"ruby"
] |
stackoverflow_0000816212_dynamic_languages_mobile_operating_system_python_ruby.txt
|
Q:
Python loop | "do-while" over a tree
Is there a more Pythonic way to put this loop together?:
while True:
children = tree.getChildren()
if not children:
break
tree = children[0]
UPDATE:
I think this syntax is probably what I'm going to go with:
while tree.getChildren():
tree = tree.getChildren()[0]
A:
children = tree.getChildren()
while children:
tree = children[0]
children = tree.getChildren()
It would be easier to suggest something if I knew what kind of collection api you're working with. In a good api, you could probably do something like
while tree.hasChildren():
children = tree.getChildren()
tree = children[0]
A:
(My first answer suggested to use iter(tree.getChildren, None) directly, but that won't work as we are not calling the same tree.getChildren function all the time.)
To fix this up I propose a solution using lambda's non-binding of its variables as a possible workaround. I think at this point this solution is not better than any other previously posted:
You can use iter() in its second, sentinel form, using lambda's strange binding:
for children in iter((lambda : tree.getChildren()), None):
tree = children[0]
(Here it assumes getChildren() returns None when there are no children, but it has to be replaced with whatever value it returns ([]?).)
iter(function, sentinel) calls function repeatedly until it returns the sentinel value.
A:
Do you really only want the first branch? I'm gonna assume you don't and that you want the whole tree. First I'd do this:
def allitems(tree):
for child in tree.getChildren():
yield child
for grandchild in allitems(child):
yield grandchild
This will go through the whole tree. Then you can just:
for item in allitems(tree):
do_whatever_you_want(item)
Pythonic, simple, clean, and since it uses generators, will not use much memory even for huge trees.
A:
I think the code you have is fine. If you really wanted to, you could wrap it all up in a try/except:
while True:
try:
tree = tree.getChildren()[0]
except (IndexError, TypeError):
break
IndexError will work if getChildren() returns an empty list when there are no children. If it returns False or 0 or None or some other unsubscriptable false-like value, TypeError will handle the exception.
But that's just another way to do it. Again, I don't think the Pythonistas will hunt you down for the code you already have.
A:
Without further testing, I believe this should work:
try: while True: tree=tree.getChildren()[0]
except: pass
You might also want to override the __getitem__() (the brackets operator) in the Tree class, for further neatification.
try: while True: tree=tree[0]
except: pass
|
Python loop | "do-while" over a tree
|
Is there a more Pythonic way to put this loop together?:
while True:
children = tree.getChildren()
if not children:
break
tree = children[0]
UPDATE:
I think this syntax is probably what I'm going to go with:
while tree.getChildren():
tree = tree.getChildren()[0]
|
[
"children = tree.getChildren()\nwhile children:\n tree = children[0]\n children = tree.getChildren()\n\nIt would be easier to suggest something if I knew what kind of collection api you're working with. In a good api, you could probably do something like\nwhile tree.hasChildren():\n children = tree.getChildren()\n tree = children[0]\n\n",
"(My first answer suggested to use iter(tree.getChildren, None) directly, but that won't work as we are not calling the same tree.getChildren function all the time.)\nTo fix this up I propose a solution using lambda's non-binding of its variables as a possible workaround. I think at this point this solution is not better than any other previously posted:\nYou can use iter() in it's second sentinel form, using lamda's strange binding:\nfor children in iter((lambda : tree.getChildren()), None):\n tree = children[0]\n\n(Here it assumes getChildren() returns None when there are no children, but it has to be replaced with whatever value it returns ([]?).)\niter(function, sentinel) calls function repeatedly until it returns the sentinel value.\n",
"Do you really only want the first branch? I'm gonna assume you don't and that you want the whole tree. First I'd do this:\ndef allitems(tree):\n for child in tree.getChildren():\n yield child\n for grandchild in allitems(child):\n yield grandchild\n\nThis will go through the whole tree. Then you can just:\nfor item in allitems(tree):\n do_whatever_you_want(item)\n\nPythonic, simple, clean, and since it uses generators, will not use much memory even for huge trees.\n",
"I think the code you have is fine. If you really wanted to, you could wrap it all up in a try/except:\nwhile True:\n try: \n tree = tree.getChildren()[0]\n except (IndexError, TypeError):\n break\n\nIndexError will work if getChildren() returns an empty list when there are no children. If it returns False or 0 or None or some other unsubscriptable false-like value, TypeError will handle the exception.\nBut that's just another way to do it. Again, I don't think the Pythonistas will hunt you down for the code you already have.\n",
"Without further testing, I believe this should work:\ntry: while True: tree=tree.getChildren()[0]\nexcept: pass\n\nYou might also want to override the __getitem__() (the brackets operator) in the Tree class, for further neatification.\ntry: while True: tree=tree[0]\nexcept: pass\n\n"
] |
[
4,
2,
1,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001511506_python.txt
|
Q:
Generic Foreign Keys and get_or_create in django 1.0: Broken?
As you can see, create() works, but get_or_create() doesn't. Am I missing something obvious here?
In [7]: f = FeedItem.objects.create(source=u, dest=q, type="greata")
In [8]: f, created = FeedItem.objects.get_or_create(source=u, dest=q, type="greata")
---------------------------------------------------------------------------
FieldError Traceback (most recent call last)
/Users/andrew/clownfish/panda-repo/community-feed/<ipython console> in <module>()
/Library/Python/2.6/site-packages/django/db/models/manager.pyc in get_or_create(self, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/query.pyc in get_or_create(self, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/query.pyc in get(self, *args, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/query.pyc in filter(self, *args, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/query.pyc in _filter_or_exclude(self, negate, *args, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/sql/query.pyc in add_q(self, q_object, used_aliases)
/Library/Python/2.6/site-packages/django/db/models/sql/query.pyc in add_filter(self, filter_expr, connector, negate, trim, can_reuse, process_extras)
/Library/Python/2.6/site-packages/django/db/models/sql/query.pyc in setup_joins(self, names, opts, alias, dupe_multis, allow_many, allow_explicit_fk, can_reuse, negate, process_extras)
FieldError: Cannot resolve keyword 'source' into field. Choices are: dest_content_type, dest_object_id, id, src_content_type, src_object_id, timestamp, type, weight
A:
It looks like there's different logic used in create and get_or_create: in get_or_create there is no source argument, only src_object_id and src_content_type. It is easy to resolve this - pass src_object_id as u.id and src_content_type as u's content type (same with dest).
Or use try/except & create.
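For illustration, a rough sketch of the first suggestion, using the field names shown in the traceback and Django's contenttypes framework (FeedItem, u and q are the objects from the question):
from django.contrib.contenttypes.models import ContentType

# Assumes the generic foreign keys use the src_*/dest_* columns from the error message.
f, created = FeedItem.objects.get_or_create(
    src_content_type=ContentType.objects.get_for_model(u),
    src_object_id=u.id,
    dest_content_type=ContentType.objects.get_for_model(q),
    dest_object_id=q.id,
    type="greata",
)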
|
Generic Foreign Keys and get_or_create in django 1.0: Broken?
|
As you can see, create() works, but get_or_create() doesn't. Am I missing something obvious here?
In [7]: f = FeedItem.objects.create(source=u, dest=q, type="greata")
In [8]: f, created = FeedItem.objects.get_or_create(source=u, dest=q, type="greata")
---------------------------------------------------------------------------
FieldError Traceback (most recent call last)
/Users/andrew/clownfish/panda-repo/community-feed/<ipython console> in <module>()
/Library/Python/2.6/site-packages/django/db/models/manager.pyc in get_or_create(self, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/query.pyc in get_or_create(self, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/query.pyc in get(self, *args, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/query.pyc in filter(self, *args, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/query.pyc in _filter_or_exclude(self, negate, *args, **kwargs)
/Library/Python/2.6/site-packages/django/db/models/sql/query.pyc in add_q(self, q_object, used_aliases)
/Library/Python/2.6/site-packages/django/db/models/sql/query.pyc in add_filter(self, filter_expr, connector, negate, trim, can_reuse, process_extras)
/Library/Python/2.6/site-packages/django/db/models/sql/query.pyc in setup_joins(self, names, opts, alias, dupe_multis, allow_many, allow_explicit_fk, can_reuse, negate, process_extras)
FieldError: Cannot resolve keyword 'source' into field. Choices are: dest_content_type, dest_object_id, id, src_content_type, src_object_id, timestamp, type, weight
|
[
"Look like there's different logic used in create and get_or_create as in get_or_create there is no source argument but src_object_id and src_content_type but it is easy to resolve this - pass scr_object_id as u.id and src_content_type as u.content_type (same with dest).\nOr use try/except & create.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"foreign_keys",
"python"
] |
stackoverflow_0001512152_django_foreign_keys_python.txt
|
Q:
Mod_python produces no output
Just installed and configured mod_python 3.2.8 on a CentOS 5 (Apache 2.2.3) server with Python 2.4.3. It is loaded fine by Apache.
I activated the mpinfo test page and it works. So I wrote a simple "Hello World" with the following code:
from mod_python import apache
def handler(req):
req.content_type = 'text/plain'
req.write("Hello World!")
req.flush()
return apache.OK
It outputs a blank page, with no text and no source. If I consciously create a syntax error I get the error output on the URL, for example (when I put a space before "def"):
Mod_python error: "PythonHandler mod_python.cgihandler"
Traceback (most recent call last):
File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 299, in HandlerDispatch
result = object(req)
File "/usr/lib/python2.4/site-packages/mod_python/cgihandler.py", line 96, in handler
imp.load_module(module_name, fd, path, desc)
File "/var/www/vhosts/localhost/httpdocs/mptest.py", line 3
def handler(req):
^
SyntaxError: invalid syntax
I have spent about five hours browsing different tutorials, FAQs and troubleshooting guides but can't find a description of this exact issue.
What do you think could be the issue/cause?
EDIT: Here is the Apache configuration for the site...
<Directory />
Options FollowSymLinks
AllowOverride None
AddHandler mod_python .py
PythonHandler mptest
PythonDebug On
</Directory>
EDIT 2: Ah, another thing I forgot to mention is that I intend to use mod_python to write Apache extensions. The application itself is written in PHP but I need to make some security tweaks on the server :)
A:
Don't use mod_python.
A common mistake is to take mod_python as "mod_php, but for python" and that is not true. mod_python is more suited to writing apache extensions, not web applications.
The standardized protocol to use between python web applications and web servers (not only apache) is WSGI. Using it ensures that you can publish your application to any wsgi-compliant webserver (almost all modern web servers are wsgi-compliant)
On apache, use mod_wsgi instead.
Your example rewritten using the wsgi standard and mod_wsgi on apache:
mywebapp.py:
def application(environ, start_response):
start_response('200 OK', [('content-type', 'text/plain')])
return ['Hello World']
Apache configuration:
WSGIScriptAlias /myapp /usr/local/www/wsgi-scripts/mywebapp.py
<Directory /usr/local/www/wsgi-scripts>
Order allow,deny
Allow from all
</Directory>
Now just go to http://localhost/myapp and the script will run. Additionally, any access under this root (i.e. http://localhost/myapp/stuff/here) will be handled by this script.
It's a good idea to choose a web framework. CherryPy. Pylons. Django. They make things even easier.
A good website to look at is wsgi.org
A:
Your original problem is that mod_python.cgihandler is being called to handle the request. This means your Python script file is being interpreted as a CGI script. Thus, no wonder it doesn't return anything.
You likely have conflicting definition in your Apache configuration which is enabling the mod_python.cgihandler.
A:
I make a complete new answer for clarity...
I decided to install mod_wsgi instead. So I've set it up and when I go to my testfile I just see the page source. I haven't been spending any time on finding the issue yet, so I'll get back to you when I either solve the problem or decide that I need more help :)
Thank you :)
|
Mod_python produces no output
|
Just installed and configured mod_python 3.2.8 on a CentOS 5 (Apache 2.2.3) server with Python 2.4.3. It is loaded fine by Apache.
I activated the mpinfo test page and it works. So I wrote a simple "Hello World" with the following code:
from mod_python import apache
def handler(req):
req.content_type = 'text/plain'
req.write("Hello World!")
req.flush()
return apache.OK
It outputs a blank page, with no text and no source. If I consciously create a syntax error I get the error output on the URL, for example (when I put a space before "def"):
Mod_python error: "PythonHandler mod_python.cgihandler"
Traceback (most recent call last):
File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 299, in HandlerDispatch
result = object(req)
File "/usr/lib/python2.4/site-packages/mod_python/cgihandler.py", line 96, in handler
imp.load_module(module_name, fd, path, desc)
File "/var/www/vhosts/localhost/httpdocs/mptest.py", line 3
def handler(req):
^
SyntaxError: invalid syntax
I have spent about five hours browsing different tutorials, FAQs and troubleshooting guides but can't find a description of this exact issue.
What do you think could be the issue/cause?
EDIT: Here is the Apache configuration for the site...
<Directory />
Options FollowSymLinks
AllowOverride None
AddHandler mod_python .py
PythonHandler mptest
PythonDebug On
</Directory>
EDIT 2: Ah, another thing I forgot to mention is that I intend to use mod_python to write Apache extensions. The application itself is written in PHP but I need to make some security tweaks on the server :)
|
[
"Don't use mod_python. \nA common mistake is to take mod_python as \"mod_php, but for python\" and that is not true. mod_python is more suited to writing apache extensions, not web applications. \nThe standartized protocol to use between python web applications and web servers (not only apache) is WSGI. Using it ensures that you can publish your application to any wsgi-compliant webserver (almost all modern web servers are wsgi-compliant)\nOn apache, use mod_wsgi instead.\nYour example rewritten using the wsgi standard and mod_wsgi on apache:\nmywebapp.py:\ndef application(environ, start_response):\n start_response('200 OK', [('content-type', 'text/plain')])\n return ['Hello World']\n\nApache configuration:\nWSGIScriptAlias /myapp /usr/local/www/wsgi-scripts/mywebapp.py\n<Directory /usr/local/www/wsgi-scripts>\n Order allow,deny\n Allow from all\n</Directory>\n\nNow just go to http://localhost/myapp and the script will run. Additionally, any access under this root (i.e. http://localhost/myapp/stuff/here) will be handled by this script.\nIt's a good idea to choose a web framework. CherryPy. Pylons. Django. They make things even easier.\nA good website to look at is wsgi.org\n",
"Your original problem is that mod_python.cgihandler is being called to handle the request. This means your Python script file is being interpreted as a CGI script. Thus, no wonder it doesn't return anything.\nYou likely have conflicting definition in your Apache configuration which is enabling the mod_python.cgihandler.\n",
"I make a complete new answer for clarity...\nI decided to install mod_wsgi instead. So I've set it up and when I go to my testfile I just see the page source. I haven't been spending any time on finding the issue yet, so I'll get back to you when I either solve the problem or decide that I need more help :)\nThank you :)\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"apache",
"mod_python",
"python"
] |
stackoverflow_0001508406_apache_mod_python_python.txt
|
Q:
What is a good reference for Server side development?
I am more interested in the design of the code (i.e. functional design vs object-oriented design). What are the best practices, and what are the community's thoughts on this subject?
Not that it should matter, but I am working with Apache and Python technology stack.
A:
If you are using Apache+Python, this sounds like you are using Python for dynamic web pages. In that case, I would strongly urge you to look into Django. There are also other Python web development environments, but Django is perhaps the most popular; and it has excellent documentation such as The Django Book. The Django Book describes best practices for setting up a robust web site: how to use multiple servers for redundancy, how to set up the database server, how to set up a cache to reduce the load on your database, etc.
Other than that tip, good Python server-side code would be just good Python code. There was a question asked recently about how to become a good Python developer, and I would suggest you read that: How do I get fluent in Python?
|
What is a good reference for Server side development?
|
I am more interested in the design of the code (i.e. functional design vs object-oriented design). What are the best practices, and what are the community's thoughts on this subject?
Not that it should matter, but I am working with Apache and Python technology stack.
|
[
"If you are using Apache+Python, this sounds like you are using Python for dynamic web pages. In that case, I would strongly urge you to look into Django. There are also other Python web development environments, but Django is perhaps the most popular; and it has excellent documentation such as The Django Book. The Django Book describes best practices for setting up a robust web site: how to use multiple servers for redundancy, how to set up the database server, how to set up a cache to reduce the load on your database, etc.\nOther than that tip, good Python server-side code would be just good Python code. There was a question asked recently about how to become a good Python developer, and I would suggest you read that: How do I get fluent in Python?\n"
] |
[
2
] |
[] |
[] |
[
"apache",
"architecture",
"python"
] |
stackoverflow_0001512155_apache_architecture_python.txt
|
Q:
Is there any framework like RoR on Python 3000?
One of the features I like in RoR is the db management: it can hide all the SQL statements, and it is also very easy to switch between different databases in RoR. Is there any similar framework in Python 3000?
A:
This answer was awfully outdated. The current state of affairs is:
Django is close to supporting Python 3
CherryPy supports Python 3 since version 3.2
Pyramid has Python 3 support since 1.3
Bottle, which is a lightweight WSGI micro web-framework, supports Python 3
I'm sure this list will keep growing every coming month, especially considering that there will never be a Python 2.8.
2.7 will be the end of the line for Python 2 development, and now the official upgrade path from 2.7 is Python 3.x. I'm sure that with this state of affairs, Python 3 support from web frameworks is only going to get better and better.
[OUTDATED]
Python 3 is not yet in high deployment. It's still lacking a lot of third party libraries.
The recommended Python version is 2.6.x, as it's the most current, it's backwards compatible, and has many backported features from 3.1.
For Python 2.6 you will find quite a few frameworks:
Django
Turbogears
CherryPy
Zope
and many more
A:
I believe CherryPy is on the verge of being released for Python 3.X.
A:
Python 3 isn't ready for web applications right now. The WSGI 1.0 specification isn't suitable for Py3k and the related standard libraries are 2to3 hacks that don't work consistently faced with bytes vs. unicode. It's a real mess.
WEB-SIG are bashing out proposals for a WSGI revision; hopefully it can move forward soon, because although Python 3 isn't mainstream yet it's certainly heading that way, and the brokenness of webdev is rather embarrassing.
A:
Python 3 is not ready for practical use, because there are not yet enough libraries that have been updated to support Python 3. So the answer is: No.
But there are LOADS of them on Python 2. Tens, at least.
Django, Turbogears, BFG and of course the old man of the game: Zope. To tell which is best for you, you need to expand your requirements a lot.
|
Is there any framework like RoR on Python 3000?
|
One of the features I like in RoR is the db management: it can hide all the SQL statements, and it is also very easy to switch between different databases in RoR. Is there any similar framework in Python 3000?
|
[
"This answer was awfully outdated. The current state of afairs is:\n\nDjango is close to supporting Python 3\nCherryPy supports Python 3 since version 3.2\nPyramid has Python 3 support since 1.3\nBottle, which is a lightweight WSGI micro web-framework, supports Python 3\n\nI'm sure this list will keep growing every coming month, specially considering that there will never be a Python 2.8.\n2.7 will be the end of the line for Python 2 development, and now the official upgrade path from 2.7 is Python 3.x. I'm sure that with this state of affairs, Python 3 support from web frameworks is only going to get better and better.\n\n[OUTDATED]\nPython 3 is not yet in high deployment. It's still lacking a lot of third party libraries.\nThe recommended Python version is 2.6.x, as it's the most current, it's backwards compatible, and has many backported features from 3.1.\nFor Python 2.6 you will find quite a few frameworks:\n\nDjango\nTurbogears\nCherryPy\nZope\nand many more\n\n",
"I believe CherryPy is on the verge of being released for Python 3.X.\n",
"Python 3 isn't ready for web applications right now. The WSGI 1.0 specification isn't suitable for Py3k and the related standard libraries are 2to3 hacks that don't work consistently faced with bytes vs. unicode. It's a real mess.\nWEB-SIG are bashing out proposals for a WSGI revision; hopefully it can move forward soon, because although Python 3 isn't mainstream yet it's certainly heading that way, and the brokenness of webdev is rather embarrassing.\n",
"Python 3 is not ready for practical use, because there is not yet enough libraries that have been updated to support Python 3. So the answer is: No.\nBut there are LOADS of them on Python 2. Tens, at least.\nDjango, Turbogears, BFG and of course the old man of the game: Zope. To tell which is best for you, you need to expand your requirements a lot. \n"
] |
[
5,
2,
1,
0
] |
[
"There's Django but it works with Python 2.3+ only for now.\n"
] |
[
-1
] |
[
"frameworks",
"python",
"python_3.x",
"ruby_on_rails"
] |
stackoverflow_0001510084_frameworks_python_python_3.x_ruby_on_rails.txt
|
Q:
Python: how to write a data struct to a file as text (not pickled)
Is there a way to write Python data structs to a file as text?
e.g. an app is running and has a variable/object: OPTIONS = ('ON', 'OFF', )
I need to write/merge the OPTIONS tuple into another file, not as a
pickled object, but as text, verbatim: OPTIONS = ('ON', 'OFF', )
I could traverse the tuple, and one by one write the elements into
the target file, but was wondering if there's an easier way.
note: if i do a "straight" write, i get the following:
fout.write(OPTIONS)
...
TypeError: argument 1 must be string or read-only character buffer, not tuple
A:
You could use repr (repr works well with things that have a __repr__() method):
>>> OPTIONS=('ON', 'OFF', )
>>> "OPTIONS="+repr(OPTIONS)
"OPTIONS=('ON', 'OFF')"
A:
fout.write(str(OPTIONS)) does what you want in this case, but no doubt in many others it won't; repr instead of str may be closer to your desires (but then again it might not be, as you express them so vaguely and generally, beyond that single example).
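For instance, a minimal sketch that writes the tuple from the question verbatim (the file name is made up):
OPTIONS = ('ON', 'OFF')

with open('settings.py', 'w') as fout:  # hypothetical target file
    fout.write('OPTIONS = %r\n' % (OPTIONS,))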
A:
I don't know your scope, but you could use another serialisation/persistence system like JSON or Twisted Jelly that is more human readable (there are others, like YAML).
I used jelly in some projects for preferences files. It's really easy to use, but you have to use repr() to save the data in human-readable form and then eval() to read it back. So don't do that on everything, because there's a security risk in using eval().
Here's a code example that prettify the representation (add indentation):
VERSION = 'v1.1'
def read_data(filename):
return unjelly(eval(open(filename, 'r').read().replace('\n', '').replace('\t', '')))
def write_data(filename, obj):
dump = repr(jelly(obj))
level = 0
nice_dump = ['%s\n' % VERSION]
for char in dump:
if char == '[':
if level > 0:
nice_dump.append('\n' + '\t' * level)
level += 1
elif char == ']':
level -= 1
nice_dump.append(char)
open(filename, 'w').write(''.join(nice_dump))
|
Python: how to write a data struct to a file as text (not pickled)
|
Is there a way to write Python data structs to a file as text?
e.g. an app is running and has a variable/object: OPTIONS = ('ON', 'OFF', )
I need to write/merge the OPTIONS tuple into another file, not as a
pickled object, but as text, verbatim: OPTIONS = ('ON', 'OFF', )
I could traverse the tuple, and one by one write the elements into
the target file, but was wondering if there's an easier way.
note: if i do a "straight" write, i get the following:
fout.write(OPTIONS)
...
TypeError: argument 1 must be string or read-only character buffer, not tuple
|
[
"You could use repr (repr works well with things that have a __repr__() method):\n>>> OPTIONS=('ON', 'OFF', )\n>>> \"OPTIONS=\"+repr(OPTIONS)\n\"OPTIONS=('ON', 'OFF')\"\n\n",
"fout.write(str(OPTIONS)) does what you want in this case, but no doubt in many others it won't; repr instead of str may be closer to your desires (but then again it might not be, as you express them so vaguely and generally, beyond that single example).\n",
"I don't know your scope but you could use another serialisation/persistence system like JSON, or Twisted Jelly that are more human readable (there's others like YAML).\nI used jelly in some project for preferences files. It's really easy to use but you have to use repr() to save the data in human readable form and then eval() to read it back. So don't do that on everything because there's a security risk by using eval().\nHere's a code example that prettify the representation (add indentation):\nVERSION = 'v1.1'\n\ndef read_data(filename):\n return unjelly(eval(open(filename, 'r').read().replace('\\n', '').replace('\\t', '')))\n\ndef write_data(filename, obj):\n dump = repr(jelly(obj))\n level = 0\n nice_dump = ['%s\\n' % VERSION]\n for char in dump:\n if char == '[':\n if level > 0:\n nice_dump.append('\\n' + '\\t' * level)\n level += 1\n elif char == ']':\n level -= 1\n nice_dump.append(char)\n open(filename, 'w').write(''.join(nice_dump))\n\n"
] |
[
4,
1,
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001512401_python.txt
|
Q:
Determining if stdout for a Python process is redirected
I've noticed that curl can tell whether or not I'm redirecting its output (in which case it puts up a progress bar).
Is there a reasonable way to do this in a Python script? So:
$ python my_script.py
Not redirected
$ python my_script.py > output.txt
Redirected!
A:
import sys
if sys.stdout.isatty():
print "Not redirected"
else:
sys.stderr.write("Redirected!\n")
A:
Actually, what you want to do here is find out if stdin and stdout are the same thing.
$ cat test.py
import os
print os.fstat(0) == os.fstat(1)
$ python test.py
True
$ python test.py > f
$ cat f
False
$
The longer but more traditional version of the "are they the same file" test just compares st_ino and st_dev. Typically, on Windows these are faked up with a hash of something so that this exact design pattern will work.
A:
Look at
os.isatty(fd)
(I don't think this works on Windows, however)
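For example, passing file descriptor 1 (stdout):
import os

print(os.isatty(1))  # 1 is stdout's file descriptor; False when redirected to a file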
|
Determining if stdout for a Python process is redirected
|
I've noticed that curl can tell whether or not I'm redirecting its output (in which case it puts up a progress bar).
Is there a reasonable way to do this in a Python script? So:
$ python my_script.py
Not redirected
$ python my_script.py > output.txt
Redirected!
|
[
"import sys\n\nif sys.stdout.isatty():\n print \"Not redirected\"\nelse:\n sys.stderr.write(\"Redirected!\\n\")\n\n",
"Actually, what you want to do here is find out if stdin and stdout are the same thing.\n$ cat test.py\nimport os\nprint os.fstat(0) == os.fstat(1)\n$ python test.py\nTrue\n$ python test.py > f\n$ cat f\nFalse\n$ \n\nThe longer but more traditional version of the are they the same file test just compares st_ino and st_dev. Typically, on windows these are faked up with a hash of something so that this exact design pattern will work.\n",
"Look at \nos.isatty(fd) \n\n(I don't think this works on Windows, however)\n"
] |
[
43,
14,
4
] |
[] |
[] |
[
"python"
] |
stackoverflow_0001512457_python.txt
|
Q:
How to access file metadata with Python?
I'm trying to write a program in python that retrieves and updates file metadata on windows. I've tried searching on Google regarding what modules to use but I haven't found anything very concrete or useful.
Some people suggested the stat module which can give you info such as file access and last modification. But I'm looking to retrieve other types of metadata available on Windows. For example tags, author, rating, artists etc.
How can I retrieve this information for a file using Python?
Thanks
A:
Essentially as I said here less than an hour ago,
Apparently, you need to use the
Windows Search API looking for
System.Keywords -- you can access the
API directly via ctypes, or indirectly
(needing win32 extensions) through the
API's COM Interop assembly. Sorry, I
have no vista installation on which to
check, but I hope these links are
useful!
Links don't preserve across copy and paste, please just visit the other SO questions for them;-).
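As a heavily hedged illustration of the COM route: one common pattern is to query the Windows Search index through ADO via the win32 extensions (pywin32). The property names and folder path below are only examples, and this reads metadata rather than updating it:
import win32com.client  # from the pywin32 package

conn = win32com.client.Dispatch("ADODB.Connection")
rs = win32com.client.Dispatch("ADODB.Recordset")
conn.Open("Provider=Search.CollatorDSO;Extended Properties='Application=Windows';")
rs.Open("SELECT System.ItemName, System.Keywords, System.Author "
        "FROM SYSTEMINDEX WHERE SCOPE='file:C:/Users/Public/Documents'", conn)
while not rs.EOF:
    print(rs.Fields.Item("System.ItemName").Value)  # one indexed file per row
    rs.MoveNext()
conn.Close()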
|
How to access file metadata with Python?
|
I'm trying to write a program in python that retrieves and updates file metadata on windows. I've tried searching on Google regarding what modules to use but I haven't found anything very concrete or useful.
Some people suggested the stat module which can give you info such as file access and last modification. But I'm looking to retrieve other types of metadata available on Windows. For example tags, author, rating, artists etc.
How can I retrieve this information for a file using Python?
Thanks
|
[
"Essentially as I said here less than an hour ago,\n\nApparently, you need to use the\n Windows Search API looking for\n System.Keywords -- you can access the\n API directly via ctypes, or indirectly\n (needing win32 extensions) through the\n API's COM Interop assembly. Sorry, I\n have no vista installation on which to\n check, but I hope these links are\n useful!\n\nLinks don't preserve across copy and paste, please just visit the other SO questions for them;-).\n"
] |
[
2
] |
[] |
[] |
[
"metadata",
"python",
"windows"
] |
stackoverflow_0001512600_metadata_python_windows.txt
|
Q:
How to save data with Python?
I am working on a program in Python and want users to be able to save data they are working on. I have looked into cPickle; it seems like it would be a fast and easy way to save data, but it seems insecure. Since entire functions, classes, etc. can be pickled, I am worried that a rogue save file could inject harmful code into the program. Is there a way I can prevent that, or should I look into other methods of saving data, such as directly converting to a string (which also seems insecure) or creating an XML hierarchy and putting data in that.
I am new to python, so please bear with me.
Thanks in advance!
EDIT: As for the type of data I am storing, it is mainly dictionaries and lists. Information such as names, speeds, etc. It is fairly simple right now, but may get more complex in the future.
A:
From your description JSON encoding is the secure and fast solution. There is a json module in python2.6, you can use it like this:
import json
obj = {'key1': 'value1', 'key2': [1, 2, 3, 4], 'key3': 1322}
encoded = json.dumps(obj)
obj = json.loads(encoded)
JSON format is human readable and is very similar to the dictionary string representation in python. And doesn't have any security issues like pickle. If you don't have python2.6 you can install cjson or simplejson
You can't use JSON to save python objects like Pickle. But you can use it to save: strings, dictionaries, lists, ... It can be enough for most cases.
To explain why pickle is insecure. From python docs:
Most of the security issues
surrounding the pickle and cPickle
module involve unpickling. There are
no known security vulnerabilities
related to pickling because you (the
programmer) control the objects that
pickle will interact with, and all it
produces is a string.
However, for unpickling, it is never a
good idea to unpickle an untrusted
string whose origins are dubious, for
example, strings read from a socket.
This is because unpickling can create
unexpected objects and even
potentially run methods of those
objects, such as their class
constructor or destructor
... The moral of the story is that you
should be really careful about the
source of the strings your application
unpickles.
There are some ways to defend yourself but it is much easier to use JSON in your case.
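To round this out, a minimal sketch of saving and loading the kind of dictionaries and lists mentioned in the question with json and a file (Python 2.6+; the file name is made up):
import json

data = {'names': ['alice', 'bob'], 'speeds': {'alice': 4.2, 'bob': 3.7}}

with open('save.json', 'w') as f:   # hypothetical save file
    json.dump(data, f, indent=2)

with open('save.json') as f:
    restored = json.load(f)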
A:
You could do something like:
to write
Pickle
Sign pickled file
Done
to read
Check pickled file's signature
Unpickle
Use
I wonder though what makes you think that the data files are going to be tampered with but your application is not going to be?
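A rough sketch of that sign-then-verify idea, using hmac from the standard library (the secret key handling is an assumption; keep it out of the save file):
import hashlib
import hmac
import pickle

SECRET = b'replace-with-a-secret-kept-in-the-application'  # assumption

def save(obj, path):
    data = pickle.dumps(obj)
    sig = hmac.new(SECRET, data, hashlib.sha256).hexdigest().encode('ascii')
    with open(path, 'wb') as f:
        f.write(sig + b'\n' + data)

def load(path):
    with open(path, 'rb') as f:
        sig, data = f.read().split(b'\n', 1)
    expected = hmac.new(SECRET, data, hashlib.sha256).hexdigest().encode('ascii')
    if sig != expected:  # a constant-time comparison is preferable where available
        raise ValueError('signature mismatch; refusing to unpickle')
    return pickle.loads(data)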
A:
*****In this answer, I'm only concerned about accidental corruption of the application's integrity.*****
Pickle is "secure". What might be insecure is accessing code you didn't write, for example in plugins; that is not relevant to pickles though.
When you pickle an object, all its data is saved, but code and implementation is not. This means when unpickled, an updated object might find it has "old-style" data inside (if you update the implementation). This is something you must know and handle, if applicable.
Pickling strings, lists, numbers, dicts is very easy and works perfectly, and comparably to JSON. The Pickle magic is that -- sometimes without adjustment -- even complex python objects can be pickled. But only data is pickled; the instances are reconstructed simply by the saved module name and type name of the object.
A:
You need to give us more context before we can answer: what type of data are you saving, how much is there, how do you want to access it?
As for pickles: they do not store code. When you pickle a function or class, it is the name that is stored, not the actual code itself.
A:
You should use a database of some kind. Storing in pickle format isn't a good idea (in most cases). You may consider:
SQLite - (included in Python 2.5+) fast and simple, but requires knowledge of SQL and DB-API
buzhug - non-SQL, file based database with pythonic syntax
SQL database - you may use an interface to some DBMS (like MySQL, PostgreSQL, etc.), but it's only good for larger amounts of data (thousands of records).
You may find some other solutions here.
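For the simple dictionaries and lists described in the question, a minimal SQLite sketch (table, column and file names are made up for illustration):
import sqlite3

conn = sqlite3.connect('game.db')  # hypothetical save file
conn.execute('CREATE TABLE IF NOT EXISTS players (name TEXT, speed REAL)')
conn.executemany('INSERT INTO players VALUES (?, ?)',
                 [('alice', 4.2), ('bob', 3.7)])
conn.commit()
print(conn.execute('SELECT name, speed FROM players').fetchall())
conn.close()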
A:
Who -- specifically -- is the sociopath who's going through the effort to break a program by hacking the pickled file?
It's Python. The sociopath has your source. They don't need to fool around hacking your pickle file. They can just edit your source and do all the "damage" they want.
Don't worry about "insecurity" unless you're involved in litigation with organized crime syndicates.
Don't worry about "a rogue save file could inject harmful code into the program". No one will bother with a rogue save file when they have the source.
A:
You might enjoy working with the y_serial module over at
http://yserial.sourceforge.net
which reads like a tutorial but operationally offers
working code for serialization and persistence.
The commentary discusses some of the pros and cons
relevant to issues raised here.
It's designed to be a general solution to
warehousing compressed Python objects with SQLite
(with almost no SQL fuss ;-)
Hope this helps.
|
How to save data with Python?
|
I am working on a program in Python and want users to be able to save data they are working on. I have looked into cPickle; it seems like it would be a fast and easy way to save data, but it seems insecure. Since entire functions, classes, etc. can be pickled, I am worried that a rogue save file could inject harmful code into the program. Is there a way I can prevent that, or should I look into other methods of saving data, such as directly converting to a string (which also seems insecure) or creating an XML hierarchy and putting data in that.
I am new to python, so please bear with me.
Thanks in advance!
EDIT: As for the type of data I am storing, it is mainly dictionaries and lists. Information such as names, speeds, etc. It is fairly simple right now, but may get more complex in the future.
|
[
"From your description JSON encoding is the secure and fast solution. There is a json module in python2.6, you can use it like this:\nimport json\nobj = {'key1': 'value1', 'key2': [1, 2, 3, 4], 'key3': 1322}\nencoded = json.dumps(obj)\nobj = json.loads(encoded)\n\nJSON format is human readable and is very similar to the dictionary string representation in python. And doesn't have any security issues like pickle. If you don't have python2.6 you can install cjson or simplejson\nYou can't use JSON to save python objects like Pickle. But you can use it to save: strings, dictionaries, lists, ... It can be enough for most cases.\nTo explain why pickle is insecure. From python docs:\n\nMost of the security issues\n surrounding the pickle and cPickle\n module involve unpickling. There are\n no known security vulnerabilities\n related to pickling because you (the\n programmer) control the objects that\n pickle will interact with, and all it\n produces is a string.\nHowever, for unpickling, it is never a\n good idea to unpickle an untrusted\n string whose origins are dubious, for\n example, strings read from a socket.\n This is because unpickling can create\n unexpected objects and even\n potentially run methods of those\n objects, such as their class\n constructor or destructor\n ... The moral of the story is that you\n should be really careful about the\n source of the strings your application\n unpickles.\n\nThere are some ways to defend yourself but it is much easier to use JSON in your case.\n",
"You could do something like:\nto write\n\nPickle\nSign pickled file\nDone\n\nto read\n\nCheck pickled file's signature\nUnpickle\nUse\n\nI wonder though what makes you think that the data files are going to be tampered but your application is not going to be?\n",
"*****In this answer, I'm only concerned about accidental corruption of the application's integrity.*****\nPickle is \"secure\". What might be insecure is accessing code you didn't write, for example in plugins; that is not relevant to pickles though.\nWhen you pickle an object, all its data is saved, but code and implementation is not. This means when unpickled, an updated object might find it has \"old-style\" data inside (if you update the implementation). This is something you must know and handle, if applicable.\nPickling strings, lists, numbers, dicts is very easy and works perfectly, and comparably to JSON. The Pickle magic is that -- sometimes without adjustment -- even complex python objects can be pickled. But only data is pickled; the instances are reconstructed simply by the saved module name and type name of the object.\n",
"You need to give us more context before we can answer: what type of data are you saving, how much is there, how do you want to access it?\nAs for pickles: they do not store code. When you pickle a function or class, it is the name that is stored, not the actual code itself.\n",
"You should use a database of some kind. Storing in pickle format isn't a good idea (in most cases). You may consider:\n\nSQLite - (included in Python 2.5+) fast and simple, but requires knowledge of SQL and DB-API\nbuzhug - non-SQL, file based database with pythonic syntax\nSQL database - you may use interface to some of DBMS (like MySQL, PostreSQL etc.), but it's only good for larger amount of data (thousands of records).\n\nYou may find some other solutions here.\n",
"Who -- specifically -- is the sociopath who's going through the effort to break a program by hacking the pickled file?\nIt's Python. The sociopath has your source. They don't need to fool around hacking your pickle file. They can just edit your source and do all the \"damage\" they want.\nDon't worry about \"insecurity\" unless you're involved in litigation with organized crime syndicates.\nDon't worry about \"a rogue save file could inject harmful code into the program\". No one will bother with a rogue save file when they have the source.\n",
"You might enjoy working with the y_serial module over at \nhttp://yserial.sourceforge.net \nwhich reads like a tutorial but operationally offers \nworking code for serialization and persistance. \nThe commentary discusses some of the pros and cons \nrelevant to issues raised here.\nIt's designed to be a general solution to \nwarehousing compressed Python objects with SQLite \n(with almost no SQL fuss ;-)\nHope this helps.\n"
] |
[
23,
3,
2,
1,
1,
1,
1
] |
[] |
[] |
[
"data_structures",
"python",
"save"
] |
stackoverflow_0001389738_data_structures_python_save.txt
|
Q:
Converting urls into lowercase?
Is there any straightforward way to convert all incoming URLs to lowercase before they get matched against urlpatterns in run_wsgi_app(webapp.WSGIApplication(urlpatterns))?
A:
You'd have to wrap the instance of WSGIApplication with your own WSGI app that lowercases the URL in the WSGI environment -- but then the environment would just stay modified, which may have other unpleasant effects. Why not just add (?i) to the regex patterns you use in urlpatterns instead?
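A minimal sketch of such a wrapper might look like this (the class name is invented for illustration, and copying the environ keeps the server's original mapping untouched):
class LowercasePathMiddleware(object):
    """Sketch of a WSGI wrapper that lowercases PATH_INFO before dispatch."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ = dict(environ)  # work on a copy so the original environ is not modified
        environ['PATH_INFO'] = environ.get('PATH_INFO', '').lower()
        return self.app(environ, start_response)

# hypothetical usage:
# run_wsgi_app(LowercasePathMiddleware(webapp.WSGIApplication(urlpatterns)))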
A:
I wonder if you could modify your CGI environment variables before executing the WSGIApplication instance.
import os
# Rewrite the CGI environment in place so the WSGI handler sees the lowercased path
os.environ['PATH_INFO'] = os.environ.get('PATH_INFO', '').lower()
Something along those lines. I've done this myself for slight URL modifications; however, I 301-redirected to the new URL rather than continuing processing with WSGI.
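A hedged sketch of that redirect variant in App Engine's webapp framework (the handler name is illustrative, and only mixed-case paths are redirected):
from google.appengine.ext import webapp

class LowercaseRedirectHandler(webapp.RequestHandler):
    """Illustrative handler: send a 301 to the lowercased path when needed."""
    def get(self):
        path = self.request.path
        if path != path.lower():
            self.redirect(path.lower(), permanent=True)  # permanent=True makes it a 301
            return
        # ...otherwise handle the request as usual...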
|
Converting urls into lowercase?
|
Is there any straightforward way to convert all incoming URLs to lowercase before they get matched against urlpatterns in run_wsgi_app(webapp.WSGIApplication(urlpatterns))?
|
[
"You'd have to wrap the instance of WSGIApplication with your own WSGI app that lowercases the URL in the WSGI environment -- but then the environment would just stay modified, which may have other unpleasant effects. Why not just add (?i) to the regex patterns you use in urlpatterns instead?\n",
"I wonder if you could modify your CGI environment variables before executing the WSGIApplication instance.\nos.putenv(os.getenv('PATH_INFO').lower())\n\nSomething along those lines. I've done this myself for slight URL modifications, however I 301 redirected to the new URL; I didn't continue processing with WSGI.\n"
] |
[
3,
0
] |
[] |
[] |
[
"google_app_engine",
"python",
"web_applications"
] |
stackoverflow_0001512389_google_app_engine_python_web_applications.txt
|
Q:
wx.TextCtrl.LoadFile()
I am trying to display search result data quickly. I have all absolute file paths for files on my network drive(s) in a single, ~50MB text file. The python script makes a single pass over every line in this file [kept on the local drive] in a second or less, and that is acceptable. That is the time it takes to gather results.
However, the results are given in a wx.TextCtrl widget. Appending them line by line to a wx TextCtrl would be ridiculous. The best method I have come up with is to write the results to a text file and call wx.TextCtrl's native LoadFile, which, depending on the number of results, loads the lines of text into the pane in between 0.1 and 5 seconds or so. However, there must be a faster way for 10+MB of inbound text. The results are immediately calculated and available in the same process as the GUI... so please, tell me: is there any way I can pipe/proxy/hack that data directly into the TextCtrl? Would mmap help this transfer?
A:
Are people really going to read (or need) all 10MB in a text control? Probably not.
Suggest that you load on demand by paging in portions of the data.
Or better still, provide some user search functionality that narrows down the results to the information of interest.
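For example, a very small paging sketch (the page size and function name are made up here) that only pushes one slice of the results into the control at a time:
PAGE_SIZE = 500  # arbitrary number of result lines per page

def show_page(text_ctrl, result_lines, page):
    """Replace the TextCtrl contents with a single page of result lines."""
    start = page * PAGE_SIZE
    text_ctrl.SetValue("".join(result_lines[start:start + PAGE_SIZE]))

# e.g. bind "Next"/"Prev" buttons to call show_page(tc, results, current_page + 1) and so on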
A:
You can load all the data at once with AppendText; there is no need to do it line by line, but it will still take seconds because 10MB is huge. If you use wx.RichTextCtrl it is a bit faster:
in my test it loaded 10 MB in 6 secs instead of 9 secs for TextCtrl.
I do not see why you need to set all the data at once, though, and who is going to read 10MB?
So depending on the purpose there can be better ways.
If you need to display all the data in a super fast way, write a custom control which keeps a list of lines and only renders the lines that are visible in the view (see the sketch at the end of this answer).
Here is a test app where you can try various things:
import wx
import wx.richtext
import string
import time
# create a big text
line = string.ascii_letters+"\n"
bigText = line*200000
app = wx.PySimpleApp()
myframe = wx.Frame(None)
#tc = wx.TextCtrl(myframe, style=wx.TE_MULTILINE)
tc = wx.richtext.RichTextCtrl(myframe)
def loadData():
s = time.time()
tc.SetMaxLength(len(bigText))
tc.AppendText(bigText)
print time.time()-s
# load big text after 5 secs
wx.CallLater(5000, loadData)
app.SetTopWindow(myframe)
myframe.Show()
app.MainLoop()
If you do not want to paint everything yourself in a custom control, you can just use a textctrl with a separate scrollbar and update the text of the textctrl on scrolling, so at any one time you will be loading only a few lines.
Edit: as you said in your comment, the data may be 1-2 MB; 1MB of data with AppendText takes only .5 sec, which I think is ok.
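For the custom-control route mentioned above, a hedged sketch using wx.ListCtrl in virtual mode; wx only asks for the text of rows it actually has to draw, so even very large result sets stay cheap (the class name and column label are illustrative):
import wx

class VirtualResultList(wx.ListCtrl):
    """Sketch of a virtual list: rows are supplied on demand via OnGetItemText."""
    def __init__(self, parent, lines):
        wx.ListCtrl.__init__(self, parent,
                             style=wx.LC_REPORT | wx.LC_VIRTUAL | wx.LC_SINGLE_SEL)
        self.lines = lines
        self.InsertColumn(0, "Result")
        self.SetItemCount(len(lines))  # no text is copied into the control up front

    def OnGetItemText(self, item, col):
        return self.lines[item]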
|
wx.TextCtrl.LoadFile()
|
I am trying to display search result data quickly. I have all absolute file paths for files on my network drive(s) in a single, ~50MB text file. The python script makes a single pass over every line in this file [kept on the local drive] in a second or less, and that is acceptable. That is the time it takes to gather results.
However, the results are given in a wx.TextCtrl widget. Appending them line by line to a wx TextCtrl would be ridiculous. The best method I have come up with is to write the results to a text file and call wx.TextCtrl's native LoadFile, which, depending on the number of results, loads the lines of text into the pane in between 0.1 and 5 seconds or so. However, there must be a faster way for 10+MB of inbound text. The results are immediately calculated and available in the same process as the GUI... so please, tell me: is there any way I can pipe/proxy/hack that data directly into the TextCtrl? Would mmap help this transfer?
|
[
"Are people really going to read (or need) all 10MB in a text control? Probably not.\nSuggest that, you load on demand by paging in portions of the data.\nOr better still, provide some user search functionality that narrows down the results to the information of interest.\n",
"You can load all the data at once with AppendText, why you need to do it line by line, but still it will take seconds as 10MB is huge. If you use wx.RichTextCtrl it is bit faster\nin my test it loaded 10 MB in 6 secs instead of 9 sec for TextCtrl.\nI do not see the reason why you need to set all the data at once? and who is going to read 10MB?\nSo depending on the purpose there can be better ways.\nIf you need to display all data in super fast way, write a custom control which keeps a list of lines and only renders the lines visible in the view.\nhere is a test app where you can try various things\nimport wx\nimport wx.richtext\nimport string\nimport time\n\n# create a big text\nline = string.ascii_letters+\"\\n\"\nbigText = line*200000\n\napp = wx.PySimpleApp()\nmyframe = wx.Frame(None)\n#tc = wx.TextCtrl(myframe, style=wx.TE_MULTILINE)\ntc = wx.richtext.RichTextCtrl(myframe)\ndef loadData():\n s = time.time()\n tc.SetMaxLength(len(bigText))\n tc.AppendText(bigText)\n print time.time()-s\n\n# load big text after 5 secs\nwx.CallLater(5000, loadData)\n\napp.SetTopWindow(myframe)\nmyframe.Show()\napp.MainLoop()\n\nIf you do not want to paint everything youself in a custom control, you can just use a textctrl with separate scrollbar and update text of textcntrl on scrolling, so at a time you will be loading few lines only.\nEdit: as you said in your comment that data may be 1-2 MB, 1MB data with AppendText takes only .5 sec I think that is ok\n"
] |
[
0,
0
] |
[] |
[] |
[
"file_io",
"mmap",
"python",
"wxpython",
"wxwidgets"
] |
stackoverflow_0001507075_file_io_mmap_python_wxpython_wxwidgets.txt
|
Q:
text diff on django/google appengine
I am developing a wiki using django which I plan to deploy later on google appengine. Is it possible to deploy a textdiff-like system on appengine?
A:
The difflib package can be useful for generating diffs. It's written in pure Python and it's in the standard Python library, so I'd expect it to be available in Google App Engine.
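For instance, generating a unified diff between two wiki revisions could look roughly like this (the variable names are illustrative):
import difflib

old_text = "first line\nsecond line\n"
new_text = "first line\nsecond line, edited\n"

diff = difflib.unified_diff(old_text.splitlines(True),
                            new_text.splitlines(True),
                            fromfile='revision_1', tofile='revision_2')
print(''.join(diff))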
|
text diff on django/google appengine
|
I am developing a wiki using django which I plan to deploy later on google appengine. Is it possible to deploy a textdiff-like system on appengine?
|
[
"The difflib package can be useful for generating diffs. It's written in pure Python and it's in the standard Python library, so I'd expect it to be available in Google App Engine.\n"
] |
[
2
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0001512966_django_python.txt
|
Q:
Any python OpenID server available?
I'd like to host my own OpenID provider. Is there anything available in Python?
A:
poit is a standalone, single-user OpenID server implemented in Python, using python-openid. (It's a project I started)
|
Any python OpenID server available?
|
I'd like to host my own OpenID provider. Is there anything available in Python?
|
[
"poit is a standalone, single-user OpenID server implemented in Python, using python-openid. (It's a project I started)\n"
] |
[
7
] |
[] |
[] |
[
"openid",
"python"
] |
stackoverflow_0000941296_openid_python.txt
|