Dataset schema (one Stack Overflow record per group of fields):
content: string (85 to 101k chars)
title: string (0 to 150 chars)
question: string (15 to 48k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 chars)
Q: Can I store objects in Python class members? I'm writing a game in Python with the Pygame2 multimedia library, but I'm more accustomed to developing games with ActionScript 3. In AS3, I don't think it was possible to store an object in a static variable, because static variables were initialized before objects could be instantiated. However, in Python, I'm not sure if this holds true. Can I store an object instance in a Python class variable? When will it be instantiated? Will one be instantiated per class or per instance? class Test: counter = Counter() # A class variable counts its instantiations def __init__(self): counter.count() # A method that prints the number of instances of Counter test1 = Test() # Prints 1 test2 = Test() # Prints 1? 2? A: You can do this: class Test: counter = 0 def __init__(self): Test.counter += 1 print Test.counter And it works as expected. A: Yes. As with most python try it and see. It will be instantiated when a Test object is created. ie your assignment to test1 The counter object is created per class Run the following to see (to access the class variable you need the self class Counter: def __init__(self): self.c = 0 def count(self): self.c += 1 print 'in count() value is ' , self.c return self.c class Test: counter = Counter() # A class variable counts its instantiations print 'in class Test' def __init__(self): print 'in Testinit' self.counter.count() # A method that prints the number of instances of Counter test1 = Test() # Prints 1 test2 = Test()
Can I store objects in Python class members?
I'm writing a game in Python with the Pygame2 multimedia library, but I'm more accustomed to developing games with ActionScript 3. In AS3, I don't think it was possible to store an object in a static variable, because static variables were initialized before objects could be instantiated. However, in Python, I'm not sure if this holds true. Can I store an object instance in a Python class variable? When will it be instantiated? Will one be instantiated per class or per instance? class Test: counter = Counter() # A class variable counts its instantiations def __init__(self): counter.count() # A method that prints the number of instances of Counter test1 = Test() # Prints 1 test2 = Test() # Prints 1? 2?
[ "You can do this:\nclass Test:\n counter = 0\n def __init__(self):\n Test.counter += 1\n print Test.counter\n\nAnd it works as expected.\n", "Yes.\nAs with most python try it and see.\nIt will be instantiated when a Test object is created. ie your assignment to test1\nThe counter object is created per class\nRun the following to see (to access the class variable you need the self\nclass Counter:\n def __init__(self):\n self.c = 0\n\n def count(self):\n self.c += 1\n print 'in count() value is ' , self.c\n return self.c\n\nclass Test:\n counter = Counter() # A class variable counts its instantiations \n print 'in class Test'\n def __init__(self):\n print 'in Testinit'\n self.counter.count() # A method that prints the number of instances of Counter\n\ntest1 = Test() # Prints 1\ntest2 = Test()\n\n" ]
[ 3, 3 ]
[]
[]
[ "actionscript_3", "instantiation", "pygame", "python" ]
stackoverflow_0001290798_actionscript_3_instantiation_pygame_python.txt
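A minimal runnable sketch (Python 3 syntax, with a hypothetical Counter class like the one the asker assumes) pinning down the behavior discussed above: the class attribute is created once, when the class body executes, before any Test() is constructed, and it is shared by every instance, so the count accumulates across instantiations.

    class Counter:
        def __init__(self):
            self.c = 0
        def count(self):
            self.c += 1
            return self.c

    class Test:
        counter = Counter()   # evaluated once, at class definition time

        def __init__(self):
            # self.counter resolves to the single class-level Counter,
            # so every instantiation increments the same object.
            print(self.counter.count())

    t1 = Test()   # prints 1
    t2 = Test()   # prints 2
    print(Test.counter is t1.counter is t2.counter)   # True: one Counter per class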
Q: Django MVC pattern for non database driven models? I'm just working my way through Django, and really liking it so far, but I have an issue and I'm not sure what the typical way to solve it. Suppose I have a View which is supposed to be updated when some complex Python object is updated, but this object is not driven by the database, say it is driven by AJAX calls or directly by the user or something. Where does this code go? Should it still go in models.py???? A: Your models.py can be (and sometimes is) empty. You are not obligated to have a model which maps to a database. You should still have a models.py file, to make Django's admin happy. The models.py file name is important, and it's easier to have an empty file than to try and change the file expected by various admin commands. The "model" -- in general -- does not have to map to a database. The "model" -- as a general component of MVC design -- can be anything. You can -- and often do -- define your own "model" module that your views use. Just don't call it models.py because it will confuse Django admin. Call it something meaningful to your application: foo.py. This foo.py manipulates the real things that underpin your application -- not necessarily a Django Model.model subclass. Django MVC does not require a database mapping. It does explicitly expect that the module named models.py has a database mapping in it. So, use an empty models.py if you have no actual database mapping. Your views.py can use import foo def index( request ): objects = foo.somelistofobjects() *etc.* Django allows you to easily work with no database mapping. Your model can easily be anything. Just don't call it models.py. Edit. Are Views registered with Models? No. On update to the Model by the Controller the Views get notified? No. Is the Model strictly the data respresentation as this is really MVP? Yes. Read the Django docs. It's simple. Web Request -> URL mapping -> View function -> Template -> Response. The model can be used by the view function. The model can be a database mapping, or it can be any other thing.
Django MVC pattern for non database driven models?
I'm just working my way through Django, and really liking it so far, but I have an issue and I'm not sure what the typical way to solve it. Suppose I have a View which is supposed to be updated when some complex Python object is updated, but this object is not driven by the database, say it is driven by AJAX calls or directly by the user or something. Where does this code go? Should it still go in models.py????
[ "Your models.py can be (and sometimes is) empty. You are not obligated to have a model which maps to a database. \nYou should still have a models.py file, to make Django's admin happy. The models.py file name is important, and it's easier to have an empty file than to try and change the file expected by various admin commands.\nThe \"model\" -- in general -- does not have to map to a database. The \"model\" -- as a general component of MVC design -- can be anything.\nYou can -- and often do -- define your own \"model\" module that your views use. Just don't call it models.py because it will confuse Django admin. Call it something meaningful to your application: foo.py. This foo.py manipulates the real things that underpin your application -- not necessarily a Django Model.model subclass.\nDjango MVC does not require a database mapping. It does explicitly expect that the module named models.py has a database mapping in it. So, use an empty models.py if you have no actual database mapping.\nYour views.py can use\nimport foo\n\ndef index( request ):\n objects = foo.somelistofobjects()\n *etc.*\n\nDjango allows you to easily work with no database mapping. Your model can easily be anything. Just don't call it models.py.\n\nEdit.\nAre Views registered with Models? No.\nOn update to the Model by the Controller the Views get notified? No.\nIs the Model strictly the data respresentation as this is really MVP? Yes.\nRead the Django docs. It's simple.\nWeb Request -> URL mapping -> View function -> Template -> Response.\nThe model can be used by the view function. The model can be a database mapping, or it can be any other thing.\n" ]
[ 38 ]
[]
[]
[ "django", "model_view_controller", "models", "python" ]
stackoverflow_0001290891_django_model_view_controller_models_python.txt
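A short sketch of the pattern this answer describes, assuming an in-memory list as the non-database "model"; the module name foo.py and its function are the answer's placeholders, not a Django API.

    # foo.py -- a plain-Python "model" module with no database mapping
    _things = ['alpha', 'beta']   # any non-database state: AJAX-fed, user-driven, etc.

    def somelistofobjects():
        return list(_things)

    # views.py -- uses the plain module instead of a Django ORM model
    from django.http import HttpResponse
    import foo

    def index(request):
        objects = foo.somelistofobjects()
        return HttpResponse(', '.join(objects))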
Q: Python capture output from wget? Is it possible to capture output from wget and other command line programs that use curses? Here is what I have right now: p = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=0) for line in p.stdout: print "a" This works fine for programs that have simple output, but not for wget and other programs that use curses. A: I don't believe that wget is using curses. Normally when I want to use wget in a script I'd use the -O - option to force its output to stdout. I suspect you're trying to capture the text that you normally see on your console when you're running it, which would be stderr. From the command line, outside of Python, just run a command like: wget -O - http://www.somesite.org/ > /tmp/wget.out 2> /tmp/wget.err Then look at the two output files. If you see any output from wget on your console/terminal then you are running some different flavor of the command than I've seen. If, as I suspect, you're actually interested in the stderr messages then you have two choices. Change your command to add 2>&1 and add shell=True to your Popen() arguments Alternatively (and preferably) add stderr=subprocess.PIPE to your Popen() arguments The former is handy if you weren't using stdout anyway (assuming your using wget to fetch the data and write it into files). In the latter case you read from the stderr file option to get your data. BTW: if you really did need to capture curses data ... you could try to use the standard pty module but I wouldn't recommend that. You'd be far better off fetching the pexpect module from: PyPI: pexpect: And don't be scared off by the age or version numbering, it works on Python 2.5 and 2.6 as well as 2.4 and 2.3.
Python capture output from wget?
Is it possible to capture output from wget and other command line programs that use curses? Here is what I have right now: p = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=0) for line in p.stdout: print "a" This works fine for programs that have simple output, but not for wget and other programs that use curses.
[ "I don't believe that wget is using curses.\nNormally when I want to use wget in a script I'd use the -O - option to force its output to stdout. I suspect you're trying to capture the text that you normally see on your console when you're running it, which would be stderr.\nFrom the command line, outside of Python, just run a command like:\nwget -O - http://www.somesite.org/ > /tmp/wget.out 2> /tmp/wget.err\n\nThen look at the two output files. If you see any output from wget on your console/terminal then you are running some different flavor of the command than I've seen.\nIf, as I suspect, you're actually interested in the stderr messages then you have two choices.\n\nChange your command to add 2>&1 and add shell=True to your Popen() arguments\nAlternatively (and preferably) add stderr=subprocess.PIPE to your Popen() arguments\n\nThe former is handy if you weren't using stdout anyway (assuming your using\nwget to fetch the data and write it into files). In the latter case you read from the stderr file option to get your data.\nBTW: if you really did need to capture curses data ... you could try to use the standard pty module but I wouldn't recommend that. You'd be far better off fetching the pexpect module from:\n\nPyPI: pexpect:\n\nAnd don't be scared off by the age or version numbering, it works on Python 2.5 and 2.6 as well as 2.4 and 2.3.\n" ]
[ 7 ]
[]
[]
[ "python" ]
stackoverflow_0001290910_python.txt
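A modern restatement of the answer's preferred option (Python 3 subprocess; the URL is a placeholder): capture stderr separately, since that is where wget writes its console messages.

    import subprocess

    # '-O -' sends the downloaded bytes to stdout; progress/status text goes to stderr.
    p = subprocess.Popen(
        ['wget', '-O', '-', 'http://www.example.org/'],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    data, messages = p.communicate()
    print(messages.decode(errors='replace'))   # the text wget normally prints to the console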
Q: IsPointInsideSegment(pt, line) in Python Is there a nice way to determine if a point lies within a 3D line segment? I know there are algorithms that determine the distance between a point and line segment, but I'm wondering if there's something more compact or efficient. A: Given three points A, B, and C -- where AB is your line and C is your other point, you can also restate your problem as two lines, AB and AC. If the angle between AB and AC is zero (i.e. if the area of the triangle ABC is zero) then your point C is on the line. If you think about this in terms of angles, you can use the dot product of the two vectors (AB and AC) to find the angle between them. Some quick googling turns up a couple of useful links: http://knol.google.com/k/koen-samyn/dot-product-cross-product-in-3d/2lijysgth48w1/10# http://www.jtaylor1142001.net/calcjat/Solutions/VDotProduct/VDPTheta3D.htm The bonus of this method is that it's easy to define tolerances in terms of angles; e.g. you may want to consider any point that falls within 5 degrees of the line to be "on the line" even though there is strictly some distance between the line and the point. Depending on your application, this may be more useful than an actual linear measurement. A: One way to restate your question is to ask of the point is a solution to the equation which contains the line segment in question, and (if so) if it lies between the end points of the segment. Since you don't describe the details of how you're representing your line segments and don't mention any modules or libraries that you're using I guess you'd have to code up the algebra yourself. (It's not trivial and my algebra is too rusty in any event). You might consider looking at a library that does the work for you (either to incorporate into your project or to study its code). In this case I'd take a close look at: [PyEuclid]: http://code.google.com/p/pyeuclid/ ... claims to be compatible with PyGame and supports objects for 2D and 3D vectors, rays, segments, and circles/spheres. Basically if the distance between your point and your segment is less than epsilon (some threshold appropriate to your calculations) then you treat that as an intersection. (Although it doesn't explicitly say so in the docs that I read, I'm guessing that they must handle floating point rounding issues in their code somewhere. All of PyEuclid seems to be in a single 2200+ line .py file). A: First, find the distance from the point to the line. If the distance from the point to the line is zero, then it's on the line.
IsPointInsideSegment(pt, line) in Python
Is there a nice way to determine if a point lies within a 3D line segment? I know there are algorithms that determine the distance between a point and line segment, but I'm wondering if there's something more compact or efficient.
[ "Given three points A, B, and C -- where AB is your line and C is your other point, you can also restate your problem as two lines, AB and AC. If the angle between AB and AC is zero (i.e. if the area of the triangle ABC is zero) then your point C is on the line.\nIf you think about this in terms of angles, you can use the dot product of the two vectors (AB and AC) to find the angle between them. Some quick googling turns up a couple of useful links:\n\nhttp://knol.google.com/k/koen-samyn/dot-product-cross-product-in-3d/2lijysgth48w1/10#\nhttp://www.jtaylor1142001.net/calcjat/Solutions/VDotProduct/VDPTheta3D.htm\n\nThe bonus of this method is that it's easy to define tolerances in terms of angles; e.g. you may want to consider any point that falls within 5 degrees of the line to be \"on the line\" even though there is strictly some distance between the line and the point. Depending on your application, this may be more useful than an actual linear measurement.\n", "One way to restate your question is to ask of the point is a solution to the equation which contains the line segment in question, and (if so) if it lies between the end points of the segment.\nSince you don't describe the details of how you're representing your line segments and don't mention any modules or libraries that you're using I guess you'd have to code up the algebra yourself. (It's not trivial and my algebra is too rusty in any event).\nYou might consider looking at a library that does the work for you (either to incorporate into your project or to study its code). In this case I'd take a close look at:\n\n[PyEuclid]: http://code.google.com/p/pyeuclid/\n\n... claims to be compatible with PyGame and supports objects for 2D and 3D vectors, rays, segments, and circles/spheres. Basically if the distance between your point and your segment is less than epsilon (some threshold appropriate to your calculations) then you treat that as an intersection.\n(Although it doesn't explicitly say so in the docs that I read, I'm guessing that they must handle floating point rounding issues in their code somewhere. All of PyEuclid seems to be in a single 2200+ line .py file).\n", "First, find the distance from the point to the line. If the distance from the point to the line is zero, then it's on the line.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "geometry", "python" ]
stackoverflow_0001290779_geometry_python.txt
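Combining the two ideas above into one hedged sketch: a cross-product test for collinearity (zero triangle area) followed by a dot-product bounds check so the point must lie between the endpoints. The eps value is an assumed absolute tolerance on the squared cross-product length; scale it to your coordinates.

    def point_on_segment(p, a, b, eps=1e-9):
        ab = [b[i] - a[i] for i in range(3)]
        ap = [p[i] - a[i] for i in range(3)]
        # Cross product AB x AP: a (near-)zero vector means P is collinear with AB.
        cross = [ab[1]*ap[2] - ab[2]*ap[1],
                 ab[2]*ap[0] - ab[0]*ap[2],
                 ab[0]*ap[1] - ab[1]*ap[0]]
        if sum(c*c for c in cross) > eps:
            return False
        # The projection of AP onto AB must fall within [0, |AB|^2]
        # for P to lie between the endpoints.
        dot = sum(ab[i]*ap[i] for i in range(3))
        return 0 <= dot <= sum(x*x for x in ab)

    print(point_on_segment((1, 1, 1), (0, 0, 0), (2, 2, 2)))   # True
    print(point_on_segment((3, 3, 3), (0, 0, 0), (2, 2, 2)))   # False: collinear, but past B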
Q: Checking for group membership (Many to Many in Django) I have two models in Django: groups and entries. Groups has a many-to-many field that connects it to entries. I want to select all entries that have a group (as not all do!) and be able to access their group.title field. I've tried something along the lines of: t = Entries.objects.select_related().exclude(group=None) and while this returns all entries that have groups, I can't do t[0].groups to get the title. Any ideas on how this could be done? Edit: more info Whenever I use Django's shell to inspect what is returned in t (in this example), t[0].group does not exist. The only way I can access this is via t[0].group_set.all()[0].title, which seems inefficient and like I'm doing something incorrectly. A: You don't show the model code, so I can't be sure, but instead of t[0].groups, I think you want: for g in t[0].groups.all(): print g.title
Checking for group membership (Many to Many in Django)
I have two models in Django: groups and entries. Groups has a many-to-many field that connects it to entries. I want to select all entries that have a group (as not all do!) and be able to access their group.title field. I've tried something along the lines of: t = Entries.objects.select_related().exclude(group=None) and while this returns all entries that have groups, I can't do t[0].groups to get the title. Any ideas on how this could be done? Edit: more info Whenever I use Django's shell to inspect what is returned in t (in this example), t[0].group does not exist. The only way I can access this is via t[0].group_set.all()[0].title, which seems inefficient and like I'm doing something incorrectly.
[ "You don't show the model code, so I can't be sure, but instead of t[0].groups, I think you want:\nfor g in t[0].groups.all():\n print g.title\n\n" ]
[ 3 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001291167_django_python.txt
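A sketch reconstructing plausible models (the question never shows them, so these field names are guesses) together with the query and the reverse-manager loop from the question; distinct() avoids duplicate rows for entries that belong to several groups.

    # Hypothetical models matching the question's description
    from django.db import models

    class Entry(models.Model):
        title = models.CharField(max_length=100)

    class Group(models.Model):
        title = models.CharField(max_length=100)
        entries = models.ManyToManyField(Entry)

    # Entries with at least one group. The M2M field lives on Group, so from
    # Entry it is reached through the reverse manager group_set (no related_name set).
    t = Entry.objects.filter(group__isnull=False).distinct()
    for g in t[0].group_set.all():
        print(g.title)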
Q: Confused by Django's claim to MVC, what is it exactly? So what exactly is Django implementing? Seems like there are Models Views Templates Models = Database mappings Views = Grab relevant data from the models and formats it via templates Templates = Display HTML depending on data given by Views EDIT: S. Lott cleared a lot up with this in an edit to a previous post, but I would still like to hear other feedback. Thanks! Is this correct? It really seems like Django is nowhere near the same as MVC and just confuses people by calling it that. A: Django's developers have a slightly non-traditional view on the MVC paradigm. They actually address this question in their FAQs, which you can read here. In their own words: In our interpretation of MVC, the “view” describes the data that gets presented to the user. It’s not necessarily how the data looks, but which data is presented. The view describes which data you see, not how you see it. It’s a subtle distinction. So, in our case, a “view” is the Python callback function for a particular URL, because that callback function describes which data is presented. Furthermore, it’s sensible to separate content from presentation – which is where templates come in. In Django, a “view” describes which data is presented, but a view normally delegates to a template, which describes how the data is presented. Where does the “controller” fit in, then? In Django’s case, it’s probably the framework itself: the machinery that sends a request to the appropriate view, according to the Django URL configuration.
Confused by Django's claim to MVC, what is it exactly?
So what exactly is Django implementing? Seems like there are Models Views Templates Models = Database mappings Views = Grab relevant data from the models and formats it via templates Templates = Display HTML depending on data given by Views EDIT: S. Lott cleared a lot up with this in an edit to a previous post, but I would still like to hear other feedback. Thanks! Is this correct? It really seems like Django is nowhere near the same as MVC and just confuses people by calling it that.
[ "Django's developers have a slightly non-traditional view on the MVC paradigm. They actually address this question in their FAQs, which you can read here. In their own words:\n\nIn our interpretation of MVC, the “view” describes the data that gets presented to the user. It’s not necessarily how the data looks, but which data is presented. The view describes which data you see, not how you see it. It’s a subtle distinction.\nSo, in our case, a “view” is the Python callback function for a particular URL, because that callback function describes which data is presented.\nFurthermore, it’s sensible to separate content from presentation – which is where templates come in. In Django, a “view” describes which data is presented, but a view normally delegates to a template, which describes how the data is presented.\nWhere does the “controller” fit in, then? In Django’s case, it’s probably the framework itself: the machinery that sends a request to the appropriate view, according to the Django URL configuration.\n\n" ]
[ 23 ]
[]
[]
[ "design_patterns", "django", "model_view_controller", "python" ]
stackoverflow_0001291213_design_patterns_django_model_view_controller_python.txt
Q: python regex help: unknown information to skip I'm having trouble with the needed regular expression... I'm sure I need to probably be using some combination of 'lookaround' or conditional expressions, but I'm at a loss. I have a data string like: pattern1 pattern2 pattern3 unwanted-groups pattern4 random number of tokens pattern5 optional1 optional2 more unknown unwanted junk separated with white spaces optional3 optional4 etc where I have a matching expression for each of the 'pattern#' and 'optional#' groups (optional groups being groups that are not required in the data and therefore not always present), but I don't have any pattern (text is free-form) or group count to skip for the other sections other than all 'tokens' are separated by white space. I've managed to figure out how to skip the unwanted stuff between the required groups but when I hit the optional groups, I'm lost. any suggestion on where I should be looking for hints/help? Thanks this is what I currently have: pattern = re.compile(r'(?:(METAR|SPECI)\s*)*(?P<ICAO>[\w]{4}\s)*' r'(?P<NIL>(NIL)\s)*(?P<UTC>[\d]{6}Z\s)*(?P<AUTOCOR>(AUTO|COR)*\s)*' r'(?P<WINDS>[\w]{5,6}G*[\d]{0,2}(MPS|KT|KMH)\s)\s*' r'.*?\s' #skip miscellaneous between winds and thermal data r'(?P<THERM>[\d]{2}/[\d]{2}\s)\s*(?P<PRESS>A[\d]{4}\s)\s*' r'(?:RMK\s)\s*(?P<AUTO>AO\d\s)*' r'(?P<PEAK>(PK\sWND\s[\d]{5,6}/[\d]{2,4}))*' r'(?P<SLP>SLP[\d]{3}\s)*' r'(?P<PRECIP>P[\d]{4}\s)*' r'(?P<remains>.*)' ) example = "METAR KCSM 162353Z AUTO 07011KT 10SM TS SCT100 28/19 A3000 RMK AO2 PK WND 06042/2325 WSHFT 2248 LTG DSNT ALQDS PRESRR SLP135 T02780189 10389 20272 53007=" data = pattern.match(example) It seems to work for the first 10 groups, but that is about it.... again thanks everybody A: If all the data is in that format I'd go with split instead. I think it will be faster. str = "regex1 regex2 regex3 unwanted-regex regex4 random number of tokens regex5 optregex1 optregex2 more unknown unwanted junk separated with white spaces optregex3 optregex4 etc" parts = str.split() # now you have each part as an element of the array. for index,item in enumerate(parts): if index == 3: continue # this is unwanted-regex else: # do what you want with the information here A: You need to use the | operator and findall: >>> re.compile("(regex\d+|optregex\d+)") >>> regex.findall(string) [u'regex1', u'regex2', u'regex3', u'regex4', u'regex5', u'optregex1', u'optregex2', u'optregex3', u'optregex4'] An advice: there are several tools (GUIs) that allow you to experiment with (and actually help writing) regular expressions. For python, I'm quite fond of kodos. A: If all of your targets consist of things like "foo1", "bar22" etc (in other words a sequence of letters followed by a sequence of digits) and everything else (sequences of digits, "words" without numeric suffixes, etc) is "junk" then the following seems to be sufficient: re.findall(r'[A-Za-z]+\d+', targetstr) (We can't use just r'\w+\d+' because \w matches digits and _ (underscores) as well as letters). If you're looking for a limited number of key patterns, or some of the junk might match "foo123 ... then you'll obviously have to be more specific.
python regex help: unknown information to skip
I'm having trouble with the needed regular expression... I'm sure I need to probably be using some combination of 'lookaround' or conditional expressions, but I'm at a loss. I have a data string like: pattern1 pattern2 pattern3 unwanted-groups pattern4 random number of tokens pattern5 optional1 optional2 more unknown unwanted junk separated with white spaces optional3 optional4 etc where I have a matching expression for each of the 'pattern#' and 'optional#' groups (optional groups being groups that are not required in the data and therefore not always present), but I don't have any pattern (text is free-form) or group count to skip for the other sections other than all 'tokens' are separated by white space. I've managed to figure out how to skip the unwanted stuff between the required groups but when I hit the optional groups, I'm lost. any suggestion on where I should be looking for hints/help? Thanks this is what I currently have: pattern = re.compile(r'(?:(METAR|SPECI)\s*)*(?P<ICAO>[\w]{4}\s)*' r'(?P<NIL>(NIL)\s)*(?P<UTC>[\d]{6}Z\s)*(?P<AUTOCOR>(AUTO|COR)*\s)*' r'(?P<WINDS>[\w]{5,6}G*[\d]{0,2}(MPS|KT|KMH)\s)\s*' r'.*?\s' #skip miscellaneous between winds and thermal data r'(?P<THERM>[\d]{2}/[\d]{2}\s)\s*(?P<PRESS>A[\d]{4}\s)\s*' r'(?:RMK\s)\s*(?P<AUTO>AO\d\s)*' r'(?P<PEAK>(PK\sWND\s[\d]{5,6}/[\d]{2,4}))*' r'(?P<SLP>SLP[\d]{3}\s)*' r'(?P<PRECIP>P[\d]{4}\s)*' r'(?P<remains>.*)' ) example = "METAR KCSM 162353Z AUTO 07011KT 10SM TS SCT100 28/19 A3000 RMK AO2 PK WND 06042/2325 WSHFT 2248 LTG DSNT ALQDS PRESRR SLP135 T02780189 10389 20272 53007=" data = pattern.match(example) It seems to work for the first 10 groups, but that is about it.... again thanks everybody
[ "If all the data is in that format I'd go with split instead. I think it will be faster.\n\nstr = \"regex1 regex2 regex3 unwanted-regex regex4 random number of tokens regex5 optregex1 optregex2 more unknown unwanted junk separated with white spaces optregex3 optregex4 etc\"\nparts = str.split() # now you have each part as an element of the array.\nfor index,item in enumerate(parts):\n if index == 3:\n continue # this is unwanted-regex\n else:\n # do what you want with the information here\n\n", "You need to use the | operator and findall:\n>>> re.compile(\"(regex\\d+|optregex\\d+)\")\n>>> regex.findall(string)\n[u'regex1', u'regex2', u'regex3', u'regex4', u'regex5', u'optregex1', u'optregex2', u'optregex3', u'optregex4']\n\nAn advice: there are several tools (GUIs) that allow you to experiment with (and actually help writing) regular expressions. For python, I'm quite fond of kodos.\n", "If all of your targets consist of things like \"foo1\", \"bar22\" etc (in other words a sequence of letters followed by a sequence of digits) and everything else (sequences of digits, \"words\" without numeric suffixes, etc) is \"junk\" then the following seems to be sufficient:\nre.findall(r'[A-Za-z]+\\d+', targetstr)\n\n(We can't use just r'\\w+\\d+' because \\w matches digits and _ (underscores) as well as letters).\nIf you're looking for a limited number of key patterns, or some of the junk might match \"foo123 ... then you'll obviously have to be more specific.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001290205_python_regex.txt
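In the spirit of the split-based answer, a sketch that matches each whitespace-separated token against a small pattern table and silently skips unrecognized junk, so optional groups simply stay absent; the token regexes are simplified approximations of the METAR grammar, not a complete parser.

    import re

    example = ('METAR KCSM 162353Z AUTO 07011KT 10SM TS SCT100 28/19 A3000 '
               'RMK AO2 PK WND 06042/2325 SLP135 P0000')

    patterns = {
        'ICAO':   re.compile(r'^[A-Z]{4}$'),
        'UTC':    re.compile(r'^\d{6}Z$'),
        'WINDS':  re.compile(r'^\d{5}(?:G\d{2,3})?(?:KT|MPS|KMH)$'),
        'THERM':  re.compile(r'^M?\d{2}/M?\d{2}$'),
        'PRESS':  re.compile(r'^A\d{4}$'),
        'SLP':    re.compile(r'^SLP\d{3}$'),
        'PRECIP': re.compile(r'^P\d{4}$'),
    }

    found = {}
    for token in example.split():
        for name, pat in patterns.items():
            if name not in found and pat.match(token):
                found[name] = token
                break   # first pattern wins for this token

    print(found)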
Q: Abstraction and client/server architecture questions for Python game program Here is where I am at presently. I am designing a card game with the aim of utilizing major components for future work. The part that is hanging me up is creating a layer of abstraction between the server and the client(s). A server is started, and then one or more clients can connect (locally or remotely). I am designing a thick client but my friend is looking at doing a web-based client. I would like to design the server in a manner that allows a variety of different clients to call a common set of server commands. So, for a start, I would like to create a 'server' which manages the game rules and player interactions, and a 'client' on the local CLI (I'm running Ubuntu Linux for convenience). I'm attempting to flesh out how the two pieces are supposed to interact, without mandating that future clients be CLI-based or on the local machine. I've found the following two questions which are beneficial, but don't quite answer the above. Client Server programming in python? Evaluate my Python server structure I don't require anything full-featured right away; I just want to establish the basic mechanisms for abstraction so that the resulting mock-up code reflects the relationship appropriately: there are different assumptions at play with a client/server relationship than with an all-in-one application. Where do I start? What resources do you recommend? Disclaimers: I am familiar with code in a variety of languages and general programming/logic concepts, but have little real experience writing substantial amounts of code. This pet project is an attempt at rectifying this. Also, I know the information is out there already, but I have the strong impression that I am missing the forest for the trees. A: Read up on RESTful architectures. Your fat client can use REST. It will use urllib2 to make RESTful requests of a server. It can exchange data in JSON notation. A web client can use REST. It can make simple browser HTTP requests or a Javascript component can make more sophisticated REST requests using JSON. Your server can be built as a simple WSGI application using any simple WSGI components. You have nice ones in the standard library, or you can use Werkzeug. Your server simply accepts REST requests and makes REST responses. Your server can work in HTML (for a browser) or JSON (for a fat client or Javascript client.) A: I would consider basing all server / client interactions on HTTP -- probably with JSON payloads. This doesn't directly allow server-initiated interactions ("server push"), but the (newish but already traditional;-) workaround for that is AJAX-y (even though the X makes little sense as I suggest JSON payloads, not XML ones;-) -- the client initiates an async request (via a separate thread or otherwise) to a special URL on the server, and the server responds to those requests to (in practice) do "pushes". From what you say it looks like the limitations of this approach might not be a problem. The key advantage of specifying the interactions in such terms is that they're entirely independent from the programming language -- so the web-based client in Javascript will be just as doable as your CLI one in Python, etc etc. Of course, the server can live on localhost as a special case, but there is no constraint for that as the HTTP URLs can specify whatever host is running the server; etc, etc. A: First of all, regardless of the locality or type of the client, you will be communicating through an established message-based interface. All clients will be operating based on a common set of requests and responses, and the server will handle and reject these based on their validity according to game state. Whether you are dealing with local clients on the same machine or remote clients via HTTP does not matter whatsoever from an abstraction standpoint, as they will all be communicating through the same set of requests/responses. What this comes down to is your protocol. Your protocol should be a well-defined and technically sound language between client and server that will allow clients to a) participate effectively, and b) participate fairly. This protocol should define what messages ('moves') a client can do, and when, and how the server will react. Your protocol should be fully fleshed out and documented before you even start on game logic - the two are intrinsically connected and you will save a lot of wasted time and effort by completely defining your protocol first. Your protocol is the abstraction between client and server and it will also serve as the design document and programming guide for both. Protocol design is all about state, state transitions, and validation. Game servers usually have a set of fairly common, generic states for each game instance e.g. initialization, lobby, gameplay, pause, recap, close game, etc... Each one of these states has important state data related to it. For example, a 'lobby' state on the server-side might contain the known state of each player...how long since the last message or ping, what the player is doing (selecting an avatar, switching settings, going to the fridge, etc.). Organizing and managing state and substate data in code is important. Managing these states, and the associated data requirements for each is a process that should be exquisitely planned out as they are directly related to volume of work and project complexity - this is very important and also great practice if you are using this project to step up into larger things. Also, you must keep in mind that if you have a game, and you let people play, people will cheat. It's a fact of life. In order to minimize this, you must carefully design your protocol and state management to only ever allow valid state transitions. Never trust a single client packet. For every permutation of client/server state, you must enforce a limited set of valid game messages, and you must be very careful in what you allow players to do, and when you allow them to do it. Project complexity is generally exponential and not linear - client/server game programming is usually a good/painful way to learn this. Great question. Hope this helps, and good luck!
Abstraction and client/server architecture questions for Python game program
Here is where I am at presently. I am designing a card game with the aim of utilizing major components for future work. The part that is hanging me up is creating a layer of abstraction between the server and the client(s). A server is started, and then one or more clients can connect (locally or remotely). I am designing a thick client but my friend is looking at doing a web-based client. I would like to design the server in a manner that allows a variety of different clients to call a common set of server commands. So, for a start, I would like to create a 'server' which manages the game rules and player interactions, and a 'client' on the local CLI (I'm running Ubuntu Linux for convenience). I'm attempting to flesh out how the two pieces are supposed to interact, without mandating that future clients be CLI-based or on the local machine. I've found the following two questions which are beneficial, but don't quite answer the above. Client Server programming in python? Evaluate my Python server structure I don't require anything full-featured right away; I just want to establish the basic mechanisms for abstraction so that the resulting mock-up code reflects the relationship appropriately: there are different assumptions at play with a client/server relationship than with an all-in-one application. Where do I start? What resources do you recommend? Disclaimers: I am familiar with code in a variety of languages and general programming/logic concepts, but have little real experience writing substantial amounts of code. This pet project is an attempt at rectifying this. Also, I know the information is out there already, but I have the strong impression that I am missing the forest for the trees.
[ "Read up on RESTful architectures.\nYour fat client can use REST. It will use urllib2 to make RESTful requests of a server. It can exchange data in JSON notation.\nA web client can use REST. It can make simple browser HTTP requests or a Javascript component can make more sophisticated REST requests using JSON.\nYour server can be built as a simple WSGI application using any simple WSGI components. You have nice ones in the standard library, or you can use Werkzeug. Your server simply accepts REST requests and makes REST responses. Your server can work in HTML (for a browser) or JSON (for a fat client or Javascript client.)\n", "I would consider basing all server / client interactions on HTTP -- probably with JSON payloads. This doesn't directly allow server-initiated interactions (\"server push\"), but the (newish but already traditional;-) workaround for that is AJAX-y (even though the X makes little sense as I suggest JSON payloads, not XML ones;-) -- the client initiates an async request (via a separate thread or otherwise) to a special URL on the server, and the server responds to those requests to (in practice) do \"pushes\". From what you say it looks like the limitations of this approach might not be a problem.\nThe key advantage of specifying the interactions in such terms is that they're entirely independent from the programming language -- so the web-based client in Javascript will be just as doable as your CLI one in Python, etc etc. Of course, the server can live on localhost as a special case, but there is no constraint for that as the HTTP URLs can specify whatever host is running the server; etc, etc.\n", "First of all, regardless of the locality or type of the client, you will be communicating through an established message-based interface. All clients will be operating based on a common set of requests and responses, and the server will handle and reject these based on their validity according to game state. Whether you are dealing with local clients on the same machine or remote clients via HTTP does not matter whatsoever from an abstraction standpoint, as they will all be communicating through the same set of requests/responses.\nWhat this comes down to is your protocol. Your protocol should be a well-defined and technically sound language between client and server that will allow clients to a) participate effectively, and b) participate fairly. This protocol should define what messages ('moves') a client can do, and when, and how the server will react.\nYour protocol should be fully fleshed out and documented before you even start on game logic - the two are intrinsically connected and you will save a lot of wasted time and effort by completely defining your protocol first.\nYour protocol is the abstraction between client and server and it will also serve as the design document and programming guide for both.\nProtocol design is all about state, state transitions, and validation. Game servers usually have a set of fairly common, generic states for each game instance e.g. initialization, lobby, gameplay, pause, recap, close game, etc...\nEach one of these states has important state data related to it. For example, a 'lobby' state on the server-side might contain the known state of each player...how long since the last message or ping, what the player is doing (selecting an avatar, switching settings, going to the fridge, etc.). \nOrganizing and managing state and substate data in code is important.\nManaging these states, and the associated data requirements for each is a process that should be exquisitely planned out as they are directly related to volume of work and project complexity - this is very important and also great practice if you are using this project to step up into larger things.\nAlso, you must keep in mind that if you have a game, and you let people play, people will cheat. It's a fact of life. In order to minimize this, you must carefully design your protocol and state management to only ever allow valid state transitions. Never trust a single client packet.\nFor every permutation of client/server state, you must enforce a limited set of valid game messages, and you must be very careful in what you allow players to do, and when you allow them to do it.\nProject complexity is generally exponential and not linear - client/server game programming is usually a good/painful way to learn this. Great question. Hope this helps, and good luck!\n" ]
[ 2, 2, 2 ]
[]
[]
[ "abstraction", "client_server", "python" ]
stackoverflow_0001291179_abstraction_client_server_python.txt
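To make the answers concrete, a minimal standard-library sketch of one JSON-over-HTTP server command (Python 3's http.server standing in for the WSGI/urllib2 stack the answers name); the 'join' command and its fields are an invented protocol message, exactly the kind of thing the protocol-first answer says to specify up front.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    STATE = {'players': []}   # toy game state

    class GameHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get('Content-Length', 0))
            request = json.loads(self.rfile.read(length) or b'{}')
            # Validate the message against the current state before applying it.
            if request.get('command') == 'join' and 'name' in request:
                STATE['players'].append(request['name'])
                reply = {'ok': True, 'players': STATE['players']}
            else:
                reply = {'ok': False, 'error': 'invalid command for this state'}
            body = json.dumps(reply).encode()
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == '__main__':
        HTTPServer(('localhost', 8000), GameHandler).serve_forever()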
Q: When I catch an exception, how do I get the type, file, and line number of the previous frame? From this question, I'm now doing error handling one level down. That is, I call a function which calls another larger function, and I want where it failed in that larger function, not in the smaller function. Specific example. Code is: import sys, os def workerFunc(): return 4/0 def runTest(): try: print workerFunc() except: ty,val,tb = sys.exc_info() print "Error: %s,%s,%s" % ( ty.__name__, os.path.split(tb.tb_frame.f_code.co_filename)[1], tb.tb_lineno) runTest() Output is: Error: ZeroDivisionError,tmp2.py,8 but line 8 is "print workerFunc()" - I know that line failed, but I want the line before: Error: ZeroDivisionError,tmp2.py,4 A: Add a line: tb = tb.tb_next just after your call to sys.exc_info. See the docs here under "Traceback objects". A: tb.tb_next is your friend: import sys, os def workerFunc(): return 4/0 def runTest(): try: print workerFunc() except: ty,val,tb = sys.exc_info() print "Error: %s,%s,%s" % ( ty.__name__, os.path.split(tb.tb_frame.f_code.co_filename)[1], tb.tb_next.tb_lineno) runTest() But the traceback module does this and much more: import traceback def workerFunc(): return 4/0 def runTest(): try: print workerFunc() except: print traceback.format_exc() runTest() A: You need to find the bottom of the traceback, so you need to loop along until there are no more frames. Do this to find the frame you want: while tb.tb_next: tb = tb.tb_next after sys.exc_info. This will find the exception no matter how many call frames down it happened.
When I catch an exception, how do I get the type, file, and line number of the previous frame?
From this question, I'm now doing error handling one level down. That is, I call a function which calls another larger function, and I want where it failed in that larger function, not in the smaller function. Specific example. Code is: import sys, os def workerFunc(): return 4/0 def runTest(): try: print workerFunc() except: ty,val,tb = sys.exc_info() print "Error: %s,%s,%s" % ( ty.__name__, os.path.split(tb.tb_frame.f_code.co_filename)[1], tb.tb_lineno) runTest() Output is: Error: ZeroDivisionError,tmp2.py,8 but line 8 is "print workerFunc()" - I know that line failed, but I want the line before: Error: ZeroDivisionError,tmp2.py,4
[ "Add a line:\n tb = tb.tb_next\n\njust after your call to sys.exc_info.\nSee the docs here under \"Traceback objects\".\n", "tb.tb_next is your friend:\nimport sys, os\n\ndef workerFunc():\n return 4/0\n\ndef runTest():\n try:\n print workerFunc()\n except:\n ty,val,tb = sys.exc_info()\n print \"Error: %s,%s,%s\" % (\n ty.__name__,\n os.path.split(tb.tb_frame.f_code.co_filename)[1],\n tb.tb_next.tb_lineno)\n\nrunTest()\n\nBut the traceback module does this and much more:\nimport traceback\n\ndef workerFunc():\n return 4/0\n\ndef runTest():\n try:\n print workerFunc()\n except:\n print traceback.format_exc()\n\nrunTest()\n\n", "You need to find the bottom of the traceback, so you need to loop along until there are no more frames. Do this to find the frame you want:\nwhile tb.tb_next:\n tb = tb.tb_next\n\nafter sys.exc_info. This will find the exception no matter how many call frames down it happened.\n" ]
[ 4, 3, 2 ]
[]
[]
[ "error_handling", "exception", "exception_handling", "python" ]
stackoverflow_0001291438_error_handling_exception_exception_handling_python.txt
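The answers combined into one runnable Python 3 sketch: walk tb_next down to the innermost frame (which, per the last answer, works however deep the call stack is), then print the same fields, with the traceback module shown for comparison.

    import os, sys, traceback

    def workerFunc():
        return 4 / 0

    def runTest():
        try:
            print(workerFunc())
        except Exception:
            ty, val, tb = sys.exc_info()
            while tb.tb_next:       # descend to the frame that actually raised
                tb = tb.tb_next
            print('Error: %s,%s,%d' % (
                ty.__name__,
                os.path.split(tb.tb_frame.f_code.co_filename)[1],
                tb.tb_lineno))      # reports the line inside workerFunc
            traceback.print_exc()   # stdlib equivalent, formats the full stack

    runTest()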
Q: scrollbar for statictext in wxpython? Is it possible to add a scrollbar to a StaticText in wxPython? The thing is that I'm creating this StaticText: self.staticText1 = wx.StaticText(id=wxID_FRAME1STATICTEXT1,label=u'some text here',name='staticText1', parent=self.panel1, pos=wx.Point(16, 96), size=wx.Size(408, 216),style=wx.ST_NO_AUTORESIZE | wx.THICK_FRAME | wx.ALIGN_CENTRE | wx.SUNKEN_BORDER) self.staticText1.SetBackgroundColour(wx.Colour(255, 255, 255)) self.staticText1.SetBackgroundStyle(wx.BG_STYLE_SYSTEM) self.staticText1.SetFont(wx.Font(9, wx.SWISS, wx.NORMAL, wx.BOLD, False,u'MS Shell Dlg 2')) self.staticText1.SetAutoLayout(True) self.staticText1.SetConstraints(LayoutAnchors(self.staticText1, False,True, True, False)) self.staticText1.SetHelpText(u'') but later I use StaticText.SetLabel to change the label and the new text is too big to fit the window, so I need to add a scrollbar to the StaticText. I tried adding wx.VSCROLL to the style, and the scrollbar shows up, but I can't scroll down to see the rest of the text. A: wx.StaticText is designed to never respond to mouse events and never take user focus. Given that this is its role in life, it seems that a scrollbar would be inconsistent with its purpose. There are two ways to get what you want: 1) You could use a regular TextCtrl with the style TE_READONLY (see here); or 2) you could make a scrolled window that contains your StaticText control.
scrollbar for statictext in wxpython?
Is it possible to add a scrollbar to a StaticText in wxPython? The thing is that I'm creating this StaticText: self.staticText1 = wx.StaticText(id=wxID_FRAME1STATICTEXT1,label=u'some text here',name='staticText1', parent=self.panel1, pos=wx.Point(16, 96), size=wx.Size(408, 216),style=wx.ST_NO_AUTORESIZE | wx.THICK_FRAME | wx.ALIGN_CENTRE | wx.SUNKEN_BORDER) self.staticText1.SetBackgroundColour(wx.Colour(255, 255, 255)) self.staticText1.SetBackgroundStyle(wx.BG_STYLE_SYSTEM) self.staticText1.SetFont(wx.Font(9, wx.SWISS, wx.NORMAL, wx.BOLD, False,u'MS Shell Dlg 2')) self.staticText1.SetAutoLayout(True) self.staticText1.SetConstraints(LayoutAnchors(self.staticText1, False,True, True, False)) self.staticText1.SetHelpText(u'') but later I use StaticText.SetLabel to change the label and the new text is too big to fit the window, so I need to add a scrollbar to the StaticText. I tried adding wx.VSCROLL to the style, and the scrollbar shows up, but I can't scroll down to see the rest of the text.
[ "wx.StaticText is designed to never respond to mouse events and never take user focus. Given that this is its role in life, it seems that a scrollbar would be inconsistent with its purpose.\nThere are two ways to get what you want: 1) You could use a regular TextCtrl with the style TE_READONLY (see here); or 2) you could make a scrolled window that contains your StaticText control.\n" ]
[ 4 ]
[]
[]
[ "python", "scrollbar", "wxpython" ]
stackoverflow_0001290736_python_scrollbar_wxpython.txt
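A minimal sketch of the answer's first option using current wxPython (Phoenix) names: a multiline, read-only TextCtrl, which comes with a working vertical scrollbar.

    import wx

    app = wx.App(False)
    frame = wx.Frame(None, title='Read-only text that scrolls')
    text = wx.TextCtrl(frame,
                       value='some text here\n' * 100,   # long enough to need scrolling
                       style=wx.TE_MULTILINE | wx.TE_READONLY)
    frame.Show()
    app.MainLoop()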
Q: Django : get entries of today and SplitDateTime Widget? I used SplitDateTimeWidget to split a DateTime field: appointment = forms.DateTimeField(widget=forms.SplitDateTimeWidget) On the template side I managed to use a datePicker and TimePicker for each field, using jQuery. When I try to filter the entries by today's date, as in this code: d = datetime.date.today() entries = Entry.objects.filter(appointment__year=d.year ,appointment__month=d.month ,appointment__day=d.day ) it shows the entries of yesterday (17 Aug) :( which is really weird! I tried splitting the Date and Time in the model, and I got the same result as well! Any idea how to fix this?! A: Fix your timezone settings, in settings.py TIME_ZONE Default: 'America/Chicago' Some excerpts of useful info from the docs: A string representing the time zone for this installation. See available choices. (...) Note that this is the time zone to which Django will convert all dates/times -- not necessarily the timezone of the server. (...) Django cannot reliably use alternate time zones in a Windows environment. If you're running Django on Windows, this variable must be set to match the system timezone.
Django : get entries of today and SplitDateTime Widget?
I used SplitDateTimeWidget to split a DateTime field: appointment = forms.DateTimeField(widget=forms.SplitDateTimeWidget) On the template side I managed to use a datePicker and TimePicker for each field, using jQuery. When I try to filter the entries by today's date, as in this code: d = datetime.date.today() entries = Entry.objects.filter(appointment__year=d.year ,appointment__month=d.month ,appointment__day=d.day ) it shows the entries of yesterday (17 Aug) :( which is really weird! I tried splitting the Date and Time in the model, and I got the same result as well! Any idea how to fix this?!
[ "Fix your timezone settings, in settings.py TIME_ZONE\nDefault: 'America/Chicago'\nSome excerpts of useful info from the docs:\n\nA string representing the time zone\n for this installation. See available\n choices. \n(...)\nNote that this is the time zone to which Django will convert all\n dates/times -- not necessarily the\n timezone of the server. \n(...)\nDjango cannot reliably use alternate\n time zones in a Windows environment.\n If you're running Django on Windows,\n this variable must be set to match the\n system timezone.\n\n" ]
[ 2 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001291537_django_django_forms_python.txt
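Two hedged follow-ups to this answer: the settings change itself (the zone string below is only an example) and a single __range filter covering the whole day, as an alternative to three separate year/month/day lookups; Entry and appointment are taken from the question.

    # settings.py -- align Django's zone with where "today" is measured
    TIME_ZONE = 'Asia/Riyadh'   # example value; use your actual zone

    import datetime

    d = datetime.date.today()
    entries = Entry.objects.filter(
        appointment__range=(
            datetime.datetime.combine(d, datetime.time.min),
            datetime.datetime.combine(d, datetime.time.max),
        )
    )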
Q: google python data example: pizza party Hey, I started learning Python and am quite confused as to how the Google Data library works. Google has a pizza party example over at this link. Can anyone here please take the time to explain how it is being done? I would be so grateful. WHAT I UNDERSTAND: <entry xmlns='http://www.w3.org/2005/Atom' xmlns:p='http://example.com/pizza/1.0'> <id>http://www.example.com/pizzaparty/223</id> <title type='text'>Pizza at my house!</title> <author> <name>Joe</name> <email>[email protected]</email> </author> <content type='text'> Join us for a fun filled evening of pizza and games! </content> <link rel='alternate' type='text/html' href='http://www.example.com/joe_user/pizza_at_my_house.html'/> <p:pizza toppings='pepperoni, sausage' size='large'>Pepperoni with cheese and sausage</p:pizza> <p:pizza toppings='mushrooms' size='medium'>Mushroom</p:pizza> <p:pizza toppings='ham, pineapple' size='extra large'>Hawaiian</p:pizza> <p:capacity>25</p:capacity> <p:location>My place.<p:address>123 Imaginary Ln, Sometown MO 63000</p:address></p:location> </entry> This is the XML feed for the pizza; why it is created I do not understand. NOW, this is the code linking to the XML feed: import atom.core PIZZA_TEMPLATE = '{http://example.com/pizza/1.0}%s' class Capacity(atom.core.XmlElement): _qname = PIZZA_TEMPLATE % 'capacity' In PIZZA_TEMPLATE, what is "%s"? What is atom.core? I am a little confused. Please help me. A: %s is a string placeholder, and % is the string interpolation operator. See the Python docs on string formatting for more information. atom.core is a Python module to work with Atom feeds. A: Given that this question is about Python and involves a pizza party, I'd say you were biting off more than you can chew ... But seriously, if you're just learning Python, start with something simpler.
google python data example: pizza party
Hey, I started learning Python and am quite confused as to how the Google Data library works. Google has a pizza party example over at this link. Can anyone here please take the time to explain how it is being done? I would be so grateful. WHAT I UNDERSTAND: <entry xmlns='http://www.w3.org/2005/Atom' xmlns:p='http://example.com/pizza/1.0'> <id>http://www.example.com/pizzaparty/223</id> <title type='text'>Pizza at my house!</title> <author> <name>Joe</name> <email>[email protected]</email> </author> <content type='text'> Join us for a fun filled evening of pizza and games! </content> <link rel='alternate' type='text/html' href='http://www.example.com/joe_user/pizza_at_my_house.html'/> <p:pizza toppings='pepperoni, sausage' size='large'>Pepperoni with cheese and sausage</p:pizza> <p:pizza toppings='mushrooms' size='medium'>Mushroom</p:pizza> <p:pizza toppings='ham, pineapple' size='extra large'>Hawaiian</p:pizza> <p:capacity>25</p:capacity> <p:location>My place.<p:address>123 Imaginary Ln, Sometown MO 63000</p:address></p:location> </entry> This is the XML feed for the pizza; why it is created I do not understand. NOW, this is the code linking to the XML feed: import atom.core PIZZA_TEMPLATE = '{http://example.com/pizza/1.0}%s' class Capacity(atom.core.XmlElement): _qname = PIZZA_TEMPLATE % 'capacity' In PIZZA_TEMPLATE, what is "%s"? What is atom.core? I am a little confused. Please help me.
[ "%s is a string placeholder, and % is the string interpolation operator. See the Python docs on string formatting for more information.\natom.core is a Python module to work with Atom feeds.\n", "Given that this question is about Python and involves a pizza party, I'd say you were biting off more than you can chew ...\nBut seriously, if you're just learning Python, start with something simpler.\n" ]
[ 2, 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001292095_python_xml.txt
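A tiny demonstration of the piece the asker is stuck on: the % operator fills the %s placeholder, producing the '{namespace}localname' qualified-name string that the XmlElement subclass uses to identify its element.

    PIZZA_TEMPLATE = '{http://example.com/pizza/1.0}%s'

    # The % operator substitutes the right-hand value into the %s placeholder:
    print(PIZZA_TEMPLATE % 'capacity')
    # -> {http://example.com/pizza/1.0}capacity
    print(PIZZA_TEMPLATE % 'pizza')
    # -> {http://example.com/pizza/1.0}pizza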
Q: How to open a file and find the longest length of a line and then print it out Here is what I have done so far, but the length function isn't working. import string def main(): print " This program reads from a file and then prints out the" print " line with the longest length the line ,or with the highest sum" print " of ASCII values , or the line with the greatest number of words" infile = open("30075165.txt","r") for line in infile: print line infile.close() def length(): maxlength = 0 infile = open("30075165.txt","r") for line in infile: linelength = lengthofline if linelength > maxlength: #If linelength is greater than maxlength value the new value is linelength maxlength = linelength linelength = line print ,maxlinetext infile.close() A: For Python 2.5 to 2.7.12 print max(open(your_filename, 'r'), key=len) For Python 3 and up print(max(open(your_filename, 'r'), key=len)) A: large_line = '' large_line_len = 0 filename = r"C:\tmp\TestFile.txt" with open(filename, 'r') as f: for line in f: if len(line) > large_line_len: large_line_len = len(line) large_line = line print large_line output: This Should Be Largest Line And as a function: def get_longest_line(filename): large_line = '' large_line_len = 0 with open(filename, 'r') as f: for line in f: if len(line) > large_line_len: large_line_len = len(line) large_line = line return large_line print get_longest_line(r"C:\tmp\TestFile.txt") Here is another way, you would need to wrap this in a try/catch for various problems (empty file, etc). def get_longest_line(filename): mydict = {} for line in open(filename, 'r'): mydict[len(line)] = line return mydict[sorted(mydict)[-1]] You also need to decide that happens when you have two 'winning' lines with equal length? Pick first or last? The former function will return the first, the latter will return the last. File contains Small Line Small Line Another Small Line This Should Be Largest Line Small Line Update The comment in your original post: print " This program reads from a file and then prints out the" print " line with the longest length the line ,or with the highest sum" print " of ASCII values , or the line with the greatest number of words" Makes me think you are going to scan the file for length of lines, then for ascii sum, then for number of words. It would probably be better to read the file once and then extract what data you need from the findings. def get_file_data(filename): def ascii_sum(line): return sum([ord(x) for x in line]) def word_count(line): return len(line.split(None)) filedata = [(line, len(line), ascii_sum(line), word_count(line)) for line in open(filename, 'r')] return filedata This function will return a list of each line of the file in the format: line, line_length, line_ascii_sum, line_word_count This can be used as so: afile = r"C:\Tmp\TestFile.txt" for line, line_len, ascii_sum, word_count in get_file_data(afile): print 'Line: %s, Len: %d, Sum: %d, WordCount: %d' % ( line.strip(), line_len, ascii_sum, word_count) to output: Line: Small Line, Len: 11, Sum: 939, WordCount: 2 Line: Small Line, Len: 11, Sum: 939, WordCount: 2 Line: Another Small Line, Len: 19, Sum: 1692, WordCount: 3 Line: This Should Be Largest Line, Len: 28, Sum: 2450, WordCount: 5 Line: Small Line, Len: 11, Sum: 939, WordCount: 2 You can mix this with Steef's solution like so: >>> afile = r"C:\Tmp\TestFile.txt" >>> file_data = get_file_data(afile) >>> max(file_data, key=lambda line: line[1]) # Longest Line ('This Should Be Largest Line\n', 28, 2450, 5) >>> max(file_data, key=lambda line: line[2]) # Largest ASCII sum ('This Should Be Largest Line\n', 28, 2450, 5) >>> max(file_data, key=lambda line: line[3]) # Most Words ('This Should Be Largest Line\n', 28, 2450, 5) A: Try this: def main(): print " This program reads from a file and then prints out the" print " line with the longest length the line ,or with the highest sum" print " of ASCII values , or the line with the greatest number of words" length() def length(): maxlength = 0 maxlinetext = "" infile = open("30075165.txt","r") for line in infile: linelength = len(line) if linelength > maxlength: #If linelength is greater than maxlength value the new value is linelength maxlength = linelength maxlinetext = line print maxlinetext infile.close() EDIT: Added main() function. A: linelength = lengthofline # bug? It should be: linelength = len(line) # fix A: Python might not be the right tool for this job. $ awk 'length() > n { n = length(); x = $0 } END { print x }' 30075165.txt A: My solution (also works in Python 2.5): import os.path def getLongestLineFromFile(fileName): longestLine = "" if not os.path.exists(fileName): raise "File not found" file = open(fileName, "r") for line in file: if len(line) > len(longestLine): longestLine = line return longestLine if __name__ == "__main__": print getLongestLineFromFile("input.data") Example "input.data" contents: 111111111 1111111111111111111111 111111111 22222222222222222 4444444444444444444444444444444 444444444444444 5555
How to open a file and find the longest length of a line and then print it out
Here is what I have done so far but the length function isn't working. import string def main(): print " This program reads from a file and then prints out the" print " line with the longest length the line ,or with the highest sum" print " of ASCII values , or the line with the greatest number of words" infile = open("30075165.txt","r") for line in infile: print line infile.close() def length(): maxlength = 0 infile = open("30075165.txt","r") for line in infile: linelength = lengthofline if linelength > maxlength: #If linelength is greater than maxlength value the new value is linelength maxlength = linelength linelength = line print ,maxlinetext infile.close()
[ "For Python 2.5 to 2.7.12\nprint max(open(your_filename, 'r'), key=len)\n\nFor Python 3 and up\nprint(max(open(your_filename, 'r'), key=len))\n\n", "large_line = ''\nlarge_line_len = 0\nfilename = r\"C:\\tmp\\TestFile.txt\"\n\nwith open(filename, 'r') as f:\n for line in f:\n if len(line) > large_line_len:\n large_line_len = len(line)\n large_line = line\n\nprint large_line\n\noutput:\nThis Should Be Largest Line\n\nAnd as a function:\ndef get_longest_line(filename):\n large_line = ''\n large_line_len = 0\n\n with open(filename, 'r') as f:\n for line in f:\n if len(line) > large_line_len:\n large_line_len = len(line)\n large_line = line\n\n return large_line\n\nprint get_longest_line(r\"C:\\tmp\\TestFile.txt\")\n\nHere is another way, you would need to wrap this in a try/catch for various problems (empty file, etc).\ndef get_longest_line(filename):\n mydict = {}\n\n for line in open(filename, 'r'):\n mydict[len(line)] = line\n\n return mydict[sorted(mydict)[-1]]\n\nYou also need to decide that happens when you have two 'winning' lines with equal length? Pick first or last? The former function will return the first, the latter will return the last.\nFile contains\nSmall Line\nSmall Line\nAnother Small Line\nThis Should Be Largest Line\nSmall Line\n\nUpdate\nThe comment in your original post:\nprint \" This program reads from a file and then prints out the\"\nprint \" line with the longest length the line ,or with the highest sum\"\nprint \" of ASCII values , or the line with the greatest number of words\"\n\nMakes me think you are going to scan the file for length of lines, then for ascii sum, then\nfor number of words. It would probably be better to read the file once and then extract what data you need from the findings.\ndef get_file_data(filename):\n def ascii_sum(line):\n return sum([ord(x) for x in line])\n def word_count(line):\n return len(line.split(None))\n\n filedata = [(line, len(line), ascii_sum(line), word_count(line)) \n for line in open(filename, 'r')]\n\n return filedata\n\nThis function will return a list of each line of the file in the format: line, line_length, line_ascii_sum, line_word_count\nThis can be used as so:\nafile = r\"C:\\Tmp\\TestFile.txt\"\n\nfor line, line_len, ascii_sum, word_count in get_file_data(afile):\n print 'Line: %s, Len: %d, Sum: %d, WordCount: %d' % (\n line.strip(), line_len, ascii_sum, word_count)\n\nto output:\nLine: Small Line, Len: 11, Sum: 939, WordCount: 2\nLine: Small Line, Len: 11, Sum: 939, WordCount: 2\nLine: Another Small Line, Len: 19, Sum: 1692, WordCount: 3\nLine: This Should Be Largest Line, Len: 28, Sum: 2450, WordCount: 5\nLine: Small Line, Len: 11, Sum: 939, WordCount: 2\n\nYou can mix this with Steef's solution like so:\n>>> afile = r\"C:\\Tmp\\TestFile.txt\"\n>>> file_data = get_file_data(afile)\n>>> max(file_data, key=lambda line: line[1]) # Longest Line\n('This Should Be Largest Line\\n', 28, 2450, 5)\n>>> max(file_data, key=lambda line: line[2]) # Largest ASCII sum\n('This Should Be Largest Line\\n', 28, 2450, 5)\n>>> max(file_data, key=lambda line: line[3]) # Most Words\n('This Should Be Largest Line\\n', 28, 2450, 5)\n\n", "Try this:\ndef main():\n print \" This program reads from a file and then prints out the\"\n print \" line with the longest length the line ,or with the highest sum\"\n print \" of ASCII values , or the line with the greatest number of words\"\n length()\n\ndef length():\n maxlength = 0\n maxlinetext = \"\"\n infile = open(\"30075165.txt\",\"r\")\n for line in infile:\n linelength = len(line)\n if 
linelength > maxlength:\n #If linelength is greater than maxlength value the new value is linelength\n maxlength = linelength\n maxlinetext = line\n print maxlinetext\n infile.close()\n\nEDIT: Added main() function.\n", "linelength = lengthofline # bug?\n\nIt should be:\nlinelength = len(line) # fix\n\n", "Python might not be the right tool for this job.\n$ awk 'length() > n { n = length(); x = $0 } END { print x }' 30075165.txt\n\n", "My solution (also works in Python 2.5):\nimport os.path\n\ndef getLongestLineFromFile(fileName):\n longestLine = \"\"\n\n if not os.path.exists(fileName):\n raise \"File not found\"\n\n file = open(fileName, \"r\")\n for line in file:\n if len(line) > len(longestLine):\n longestLine = line\n\n return longestLine\n\n\nif __name__ == \"__main__\":\n print getLongestLineFromFile(\"input.data\")\n\nExample \"input.data\" contents:\n\n111111111\n1111111111111111111111\n111111111\n22222222222222222\n4444444444444444444444444444444\n444444444444444\n5555\n\n\n" ]
[ 32, 6, 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001292630_python.txt
Q: Which development environment should I use for developing Google App Engine with Python? I would like to ask which IDE I should use for developing applications for Google App Engine with the Python language. Is Eclipse suitable, or is there a better development environment? Please give me some advice! Thank you! A: Eclipse with the PyDev plugin is very nice. Recent versions even go out of their way to support App Engine, with builtin support for uploading your project, etc without having to use the command line scripts. See the Pydev blog for more documentation on the App Engine integration. A: I think the answers you are looking for are here Best opensource IDE for building applications on Google App Engine? A: I use Komodo Edit. A: I've always been using Vim (with or without plugins to make it more IDE-like) for Python programming, it's simple but powerful enough. A: Wing IDE is a pretty solid IDE for python. It's not free though (open source or $$).
Which development environment should I use for developing Google App Engine with Python?
I would like to ask which IDE I should use for developing applications for Google App Engine with the Python language. Is Eclipse suitable, or is there a better development environment? Please give me some advice! Thank you!
[ "Eclipse with the PyDev plugin is very nice. Recent versions even go out of their way to support App Engine, with builtin support for uploading your project, etc without having to use the command line scripts. \nSee the Pydev blog for more documentation on the App Engine integration.\n", "I think the answers you are looking for are here \nBest opensource IDE for building applications on Google App Engine?\n", "I use Komodo Edit.\n", "I've always been using Vim (with or without plugins to make it more IDE-like) for Python programming, it's simple but powerful enough.\n", "Wing IDE is a pretty solid IDE for python. It's not free though (open source or $$).\n" ]
[ 4, 3, 2, 0, 0 ]
[]
[]
[ "google_app_engine", "ide", "python" ]
stackoverflow_0001287606_google_app_engine_ide_python.txt
Q: Storing dynamically generated code as string or as code object? I'm hacking a little template engine. I've a class (pompously named the template compiler) that produces a string of dynamically generated code. For instance: def dynamic_function(arg): #statement return rendered_template At rendering time, I call the built-in function exec against this code, with a custom globals dictionary (in order to control as much as possible the code inserted into the template by a potential malicious user). But I need to cache the compiled template to avoid compiling it on each execution. I wonder if it's better to store the string as plain text and load it each time, or to use compile to produce a code object and store that object (using the shelve module for example). Maybe it's worth mentioning that ultimately I would like to make my template engine thread safe. Thanks for reading! Thomas edit: as S.Lott points out, 'better' doesn't mean anything by itself. By better I mean faster, consuming less memory, and simpler, easier debugging. Of course, more and tastier free coffee would have been even better. A: You don't mention where you are going to store these templates, but if you are persisting them "permanently", keep in mind that Python doesn't guarantee byte-code compatibility across major versions. So either choose a method that does guarantee compatibility (like storing the source code), or also store enough information alongside the compiled template that you can throw away the compiled template when it is invalid. The same goes for the marshal module, for example: a value marshaled with Python 2.5 is not promised to be readable with Python 2.6. A: Personally, I'd store text. You can look at text with an editor or whatever, which will make debugging and mucking about easier. It's also vastly easier to write unit tests for, if you want to test that the contents of the cache file are what you expect. Later, if you find that your system isn't fast enough, and if profiling reveals that parsing the cached templates is taking a lot of time, you can try switching to storing bytecode - but only then. As long as the storage mechanism is properly encapsulated, this change should be fairly painless. A: The Mako template library caches the compiled template as a python module and uses the built in imp module to handle byte-code caching and code loading. This seems reasonably robust to changes in the interpreter, fast and easily debuggable (you can view the source of the generated code in the cache). See the mako.template module for how it handles this.
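For illustration only (not from the answers above), a rough sketch of the compile-and-cache idea with a version guard, as the first answer recommends; all names are made up:
import imp, marshal

MAGIC = imp.get_magic()  # bytecode magic number; changes between interpreter versions

def load_or_compile(cache_path, source):
    try:
        f = open(cache_path, 'rb')
        if f.read(len(MAGIC)) == MAGIC:
            return marshal.loads(f.read())  # cached code object is still valid
    except (IOError, EOFError, ValueError):
        pass  # missing or corrupt cache: fall through and recompile
    code = compile(source, '<template>', 'exec')
    out = open(cache_path, 'wb')
    out.write(MAGIC)
    out.write(marshal.dumps(code))
    out.close()
    return code
The returned code object can then be run with exec and the restricted globals dictionary from the question.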
Storing dynamically generated code as string or as code object?
I'm hacking a little template engine. I've a class (pompously named the template compiler) that produces a string of dynamically generated code. For instance: def dynamic_function(arg): #statement return rendered_template At rendering time, I call the built-in function exec against this code, with a custom globals dictionary (in order to control as much as possible the code inserted into the template by a potential malicious user). But I need to cache the compiled template to avoid compiling it on each execution. I wonder if it's better to store the string as plain text and load it each time, or to use compile to produce a code object and store that object (using the shelve module for example). Maybe it's worth mentioning that ultimately I would like to make my template engine thread safe. Thanks for reading! Thomas edit: as S.Lott points out, 'better' doesn't mean anything by itself. By better I mean faster, consuming less memory, and simpler, easier debugging. Of course, more and tastier free coffee would have been even better.
[ "You don't mention where you are going to store these templates, but if you are persisting them \"permanently\", keep in mind that Python doesn't guarantee byte-code compatibility across major versions. So either choose a method that does guarantee compatibility (like storing the source code), or also store enough information alongside the compiled template that you can throw away the compiled template when it is invalid.\nThe same goes for the marshal module, for example: a value marshaled with Python 2.5 is not promised to be readable with Python 2.6.\n", "Personally, i'd store text. You can look at text with an editor or whatever, which will make debugging and mucking about easier. It's also vastly easier to write unit tests for, if you want to test that the contents of the cache file are what you expect.\nLater, if you find that your system isn't fast enough, and if profiling reveals that parsing the cached templates is taking a lot of time, you can try switching to storing bytecode - but only then. As long as the storage mechanism is properly encapsulated, this change should be fairly painless.\n", "Mako template library caches the compiled template as a python module and uses the built in imp module to handle byte-code caching and code loading. This seems reasonably robust to changes in the interpreter, fast and easily debuggable (you can view the source of the generated code in the cache).\nSee the mako.template module for how it handles this.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "exec", "python" ]
stackoverflow_0001292994_exec_python.txt
Q: bulk insert in db table I have an array that I want to insert into a table in a single query. Any ideas? A: If you are using dbapi to access the database, then the connection.executemany() method works. con.executemany("INSERT INTO some_table (field) VALUES (?)", [(v,) for v in your_array]) The format of the bind parameter depends on the database, sqlite uses ?, mysql uses %s, postgresql uses %s or %(key)s if passing in dicts. To abstract this you can use SQLAlchemy: import sqlalchemy as sa metadata = sa.MetaData(sa.create_engine(database_url)) some_table = sa.Table('some_table', metadata, autoload=True) some_table.insert([{'field': v} for v in your_array]).execute() A: As an inefficient (individual inserts) one-liner: data = [{'id': 1, 'x': 2, 'y': 4}, {'somethig': 'other', 'otherfield': 5, 'y': 6}] table = 'table_name' connection = connect() # DBAPI connection map(connection.cursor().execute, ('INSERT INTO %s (%s) VALUES (%s)' % ('table', ','.join(x), ','.join(['%s'] * len(x))) for x in data), (x.values() for x in data)) A: If you are using PostgreSQL, you can use the COPY statement.
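To flesh out the last answer, a hedged sketch of bulk loading through PostgreSQL's COPY with psycopg2 (assumes a psycopg2 connection named connection and the table/column from the first answer; copy_from wants a file-like object):
from StringIO import StringIO  # io.StringIO on Python 3

buf = StringIO('\n'.join(str(v) for v in your_array))
cur = connection.cursor()
cur.copy_from(buf, 'some_table', columns=('field',))
connection.commit()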
bulk insert in db table
I have an array that I want to insert into a table in a single query. Any ideas?
[ "If you are using dbapi to access the database, then the connection.executemany() method works.\ncon.executemany(\"INSERT INTO some_table (field) VALUES (?)\", [(v,) for v in your_array])\n\nThe format of the bind parameter depends on the database, sqlite uses ?, mysql uses %s, postgresql uses %s or %(key)s if passing in dicts. To abstract this you can use SQLAlchemy:\nimport sqlalchemy as sa\nmetadata = sa.MetaData(sa.create_engine(database_url))\nsome_table = sa.Table('some_table', metadata, autoload=True)\nsome_table.insert([{'field': v} for v in your_array]).execute()\n\n", "As an inefficient (individual inserts) one-liner:\ndata = [{'id': 1, 'x': 2, 'y': 4}, {'somethig': 'other', 'otherfield': 5, 'y': 6}]\ntable = 'table_name'\nconnection = connect() # DBAPI connection\n\nmap(connection.cursor().execute, ('INSERT INTO %s (%s) VALUES (%s)' % ('table', ','.join(x), ','.join(['%s'] * len(x))) for x in data), (x.values() for x in data))\n\n", "If you are using PostgreSQL, you can use the COPY statement.\n" ]
[ 3, 1, 0 ]
[]
[]
[ "database", "python" ]
stackoverflow_0001293056_database_python.txt
Q: How to find the line with the highest sum of ASCII values in a file and print it out Here is the code I'm working with def ascii_sum(): x = 0 infile = open("30075165.txt","r") for line in infile: return sum([ord(x) for x in line]) infile.close() This code only returns the ASCII sum of the first line in the file, not the line with the maximum sum A: max(open(fname), key=lambda line: sum(ord(i) for i in line)) A: This is a snippet from an answer to one of your previous questions def get_file_data(filename): def ascii_sum(line): return sum([ord(x) for x in line]) def word_count(line): return len(line.split(None)) filedata = [{'line': line, 'line_len': len(line), 'ascii_sum': ascii_sum(line), 'word_count': word_count(line)} for line in open(filename, 'r')] return filedata afile = r"C:\Tmp\TestFile.txt" file_data = get_file_data(afile) print max(file_data, key=lambda line: line['line_len']) # Longest Line print max(file_data, key=lambda line: line['ascii_sum']) # Largest ASCII sum print max(file_data, key=lambda line: line['word_count']) # Most Words
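Putting the first answer to work (a sketch; fname is a placeholder path), this prints both the winning line and its sum:
best = max(open(fname), key=lambda line: sum(ord(c) for c in line))
print sum(ord(c) for c in best), best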
How to find the line with the highest sum of ASCII values in a file and print it out
Here is the code I'm working with def ascii_sum(): x = 0 infile = open("30075165.txt","r") for line in infile: return sum([ord(x) for x in line]) infile.close() This code only returns the ASCII sum of the first line in the file, not the line with the maximum sum
[ "max(open(fname), key=lambda line: sum(ord(i) for i in line))\n\n", "This is a snippet from an answer to one to of your previous questions\ndef get_file_data(filename):\n def ascii_sum(line):\n return sum([ord(x) for x in line])\n def word_count(line):\n return len(line.split(None))\n\n filedata = [{'line': line, \n 'line_len': len(line), \n 'ascii_sum': ascii_sum(line), \n 'word_count': word_count(line)}\n for line in open(filename, 'r')]\n\n return filedata\n\nafile = r\"C:\\Tmp\\TestFile.txt\"\nfile_data = get_file_data(afile)\n\nprint max(file_data, key=lambda line: line['line_len']) # Longest Line\nprint max(file_data, key=lambda line: line['ascii_sum']) # Largest ASCII sum\nprint max(file_data, key=lambda line: line['word_count']) # Most Words\n\n" ]
[ 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001293404_python.txt
Q: Human readable cookie information using cookielib? Is there a way to print the cookies stored in a cookielib.CookieJar in a human-readable way? I'm scraping a site and I'd like to know if the same cookies are set when I use my script as when I use the browser. A: import urllib2 from cookielib import CookieJar, DefaultCookiePolicy policy = DefaultCookiePolicy( rfc2965=True, strict_ns_domain=DefaultCookiePolicy.DomainStrict) cj = CookieJar(policy) opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) r = opener.open("http://somewebsite.com") [str(i) for i in cj] Produces: ['<Cookie JSESSIONID=BE71BFC3EE6D9799DEBD939A7487BB08 for somewebsite.com>']
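If you need more than str() gives you, each cookielib Cookie exposes its fields as attributes; a small sketch continuing from the answer's cj object:
for cookie in cj:
    print '%s=%s; domain=%s; path=%s; expires=%s' % (
        cookie.name, cookie.value, cookie.domain, cookie.path, cookie.expires)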
Human readable cookie information using cookielib?
Is there a way to print the cookies stored in a cookielib.CookieJar in a human-readable way? I'm scraping a site and I'd like to know if the same cookies are set when I use my script as when I use the browser.
[ "import urllib2\nfrom cookielib import CookieJar, DefaultCookiePolicy\npolicy = DefaultCookiePolicy(\nrfc2965=True, strict_ns_domain=DefaultCookiePolicy.DomainStrict)\ncj = CookieJar(policy)\nopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))\nr = opener.open(\"http://somewebsite.com\")\n\n[str(i) for i in cj]\n\nProduces:\n['<Cookie JSESSIONID=BE71BFC3EE6D9799DEBD939A7487BB08 for somewebsite.com>']\n\n" ]
[ 1 ]
[]
[]
[ "cookies", "python" ]
stackoverflow_0001293828_cookies_python.txt
Q: Best way to obtain indexed access to a Python queue, thread-safe I have a queue (from the Queue module), and I want to get indexed access into it. (i.e., being able to ask for item number four in the queue, without removing it from the queue.) I saw that a queue uses a deque internally, and deque has indexed access. The question is, how can I use the deque without (1) messing up the queue, (2) breaking thread-safety. A: import Queue class IndexableQueue(Queue.Queue): def __getitem__(self, index): with self.mutex: return self.queue[index] It's of course crucial to release the mutex whether the indexing succeeds or raises an IndexError, and I'm using a with statement for that. In older Python versions, try/finally would be used to the same effect.
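A quick usage sketch of the class from the answer (note the subclassing of Queue.Queue; with-statement support on the lock needs Python 2.6+):
import Queue

class IndexableQueue(Queue.Queue):
    def __getitem__(self, index):
        with self.mutex:
            return self.queue[index]

q = IndexableQueue()
for i in range(5):
    q.put(i)
print q[3]  # peeks at the 4th item without removing it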
Best way to obtain indexed access to a Python queue, thread-safe
I have a queue (from the Queue module), and I want to get indexed access into it. (i.e., being able to ask for item number four in the queue, without removing it from the queue.) I saw that a queue uses a deque internally, and deque has indexed access. The question is, how can I use the deque without (1) messing up the queue, (2) breaking thread-safety.
[ "import Queue\n\nclass IndexableQueue(Queue.Queue):\n def __getitem__(self, index):\n with self.mutex:\n return self.queue[index]\n\nIt's of course crucial to release the mutex whether the indexing succeeds or raises an IndexError, and I'm using a with statement for that. In older Python versions, try/finally would be used to the same effect.\n" ]
[ 14 ]
[]
[]
[ "deque", "multithreading", "python", "queue" ]
stackoverflow_0001293966_deque_multithreading_python_queue.txt
Q: How to insert / retrieve a file stored as a BLOB in a MySQL db using python I want to write a python script that populates a database with some information. One of the columns in my table is a BLOB that I would like to save a file to for each entry. How can I read the file (binary) and insert it into the DB using python? Likewise, how can I retrieve it and write that file back to some arbitrary location on the hard drive? A: thedata = open('thefile', 'rb').read() sql = "INSERT INTO sometable (theblobcolumn) VALUES (%s)" cursor.execute(sql, (thedata,)) That code of course works as written only if your table has just the BLOB column and what you want to do is INSERT, but of course you could easily tweak it to add more columns, use UPDATE instead of INSERT, or whatever it is that you exactly need to do. I'm also assuming your file is binary rather than text, etc; again, if my guesses are incorrect it's easy for you to tweak the above code accordingly. Some kind of SELECT on cursor.execute, then some kind of fetching from the cursor, is how you retrieve BLOB data, exactly like you retrieve any other kind of data. A: You can insert and read BLOBs from a DB like every other column type. From the database API's view there is nothing special about BLOBs.
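For the retrieval half of the question, a hedged sketch using the same table and column names as the answer above (some_id and the output path are placeholders):
cursor.execute("SELECT theblobcolumn FROM sometable WHERE id = %s", (some_id,))
row = cursor.fetchone()
if row is not None:
    out = open('/some/where/thefile', 'wb')  # binary mode matters on Windows
    out.write(row[0])
    out.close()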
How to insert / retrieve a file stored as a BLOB in a MySQL db using python
I want to write a python script that populates a database with some information. One of the columns in my table is a BLOB that I would like to save a file to for each entry. How can I read the file (binary) and insert it into the DB using python? Likewise, how can I retrieve it and write that file back to some arbitrary location on the hard drive?
[ "thedata = open('thefile', 'rb').read()\nsql = \"INSERT INTO sometable (theblobcolumn) VALUES (%s)\"\ncursor.execute(sql, (thedata,))\n\nThat code of course works as written only if your table has just the BLOB column and what\nyou want to do is INSERT, but of course you could easily tweak it to add more columns,\nuse UPDATE instead of INSERT, or whatever it is that you exactly need to do.\nI'm also assuming your file is binary rather than text, etc; again, if my guesses are\nincorrect it's easy for you to tweak the above code accordingly.\nSome kind of SELECT on cursor.execute, then some kind of fetching from the cursor, is how you\nretrieve BLOB data, exactly like you retrieve any other kind of data.\n", "You can insert and read BLOBs from a DB like every other column type. From the database API's view there is nothing special about BLOBs.\n" ]
[ 18, 0 ]
[]
[]
[ "blob", "file_io", "mysql", "python" ]
stackoverflow_0001294385_blob_file_io_mysql_python.txt
Q: how to register more than 10 apps in Google App Engine Does anyone know any "legal" way to surpass the 10-app limit Google imposes? I wouldn't mind paying, or anything, but I wasn't able to find a way to have more than 10 apps, nor a way to remove one. A: Call or write to Google! Google's policies are very exact and very strict, because they are catering to thousands of developers, and thus need those standards and uniformity. But if you have a good reason for needing more than 10, and you can get a real person at the end of a telephone line, I'd think you'd have a good chance of getting the limit raised. Alternatively, you could just get a friend or co-worker to register. That seems like it ought to be legal...but check the User Agreement first.
how to register more than 10 apps in Google App Engine
Does anyone know any "legal" way to surpass the 10-app limit Google imposes? I wouldn't mind paying, or anything, but I wasn't able to find a way to have more than 10 apps, nor a way to remove one.
[ "Call or write to Google! Google's policies are very exact and very strict, because they are catering to thousands of developers, and thus need those standards and uniformity. But if you have a good reason for needing more than 10, and you can get a real person at the end of a telephone line, I'd think you'd have a good chance of getting the limit raised.\nAlternatively, you could just get a friend or co-worker to register. That seems like it ought to be legal...but check the User Agreement first.\n" ]
[ 3 ]
[]
[]
[ "google_app_engine", "python", "registration" ]
stackoverflow_0001294618_google_app_engine_python_registration.txt
Q: Python subprocess question I would like to be able to spawn a process in python and have two way communication. Of course, Pexpect does this and is indeed a way I might go. However, it is not quite ideal. My ideal situation would be to have a cross platform generic technique that involved only the standard python libraries. Subprocess gets pretty close, but the fact that I have to wait for the process to terminate before safely interacting with it is not desirable. Looking at the documentation, it does say there are stdin, stdout and stderr file descriptors that I can directly manipulate, but there is a big fat warning that says "Don't Do This". Unfortunately it's not entirely clear why this warning exists, but from what I gather from Google it is related to OS buffering, and it is possible to write code that unexpectedly deadlocks when those internal buffers fail (as a side note, any examples that show the wrong way and right way would be appreciated). So, risking my code to potential deadlocks, I thought it might be interesting to use poll or select to interactively read from the running process without killing it. Although I lose (I think) the cross platform ability, I like the fact that it requires no additional libraries. But more importantly, I would like to know if this is a good idea. I have yet to try this approach, but I am concerned about gotchas that could potentially devastate my program. Can it work? What should I test for? In my specific case I am not really concerned about being able to write to the process, just repeatedly reading from it. Also, I don't expect my processes to dump huge amounts of text, so I hope to avoid the deadlocking issue, however I would like to know exactly what those limits are and be able to write some tests to see where it breaks down. A: Use the multiprocessing module in the Python 2.6 standard library. It has a Queue class that can be used for both reading and writing. A: I do this in a separate thread, using message queues to communicate between the threads. In my case the subprocess prints % complete to stdout. I wanted the main thread to put up a pretty progress bar. if sys.platform == 'win32': self.shell = False self.startupinfo = subprocess.STARTUPINFO() self.startupinfo.dwFlags = 0x01 self.startupinfo.wShowWindow = 0 else: self.shell = True self.startupinfo = None . . . f = subprocess.Popen( cmd, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE, env = env, shell = self.shell, startupinfo = self.startupinfo ) f.stdin.close() line = '' while True: log.debug('reading') c = f.stdout.read(1) log.debug(c) if len(c) == 0: log.info('stdout empty; must be done') break; if ord(c) == 13: continue if c == '%': # post % complete message to waiting thread. line = '' else: line += c log.info('checking for errors') errs = f.stderr.readlines() if errs: prettyErrs = 'Reported Errors: ' for i in errs: prettyErrs += i.rstrip('\n') log.warn( prettyErrs ) #post errors to waiting thread else: print 'done' return A: The short answer is that there is no such thing as a good cross platform system for process management, without designing that concept into your system. This is especially true of the standard libraries. Even the various unix versions have their own compatibility issues. Your best bet is to instrument all the processes with the proper event handling to notice events that come in from whatever IPC system works best on whatever platform. 
Named pipes will be the general route for the problem you describe, but there will be implementation differences on each platform. A: Forgive my ignorance on this topic, but couldn't you just launch python with the -u flag for "unbuffered"? This might also be of interest... http://www.gossamer-threads.com/lists/python/python/658167
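As a concrete sketch of the poll/select idea from the question (Unix only, since select on pipe handles does not work on Windows; the command name is a placeholder):
import select, subprocess

proc = subprocess.Popen(['some_command'], stdout=subprocess.PIPE)
while proc.poll() is None:
    ready, _, _ = select.select([proc.stdout], [], [], 1.0)  # 1-second timeout
    if ready:
        c = proc.stdout.read(1)  # tiny reads sidestep pipe-buffer deadlocks
        if c:
            print c,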
Python subprocess question
I would like to be able to spawn a process in python and have two way communication. Of course, Pexpect does this and is indeed a way I might go. However, it is not quite ideal. My ideal situation would be to have a cross platform generic technique that involved only the standard python libraries. Subprocess gets pretty close, but the fact that I have to wait for the process to terminate before safely interacting with it is not desirable. Looking at the documentation, it does say there are stdin, stdout and stderr file descriptors that I can directly manipulate, but there is a big fat warning that says "Don't Do This". Unfortunately it's not entirely clear why this warning exists, but from what I gather from Google it is related to OS buffering, and it is possible to write code that unexpectedly deadlocks when those internal buffers fail (as a side note, any examples that show the wrong way and right way would be appreciated). So, risking my code to potential deadlocks, I thought it might be interesting to use poll or select to interactively read from the running process without killing it. Although I lose (I think) the cross platform ability, I like the fact that it requires no additional libraries. But more importantly, I would like to know if this is a good idea. I have yet to try this approach, but I am concerned about gotchas that could potentially devastate my program. Can it work? What should I test for? In my specific case I am not really concerned about being able to write to the process, just repeatedly reading from it. Also, I don't expect my processes to dump huge amounts of text, so I hope to avoid the deadlocking issue, however I would like to know exactly what those limits are and be able to write some tests to see where it breaks down.
[ "Use the multiprocessing module in the Python 2.6 standard library. \nIt has a Queue class that can be used for both reading and writing.\n", "I do this in a separate thread, using message queues to communicate between the threads. In my case the subprocess prints % complete to stdout. I wanted the main thread to put up a pretty progress bar.\n if sys.platform == 'win32':\n self.shell = False\n self.startupinfo = subprocess.STARTUPINFO()\n self.startupinfo.dwFlags = 0x01\n self.startupinfo.wShowWindow = 0\n else:\n self.shell = True\n self.startupinfo = None\n\n.\n.\n.\nf = subprocess.Popen( cmd, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE, env = env, shell = self.shell, startupinfo = self.startupinfo )\n f.stdin.close()\n line = ''\n while True:\n log.debug('reading')\n c = f.stdout.read(1)\n\n log.debug(c)\n\n if len(c) == 0:\n log.info('stdout empty; must be done')\n break;\n if ord(c) == 13:\n continue\n if c == '%':\n # post % complete message to waiting thread.\n line = ''\n else:\n line += c\n\n\n log.info('checking for errors')\n errs = f.stderr.readlines()\n\n if errs:\n prettyErrs = 'Reported Errors: '\n for i in errs:\n prettyErrs += i.rstrip('\\n')\n\n log.warn( prettyErrs )\n #post errors to waiting thread\n else:\n print 'done' \n return\n\n", "The short answer is that there is no such thing as a good cross platform system for process management, without designing that concept into your system. This is especially in the standar libraries. Even the various unix versions have their own compatibility issues.\nYour best bet is to instrument all the processes with the proper event handling to notice events that come in from whatever IPC system works best on whatever platform. Named pipes\nwill be the general route for the problem you describe, but there will be implementation\ndifferences on each platform.\n", "Forgive my ignorance on this topic, but couldn't you just launch python with the -u flag for \"unbuffered\"? \nThis might also be of interest...\nhttp://www.gossamer-threads.com/lists/python/python/658167\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0001110804_python_subprocess.txt
Q: Sandboxing / copying a module in two separate places to prevent overwriting or monkey patching Using these four files, all in the same directory: splitter.py #imagine this is a third-party library SPLIT_CHAR = ',' class Splitter(object): def __init__(self, s, split_char=None): self.orig = s if not split_char: self.splitted = s.split(SPLIT_CHAR) a.py #this person makes the mistake of re-setting the global variable #in splitter to get different behavior, instead of just passing #in the extra argument import splitter splitter.SPLIT_CHAR = '|' def go(): s1 = splitter.Splitter("a|b|c|d") print s1.orig print s1.splitted b.py #this person expects the default behavior (splitting commas) from splitter import Splitter def go(): s1 = Splitter('a,b,c,d') print s1.orig print s1.splitted experiment.py import a import b a.go() #this one sets the global var in splitter b.go() #this one expects the default behavior The output of experiment.py will be: a|b|c|d ['a', 'b', 'c', 'd'] #okay... everything is fine a,b,c,d ['a,b,c,d'] #not what the programmer expected.. should be split on commas Is there a way to prevent this result? In this case, maybe a.py and b.py are written by two different coworkers, yet A's code is affecting B. It can sometimes be useful to overwrite something in a module (e.g. monkeypatching), but in this case it's producing confusing behavior. Is there some way to make a copy of the module or sandbox the execution so that a.py can't overwrite values in splitter.py and end up affecting b.py? Also, let's say instead of the simple a.py and b.py, we are running a hundred web apps under mod_python or something (sharing as much as possible to decrease memory), is there a way to prevent one rogue app from tinkering with a module and breaking all the other apps? Something like Google App Engine has this solved, of course :) A: "Is there a way to prevent this result?" Yes. Find the people who monkeypatched the module and make them stop. Monkeypatching doesn't require fancy code work-arounds. It requires people to simply cooperate. If you write a module and some co-worker makes a mess of it, you should talk to that co-worker. It's cheaper, simpler, and more effective in the long run. A: How about when you instantiate the Splitter, you set it's default split char to whatever it is you want it to be, and make a setter for it, so that people can change it? A: Another way to prevent that is to "hint" that splitter.SPLIT_CHAR is "private" by calling it _SPLIT_CHAR. From PEP8, the style guide for Python code: _single_leading_underscore: weak "internal use" indicator. E.g. "from M import *" does not import objects whose name starts with an underscore. and Use one leading underscore only for non-public methods and instance variables So while neither of those shout "don't mess with me," it is a hint to the next user, at least if they are familiar with Python's style.
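One defensive pattern for the library author (a sketch, not a cure for deliberate monkeypatching): make the default a class attribute, so callers override it per instance or subclass instead of mutating module state:
class Splitter(object):
    default_split_char = ','  # class-level default, no module global

    def __init__(self, s, split_char=None):
        self.orig = s
        self.splitted = s.split(split_char or self.default_split_char)

# a.py can now do this without affecting b.py:
s1 = Splitter("a|b|c|d", split_char='|')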
Sandboxing / copying a module in two separate places to prevent overwriting or monkey patching
Using these four files, all in the same directory: splitter.py #imagine this is a third-party library SPLIT_CHAR = ',' class Splitter(object): def __init__(self, s, split_char=None): self.orig = s if not split_char: self.splitted = s.split(SPLIT_CHAR) a.py #this person makes the mistake of re-setting the global variable #in splitter to get different behavior, instead of just passing #in the extra argument import splitter splitter.SPLIT_CHAR = '|' def go(): s1 = splitter.Splitter("a|b|c|d") print s1.orig print s1.splitted b.py #this person expects the default behavior (splitting commas) from splitter import Splitter def go(): s1 = Splitter('a,b,c,d') print s1.orig print s1.splitted experiment.py import a import b a.go() #this one sets the global var in splitter b.go() #this one expects the default behavior The output of experiment.py will be: a|b|c|d ['a', 'b', 'c', 'd'] #okay... everything is fine a,b,c,d ['a,b,c,d'] #not what the programmer expected.. should be split on commas Is there a way to prevent this result? In this case, maybe a.py and b.py are written by two different coworkers, yet A's code is affecting B. It can sometimes be useful to overwrite something in a module (e.g. monkeypatching), but in this case it's producing confusing behavior. Is there some way to make a copy of the module or sandbox the execution so that a.py can't overwrite values in splitter.py and end up affecting b.py? Also, let's say instead of the simple a.py and b.py, we are running a hundred web apps under mod_python or something (sharing as much as possible to decrease memory), is there a way to prevent one rogue app from tinkering with a module and breaking all the other apps? Something like Google App Engine has this solved, of course :)
[ "\"Is there a way to prevent this result?\"\nYes. Find the people who monkeypatched the module and make them stop.\nMonkeypatching doesn't require fancy code work-arounds. It requires people to simply cooperate.\nIf you write a module and some co-worker makes a mess of it, you should talk to that co-worker. It's cheaper, simpler, and more effective in the long run.\n", "How about when you instantiate the Splitter, you set it's default split char to whatever it is you want it to be, and make a setter for it, so that people can change it?\n", "Another way to prevent that is to \"hint\" that splitter.SPLIT_CHAR is \"private\" by calling it _SPLIT_CHAR. From PEP8, the style guide for Python code:\n\n_single_leading_underscore: weak \"internal use\" indicator. E.g. \"from M import *\" does not import objects whose name starts with an underscore.\n\nand\n\nUse one leading underscore only for non-public methods and instance variables\n\nSo while neither of those shout \"don't mess with me,\" it is a hint to the next user, at least if they are familiar with Python's style.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001293879_python.txt
Q: HTML snippet from Python I'm new to Python and CGI, so this may be trivial. But I'd like Python to produce a block of HTML whenever a visitor loads a page. <html> <body> <!-- Beautiful website with great content --> <!-- Suddenly... --> <h1> Here's Some Python Output </h1> <!-- RUN PYTHON SCRIPT, display output --> </body> </html> Using a Python script that, for simplicity, looks something like this: #!/usr/bin/python print "Content-type: text/html" print print "<p>Hello world!</p>" Most resources I was able to find demonstrate how to produce an ENTIRE webpage using a Python script. Can you invoke a Python script mid-page to produce JUST an HTML snippet? A: Use AJAX client-side, or templates server side. A template will allow you to keep most of your page static, but use a server-side scripting language (like Python) to fill in the dynamic bits. There are lots of good posts on Python template systems on Stackoverflow. Here's one AJAX will allow you to update a page after it initially loads with results from a server.
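A minimal server-side sketch of the template idea using only the standard library (page.html is a hypothetical file containing a $snippet placeholder):
#!/usr/bin/python
from string import Template

snippet = "<h1> Here's Some Python Output </h1>"
page = Template(open('page.html').read())
print "Content-type: text/html"
print
print page.substitute(snippet=snippet)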
HTML snippet from Python
I'm new to Python and CGI, so this may be trivial. But I'd like Python to produce a block of HTML whenever a visitor loads a page. <html> <body> <!-- Beautiful website with great content --> <!-- Suddenly... --> <h1> Here's Some Python Output </h1> <!-- RUN PYTHON SCRIPT, display output --> </body> </html> Using a Python script that, for simplicity, looks something like this: #!/usr/bin/python print "Content-type: text/html" print print "<p>Hello world!</p>" Most resources I was able to find demonstrate how to produce an ENTIRE webpage using a Python script. Can you invoke a Python script mid-page to produce JUST an HTML snippet?
[ "Use AJAX client-side, or templates server side. \nA template will allow you to keep most of your page static, but use a server-side scripting language (like Python) to fill in the dynamic bits. There are lots of good posts on Python template systems on Stackoverflow. Here's one\nAJAX will allow you to update a page after it initially loads with results from a server. \n" ]
[ 3 ]
[]
[]
[ "cgi", "html", "python" ]
stackoverflow_0001295446_cgi_html_python.txt
Q: Can't get Beaker sessions to work (KeyError) I'm a newb to the Python world and am having the dangest time with getting sessions to work in my web frameworks. I've tried getting Beaker sessions to work with the webpy framework and the Juno framework. And in both frameworks I always get a KeyError when I try to start the session. Here is the error message in webpy (it's pretty much the exact same thing when I try to use beaker sessions in Juno too)... ERROR <type 'exceptions.KeyError'> at / 'beaker.session' Python /Users/tyler/Dropbox/Code/sites/webpy1/code.py in GET, line 15 Web GET http://localhost:1234/ 15. session = web.ctx.environ['beaker.session'] CODE import web import beaker.session from beaker.middleware import SessionMiddleware urls = ( '/', 'index' ) class index: def GET(self): session = web.ctx.environ['beaker.session'] return "hello" app = web.application(urls, globals()) if __name__ == "__main__": app.run() A: You haven't created the session object yet, so you can't find it in the environment (the KeyError simply means "beaker.session is not in this dictionary"). Note that I don't know either webpy or beaker very well, so I can't give you deeper advice, but from what I understand from the docs and source this should get you started: if __name__ == "__main__": app.run(SessionMiddleware)
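For reference, a hedged sketch of wiring Beaker into web.py with explicit options (session.type and session.auto are standard Beaker settings; adjust to taste):
from beaker.middleware import SessionMiddleware

session_opts = {
    'session.type': 'memory',
    'session.auto': True,
}

def session_mw(app):
    # web.py passes its WSGI callable through each middleware factory
    return SessionMiddleware(app, session_opts)

if __name__ == "__main__":
    app.run(session_mw)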
Can't get Beaker sessions to work (KeyError)
I'm a newb to the Python world and am having the dangest time with getting sessions to work in my web frameworks. I've tried getting Beaker sessions to work with the webpy framework and the Juno framework. And in both frameworks I always get a KeyError when I try to start the session. Here is the error message in webpy (it's pretty much the exact same thing when I try to use beaker sessions in Juno too)... ERROR <type 'exceptions.KeyError'> at / 'beaker.session' Python /Users/tyler/Dropbox/Code/sites/webpy1/code.py in GET, line 15 Web GET http://localhost:1234/ 15. session = web.ctx.environ['beaker.session'] CODE import web import beaker.session from beaker.middleware import SessionMiddleware urls = ( '/', 'index' ) class index: def GET(self): session = web.ctx.environ['beaker.session'] return "hello" app = web.application(urls, globals()) if __name__ == "__main__": app.run()
[ "You haven't created the session object yet, so you can't find it in the environment (the KeyError simply means \"beaker.session is not in this dictionary\").\nNote that I don't know either webpy nor beaker very well, so I can't give you deeper advice, but from what I understand from the docs and source this should get you started:\nif __name__ == \"__main__\": app.run(SessionMiddleware)\n\n" ]
[ 2 ]
[]
[]
[ "django", "python", "session", "web.py" ]
stackoverflow_0001290840_django_python_session_web.py.txt
Q: How to replace Python function while supporting all passed in parameters I'm looking for a way to decorate an arbitrary python function, so that an alternate function is called instead of the original, with all parameters passed as a list or dict. More precisely, something like this (where f is any function, and replacement_f takes a list and a dict): def replace_func(f, replacement_f): def new_f(*args, **kwargs): replacement_f(args, kwargs) return new_f However, I cannot reference replacement_f inside new_f. And I can't use the standard trick of passing replacement_f to new_f as the default for a different parameter, because I'm using the *args and **kwargs variable argument lists. The location where the original function is called cannot change, and will accept both positional and named parameters. I fear that isn't very clear, but I'm happy to clarify if needed. Thanks A: why don't you just try: f = replacement_f example: >>> def rep(*args): print(*args, sep=' -- ') >>> def ori(*args): print(args) >>> ori('dfef', 32) ('dfef', 32) >>> ori = rep >>> ori('dfef', 32) dfef -- 32 A: Although I think SilentGhost's answer is the best solution if it works for you, for the sake of completeness, here is the correct version of what you were trying to do: To define a decorator that takes an argument, you have to introduce an additional level: def replace_function(repl): def deco(f): def inner_f(*args, **kwargs): repl(*args, **kwargs) return inner_f return deco Now you can use the decorator with an argument: @replace_function(replacement_f) def original_function(*args, **kwargs): .... A: If I understand this correctly, you want to pass all arguments from one function to another. Simply do: def replace_func(f, replacement_f): def new_f(*args, **kwargs): replacement_f(*args, **kwargs) # args & kwargs will be expanded return new_f A: does this do what you mean? Untested def replace_func(f, replacement_f): nonlocal replacement_f #<<<<<<<<<<<<<<<py3k magic def new_f(*args, **kwargs): replacement_f(*args, **kwargs) # args & kwargs will be expanded replacement_f = new_f Python nonlocal statement edit: it's a start; I'm not sure exactly what you want, but I think nonlocal will help
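One detail the snippets in this thread gloss over: the wrapper should return the replacement's result, and functools.wraps keeps f's name and docstring on the wrapper. A sketch:
import functools

def replace_func(f, replacement_f):
    @functools.wraps(f)  # preserve f.__name__ and f.__doc__
    def new_f(*args, **kwargs):
        return replacement_f(*args, **kwargs)
    return new_f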
How to replace Python function while supporting all passed in parameters
I'm looking for a way to decorate an arbitrary python function, so that an alternate function is called instead of the original, with all parameters passed as a list or dict. More precisely, something like this (where f is any function, and replacement_f takes a list and a dict): def replace_func(f, replacement_f): def new_f(*args, **kwargs): replacement_f(args, kwargs) return new_f However, I cannot reference replacement_f inside new_f. And I can't use the standard trick of passing replacement_f to new_f as the default for a different parameter, because I'm using the *args and **kwargs variable argument lists. The location where the original function is called cannot change, and will accept both positional and named parameters. I fear that isn't very clear, but I'm happy to clarify if needed. Thanks
[ "why don't you just try:\nf = replacement_f\n\nexample:\n>>> def rep(*args):\n print(*args, sep=' -- ')\n\n>>> def ori(*args):\n print(args)\n\n>>> ori('dfef', 32)\n('dfef', 32)\n>>> ori = rep\n>>> ori('dfef', 32)\ndfef -- 32\n\n", "Although I think SilentGhost's answer is the best solution if it works for you, for the sake of completeness, here is the correct version of what you where trying to do:\nTo define a decorator that takes an argument, you have to introduce an additional level:\ndef replace_function(repl):\n def deco(f):\n def inner_f(*args, **kwargs):\n repl(*args, **kwargs)\n\n return inner_f\n return deco\n\nNow you can use the decorator with an argument:\n@replace_function(replacement_f)\ndef original_function(*args, **kwargs):\n ....\n\n", "If I understand this correctly, you want to pass all arguments from one function to another.\nSimply do:\ndef replace_func(f, replacement_f):\n def new_f(*args, **kwargs):\n replacement_f(*args, **kwargs) # args & kwargs will be expanded\n return new_f\n\n", "does this do what you mean? Untested\ndef replace_func(f, replacement_f):\n nonlocal replacement_f #<<<<<<<<<<<<<<<py3k magic\n def new_f(*args, **kwargs):\n replacement_f(*args, **kwargs) # args & kwargs will be expanded\n replacement_f = new_f\n\nPython nonlocal statement\nedit: its a start, im confused exactly what you want, but i think nonlocal will help\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "decorator", "keyword_argument", "python" ]
stackoverflow_0001295415_decorator_keyword_argument_python.txt
Q: Evaluate a script (e.g. Python) in Java for Android platform Is it possible to evaluate a string of python code (or Perl) from Java when developing Android applications? I am trying to do something like evaluating a text-input script: String script = text1.getText().toString(); String result = PythonRuntime.evaluate(script); text2.setText(result); A: In case you weren't aware of it, the Android Scripting Environment might be useful to you, though I don't think it does exactly what you're looking for. A: Jython and its derivatives should be able to do this. See also Jythondroid.
Evaluate a script (e.g. Python) in Java for Android platform
Is it possible to evaluate a string of python code (or Perl) from Java when developing Android applications? I am trying to do something like evaluating a text-input script: String script = text1.getText().toString(); String result = PythonRuntime.evaluate(script); text2.setText(result);
[ "In case you weren't aware of it, the Android Scripting Environment might be useful to you, though I don't think it does exactly what you're looking for.\n", "Jython and its derivatives should be able to do this. See also Jythondroid.\n" ]
[ 5, 4 ]
[]
[]
[ "android", "java", "python", "scripting" ]
stackoverflow_0001295720_android_java_python_scripting.txt
Q: Getting DOM tree of XML document Does anyone know how I would get a DOM instance (tree) of an XML file in Python? I am trying to compare two XML documents to each other that may have elements and attributes in different order. How would I do this? A: Personally, whenever possible, I'd start with elementtree (preferably the C implementation that comes with Python's standard library, or the lxml implementation, but that's essentially a matter of higher speed). It's not a standard-compliant DOM, but holds the same information in a more Pythonic and handier way. You can start by calling xml.etree.ElementTree.parse, which takes the XML source and returns an element-tree; do that on both sources, use getroot on each element tree to obtain its root element, then recursively compare elements starting from the root ones. Children of an element form a sequence, in element tree just as in the standard DOM, meaning their order is considered important; but it's easy to make Python sets out of them (or with a little more effort "multi-sets" of some kind, if repetitions are important in your use case though order is not) for a laxer comparison. It's even easier for attributes for a given element, where uniqueness is assured and order is semantically not relevant. Is there some specific reason you need a standard DOM rather than an alternative container like an element tree, or are you just using the term DOM in a general sense so that element tree would be OK? In the past I've also had good results using PyRXP, which uses an even starker and simpler representation than ElementTree. However, it WAS years and years ago; I have no recent experience as to how PyRXP today compares with lxml or cElementTree. A: Some solutions to ponder: minidom amara (xml data binding) A: For comparing XML document instances, a naive compare of the parsed DOM trees will not work. You will probably need to implement your own NodeComparator that recursively compares a node and its child-nodes with some other node and its child-nodes based on your specific criteria such as: When is the order of child elements significant? When is whitespace in text-content significant? Are there default values for some elements and are they applied by your parser? Should entity references be expanded for comparison? Minidom is a good starting point for parsing the files and is easy to use. The actual implementation of the comparison function for your specific application however needs to be done by you.
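A naive sketch of the recursive, order-insensitive comparison the answers describe (it assumes sibling tags are unique and tail text is insignificant; real documents may need more care):
import xml.etree.ElementTree as ET

def elements_equal(a, b):
    if a.tag != b.tag or a.attrib != b.attrib:
        return False
    if (a.text or '').strip() != (b.text or '').strip():
        return False
    kids_a = sorted(a, key=lambda e: e.tag)  # ignore child order
    kids_b = sorted(b, key=lambda e: e.tag)
    return len(kids_a) == len(kids_b) and all(
        elements_equal(x, y) for x, y in zip(kids_a, kids_b))

t1 = ET.parse('one.xml').getroot()
t2 = ET.parse('two.xml').getroot()
print elements_equal(t1, t2)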
Getting DOM tree of XML document
Does anyone know how I would get a DOM instance (tree) of an XML file in Python? I am trying to compare two XML documents to each other that may have elements and attributes in different order. How would I do this?
[ "Personally, whenever possible, I'd start with elementtree (preferably the C implementation that comes with Python's standard library, or the lxml implementation, but that's essentialy a matter of higher speed, only). It's not a standard-compliant DOM, but holds the same information in a more Pythonic and handier way. You can start by calling xml.etree.ElementTree.parse, which takes the XML source and returns an element-tree; do that on both sources, use getroot on each element tree to obtain its root element, then recursively compare elements starting from the root ones.\nChildren of an element form a sequence, in element tree just as in the standard DOM, meaning their order is considered important; but it's easy to make Python sets out of them (or with a little more effort \"multi-sets\" of some kind, if repetitions are important in your use case though order is not) for a laxer comparison. It's even easier for attributes for a given element, where uniqueness is assured and order is semantically not relevant.\nIs there some specific reason you need a standard DOM rather than an alternative container like an element tree, or are you just using the term DOM in a general sense so that element tree would be OK?\nIn the past I've also had good results using PyRXP, which uses an even starker and simpler representation than ElementTree. However, it WAS years and years ago; I have no recent experience as to how PyRXP today compares with lxml or cElementTree.\n", "Some solutions to ponder:\n\nminidom\namara (xml data binding)\n\n", "For comparing XML document instances, a naive compare of the parsed DOM trees will not work. You will probably need to implement your own NodeComperator that recursively compares a node and its child-nodes with some other node and its child-nodes based on your specific criteria such as:\n\nWhen is the order of child elements significant?\nWhen is whitespace in text-content significant?\nAre there default values for some elements and are they applied by your parser?\nShould entity references be expanded for comparison\n\nMinidom is a good starting point for parsing the files and is easy to use. The actual implementation of the comparison function for your specific application however needs to be done by you.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "dom", "python", "xml" ]
stackoverflow_0001294654_dom_python_xml.txt
Q: Numpy: Is there an array size limit? I'm learning to use Numpy and I wanted to see the speed difference in the summation of a list of numbers so I made this code: np_array = numpy.arange(1000000) start = time.time() sum_ = np_array.sum() print time.time() - start, sum_ >>> 0.0 1783293664 python_list = range(1000000) start = time.time() sum_ = sum(python_list) print time.time() - start, sum_ >>> 0.390000104904 499999500000 The python_list sum is correct. If I do the same code with the summation to 1000, both print the right answer. Is there an upper limit to the length of the Numpy array or is it with the Numpy sum function? Thanks for your help A: The standard list switched over to doing arithmetic with the long type when numbers got larger than a 32-bit int. The numpy array did not switch to long, and suffered from integer overflow. The price for speed is smaller range of values allowed. >>> 499999500000 % 2**32 1783293664L A: Numpy is creating an array of 32-bit unsigned ints. When it sums them, it sums them into a 32-bit value. if 499999500000L % (2**32) == 1783293664L: print "Overflowed a 32-bit integer" You can explicitly choose the data type at array creation time: a = numpy.arange(1000000, dtype=numpy.uint64) a.sum() -> 499999500000 A: Notice that 499999500000 % 2**32 equals exactly 1783293664 ... i.e., numpy is doing operations modulo 2**32, because that's the type of the numpy.array you've told it to use. Make np_array = numpy.arange(1000000, dtype=numpy.uint64), for example, and your sum will come out OK (although of course there are still limits, with any finite-size number type). You can use dtype=numpy.object to tell numpy that the array holds generic Python objects; of course, performance will decay as generality increases.
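To keep numpy's speed while avoiding the overflow the answers describe, pick a wider dtype up front (a small sketch):
import numpy

np_array = numpy.arange(1000000, dtype=numpy.int64)
print np_array.sum()  # 499999500000, no wraparound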
Numpy: Is there an array size limit?
I'm learning to use Numpy and I wanted to see the speed difference in the summation of a list of numbers so I made this code: np_array = numpy.arange(1000000) start = time.time() sum_ = np_array.sum() print time.time() - start, sum_ >>> 0.0 1783293664 python_list = range(1000000) start = time.time() sum_ = sum(python_list) print time.time() - start, sum_ >>> 0.390000104904 499999500000 The python_list sum is correct. If I do the same code with the summation to 1000, both print the right answer. Is there an upper limit to the length of the Numpy array or is it with the Numpy sum function? Thanks for your help
[ "The standard list switched over to doing arithmetic with the long type when numbers got larger than a 32-bit int.\nThe numpy array did not switch to long, and suffered from integer overflow. The price for speed is smaller range of values allowed.\n>>> 499999500000 % 2**32\n1783293664L\n\n", "Numpy is creating an array of 32-bit unsigned ints.\nWhen it sums them, it sums them into a 32-bit value.\nif 499999500000L % (2**32) == 1783293664L:\n print \"Overflowed a 32-bit integer\"\n\nYou can explicitly choose the data type at array creation time:\na = numpy.arange(1000000, dtype=numpy.uint64)\na.sum() -> 499999500000\n\n", "Notice that 499999500000 % 2**32 equals exactly 1783293664 ... i.e., numpy is doing operations modulo 2**32, because that's the type of the numpy.array you've told it to use. \nMake np_array = numpy.arange(1000000, dtype=numpy.uint64), for example, and your sum will come out OK (although of course there are still limits, with any finite-size number type).\nYou can use dtype=numpy.object to tell numpy that the array holds generic Python objects; of course, performance will decay as generality increases.\n" ]
[ 9, 9, 6 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001295994_numpy_python.txt
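A minimal sketch of the overflow and the dtype fix the answers describe; it assumes a platform where numpy.arange defaults to 32-bit integers, as it did for the asker (on many 64-bit builds the default is already 64-bit and no overflow occurs):

    import numpy

    a32 = numpy.arange(1000000)                      # default dtype: 32-bit here (assumption)
    print a32.sum()                                  # wraps modulo 2**32 -> 1783293664

    a64 = numpy.arange(1000000, dtype=numpy.uint64)  # request a wider dtype up front
    print a64.sum()                                  # 499999500000, no overflow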
Q: simple python script to block nokia n73 screen i want that upon opening my N73's camera cover,the camera software keeps working as usual,but that it is blocked by a black screen covering the whole screen so that it appears that the camera is not working... I know my requirement is weired but i need this.. ;) Can anyone guide me to write a python script that does exactly this... i searched a lot over net for any existing apps but couldnot find one.. Thanks for helping.. A: I'm rather sceptical whether this can be achieved with PyS60. First of all, AFAIK for PyS60 program you'd need to start the interpreter environment first, autostarting when camera starts probably won't be possible. Also, opening the camera cover probably does not have any callback so you won't be able to detect it from python however, you could try something like this presented in PyS60 1.4.5 doc (http://downloads.sourceforge.net/project/pys60/pys60/1.4.5/PythonForS60_1_4_5_doc.pdf): >>> import appuifw >>> import camera >>> def cb(im): ... appuifw.app.body.blit(im) ... >>> import graphics >>> appuifw.app.body=appuifw.Canvas() >>> camera.start_finder(cb) Instead of blitting im you could just blit a black screen and store im. Or something. But seriously, this sounds like some very bad prank...
simple python script to block nokia n73 screen
I want that, upon opening my N73's camera cover, the camera software keeps working as usual, but is blocked by a black screen covering the whole screen, so that it appears that the camera is not working... I know my requirement is weird, but I need this.. ;) Can anyone guide me to write a Python script that does exactly this... I searched a lot over the net for any existing apps but could not find one.. Thanks for helping..
[ "I'm rather sceptical whether this can be achieved with PyS60. First of all, AFAIK for PyS60 program you'd need to start the interpreter environment first, autostarting when camera starts probably won't be possible.\nAlso, opening the camera cover probably does not have any callback so you won't be able to detect it from python however, you could try something like this presented in PyS60 1.4.5 doc (http://downloads.sourceforge.net/project/pys60/pys60/1.4.5/PythonForS60_1_4_5_doc.pdf):\n>>> import appuifw\n>>> import camera\n>>> def cb(im):\n... appuifw.app.body.blit(im)\n...\n>>> import graphics\n>>> appuifw.app.body=appuifw.Canvas()\n>>> camera.start_finder(cb)\n\nInstead of blitting im you could just blit a black screen and store im. Or something. But seriously, this sounds like some very bad prank...\n" ]
[ 0 ]
[]
[]
[ "camera", "n73", "pys60", "python" ]
stackoverflow_0001296359_camera_n73_pys60_python.txt
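A rough sketch of the black-screen variant the answer hints at, built only from the calls quoted above; graphics.Image.new, Image.clear and Canvas.size are assumptions about the PyS60 API rather than tested code:

    import appuifw
    import camera
    import graphics

    appuifw.app.body = canvas = appuifw.Canvas()
    black = graphics.Image.new(canvas.size)   # off-screen image sized to the canvas (assumption)
    black.clear(0x000000)                     # fill it with black (assumption)

    frames = []

    def cb(im):
        frames.append(im)                     # keep the real frame if it is needed later
        canvas.blit(black)                    # but only ever show the black screen

    camera.start_finder(cb)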
Q: Getting system status in python Is there any way to get system status in python, for example the amount of memory free, processes that are running, cpu load and so on. I know on linux I can get this from the /proc directory, but I would like to do this on unix and windows as well. A: I don't know of any such library/ package that currently supports both Linux and Windows. There's libstatgrab which doesn't seem to be very actively developed (it already supports a decent variety of Unix platforms though) and the very active PSI (Python System Information) which works on AIX, Linux, SunOS and Darwin. Both projects aim at having Windows support sometime in the future. Good luck. A: I don't think there is a cross-platform library for that yet (there definitely should be one though) I can however provide you with one snippet I used to get the current CPU load from /proc/stat under Linux: Edit: replaced horrible undocumented code with slightly more pythonic and documented code import time INTERVAL = 0.1 def getTimeList(): """ Fetches a list of time units the cpu has spent in various modes Detailed explanation at http://www.linuxhowtos.org/System/procstat.htm """ cpuStats = file("/proc/stat", "r").readline() columns = cpuStats.replace("cpu", "").split(" ") return map(int, filter(None, columns)) def deltaTime(interval): """ Returns the difference of the cpu statistics returned by getTimeList that occurred in the given time delta """ timeList1 = getTimeList() time.sleep(interval) timeList2 = getTimeList() return [(t2-t1) for t1, t2 in zip(timeList1, timeList2)] def getCpuLoad(): """ Returns the cpu load as a value from the interval [0.0, 1.0] """ dt = list(deltaTime(INTERVAL)) idle_time = float(dt[3]) total_time = sum(dt) load = 1-(idle_time/total_time) return load while True: print "CPU usage=%.2f%%" % (getCpuLoad()*100.0) time.sleep(0.1)
Getting system status in python
Is there any way to get system status in Python, for example the amount of free memory, the processes that are running, CPU load and so on? I know on Linux I can get this from the /proc directory, but I would like to do this on Unix and Windows as well.
[ "I don't know of any such library/ package that currently supports both Linux and Windows. There's libstatgrab which doesn't seem to be very actively developed (it already supports a decent variety of Unix platforms though) and the very active PSI (Python System Information) which works on AIX, Linux, SunOS and Darwin. Both projects aim at having Windows support sometime in the future. Good luck.\n", "I don't think there is a cross-platform library for that yet (there definitely should be one though)\nI can however provide you with one snippet I used to get the current CPU load from /proc/stat under Linux:\nEdit: replaced horrible undocumented code with slightly more pythonic and documented code\nimport time\n\nINTERVAL = 0.1\n\ndef getTimeList():\n \"\"\"\n Fetches a list of time units the cpu has spent in various modes\n Detailed explanation at http://www.linuxhowtos.org/System/procstat.htm\n \"\"\"\n cpuStats = file(\"/proc/stat\", \"r\").readline()\n columns = cpuStats.replace(\"cpu\", \"\").split(\" \")\n return map(int, filter(None, columns))\n\ndef deltaTime(interval):\n \"\"\"\n Returns the difference of the cpu statistics returned by getTimeList\n that occurred in the given time delta\n \"\"\"\n timeList1 = getTimeList()\n time.sleep(interval)\n timeList2 = getTimeList()\n return [(t2-t1) for t1, t2 in zip(timeList1, timeList2)]\n\ndef getCpuLoad():\n \"\"\"\n Returns the cpu load as a value from the interval [0.0, 1.0]\n \"\"\"\n dt = list(deltaTime(INTERVAL))\n idle_time = float(dt[3])\n total_time = sum(dt)\n load = 1-(idle_time/total_time)\n return load\n\n\nwhile True:\n print \"CPU usage=%.2f%%\" % (getCpuLoad()*100.0)\n time.sleep(0.1)\n\n" ]
[ 8, 7 ]
[]
[]
[ "operating_system", "python" ]
stackoverflow_0001296703_operating_system_python.txt
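For the Windows side, free and total physical memory can be read without any third-party package; this sketch goes through the Win32 GlobalMemoryStatusEx call via ctypes:

    import ctypes

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [("dwLength", ctypes.c_ulong),
                    ("dwMemoryLoad", ctypes.c_ulong),
                    ("ullTotalPhys", ctypes.c_ulonglong),
                    ("ullAvailPhys", ctypes.c_ulonglong),
                    ("ullTotalPageFile", ctypes.c_ulonglong),
                    ("ullAvailPageFile", ctypes.c_ulonglong),
                    ("ullTotalVirtual", ctypes.c_ulonglong),
                    ("ullAvailVirtual", ctypes.c_ulonglong),
                    ("ullAvailExtendedVirtual", ctypes.c_ulonglong)]

    def windows_memory():
        stat = MEMORYSTATUSEX()
        stat.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
        ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
        return stat.ullAvailPhys, stat.ullTotalPhys

    free, total = windows_memory()
    print "free=%d total=%d" % (free, total)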
Q: What is the best way to handle rotating sprites for a top-down view game I am working on a top-down view 2d game at the moment and I am learning a ton about sprites and sprite handling. My question is how to handle a set of sprites that can be rotated in as many as 32 directions. At the moment a given object has its sprite sheet with all of the animations oriented with the object pointing at 0 degrees at all times. Now, since the object can rotate in as many as 32 directions, what is the best way to work with that original sprite sheet. My current best guess is to have the program basically dynamically create 32 more sprite sheets when the object is first loaded into the game, and then all subsequent instances of that type of object will share those sprite sheets. Anyways, any advice in this regard would be helpful. Let me know if I need to rephrase the question, I know its kindof an odd one. Thanks Edit: I guess for more clarification. If I have, for instance an object that has 2 animations of 5 frames a peice, that is a pretty easy sprite sheet to create and organize, its a simple 2x5 grid (or 5x2 depending on how you lay it out). But the problem is that now those 2 animations have to be rotated in 32 directions. This means that in the end there will be 320 individual sprites. I am going to say that (and correct me if im wrong) since I'm concerned about performance and frame-rate, rotating the sprites on the fly every single frame is not an option. So, how should these 320 sprites that make up these 2 animations be organized? Would it be better to Think of it as 32 2x5 sprite sheets split the sprite sheet up into individual frames, and then have an array the 32 different directions per frame (so 10 arrays of 32 directional sprites) Other....? Doesn't matter? Thanks A: The 32 directions for the sprite translate into 32 rotations by 11.25 degrees. You can reduce the number of precalculated images to 8 you only calculate the first 90 degrees (11.25, 22.5, 33.75, 45.0, 56.25, 67.5, 78.75, 90.0) and use the flip operations dynamically. Flips are much faster because they essentially only change the order an image is copied from the buffer. For example, when you display an image that is rotated by 101.25 degrees, load the precalculated image of 67.5 degrees and flip it vertically. I just realized that this only works if your graphic is symmetrical ;-) When talking about a modern computer, you might not need to optimize anything. The memory used by precalculating the sprites is certainly negligible, and the cpu usage when rotating the image probably too. When you are programming for a embedded device however, it does matter. A: You can just rotate the sprites directly by setting the rotate portion of the sprite's transform. For some sample code, check out the Chimp sprite in this pygame tutorial. It rotates the sprites when the Chimp is "dizzy". A: Typically you will sacrifice either processor time or memory, and you need to balance between the two. Unless you've got some great limit to your processor or you're computing a lot of expensive stuff, there's no real reason to put all that into the memory. Rotating a few sprites with a transform is cheap enough that it is definitely not worth it to store 32x as much information in memory - especially because that information is a bunch of images, and images use up a lot of memory, relatively speaking.
What is the best way to handle rotating sprites for a top-down view game
I am working on a top-down view 2d game at the moment and I am learning a ton about sprites and sprite handling. My question is how to handle a set of sprites that can be rotated in as many as 32 directions. At the moment a given object has its sprite sheet with all of the animations oriented with the object pointing at 0 degrees at all times. Now, since the object can rotate in as many as 32 directions, what is the best way to work with that original sprite sheet? My current best guess is to have the program basically dynamically create 32 more sprite sheets when the object is first loaded into the game, and then all subsequent instances of that type of object will share those sprite sheets. Anyways, any advice in this regard would be helpful. Let me know if I need to rephrase the question; I know it's kind of an odd one. Thanks Edit: I guess for more clarification. If I have, for instance, an object that has 2 animations of 5 frames apiece, that is a pretty easy sprite sheet to create and organize; it's a simple 2x5 grid (or 5x2 depending on how you lay it out). But the problem is that now those 2 animations have to be rotated in 32 directions. This means that in the end there will be 320 individual sprites. I am going to say that (and correct me if I'm wrong) since I'm concerned about performance and frame-rate, rotating the sprites on the fly every single frame is not an option. So, how should these 320 sprites that make up these 2 animations be organized? Would it be better to: think of it as 32 2x5 sprite sheets; split the sprite sheet up into individual frames, and then have an array of the 32 different directions per frame (so 10 arrays of 32 directional sprites); other....? Doesn't matter? Thanks
[ "The 32 directions for the sprite translate into 32 rotations by 11.25 degrees. \nYou can reduce the number of precalculated images to 8 you only calculate the first 90 degrees (11.25, 22.5, 33.75, 45.0, 56.25, 67.5, 78.75, 90.0) and use the flip operations dynamically. Flips are much faster because they essentially only change the order an image is copied from the buffer. \nFor example, when you display an image that is rotated by 101.25 degrees, load the precalculated image of 67.5 degrees and flip it vertically.\nI just realized that this only works if your graphic is symmetrical ;-)\nWhen talking about a modern computer, you might not need to optimize anything. The memory used by precalculating the sprites is certainly negligible, and the cpu usage when rotating the image probably too. When you are programming for a embedded device however, it does matter.\n", "You can just rotate the sprites directly by setting the rotate portion of the sprite's transform.\nFor some sample code, check out the Chimp sprite in this pygame tutorial. It rotates the sprites when the Chimp is \"dizzy\".\n", "Typically you will sacrifice either processor time or memory, and you need to balance between the two. Unless you've got some great limit to your processor or you're computing a lot of expensive stuff, there's no real reason to put all that into the memory. Rotating a few sprites with a transform is cheap enough that it is definitely not worth it to store 32x as much information in memory - especially because that information is a bunch of images, and images use up a lot of memory, relatively speaking.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "pygame", "python", "sprite" ]
stackoverflow_0001275482_pygame_python_sprite.txt
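A sketch of the precompute-once approach in pygame: build the 32 rotations of each frame when the object type is first loaded, then share the resulting list between all instances. pygame.transform.rotate rotates counter-clockwise by the given number of degrees.

    import pygame

    def build_rotations(frame, directions=32):
        step = 360.0 / directions
        return [pygame.transform.rotate(frame, i * step) for i in range(directions)]

    # rotations[d] is the frame facing 11.25 * d degrees; index by the object's
    # current direction each time it is drawn, instead of rotating on the fly.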
Q: Does IronPython implement python standard library? I tried IronPython some time ago and it seemed that it implements only python language, and uses .NET for libraries. Is this still the case? Can one use python modules from IronPython? A: The IronPython installer includes the Python standard library. Otherwise, you can use the standard library from a compatible Python install (IPy 2.0 -> CPy 2.5, IPy 2.6 -> CPy 2.6). Either copy the Python Lib directory to the IronPython folder, or set IRONPYTHONPATH. Do note that only the pure Pyton modules will be available; Python modules that require C extensions have to be re-implemented (and most of them have been). A: You can use the Python standard library from IronPython just fine. Here's how: Install Python. Setup an environment variable named IRONPYTHONPATH that points to the standard library directory. Next time ipy.exe is run, site.py is read and you're good to go. A: The .msi installer for IronPython includes all parts of the CPython standard library that should work with IronPython. You can simply copy the standard library from a CPython install if you prefer, although you're better off getting just the modules that the IronPython developers have ensured with with IronPython - this is most of them. Modules that are implemented using the CPython C API ('extension modules') will not be available. IronClad is an open-source project that aims to let you seamlessly use these modules - it's not perfect yet, but (e.g.) most of the NumPy tests pass. The other option for these 'extension modules' is to replace them with a pure-Python version (e.g. from PyPy) or with a wrapper over a .NET class. The IronPython Community Edition is an IronPython distribution that includes such wrappers for many modules that are not included in the standard IronPython distribution. A: (As I said here:) I've not used it myself, but you may get some mileage out of Ironclad - it supposedly lets you use CPython from IronPython...
Does IronPython implement python standard library?
I tried IronPython some time ago and it seemed that it implements only python language, and uses .NET for libraries. Is this still the case? Can one use python modules from IronPython?
[ "The IronPython installer includes the Python standard library. Otherwise, you can use the standard library from a compatible Python install (IPy 2.0 -> CPy 2.5, IPy 2.6 -> CPy 2.6). Either copy the Python Lib directory to the IronPython folder, or set IRONPYTHONPATH.\nDo note that only the pure Pyton modules will be available; Python modules that require C extensions have to be re-implemented (and most of them have been).\n", "You can use the Python standard library from IronPython just fine. Here's how:\n\nInstall Python.\nSetup an environment variable named IRONPYTHONPATH that points to the standard library directory. \n\nNext time ipy.exe is run, site.py is read and you're good to go.\n", "The .msi installer for IronPython includes all parts of the CPython standard library that should work with IronPython. You can simply copy the standard library from a CPython install if you prefer, although you're better off getting just the modules that the IronPython developers have ensured with with IronPython - this is most of them.\nModules that are implemented using the CPython C API ('extension modules') will not be available. IronClad is an open-source project that aims to let you seamlessly use these modules - it's not perfect yet, but (e.g.) most of the NumPy tests pass.\nThe other option for these 'extension modules' is to replace them with a pure-Python version (e.g. from PyPy) or with a wrapper over a .NET class. The IronPython Community Edition is an IronPython distribution that includes such wrappers for many modules that are not included in the standard IronPython distribution.\n", "(As I said here:)\n\nI've not used it myself, but you may\n get some mileage out of Ironclad\n - it supposedly lets you use CPython from IronPython...\n\n" ]
[ 9, 4, 3, 0 ]
[]
[]
[ "ironpython", "python" ]
stackoverflow_0001296640_ironpython_python.txt
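A sketch of pointing IronPython at an existing CPython standard library without setting IRONPYTHONPATH; the install path below is an assumption about where CPython 2.5 lives:

    import sys
    sys.path.append(r"C:\Python25\Lib")   # assumed CPython 2.5 install location
    import string                         # pure-Python stdlib modules now import normally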
Q: Efficiency of using a Python list as a queue A coworker recently wrote a program in which he used a Python list as a queue. In other words, he used .append(x) when needing to insert items and .pop(0) when needing to remove items. I know that Python has collections.deque and I'm trying to figure out whether to spend my (limited) time to rewrite this code to use it. Assuming that we perform millions of appends and pops but never have more than a few thousand entries, will his list usage be a problem? In particular, will the underlying array used by the Python list implementation continue to grow indefinitely have millions of spots even though the list only has a thousand things, or will Python eventually do a realloc and free up some of that memory? A: Some answers claimed a "10x" speed advantage for deque vs list-used-as-FIFO when both have 1000 entries, but that's a bit of an overbid: $ python -mtimeit -s'q=range(1000)' 'q.append(23); q.pop(0)' 1000000 loops, best of 3: 1.24 usec per loop $ python -mtimeit -s'import collections; q=collections.deque(range(1000))' 'q.append(23); q.popleft()' 1000000 loops, best of 3: 0.573 usec per loop python -mtimeit is your friend -- a really useful and simple micro-benchmarking approach! With it you can of course also trivially explore performance in much-smaller cases: $ python -mtimeit -s'q=range(100)' 'q.append(23); q.pop(0)' 1000000 loops, best of 3: 0.972 usec per loop $ python -mtimeit -s'import collections; q=collections.deque(range(100))' 'q.append(23); q.popleft()' 1000000 loops, best of 3: 0.576 usec per loop (not very different for 12 instead of 100 items btw), and in much-larger ones: $ python -mtimeit -s'q=range(10000)' 'q.append(23); q.pop(0)' 100000 loops, best of 3: 5.81 usec per loop $ python -mtimeit -s'import collections; q=collections.deque(range(10000))' 'q.append(23); q.popleft()' 1000000 loops, best of 3: 0.574 usec per loop You can see that the claim of O(1) performance for deque is well founded, while a list is over twice as slow around 1,000 items, an order of magnitude around 10,000. You can also see that even in such cases you're only wasting 5 microseconds or so per append/pop pair and decide how significant that wastage is (though if that's all you're doing with that container, deque has no downside, so you might as well switch even if 5 usec more or less won't make an important difference). A: You won't run out of memory using the list implementation, but performance will be poor. From the docs: Though list objects support similar operations, they are optimized for fast fixed-length operations and incur O(n) memory movement costs for pop(0) and insert(0, v) operations which change both the size and position of the underlying data representation. So using a deque will be much faster. A: From Beazley's Python Essential Reference, Fourth Edition, p. 194: Some library modules provide new types that outperform the built-ins at certain tasks. For instance, collections.deque type provides similar functionality to a list but has been highly optimized for the insertion of items at both ends. A list, in contrast, is only efficient when appending items at the end. If you insert items at the front, all of the other elements need to be shifted in order to make room. The time required to do this grows as the list gets larger and larger. 
Just to give you an idea of the difference, here is a timing measurement of inserting one million items at the front of a list and a deque: And there follows this code sample: >>> from timeit import timeit >>> timeit('s.appendleft(37)', 'import collections; s = collections.deque()', number=1000000) 0.13162776274638258 >>> timeit('s.insert(0,37)', 's = []', number=1000000) 932.07849908298408 Timings are from my machine. 2012-07-01 Update >>> from timeit import timeit >>> n = 1024 * 1024 >>> while n > 1: ... print '-' * 30, n ... timeit('s.appendleft(37)', 'import collections; s = collections.deque()', number=n) ... timeit('s.insert(0,37)', 's = []', number=n) ... n >>= 1 ... ------------------------------ 1048576 0.1239769458770752 799.2552740573883 ------------------------------ 524288 0.06924104690551758 148.9747350215912 ------------------------------ 262144 0.029170989990234375 35.077512979507446 ------------------------------ 131072 0.013737916946411133 9.134140014648438 ------------------------------ 65536 0.006711006164550781 1.8818109035491943 ------------------------------ 32768 0.00327301025390625 0.48307204246520996 ------------------------------ 16384 0.0016388893127441406 0.11021995544433594 ------------------------------ 8192 0.0008249282836914062 0.028419017791748047 ------------------------------ 4096 0.00044918060302734375 0.00740504264831543 ------------------------------ 2048 0.00021195411682128906 0.0021741390228271484 ------------------------------ 1024 0.00011205673217773438 0.0006101131439208984 ------------------------------ 512 6.198883056640625e-05 0.00021386146545410156 ------------------------------ 256 2.9087066650390625e-05 8.797645568847656e-05 ------------------------------ 128 1.5974044799804688e-05 3.600120544433594e-05 ------------------------------ 64 8.821487426757812e-06 1.9073486328125e-05 ------------------------------ 32 5.0067901611328125e-06 1.0013580322265625e-05 ------------------------------ 16 3.0994415283203125e-06 5.9604644775390625e-06 ------------------------------ 8 3.0994415283203125e-06 5.0067901611328125e-06 ------------------------------ 4 3.0994415283203125e-06 4.0531158447265625e-06 ------------------------------ 2 2.1457672119140625e-06 2.86102294921875e-06 A: Every .pop(0) takes N steps, since the list has to be reorganized. The required memory will not grow endlessly and only be as big as required for the items that are held. I'd recommend using deque to get O(1) append and pop from front. A: it sounds like a bit of empirical testing might be the best thing to do here - second order issues might make one approach better in practice, even if it's not better in theory.
Efficiency of using a Python list as a queue
A coworker recently wrote a program in which he used a Python list as a queue. In other words, he used .append(x) when needing to insert items and .pop(0) when needing to remove items. I know that Python has collections.deque and I'm trying to figure out whether to spend my (limited) time to rewrite this code to use it. Assuming that we perform millions of appends and pops but never have more than a few thousand entries, will his list usage be a problem? In particular, will the underlying array used by the Python list implementation continue to grow indefinitely and have millions of spots even though the list only has a thousand things, or will Python eventually do a realloc and free up some of that memory?
[ "Some answers claimed a \"10x\" speed advantage for deque vs list-used-as-FIFO when both have 1000 entries, but that's a bit of an overbid:\n$ python -mtimeit -s'q=range(1000)' 'q.append(23); q.pop(0)'\n1000000 loops, best of 3: 1.24 usec per loop\n$ python -mtimeit -s'import collections; q=collections.deque(range(1000))' 'q.append(23); q.popleft()'\n1000000 loops, best of 3: 0.573 usec per loop\n\npython -mtimeit is your friend -- a really useful and simple micro-benchmarking approach! With it you can of course also trivially explore performance in much-smaller cases:\n$ python -mtimeit -s'q=range(100)' 'q.append(23); q.pop(0)'\n1000000 loops, best of 3: 0.972 usec per loop\n$ python -mtimeit -s'import collections; q=collections.deque(range(100))' 'q.append(23); q.popleft()'\n1000000 loops, best of 3: 0.576 usec per loop\n\n(not very different for 12 instead of 100 items btw), and in much-larger ones:\n$ python -mtimeit -s'q=range(10000)' 'q.append(23); q.pop(0)'\n100000 loops, best of 3: 5.81 usec per loop\n$ python -mtimeit -s'import collections; q=collections.deque(range(10000))' 'q.append(23); q.popleft()'\n1000000 loops, best of 3: 0.574 usec per loop\n\nYou can see that the claim of O(1) performance for deque is well founded, while a list is over twice as slow around 1,000 items, an order of magnitude around 10,000. You can also see that even in such cases you're only wasting 5 microseconds or so per append/pop pair and decide how significant that wastage is (though if that's all you're doing with that container, deque has no downside, so you might as well switch even if 5 usec more or less won't make an important difference).\n", "You won't run out of memory using the list implementation, but performance will be poor. From the docs:\n\nThough list objects support similar\n operations, they are optimized for\n fast fixed-length operations and incur\n O(n) memory movement costs for\n pop(0) and insert(0, v) operations\n which change both the size and\n position of the underlying data\n representation.\n\nSo using a deque will be much faster.\n", "From Beazley's Python Essential Reference, Fourth Edition, p. 194:\n\nSome library modules provide new types\n that outperform the built-ins at\n certain tasks. For instance,\n collections.deque type provides\n similar functionality to a list but\n has been highly optimized for the\n insertion of items at both ends. A\n list, in contrast, is only efficient\n when appending items at the end. If\n you insert items at the front, all of\n the other elements need to be shifted\n in order to make room. The time\n required to do this grows as the list\n gets larger and larger. Just to give\n you an idea of the difference, here is a timing measurement of inserting one million items at the front of a list and a deque:\n\nAnd there follows this code sample:\n>>> from timeit import timeit\n>>> timeit('s.appendleft(37)', 'import collections; s = collections.deque()', number=1000000)\n0.13162776274638258\n>>> timeit('s.insert(0,37)', 's = []', number=1000000)\n932.07849908298408\n\nTimings are from my machine.\n\n2012-07-01 Update\n>>> from timeit import timeit\n>>> n = 1024 * 1024\n>>> while n > 1:\n... print '-' * 30, n\n... timeit('s.appendleft(37)', 'import collections; s = collections.deque()', number=n)\n... timeit('s.insert(0,37)', 's = []', number=n)\n... n >>= 1\n... 
\n------------------------------ 1048576\n0.1239769458770752\n799.2552740573883\n------------------------------ 524288\n0.06924104690551758\n148.9747350215912\n------------------------------ 262144\n0.029170989990234375\n35.077512979507446\n------------------------------ 131072\n0.013737916946411133\n9.134140014648438\n------------------------------ 65536\n0.006711006164550781\n1.8818109035491943\n------------------------------ 32768\n0.00327301025390625\n0.48307204246520996\n------------------------------ 16384\n0.0016388893127441406\n0.11021995544433594\n------------------------------ 8192\n0.0008249282836914062\n0.028419017791748047\n------------------------------ 4096\n0.00044918060302734375\n0.00740504264831543\n------------------------------ 2048\n0.00021195411682128906\n0.0021741390228271484\n------------------------------ 1024\n0.00011205673217773438\n0.0006101131439208984\n------------------------------ 512\n6.198883056640625e-05\n0.00021386146545410156\n------------------------------ 256\n2.9087066650390625e-05\n8.797645568847656e-05\n------------------------------ 128\n1.5974044799804688e-05\n3.600120544433594e-05\n------------------------------ 64\n8.821487426757812e-06\n1.9073486328125e-05\n------------------------------ 32\n5.0067901611328125e-06\n1.0013580322265625e-05\n------------------------------ 16\n3.0994415283203125e-06\n5.9604644775390625e-06\n------------------------------ 8\n3.0994415283203125e-06\n5.0067901611328125e-06\n------------------------------ 4\n3.0994415283203125e-06\n4.0531158447265625e-06\n------------------------------ 2\n2.1457672119140625e-06\n2.86102294921875e-06\n\n", "Every .pop(0) takes N steps, since the list has to be reorganized. The required memory will not grow endlessly and only be as big as required for the items that are held.\nI'd recommend using deque to get O(1) append and pop from front.\n", "it sounds like a bit of empirical testing might be the best thing to do here - second order issues might make one approach better in practice, even if it's not better in theory.\n" ]
[ 79, 47, 22, 5, 2 ]
[]
[]
[ "list", "memory_leaks", "python" ]
stackoverflow_0001296511_list_memory_leaks_python.txt
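For reference, the drop-in replacement the answers recommend; deque gives O(1) appends and pops at both ends:

    from collections import deque

    queue = deque()
    queue.append("job1")      # enqueue on the right
    queue.append("job2")
    item = queue.popleft()    # dequeue from the left in O(1), unlike list.pop(0)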
Q: Python sendto() not working on 3.1 (works on 2.6) For some reason, the following seems to work perfectly on my ubuntu machine running python 2.6 and returns an error on my windows xp box running python 3.1 from socket import socket, AF_INET, SOCK_DGRAM data = 'UDP Test Data' port = 12345 hostname = '192.168.0.1' udp = socket(AF_INET,SOCK_DGRAM) udp.sendto(data, (hostname, port)) Below is the error that the python 3.1 throws: Traceback (most recent call last): File "sendto.py", line 6, in <module> udp.sendto(data, (hostname, port)) TypeError: sendto() takes exactly 3 arguments (2 given) I have consulted the documentation for python 3.1 and the sendto() only requires two parameters. Any ideas as to what may be causing this? A: In Python 3, the string (first) argument must be of type bytes or buffer, not str. You'll get that error message if you supply the optional flags parameter. Change data to: data = b'UDP Test Data' You might want to file a bug report about that at the python.org bug tracker. [EDIT: already filed as noted by Dav] ... >>> data = 'UDP Test Data' >>> udp.sendto(data, (hostname, port)) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: sendto() takes exactly 3 arguments (2 given) >>> udp.sendto(data, 0, (hostname, port)) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: sendto() argument 1 must be bytes or buffer, not str >>> data = b'UDP Test Data' >>> udp.sendto(data, 0, (hostname, port)) 13 >>> udp.sendto(data, (hostname, port)) 13 A: Related issue on the Python bugtracker: http://bugs.python.org/issue5421
Python sendto() not working on 3.1 (works on 2.6)
For some reason, the following seems to work perfectly on my ubuntu machine running python 2.6 and returns an error on my windows xp box running python 3.1 from socket import socket, AF_INET, SOCK_DGRAM data = 'UDP Test Data' port = 12345 hostname = '192.168.0.1' udp = socket(AF_INET,SOCK_DGRAM) udp.sendto(data, (hostname, port)) Below is the error that the python 3.1 throws: Traceback (most recent call last): File "sendto.py", line 6, in <module> udp.sendto(data, (hostname, port)) TypeError: sendto() takes exactly 3 arguments (2 given) I have consulted the documentation for python 3.1 and the sendto() only requires two parameters. Any ideas as to what may be causing this?
[ "In Python 3, the string (first) argument must be of type bytes or buffer, not str. You'll get that error message if you supply the optional flags parameter. Change data to:\ndata = b'UDP Test Data'\nYou might want to file a bug report about that at the python.org bug tracker. [EDIT: already filed as noted by Dav]\n...\n>>> data = 'UDP Test Data'\n>>> udp.sendto(data, (hostname, port))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: sendto() takes exactly 3 arguments (2 given)\n>>> udp.sendto(data, 0, (hostname, port))\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: sendto() argument 1 must be bytes or buffer, not str\n>>> data = b'UDP Test Data'\n>>> udp.sendto(data, 0, (hostname, port))\n13\n>>> udp.sendto(data, (hostname, port))\n13\n\n", "Related issue on the Python bugtracker:\nhttp://bugs.python.org/issue5421\n" ]
[ 6, 4 ]
[]
[]
[ "python", "sendto", "ubuntu", "udp", "windows" ]
stackoverflow_0001297505_python_sendto_ubuntu_udp_windows.txt
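For completeness, the question's script with the one change Python 3 requires, a bytes payload instead of str:

    from socket import socket, AF_INET, SOCK_DGRAM

    data = b'UDP Test Data'
    port = 12345
    hostname = '192.168.0.1'
    udp = socket(AF_INET, SOCK_DGRAM)
    udp.sendto(data, (hostname, port))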
Q: Python WWW macro i need something like iMacros for Python. It would be great to have something like that: browse_to('www.google.com') type_in_input('search', 'query') click_button('search') list = get_all('<p>') Do you know something like that? Thanks in advance, Etam. A: Almost a direct fulfillment of the wishes in the question - twill. twill is a simple language that allows users to browse the Web from a command-line interface. With twill, you can navigate through Web sites that use forms, cookies, and most standard Web features. twill supports automated Web testing and has a simple Python interface. (pyparsing, mechanize, and BeautifulSoup are included with twill for convenience.) A Python API example: from twill.commands import go, showforms, formclear, fv, submit go('http://issola.caltech.edu/~t/qwsgi/qwsgi-demo.cgi/') go('./widgets') showforms() formclear('1') fv("1", "name", "test") fv("1", "password", "testpass") fv("1", "confirm", "yes") showforms() submit('0') A: Use mechanize. Other than executing JavaScript in a page, it's pretty good. A: Another thing to consider is writing your own script. It's actually not too tough once you get the hang of it, and without invoking a half dozen huge libraries it might even be faster (but I'm not sure). I use a web debugger called "Charles" to surf websites that I want to scrape. It logs all outgoing/incoming http communications, and I use the records to reverse engineer the query strings. Manipulating them in python makes for quite speedy, flexible scraping.
Python WWW macro
I need something like iMacros for Python. It would be great to have something like this: browse_to('www.google.com') type_in_input('search', 'query') click_button('search') list = get_all('<p>') Do you know of something like that? Thanks in advance, Etam.
[ "Almost a direct fulfillment of the wishes in the question - twill.\n\ntwill is a simple language that allows users to browse the Web from a command-line interface. With twill, you can navigate through Web sites that use forms, cookies, and most standard Web features.\ntwill supports automated Web testing and has a simple Python interface.\n\n(pyparsing, mechanize, and BeautifulSoup are included with twill for convenience.)\nA Python API example:\nfrom twill.commands import go, showforms, formclear, fv, submit\n\ngo('http://issola.caltech.edu/~t/qwsgi/qwsgi-demo.cgi/')\ngo('./widgets')\nshowforms()\n\nformclear('1')\nfv(\"1\", \"name\", \"test\")\nfv(\"1\", \"password\", \"testpass\")\nfv(\"1\", \"confirm\", \"yes\")\nshowforms()\n\nsubmit('0')\n\n", "Use mechanize. Other than executing JavaScript in a page, it's pretty good.\n", "Another thing to consider is writing your own script. It's actually not too tough once you get the hang of it, and without invoking a half dozen huge libraries it might even be faster (but I'm not sure). I use a web debugger called \"Charles\" to surf websites that I want to scrape. It logs all outgoing/incoming http communications, and I use the records to reverse engineer the query strings. Manipulating them in python makes for quite speedy, flexible scraping. \n" ]
[ 7, 6, 0 ]
[]
[]
[ "python", "screen_scraping" ]
stackoverflow_0001294862_python_screen_scraping.txt
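A rough mechanize equivalent of the pseudo-code in the question; the form name 'f' and field name 'q' for Google's search page are assumptions:

    import mechanize
    from BeautifulSoup import BeautifulSoup

    br = mechanize.Browser()
    br.open('http://www.google.com')
    br.select_form(name='f')          # assumed name of the search form
    br['q'] = 'query'                 # assumed name of the search input
    response = br.submit()
    paragraphs = BeautifulSoup(response.read()).findAll('p')   # the get_all('<p>') step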
Q: Python class method - Is there a way to make the calls shorter? I am playing around with Python, and I've created a class in a different package from the one calling it. In this class, I've added a class method which is being called from my main function. Again, they are in separate packages. The line to call the class method is much longer than I thought it would be from the examples I've seen in other places. These examples tend to call class methods from within the same package - thus shortening the calling syntax. Here's an example that I hope helps: In a 'config' package: class TestClass : memberdict = { } @classmethod def add_key( clazz, key, value ) : memberdict[ key ] = value Now in a different package named 'test': import sys import config.TestClass def main() : config.TestClass.TestClass.add_key( "mykey", "newvalue" ) return 0 if __name__ == "__main__" : sys.exit( main() ) You can see how 'config.TestClass.TestClass.add_key' is much more verbose than normal class method calls. Is there a way to make it shorter? Maybe 'TestClass.add_key'? Am I defining something in a strange way (Case of the class matching the python file name?) A: from config.TestClass import TestClass TestClass.add_key( "mykey", "newvalue" )
Python class method - Is there a way to make the calls shorter?
I am playing around with Python, and I've created a class in a different package from the one calling it. In this class, I've added a class method which is being called from my main function. Again, they are in separate packages. The line to call the class method is much longer than I thought it would be from the examples I've seen in other places. These examples tend to call class methods from within the same package - thus shortening the calling syntax. Here's an example that I hope helps: In a 'config' package: class TestClass : memberdict = { } @classmethod def add_key( clazz, key, value ) : memberdict[ key ] = value Now in a different package named 'test': import sys import config.TestClass def main() : config.TestClass.TestClass.add_key( "mykey", "newvalue" ) return 0 if __name__ == "__main__" : sys.exit( main() ) You can see how 'config.TestClass.TestClass.add_key' is much more verbose than normal class method calls. Is there a way to make it shorter? Maybe 'TestClass.add_key'? Am I defining something in a strange way (Case of the class matching the python file name?)
[ "from config.TestClass import TestClass\nTestClass.add_key( \"mykey\", \"newvalue\" )\n\n" ]
[ 13 ]
[]
[]
[ "class_method", "python" ]
stackoverflow_0001297583_class_method_python.txt
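Note that the classmethod body as posted would raise a NameError at runtime: inside add_key, memberdict has to be reached through the class argument. A corrected sketch:

    class TestClass:
        memberdict = {}

        @classmethod
        def add_key(clazz, key, value):
            clazz.memberdict[key] = value   # qualify with the class, not a bare name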
Q: Writing a tail -f python script that doesn't utilize 100% CPU Here's a simple implementation of tail -f written in python. The problem with this is with the looping nature, this script likes to hog a lot of the CPU time. If it's something one would like to run as a forked process/daemon, it would be an inefficient start. What's a solution to have a CPU efficient tail -f written in Python? # CPU inefficient tail -f def main(): filename = '/tmp/foo' file = open(filename, 'r') st_results = os.stat(filename) st_size = st_results[6] file.seek(st_size) lastmatch = 0 while 1: where = file.tell() line = file.readline() if not line: time.sleep(1) file.seek(where) else: print line, A: I don't know of an implementation that's going to be extremely efficient AND portable between Windows on one side, and just about every other system on the other. Just about everywhere, I'd use the select module of the standard library (which can be based on system level functionality such as select, kqueue, etc) to get woken up when and only when there are changes to a file, without polling; but on Windows select can only work on sockets. So, on Windows, I'd use directory change notifications instead (e.g. via ctypes or win32).
Writing a tail -f python script that doesn't utilize 100% CPU
Here's a simple implementation of tail -f written in python. The problem with this is with the looping nature, this script likes to hog a lot of the CPU time. If it's something one would like to run as a forked process/daemon, it would be an inefficient start. What's a solution to have a CPU efficient tail -f written in Python? # CPU inefficient tail -f def main(): filename = '/tmp/foo' file = open(filename, 'r') st_results = os.stat(filename) st_size = st_results[6] file.seek(st_size) lastmatch = 0 while 1: where = file.tell() line = file.readline() if not line: time.sleep(1) file.seek(where) else: print line,
[ "I don't know of an implementation that's going to be extremely efficient AND portable between Windows on one side, and just about every other system on the other. Just about everywhere, I'd use the select module of the standard library (which can be based on system level functionality such as select, kqueue, etc) to get woken up when and only when there are changes to a file, without polling; but on Windows select can only work on sockets. So, on Windows, I'd use directory change notifications instead (e.g. via ctypes or win32).\n" ]
[ 0 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0001297563_algorithm_python.txt
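Short of a platform-specific notification mechanism, the portable fix is to keep the sleep and do as little as possible between polls; a sketch that checks os.stat before reading, so the loop idles at roughly zero CPU:

    import os
    import time

    def tail(filename, interval=1.0):
        f = open(filename, 'r')
        f.seek(0, 2)                                # start at end of file
        while True:
            where = f.tell()
            if os.stat(filename).st_size > where:   # only read when the file grew
                for line in f.readlines():
                    print line,
            else:
                time.sleep(interval)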
Q: Why there is a difference in "import" vs. "import *"? """module a.py""" test = "I am test" _test = "I am _test" __test = "I am __test" ============= ~ $ python Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from a import * >>> test 'I am test' >>> _test Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name '_test' is not defined >>> __test Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name '__test' is not defined >>> import a >>> a.test 'I am test' >>> a._test 'I am _test' >>> a.__test 'I am __test' >>> A: Variables with a leading "_" (underbar) are not public names and will not be imported when from x import * is used. Here, _test and __test are not public names. From the import statement description: If the list of identifiers is replaced by a star ('*'), all public names defined in the module are bound in the local namespace of the import statement.. The public names defined by a module are determined by checking the module’s namespace for a variable named __all__; if defined, it must be a sequence of strings which are names defined or imported by that module. The names given in __all__ are all considered public and are required to exist. If __all__ is not defined, the set of public names includes all names found in the module’s namespace which do not begin with an underscore character ('_'). __all__ should contain the entire public API. It is intended to avoid accidentally exporting items that are not part of the API (such as library modules which were imported and used within the module).
Why there is a difference in "import" vs. "import *"?
"""module a.py""" test = "I am test" _test = "I am _test" __test = "I am __test" ============= ~ $ python Python 2.6.2 (r262:71600, Apr 16 2009, 09:17:39) [GCC 4.0.1 (Apple Computer, Inc. build 5250)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from a import * >>> test 'I am test' >>> _test Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name '_test' is not defined >>> __test Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name '__test' is not defined >>> import a >>> a.test 'I am test' >>> a._test 'I am _test' >>> a.__test 'I am __test' >>>
[ "Variables with a leading \"_\" (underbar) are not public names and will not be imported when from x import * is used.\nHere, _test and __test are not public names.\nFrom the import statement description:\n\nIf the list of identifiers is replaced\n by a star ('*'), all public names\n defined in the module are bound in the\n local namespace of the import\n statement..\nThe public names defined by a module\n are determined by checking the\n module’s namespace for a variable\n named __all__; if defined, it must be\n a sequence of strings which are names\n defined or imported by that module.\n The names given in __all__ are all\n considered public and are required to\n exist. If __all__ is not defined, the\n set of public names includes all names\n found in the module’s namespace which\n do not begin with an underscore\n character ('_'). __all__ should\n contain the entire public API. It is\n intended to avoid accidentally\n exporting items that are not part of\n the API (such as library modules which\n were imported and used within the\n module).\n\n" ]
[ 21 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001297766_import_python.txt
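A variant of module a.py with an explicit __all__, showing that it overrides the underscore convention in both directions: names listed are exported by *, everything else is skipped.

    """module a.py, revisited"""
    __all__ = ['test', '_test']   # _test is now exported by *, __test still is not

    test = "I am test"
    _test = "I am _test"
    __test = "I am __test"
    other = "public-looking, but absent from __all__, so * skips it"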
Q: SQLAlchemy(Postgres) and transaction I want a record from a table(queue) to be selected, locked(no other process can edit this record) and updated at a later point in time. I assumed if I put the whole querying and updating in a transaction, no other process can edit/query the same record. But I am not quite able to achieve this. def move(one, two): from settings import DATABASE_USER, DATABASE_PASSWORD, DATABASE_HOST, DATABASE_PORT, DATABASE_NAME from sqlalchemy.orm import sessionmaker, scoped_session from sqlalchemy import create_engine engine = create_engine('postgres://%s:%s@%s:%s/%s' % (DATABASE_USER, DATABASE_PASSWORD, DATABASE_HOST, DATABASE_PORT, DATABASE_NAME), echo = False) conn = engine.connect() tran = conn.begin() Session = scoped_session(sessionmaker()) session = Session(bind=conn) url = session.query(URLQueue).filter(URLQueue.status == one).first() print "Got record: " + str(url.urlqueue_id) time.sleep(5) url.status = two session.merge(url) session.close() tran.commit() move('START', 'WIP') If I start 2 process, they both update the same record. I am not sure if I created the connections/sessions/transactions properly. Any pointers? A: Either make your transaction isolation level serializable, or fetch the record for updating via query.with_lockmode('update').
SQLAlchemy(Postgres) and transaction
I want a record from a table (queue) to be selected, locked (no other process can edit this record) and updated at a later point in time. I assumed that if I put the whole querying and updating in a transaction, no other process could edit/query the same record. But I am not quite able to achieve this. def move(one, two): from settings import DATABASE_USER, DATABASE_PASSWORD, DATABASE_HOST, DATABASE_PORT, DATABASE_NAME from sqlalchemy.orm import sessionmaker, scoped_session from sqlalchemy import create_engine engine = create_engine('postgres://%s:%s@%s:%s/%s' % (DATABASE_USER, DATABASE_PASSWORD, DATABASE_HOST, DATABASE_PORT, DATABASE_NAME), echo = False) conn = engine.connect() tran = conn.begin() Session = scoped_session(sessionmaker()) session = Session(bind=conn) url = session.query(URLQueue).filter(URLQueue.status == one).first() print "Got record: " + str(url.urlqueue_id) time.sleep(5) url.status = two session.merge(url) session.close() tran.commit() move('START', 'WIP') If I start two processes, they both update the same record. I am not sure if I created the connections/sessions/transactions properly. Any pointers?
[ "Either make your transaction isolation level serializable, or fetch the record for updating via query.with_lockmode('update').\n" ]
[ 1 ]
[]
[]
[ "postgresql", "python", "sqlalchemy" ]
stackoverflow_0001296994_postgresql_python_sqlalchemy.txt
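A sketch of the second suggestion applied to the question's query: SELECT ... FOR UPDATE via Query.with_lockmode, so a second transaction blocks on the locked row until the first commits.

    url = (session.query(URLQueue)
                  .filter(URLQueue.status == one)
                  .with_lockmode('update')   # row-level lock held until commit
                  .first())
    url.status = two
    session.commit()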
Q: Python: Which modules for a discussion site? A site should be ready in 6 days. I am not allowed to use any framework such as Django. I am going to use: Python modules HTMLGen to generate HTML code from class-based description SQLObject, relational tables onto Python's class model ? Other Python 2.5 A variant of the Postgres schema Super Smack for testing the schema Which modules would you use in the limited time? Plan To generate class model with SQLOject from the schema then generate HTML code from the gererated class model with HTMLGen. (changed to Jinja2) ? A: How about Jinja for templating? It will be much faster than working with autogenerated html. http://pypi.python.org/pypi/Jinja2/2.0 A: I think TurboGears started out as a project to collect best-of-breed packages together with some glue code to stitch them together. I think the latest incarnation uses Pylons, but perhaps only for the controller. At the very least, you can see the TurboGears Wikipedia entry to see what components they selected (see the subsections TurboGears 1.x components and TurboGears 2.x components), since they've obviously had some experience with this kind of thing. There's nothing "discussion" specific, but really you just want a templating library, a database library or ORM, a WSGI implementation with a router/controller and perhaps some AJAXy or other presentation widgets.
Python: Which modules for a discussion site?
A site should be ready in 6 days. I am not allowed to use any framework such as Django. I am going to use: Python modules HTMLGen to generate HTML code from class-based description SQLObject, relational tables onto Python's class model ? Other Python 2.5 A variant of the Postgres schema Super Smack for testing the schema Which modules would you use in the limited time? Plan To generate the class model with SQLObject from the schema, then generate HTML code from the generated class model with HTMLGen. (changed to Jinja2) ?
[ "How about Jinja for templating? It will be much faster than working with autogenerated html.\nhttp://pypi.python.org/pypi/Jinja2/2.0\n", "I think TurboGears started out as a project to collect best-of-breed packages together with some glue code to stitch them together. I think the latest incarnation uses Pylons, but perhaps only for the controller. At the very least, you can see the TurboGears Wikipedia entry to see what components they selected (see the subsections TurboGears 1.x components and TurboGears 2.x components), since they've obviously had some experience with this kind of thing. There's nothing \"discussion\" specific, but really you just want a templating library, a database library or ORM, a WSGI implementation with a router/controller and perhaps some AJAXy or other presentation widgets.\n" ]
[ 3, 1 ]
[]
[]
[ "module", "postgresql", "python", "web" ]
stackoverflow_0001297350_module_postgresql_python_web.txt
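A minimal Jinja2 sketch of the templating step, in place of HTMLGen; post_list stands in for whatever the SQLObject layer returns:

    from jinja2 import Template

    tmpl = Template('<ul>{% for post in posts %}<li>{{ post.title }}</li>{% endfor %}</ul>')
    html = tmpl.render(posts=post_list)   # post_list: assumed result of the SQLObject query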
Q: Is there an easy way to use a python tempfile in a shelve (and make sure it cleans itself up)? Basically, I want an infinite size (more accurately, hard-drive rather than memory bound) dict in a python program I'm writing. It seems like the tempfile and shelve modules are naturally suited for this, however, I can't see how to use them together in a safe manner. I want the tempfile to be deleted when the shelve is GCed (or at guarantee deletion after the shelve is out of use, regardless of when), but the only solution I can come up with for this involves using tempfile.TemporaryFile() to open a file handle, getting the filename from the handle, using this filename for opening a shelve, keeping the reference to the file handle to prevent it from getting GCed (and the file deleted), and then putting a wrapper on the shelve that stores this reference. Anyone have a better solution than this convoluted mess? Restrictions: Can only use the standard python library and must be fully cross platform. A: I would rather inherit from shelve.Shelf, and override the close method (*) to unlink the files. Notice that, depending on the specific dbm module being used, you may have more than one file that contains the shelf. One solution could be to create a temporary directory, rather than a temporary file, and remove anything in the directory when done. The other solution would be to bind to a specific dbm module (say, bsddb, or dumbdbm), and remove specifically those files that these libraries create. (*) notice that the close method of a shelf is also called when the shelf is garbage collected. The only case how you could end up with garbage files is when the interpreter crashes or gets killed.
Is there an easy way to use a python tempfile in a shelve (and make sure it cleans itself up)?
Basically, I want an infinite size (more accurately, hard-drive rather than memory bound) dict in a Python program I'm writing. It seems like the tempfile and shelve modules are naturally suited for this; however, I can't see how to use them together in a safe manner. I want the tempfile to be deleted when the shelve is GCed (or at least guarantee deletion after the shelve is out of use, regardless of when), but the only solution I can come up with for this involves using tempfile.TemporaryFile() to open a file handle, getting the filename from the handle, using this filename for opening a shelve, keeping the reference to the file handle to prevent it from getting GCed (and the file deleted), and then putting a wrapper on the shelve that stores this reference. Anyone have a better solution than this convoluted mess? Restrictions: Can only use the standard python library and must be fully cross platform.
[ "I would rather inherit from shelve.Shelf, and override the close method (*) to unlink the files. Notice that, depending on the specific dbm module being used, you may have more than one file that contains the shelf. One solution could be to create a temporary directory, rather than a temporary file, and remove anything in the directory when done. The other solution would be to bind to a specific dbm module (say, bsddb, or dumbdbm), and remove specifically those files that these libraries create.\n(*) notice that the close method of a shelf is also called when the shelf is garbage collected. The only case how you could end up with garbage files is when the interpreter crashes or gets killed.\n" ]
[ 1 ]
[]
[]
[ "python", "shelve", "temporary_files" ]
stackoverflow_0001298037_python_shelve_temporary_files.txt
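A sketch of the subclass the answer describes, for Python 2.x: the shelf lives in its own temporary directory, and close() (which Shelf also runs when the shelf is garbage collected) removes the whole directory:

    import anydbm
    import os
    import shelve
    import shutil
    import tempfile

    class TemporaryShelf(shelve.Shelf):
        def __init__(self):
            self._tempdir = tempfile.mkdtemp()   # private directory for all dbm files
            path = os.path.join(self._tempdir, 'shelf')
            shelve.Shelf.__init__(self, anydbm.open(path, 'c'))

        def close(self):
            shelve.Shelf.close(self)
            shutil.rmtree(self._tempdir, ignore_errors=True)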
Q: Passing kwargs from template to view? As you may be able to tell from my questions, I'm new to both python and django. I would like to allow dynamic filter specifications of query sets from my templates using **kwargs. I'm thinking like a select box of a bunch of kwargs. For example: <select id="filter"> <option value="physician__isnull=True">Unassigned patients</option> </select> Does django provide an elegant solution to this problem that I haven't come across yet? I'm trying to solve this in a generic manner since I need to pass this filter to other views. For example, I need to pass a filter to a paginated patient list view, so the pagination knows what items it's working with. Another example is this filter would have to be passed to a patient detail page so you can iterate through the filtered list of patients with prev/next links. Thanks a bunch, Pete Update: What I came up with was building a FilterSpecification class: class FilterSpec(object): def __init__(self, name, *args): super(FilterSpec, self).__init__() self.name = name self.filters = [] for filter in args: self.add(filter) def pickle(self): return encrypt(pickle.dumps(self)) def add(self, f): self.filters.append(f) def kwargs(self): kwargs = {} for f in self.filters: kwargs = f.kwarg(**kwargs) return kwargs def __unicode__(self): return self.name class Filter(object): def __init__(self, key, value): super(Filter, self).__init__() self.filter_key = key self.filter_value = value def kwarg(self, **kwargs): if self.filter_key != None: kwargs[self.filter_key] = self.filter_value return kwargs I then can filter any type of model like this: filterSpec = FilterSpec('Assigned', Filter('service__isnull', False))) patients = Patient.objects.filter(**filterSpec.kwargs()) I pass these filterSpec objects from the client to server by serializing, compressing, applying some symmetric encryption, and url-safe base-64 encoding. The only downside is that you end up with URLs looking like this: http://127.0.0.1:8000/hospitalists/assign_test/?filter=eJwBHQHi_iDiTrccFpHA4It7zvtNIW5nUdRAxdiT-cZStYhy0PHezZH2Q7zmJB-NGAdYY4Q60Tr_gT_Jjy_bXfB6iR8inrNOVkXKVvLz3SCVrCktGc4thePSNAKoBtJHkcuoaf9YJA5q9f_1i6uh45-6k7ZyXntRu5CVEsm0n1u5T1vdMwMnaNA8QzYk4ecsxJRSy6SMbUHIGhDiwHHj1UnQaOWtCSJEt2zVxaurMuCRFT2bOKlj5nHfXCBTUCh4u3aqZZjmSd2CGMXZ8Pn3QGBppWhZQZFztP_1qKJaqSVeTNnDWpehbMvqabpivtnFTxwszJQw9BMcCBNTpvJf3jUGarw_dJ89VX12LuxALsketkPbYhXzXNxTK1PiZBYqGfBbioaYkjo%3D I would love to get some comments on this approach and hear other solutions. A: Rather than face the horrible dangers of SQL injection, why not just assign a value to each select option and have your form-handling view run the selected query based on the value. Passing the parameters for a DB query from page to view is just asking for disaster. Django is built to avoid this sort of thing. A: Concerning your update: FilterSpecs are unfortunately one of those (rare) pieces of Django that lack public documentation. As such, there is no guarantee that they will keep working as they do. Another approach would be to use Alex Gaynor's django-filter which look really well thought out. I'll be using them for my next project.
Passing kwargs from template to view?
As you may be able to tell from my questions, I'm new to both python and django. I would like to allow dynamic filter specifications of query sets from my templates using **kwargs. I'm thinking like a select box of a bunch of kwargs. For example: <select id="filter"> <option value="physician__isnull=True">Unassigned patients</option> </select> Does django provide an elegant solution to this problem that I haven't come across yet? I'm trying to solve this in a generic manner since I need to pass this filter to other views. For example, I need to pass a filter to a paginated patient list view, so the pagination knows what items it's working with. Another example is this filter would have to be passed to a patient detail page so you can iterate through the filtered list of patients with prev/next links. Thanks a bunch, Pete Update: What I came up with was building a FilterSpecification class: class FilterSpec(object): def __init__(self, name, *args): super(FilterSpec, self).__init__() self.name = name self.filters = [] for filter in args: self.add(filter) def pickle(self): return encrypt(pickle.dumps(self)) def add(self, f): self.filters.append(f) def kwargs(self): kwargs = {} for f in self.filters: kwargs = f.kwarg(**kwargs) return kwargs def __unicode__(self): return self.name class Filter(object): def __init__(self, key, value): super(Filter, self).__init__() self.filter_key = key self.filter_value = value def kwarg(self, **kwargs): if self.filter_key != None: kwargs[self.filter_key] = self.filter_value return kwargs I then can filter any type of model like this: filterSpec = FilterSpec('Assigned', Filter('service__isnull', False)) patients = Patient.objects.filter(**filterSpec.kwargs()) I pass these filterSpec objects from the client to server by serializing, compressing, applying some symmetric encryption, and url-safe base-64 encoding. The only downside is that you end up with URLs looking like this: http://127.0.0.1:8000/hospitalists/assign_test/?filter=eJwBHQHi_iDiTrccFpHA4It7zvtNIW5nUdRAxdiT-cZStYhy0PHezZH2Q7zmJB-NGAdYY4Q60Tr_gT_Jjy_bXfB6iR8inrNOVkXKVvLz3SCVrCktGc4thePSNAKoBtJHkcuoaf9YJA5q9f_1i6uh45-6k7ZyXntRu5CVEsm0n1u5T1vdMwMnaNA8QzYk4ecsxJRSy6SMbUHIGhDiwHHj1UnQaOWtCSJEt2zVxaurMuCRFT2bOKlj5nHfXCBTUCh4u3aqZZjmSd2CGMXZ8Pn3QGBppWhZQZFztP_1qKJaqSVeTNnDWpehbMvqabpivtnFTxwszJQw9BMcCBNTpvJf3jUGarw_dJ89VX12LuxALsketkPbYhXzXNxTK1PiZBYqGfBbioaYkjo%3D I would love to get some comments on this approach and hear other solutions.
[ "Rather than face the horrible dangers of SQL injection, why not just assign a value to each select option and have your form-handling view run the selected query based on the value.\nPassing the parameters for a DB query from page to view is just asking for disaster. Django is built to avoid this sort of thing.\n", "Concerning your update: FilterSpecs are unfortunately one of those (rare) pieces of Django that lack public documentation. As such, there is no guarantee that they will keep working as they do.\nAnother approach would be to use Alex Gaynor's django-filter which look really well thought out. I'll be using them for my next project.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001269544_django_python.txt
Q: Running a plain python interpreter in presence of ipython with manage.py shell I have ipython installed, I want to run a plain python interpreter instead with manage.py shell. So I try, python2.5 manage.py shell --plain Which gave me an error, and text which suggests that --plain was passed to ipython So I read, http://docs.djangoproject.com/en/dev/ref/django-admin/ which suggests django-admin.py shell --plain Which gives me Error: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined. Which seems the correct thing for it to do. What am I missing here? [Ubuntu Jaunty, django.VERSION = (1, 2, 0, 'alpha', 0), python 2.5 and 2.6] A: If the reason you want to use python's interpreter over iPython's is because you need to paste the doc tests, you can try typing %doctest_mode in the ipython console instead In [1]: %doctest_mode *** Pasting of code with ">>>" or "..." has been enabled. Exception reporting mode: Plain Doctest mode is: ON >>> A: link to a blog post explaining the same thing for bpython
Running a plain python interpreter in presence of ipython with manage.py shell
I have ipython installed, I want to run a plain python interpreter instead with manage.py shell. So I try, python2.5 manage.py shell --plain Which gave me an error, and text which suggests that --plain was passed to ipython So I read, http://docs.djangoproject.com/en/dev/ref/django-admin/ which suggests django-admin.py shell --plain Which gives me Error: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined. Which seems the correct thing for it to do. What am I missing here? [Ubuntu Jaunty, django.VERSION = (1, 2, 0, 'alpha', 0), python 2.5 and 2.6]
[ "If the reason you want to use python's interpretor over iPython's is because you need to paste the doc tests, you can try typing\n%doctest_mode\n\nin the ipython console instead\nIn [1]: %doctest_mode\n*** Pasting of code with \">>>\" or \"...\" has been enabled.\nException reporting mode: Plain\nDoctest mode is: ON\n>>> \n\n", "link to a blog post explaining the same thing for bpython\n" ]
[ 1, 0 ]
[]
[]
[ "django", "django_manage.py", "python" ]
stackoverflow_0001295492_django_django_manage.py_python.txt
Q: --home or --prefix in python package install? When you build and install a python package, you have two choices: --home and --prefix. I never really got the difference between the two (I always use --home) but if I understood correctly one is deprecated and the other is "the way to go"™. Am I wrong? A: According to the Installing Python Modules documentation, the "standard" way is to specify neither, and to let Python install it in either /usr/local/lib/pythonX.Y/site-packages on *nix or C:\Python\ on Windows. But, if you do decide to go for an alternate method, you can specify --home to name the base installation directory, typically when you want to store multiple packages in just your own directory, usually on a multi-user machine when you don't have admin access, or perhaps for just testing before a system-wide install. --home is not deprecated; in fact, it was only added to Windows as of Python 2.4. The --prefix option is more strange, because this lets you use one version of Python to build the module you're installing, while letting you install the module to a different location from normal. Another example is when you have to write to a directory with one name, while reading from it with another name (some network shares are set up this way). So the --home prefix specifies home/lib/python, home/bin, home/share, while the --prefix option specifies prefix/lib/pythonX.Y/site-packages/, prefix/bin, prefix/share on *nix and prefix/Scripts and prefix/Data on Windows.
--home or --prefix in python package install?
When you build and install a python package, you have two choices: --home and --prefix. I never really got the difference between the two (I always use --home) but if I understood correctly one is deprecated and the other is "the way to go"™. Am I wrong?
[ "According to the Installing Python Modules documentation, the \"standard\" way is to specify neither, and to let Python install it in either /usr/local/lib/pythonX.Y/site-packages on *nix or C:\\Python\\ on Windows.\nBut, if you do decide to go for an alternate method, you can specify --home to name the base installation directory, typically when you want to store multiple packages in just your own directory, usually on a multi-user machine when you don't have admin access, or perhaps for just testing before a system-wide install. --home is not deprecated; in fact, it was only added to Windows as of Python 2.4.\nThe --prefix option is more strange, because this lets you use one version of Python to build the module you're installing, while letting you install the module to a different location from normal. Another example is when you have to write to a directory with one name, while reading from it with another name (some network shares are set up this way).\nSo the --home prefix specifies home/lib/python, home/bin, home/share, while the --prefix option specifies prefix/lib/pythonX.Y/site-packages/, prefix/bin, prefix/share on *nix and prefix/Scripts and prefix/Data on Windows.\n" ]
[ 4 ]
[]
[]
[ "installation", "python" ]
stackoverflow_0001298036_installation_python.txt
Q: Py2Exe - "The application configuration is incorrect." I've compiled my Python program using Py2Exe, and on the client's computer we've satisfied all the dependencies using dependency walker, but we still get "The application configuration is incorrect. Reinstalling the application may correct the problem." I'm also using wxPython. The client does not have administrator access. Any ideas? A: Give GUI2exe a shot; it's developed by Andrea Gavana who's big in the wxpython community and wraps a bunch of the freezers, including py2exe. It's likely a dll issue, try searching the wxpython list archive. This thread may be of use. A: I've ran into this myself and my random Googling has pointed me to several people saying to downgrade python 2.6 to 2.5, which worked for me. A: Just ran into this same issue with python 2.6, PyQt and py2exe. The root cause was a missing dependency, resolved by installing the visual studio 2008 SP1 redist(x86). A: I have run into similar problem few minutes ago. I couldn't run py2exe installation file, it kept saying that application configuration was incorrect. Downgrading to python 2.5 didn't work for me because I used 'with' statements through out the code and didn't want to change it. I reinstalled python 2.6 and I checked the option that says that anyone on the computer can use python. Worked out just fine.
Py2Exe - "The application configuration is incorrect."
I've compiled my Python program using Py2Exe, and on the client's computer we've satisfied all the dependencies using dependency walker, but we still get "The application configuration is incorrect. Reinstalling the application may correct the problem." I'm also using wxPython. The client does not have administrator access. Any ideas?
[ "Give GUI2exe a shot; it's developed by Andrea Gavana who's big in the wxpython community and wraps a bunch of the freezers, including py2exe. It's likely a dll issue, try searching the wxpython list archive. This thread may be of use.\n", "I've ran into this myself and my random Googling has pointed me to several people saying to downgrade python 2.6 to 2.5, which worked for me.\n", "Just ran into this same issue with python 2.6, PyQt and py2exe. The root cause was a missing dependency, resolved by installing the visual studio 2008 SP1 redist(x86).\n", "I have run into similar problem few minutes ago. I couldn't run py2exe installation file, it kept saying that application configuration was incorrect. Downgrading to python 2.5 didn't work for me because I used 'with' statements through out the code and didn't want to change it. \nI reinstalled python 2.6 and I checked the option that says that anyone on the computer can use python. Worked out just fine.\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "compilation", "py2exe", "python", "wxpython" ]
stackoverflow_0000441256_compilation_py2exe_python_wxpython.txt
Q: Match all urls that aren't wrapped into an <a> tag I am seeking a regular expression pattern that could match urls in HTML that aren't wrapped into an 'a' tag, in order to wrap them into an 'a' tag further (i.e. highlight all non-highlighted links). Input is simple HTML with 'a', 'b', 'i', 'br', 'p', 'img' tags allowed. All other HTML tags shouldn't appear in the input, but the tags mentioned above could appear in any combination. So the pattern should omit all urls that are parts of existing 'a' tags, and match all other links that are just plain text not wrapped into 'a' tags and thus are not highlighted and are not hyperlinks yet. It would be good if the pattern matched urls beginning with http://, https:// or www., and ending with .net, .com, or .org if the url doesn't begin with http://, https:// or www. I've tried something like '(?!<[aA][^>]+>)http://[a-zA-Z0-9._-]+(?!)' to match a simpler case than I described above, but it seems that this task is not so obvious. Thanks much for any help. A: You could use BeautifulSoup or similar to exclude all urls that are already part of links. Then you can match the plain text with one of the url regular expressions that's already out there (google "url regular expression", which one you want depends on how fancy you want to get). A: Parsing HTML with a single regex is almost impossible by definition, since regexes don't have state. Build/Use a real parser instead. Maybe BeautifulSoup or html5lib. This code below uses BeautifulSoup to extract all links from the page: from BeautifulSoup import BeautifulSoup from urllib2 import urlopen url = 'http://stackoverflow.com/questions/1296778/' stream = urlopen(url) soup = BeautifulSoup(stream) for link in soup.findAll('a'): if link.has_key('href'): print unicode(link.string), '->', link['href'] Similarly you could find all text using soup.findAll(text=True) and search for urls there. Searching for urls is also very complex - you wouldn't believe what's allowed in a URL. A simple search shows thousands of examples, but none match exactly the specs. You should try what works best for you.
Match all urls that aren't wrapped into an <a> tag
I am seeking a regular expression pattern that could match urls in HTML that aren't wrapped into an 'a' tag, in order to wrap them into an 'a' tag further (i.e. highlight all non-highlighted links). Input is simple HTML with 'a', 'b', 'i', 'br', 'p', 'img' tags allowed. All other HTML tags shouldn't appear in the input, but the tags mentioned above could appear in any combination. So the pattern should omit all urls that are parts of existing 'a' tags, and match all other links that are just plain text not wrapped into 'a' tags and thus are not highlighted and are not hyperlinks yet. It would be good if the pattern matched urls beginning with http://, https:// or www., and ending with .net, .com, or .org if the url doesn't begin with http://, https:// or www. I've tried something like '(?!<[aA][^>]+>)http://[a-zA-Z0-9._-]+(?!)' to match a simpler case than I described above, but it seems that this task is not so obvious. Thanks much for any help.
[ "You could use BeautifulSoup or similar to exclude all urls that are already part of links. \nThen you can match the plain text with one of the url regular expressions that's already out there (google \"url regular expression\", which one you want depends on how fancy you want to get).\n", "Parsing HTML with a single regex is almost impossible by definition, since regexes don't have state.\nBuild/Use a real parser instead. Maybe BeautifulSoup or html5lib.\nThis code below uses BeautifulSoup to extract all links from the page:\nfrom BeautifulSoup import BeautifulSoup\nfrom urllib2 import urlopen\n\nurl = 'http://stackoverflow.com/questions/1296778/'\nstream = urlopen(url)\nsoup = BeautifulSoup(stream)\nfor link in soup.findAll('a'):\n if link.has_key('href'):\n print unicode(link.string), '->', link['href']\n\nSimilarly you could find all text using soup.findAll(text=True) and search for urls there.\nSearching for urls is also very complex - you wouldn't believe on what's allowed on a url. A simple search shows thousands of examples, but none match exactly the specs. You should try what works better for you.\n" ]
[ 5, 5 ]
[ "Thanks guys! Below is my solution:\nfrom django.utils.html import urlize # Yes, I am using Django's urlize to do all dirty work :)\n\ndef urlize_html(value):\n \"\"\"\n Urlizes text containing simple HTML tags.\n \"\"\"\n A_IMG_REGEX = r'(<[aA][^>]+>[^<]+</[aA]>|<[iI][mM][gG][^>]+>)'\n a_img_re = re.compile(A_IMG_REGEX)\n\n TAG_REGEX = r'(<[a-zA-Z]+[^>]+>|</[a-zA-Z]>)'\n tag_re = re.compile(TAG_REGEX)\n\n def process(s, p, f):\n return \"\".join([c if p.match(c) else f(c) for c in p.split(s)])\n\n def process_urlize(s):\n return process(s, tag_re, urlize)\n\n return process(value, a_img_re, process_urlize)\n\n" ]
[ -2 ]
[ "python", "regex" ]
stackoverflow_0001296778_python_regex.txt
Q: Handling international dates in python I have a date that is either in German, e.g. 2. Okt. 2009 and also perhaps as 2. Oct. 2009 How do I convert this into an ISO datetime (or Python datetime)? Solved by using this snippet: for l in locale.locale_alias: worked = False try: locale.setlocale(locale.LC_TIME, l) worked = True except: worked = False if worked: print l And then plugging in the appropriate value for the parameter l in setlocale. Can parse using import datetime print datetime.datetime.strptime("09. Okt. 2009", "%d. %b. %Y") A: http://docs.python.org/library/locale.html The datetime module is already locale-aware. It's something like the following # German locale loc = locale.setlocale(locale.LC_TIME, ("de","de")) try: date = datetime.datetime.strptime(input, "%d. %b. %Y") except: # English locale loc = locale.setlocale(locale.LC_TIME, ("en","us")) date = datetime.datetime.strptime(input, "%d. %b. %Y") A: Very minor point about your code snippet: I'm no Python expert but I'd consider the whole "flag to check for success + silently swallowing all exceptions" to be bad style. try/except/else does what you want in a cleaner way, I think: for l in locale.locale_alias: try: locale.setlocale(locale.LC_TIME, l) except locale.Error: # the doc says setlocale should throw this on failure pass else: print l
Handling international dates in python
I have a date that is either in German, e.g. 2. Okt. 2009 and also perhaps as 2. Oct. 2009 How do I convert this into an ISO datetime (or Python datetime)? Solved by using this snippet: for l in locale.locale_alias: worked = False try: locale.setlocale(locale.LC_TIME, l) worked = True except: worked = False if worked: print l And then plugging in the appropriate value for the parameter l in setlocale. Can parse using import datetime print datetime.datetime.strptime("09. Okt. 2009", "%d. %b. %Y")
[ "http://docs.python.org/library/locale.html\nThe datetime module is already locale-aware.\nIt's something like the following\n# German locale\nloc = locale.setlocale(locale.LC_TIME, (\"de\",\"de\"))\ntry:\n date = datetime.date.strptime(input, \"%d. %b. %Y\")\nexcept:\n # English locale\n loc = locale.setlocale(locale.LC_TIME, (\"en\",\"us\"))\n date = datetime.date.strptime(input, \"%d. %b. %Y\")\n \n\n", "Very minor point about your code snippet: I'm no Python expert but I'd consider the whole \"flag to check for success + silently swallowing all exceptions\" to be bad style.\ntry/expect/else does what you want in a cleaner way, I think:\nfor l in locale.locale_alias:\n try:\n locale.setlocale(locale.LC_TIME, l)\n except locale.Error: # the doc says setlocale should throw this on failure\n pass\n else:\n print l\n\n" ]
[ 11, 3 ]
[]
[]
[ "datetime", "internationalization", "locale", "python" ]
stackoverflow_0001299377_datetime_internationalization_locale_python.txt
Q: Python re question - sub challenge I want to add <a href> links to all words prefixed with # or ! or @ If this is the text Check the #bamboo and contact @Fred re #bamboo #garden should be converted to: Check the <a href="/what/bamboo">#bamboo</a> and contact <a href="/who/fred">@Fred</a> re <a href="/what/bamboo">#bamboo</a> <a href="/what/garden">#garden</a> Note that # and @ go to different places. This is as far as I have got, just doing the hashes... matched = re.sub("[#](?P<keyword>\w+)", \ '<a href="/what/(?P=keyword)">(?P=keyword)</a>', \ text) Any re gurus able to point me in the right direction? Do I need to do separate matches for each symbol? A: I'd do it with a single match and a function picking the "place". I.e.: import re places = {'#': 'what', '@': 'who', '!': 'why', } def replace(m): all = m.group(0) first, rest = all[0], all[1:] return '<a href="/%s/%s">%s</a>' % ( places[first], rest, all) markedup = re.sub(r'[#!@]\w+', replace, text)
Python re question - sub challenge
I want to add <a href> links to all words prefixed with # or ! or @ If this is the text Check the #bamboo and contact @Fred re #bamboo #garden should be converted to: Check the <a href="/what/bamboo">#bamboo</a> and contact <a href="/who/fred">@Fred</a> re <a href="/what/bamboo">#bamboo</a> <a href="/what/garden">#garden</a> Note that # and @ go to different places. This is as far as I have got, just doing the hashes... matched = re.sub("[#](?P<keyword>\w+)", \ '<a href="/what/(?P=keyword)">(?P=keyword)</a>', \ text) Any re gurus able to point me in the right direction? Do I need to do separate matches for each symbol?
[ "I'd do it with a single match and a function picking the \"place\". I.e.:\nimport re\n\nplaces = {'#': 'what',\n '@': 'who',\n '!': 'why',\n }\n\ndef replace(m):\n all = m.group(0)\n first, rest = all[0], all[1:]\n return '<a href=\"/%s/%s\">%s</a>' % (\n places[first], rest, all)\n\nmarkedup = re.sub(r'[#!@]\\w+', replace, text)\n\n" ]
[ 5 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001300350_python_regex.txt
Q: HTTPResponse - Return organised data (table?) My Webservice is currently returning an HTTPResponse which contains thousands of records from a MySQL database (via an Objects.filter). It's currently just displaying them in a very boring way! I'm looking to organise this data in some way. What would be ideal is two things: The possibility of having a table and maybe having some kind of scroll bar? Does anyone have any idea what I'm referring to or what package would help me? A: Do you need to return pure HTML (as opposed to, say, JSON and Javascript to show it)? If so, the <table> tag of HTML seems to be what you want; and this is a way to have a scrollbar on the table.
HTTPResponse - Return organised data (table?)
My Webservice is currently returning an HTTPResponse which contains thousands of records from a MySQL database (via an Objects.filter). It's currently just displaying them in a very boring way! I'm looking to organise this data in some way. What would be ideal is two things: The possibility of having a table and maybe having some kind of scroll bar? Does anyone have any idea what I'm referring to or what package would help me?
[ "Do you need to return pure HTML (as opposed to, say, JSON and Javascript to show it)? If so, the <table> tag of HTML seems to be what you want; and this is a way to have a scrollbar on the table.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001300310_django_python.txt
Q: Error while importing SQLObject on Windows I am getting the following error while importing SQLObject on Windows. Does anyone know what this error is about and how to solve it? ============================== from sqlobject import * File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\__init__.py", line 5, in <module> from main import * File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\main.py", line 32, in <module> import dbconnection File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\dbconnection.py", line 17, in <module> from joins import sorter File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\joins.py", line 5, in <module> import events File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\events.py", line 3, in <module> from sqlobject.include.pydispatch import dispatcher File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\include\pydispatch\dispatcher.py", line 30, in <module> import saferef, robustapply, errors EOFError: EOF read where object expected A: Looks like your c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg file may be truncated or otherwise damaged. How does it compare to a freshly downloaded one (in terms of both length and checksum)?
Error while importing SQLObject on Windows
I am getting the following error while importing SQLObject on Windows. Does anyone know what this error is about and how to solve it? ============================== from sqlobject import * File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\__init__.py", line 5, in <module> from main import * File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\main.py", line 32, in <module> import dbconnection File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\dbconnection.py", line 17, in <module> from joins import sorter File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\joins.py", line 5, in <module> import events File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\events.py", line 3, in <module> from sqlobject.include.pydispatch import dispatcher File "c:\python26\lib\site-packages\sqlobject-0.10.4-py2.6.egg\sqlobject\include\pydispatch\dispatcher.py", line 30, in <module> import saferef, robustapply, errors EOFError: EOF read where object expected
[ "Looks like your c:\\python26\\lib\\site-packages\\sqlobject-0.10.4-py2.6.egg file may be truncated or otherwise damaged. How does it compare to a freshly downloaded one (in terms of both length and checksum)?\n" ]
[ 0 ]
[]
[]
[ "python", "sqlobject", "windows" ]
stackoverflow_0001299849_python_sqlobject_windows.txt
Q: Best Python Library for Downloading and Extracting Addresses I've just been given a project which involves the following steps: Grab an email from a POP3 address Open an attachment from the email Extract the To: email address Add this to a global suppression list I'd like to try and do this in Python even though I could do it in PHP in half the time (this is because I don't know anywhere near as much Python as PHP) My question would be: Can anyone recommend a Python library for interacting with email in the way described above? Many thanks in advance A: Two bits from the standard library: poplib to grab the email via POP3, email to slice and dice it as you wish.
Best Python Library for Downloading and Extracting Addresses
I've just been given a project which involves the following steps: Grab an email from a POP3 address Open an attachment from the email Extract the To: email address Add this to a global suppression list I'd like to try and do this in Python even though I could do it in PHP in half the time (this is because I don't know anywhere near as much Python as PHP) My question would be: Can anyone recommend a Python library for interacting with email in the way described above? Many thanks in advance
[ "Two bits from the standard library: poplib to grab the email via POP3, email to slice and dice it as you wish.\n" ]
[ 2 ]
[]
[]
[ "email", "pop3", "python" ]
stackoverflow_0001300479_email_pop3_python.txt
Q: Python - substr I need to be able to get the last digit of a number. i.e., I need 2 to be returned from: 12. Like this in PHP: $minute = substr(date('i'), -1) but I need this in Python. Any ideas A: last_digit = str(number)[-1] A: Use the % operator: x = 12 % 10 # returns 2 y = 25 % 10 # returns 5 z = abs(-25) % 10 # returns 5 A: Python distinguishes between strings and numbers (and actually also between numbers of different kinds, i.e., int vs float) so the best solution depends on what type you start with (str or int?) and what type you want as a result (ditto). Int to int: abs(x) % 10 Int to str: str(x)[-1] Str to int: int(x[-1]) Str to str: x[-1]
Python - substr
I need to be able to get the last digit of a number. i.e., I need 2 to be returned from: 12. Like this in PHP: $minute = substr(date('i'), -1) but I need this in Python. Any ideas
[ "last_digit = str(number)[-1]\n\n", "Use the % operator:\n x = 12 % 10 # returns 2\n y = 25 % 10 # returns 5\n z = abs(-25) % 10 # returns 5\n\n", "Python distinguishes between strings and numbers (and actually also between numbers of different kinds, i.e., int vs float) so the best solution depends on what type you start with (str or int?) and what type you want as a result (ditto).\nInt to int: abs(x) % 10\nInt to str: str(x)[-1]\nStr to int: int(x[-1])\nStr to str: x[-1]\n" ]
[ 9, 8, 2 ]
[]
[]
[ "php", "python", "substr" ]
stackoverflow_0001300610_php_python_substr.txt
Q: Multidimensional list(array) reassignment problem Good day coders and codereses, I am writing a piece of code that goes through a pile of statistical data and returns what I ask from it. To complete its task the method reads from one multidimensional array and writes into another one. The piece of code giving me problems is: writer.variables[variable][:, :, :, :] = reader.variables[variable][offset:, 0, 0:5, 3] The size of both slices is 27:1:6:1 but it throws up an exception: ValueError: total size of new array must be unchanged I am flabbergasted. Thank you. A: The size of a slice with 0:5 is not 6 as you say: it's 5. The upper limit is excluded in slicing (as it almost always is, in Python). Don't know whether that's your actual problem or just a typo in your question...
Multidimensional list(array) reassignment problem
Good day coders and codereses, I am writing a piece of code that goes through a pile of statistical data and returns what I ask from it. To complete its task the method reads from one multidimensional array and writes into another one. The piece of code giving me problems is: writer.variables[variable][:, :, :, :] = reader.variables[variable][offset:, 0, 0:5, 3] The size of both slices is 27:1:6:1 but it throws up an exception: ValueError: total size of new array must be unchanged I am flabbergasted. Thank you.
[ "The size of a slice with 0:5 is not 6 as you say: it's 5. The upper limit is excluded in slicing (as it most always is, in Python). Don't know whether that's your actual problem or just a typo in your question...\n" ]
[ 2 ]
[]
[]
[ "list", "netcdf", "numpy", "python", "scipy" ]
stackoverflow_0001300648_list_netcdf_numpy_python_scipy.txt
Q: Hierarchical data output from App Engine Datastore to JSON? I have a large hierarchical dataset in the App Engine Datastore. The hierarchy is preserved by storing the data in Entity groups, so that I can pull a whole tree by simply knowing the top element key like so: query = db.Query().ancestor(db.get(key)) The question: How do I now output this data as JSON and preserve the hierarchy? Google has a utility class called GqlEncoder that adds support for datastore query results to simplejson, but it basically flattens the data, destroying the hierarchy. Any suggestions? A: I imagine you're referring to this code and the "flattening" you mention is done by lines 51-52: if isinstance(obj, db.GqlQuery): return list(obj) while the rest of the code is fine for your purpose. So, how would you like to represent a GQL query, since you don't want a JS array (Python list) of the objects it contains? It's not clear, besides the entity group (which you're recovering entirely), what gives it hierarchy; is it an issue of "parent"? Anyway, once that's clarified, copying and editing that file into your own code seems best (it's not designed to let you override just that one tidbit).
Hierarchical data output from App Engine Datastore to JSON?
I have a large hierarchical dataset in the App Engine Datastore. The hierarchy is preserved by storing the data in Entity groups, so that I can pull a whole tree by simply knowing the top element key like so: query = db.Query().ancestor(db.get(key)) The question: How do I now output this data as JSON and preserve the hierarchy? Google has a utility class called GqlEncoder that adds support for datastore query results to simplejson, but it basically flattens the data, destroying the hierarchy. Any suggestions?
[ "I imagine you're referring to this code and the \"flattening\" you mention is done by lines 51-52:\n if isinstance(obj, db.GqlQuery):\n return list(obj)\n\nwhile the rest of the code is fine for your purpose. So, how would you like to represent a GQL query, since you don't what a JS array (Python list) of the objects it contains? It's not clear, besides the entity group (which you're recovering entirely), what gives it hierarchy; is it an issue of \"parent\"?\nAnyway, once that's clarified, copying and editing that file into your own code seems best (it's not designed to let you override just that one tidbit).\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "json", "python" ]
stackoverflow_0001300694_google_app_engine_google_cloud_datastore_json_python.txt
Q: Writing Python/Django view to "join" across three models/tables I'm just beginning my Python/Django experience and I have a problem :-) So I have a models.py like this: from django.db import models class Priority(models.Model): name = models.CharField(max_length=100) class Projects(models.Model): name = models.CharField(max_length=30) description = models.CharField(max_length=150) priority = models.ForeignKey(Priority) class Tasks(models.Model): name = models.CharField(max_length=30) description = models.CharField(max_length=40) priority = models.ForeignKey(Priority) In the priority table I plan to store data like 1.High, 2.Medium, 3.Low and in the Tasks table priority will be stored as an id (1, 2 or 3) And the question is how to write a view that displays all my tasks but with the priority name? For example: name: Task 1 description: Description 1 priority: **High** A: Your view doesn't have to do much. tasks = Tasks.objects.all() Provide this to your template. Your template can then do something like the following. {% for t in tasks %} name: {{t.name}} description: {{t.description}} priority: **{{t.priority.name}}** {% endfor %} A: There are many ways of accomplishing what you need. One of the easiest would be to keep a dictionary of number to string. Like this: 1->High, 2->Medium, 3->Low. Keep this dictionary outside of your view functions so that you can access it from any of your view functions that need to get the priority. You could also simply write a switch that determines what to display in the template.
Writing Python/Django view to "join" across three models/tables
I'm just beginning my Python/Django experience and I have a problem :-) So I have a models.py like this: from django.db import models class Priority(models.Model): name = models.CharField(max_length=100) class Projects(models.Model): name = models.CharField(max_length=30) description = models.CharField(max_length=150) priority = models.ForeignKey(Priority) class Tasks(models.Model): name = models.CharField(max_length=30) description = models.CharField(max_length=40) priority = models.ForeignKey(Priority) In the priority table I plan to store data like 1.High, 2.Medium, 3.Low and in the Tasks table priority will be stored as an id (1, 2 or 3) And the question is how to write a view that displays all my tasks but with the priority name? For example: name: Task 1 description: Description 1 priority: **High**
[ "Your view doesn't have to do much.\ntasks = Tasks.objects.all()\n\nProvide this to your template.\nYour template can then do something like the following.\n{% for t in tasks %}\n name: {{t.name}}\n description: {{t.description}}\n priority: **{{t.priority.name}}**\n{% endfor %}\n\n", "There are many ways of accomplishing what you need. One of the easiest would be to keep a dictionary of number to string. Like this: 1->High, 2->Medium, 3->High.\nKeep this dictionary outside of your view functions so that you can access it from any of your view functions that need to get the priority.\nYou could also simply write a switch that determines what to display in the template.\n" ]
[ 2, 0 ]
[]
[]
[ "django_views", "python" ]
stackoverflow_0001300657_django_views_python.txt
Q: On Google AppEngine what is the best way to merge two tables? If I have two tables, Company and Sales, and I want to display both sets of data in a single list, how would I do this on Google App Engine using GQL? The models are: class Company(db.Model): companyname = db.StringProperty() companyid = db.StringProperty() salesperson = db.StringProperty() class Sales(db.Model): companyid = db.StringProperty() weeklysales = db.StringProperty() monthlysales = db.StringProperty() The views are: def company(request): companys = db.GqlQuery("SELECT * FROM Company") sales = db.GqlQuery("SELECT * FROM Sales") template_values = { 'companys' : companys, 'sales' : sales } return respond(request, 'list', template_values) List html includes: {%for company in companys%} {% for sale in sales %} {% ifequal company.companyid sale.companyid %} {{sale.weeklysales}} {{sale.monthlysales}} {% endifequal %} {% endfor %} {{company.companyname}} {{company.companyid}} {{company.salesperson}} {%endfor%} Any help would be greatly appreciated. A: You've said in a comment that there's a 1-1 relationship between sales and companies. So you could get the data in the same order and zip the two lists into pairs (the Django template language can't subscript a list with a variable index): def company(request): companys = db.GqlQuery("SELECT * FROM Company ORDER BY companyid").fetch(1000) sales = db.GqlQuery("SELECT * FROM Sales ORDER BY companyid").fetch(1000) template_values = { 'rows' : zip(companys, sales) } return respond(request, 'list', template_values) {%for company, sale in rows%} {{sale.weeklysales}} {{sale.monthlysales}} {{company.companyname}} {{company.companyid}} {{company.salesperson}} {%endfor%} That's still not a great solution, though. If you're confident that the 1-1 relationship is correct, then I would just have a single entity containing all the information. If nothing else, it saves you worrying about database inconsistency where you create a company, but your attempt to create the corresponding sales data entity fails for some reason. A: You should use a ReferenceProperty in your Sales model: company = db.ReferenceProperty(Company) An example on how to iterate through the sales for a given company: company = db.GqlQuery("SELECT * FROM Company").fetch(1)[0] for sale in company.sales_set: #Do something ... A: {%for company in companys%} {% for sale in sales %} {% ifequal company.companyid sale.companyid %} that code is problematic. If you have 200 companies and 1000 sales, you will be running that ifequal statement 200000 times! In general, if you have more than 1000 sales or companies, this won't work at all, because you can only get 1000 items at a time from the datastore. (and if you're not planning on having more than 1000 items, app engine is probably overkill for your project) I think your first goal should be to figure out how you want to break your list up into pages. Do you want to display 50 sales per page? or maybe 10 companies per page, along with all their respective sales? Once you decide on that, you can query for just the information you need.
On Google AppEngine what is the best way to merge two tables?
If I have two tables, Company and Sales, and I want to display both sets of data in a single list, how would I do this on Google App Engine using GQL? The models are: class Company(db.Model): companyname = db.StringProperty() companyid = db.StringProperty() salesperson = db.StringProperty() class Sales(db.Model): companyid = db.StringProperty() weeklysales = db.StringProperty() monthlysales = db.StringProperty() The views are: def company(request): companys = db.GqlQuery("SELECT * FROM Company") sales = db.GqlQuery("SELECT * FROM Sales") template_values = { 'companys' : companys, 'sales' : sales } return respond(request, 'list', template_values) List html includes: {%for company in companys%} {% for sale in sales %} {% ifequal company.companyid sale.companyid %} {{sale.weeklysales}} {{sale.monthlysales}} {% endifequal %} {% endfor %} {{company.companyname}} {{company.companyid}} {{company.salesperson}} {%endfor%} Any help would be greatly appreciated.
[ "You've said in a comment that there's a 1-1 relationship between sales and companies. So you could get the data in the same order:\ndef company(request): \n companys = db.GqlQuery(\"SELECT * FROM Company ORDER BY companyid\").fetch(1000)\n sales = db.GqlQuery(\"SELECT * FROM Sales ORDER BY companyid\").fetch(1000)\n template_values = {\n 'companys' : companys,\n 'sales' : sales\n } \n return respond(request, 'list', template_values)\n\n{%for company in companys%} \n {{sales[forloop.counter0].weeklysales}}\n {{sales[forloop.counter0].monthlysales}}\n\n {{company.companyname}}\n {{company.companyid}}\n {{company.salesperson}}\n{%endfor%}\n\nThat's still not a great solution, though. If you're confident that the 1-1 relationship is correct, then I would just have a single entity containing all the information. If nothing else, it saves you worrying about database inconsistency where you create a company, but your attempt to create the corresponding sales data entity fails for some reason.\n", "You should use a ReferenceProperty in your Sales model:\ncompany = db.ReferenceProperty(Company)\nAn example on how to iterate through the sales for a given company:\ncompany = db.GqlQuery(\"SELECT * FROM Company\").fetch(1)[0]\nfor sale in company.sales_set:\n #Do something ...\n\n", "{%for company in companys%} \n {% for sale in sales %} \n {% ifequal company.companyid sales.companyid %} \n\nthat code is problematic. If you have 200 companies and 1000 sales, you will be running that ifequal statement 200000 times! \nIn general, if you have more than 1000 sales or companies, this won't work at all, because you can only get 1000 items at a time from the datastore. (and if you're not planning on having more than 1000 items, app engine is probably overkill for your project)\nI think your first goal should be to figure out how you want to break your list up into pages. Do you want to display 50 sales per page? or maybe 10 companies per page, along with all their respective sales? Once you decide on that, you can query for just the information you need. \n" ]
[ 1, 0, 0 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001295832_google_app_engine_python.txt
Q: Unescape/unquote binary strings in (extended) url encoding in python For analysis I'd have to unescape URL-encoded binary strings (non-printable characters most likely). The strings sadly come in the extended URL-encoding form, e.g. "%u616f". I want to store them in a file that then contains the raw binary values, e.g. 0x61 0x6f here. How do I get this into binary data in python? (urllib.unquote only handles the "%HH"-form) A: The strings sadly come in the extended URL-encoding form, e.g. "%u616f" Incidentally that's not anything to do with URL-encoding. It's an arbitrary made-up format produced by the JavaScript escape() function and pretty much nothing else. If you can, the best thing to do would be to change the JavaScript to use the encodeURIComponent function instead. This will give you a proper, standard URL-encoded UTF-8 string. e.g. "%u616f". I want to store them in a file that then contains the raw binary values, e.g. 0x61 0x6f here. Are you sure 0x61 0x6f (the letters "ao") is the byte stream you want to store? That would imply UTF-16BE encoding; are you treating all your strings that way? Normally you'd want to turn the input into Unicode then write it out using an appropriate encoding, such as UTF-8 or UTF-16LE. Here's a quick way of doing it, relying on the hack of making Python read '%u1234' as the string-escaped format u'\u1234': >>> ex= 'hello %e9 %u616f' >>> ex.replace('%u', r'\u').replace('%', r'\x').decode('unicode-escape') u'hello \xe9 \u616f' >>> print _ hello é 慯 >>> _.encode('utf-8') 'hello \xc3\xa9 \xe6\x85\xaf' A: I guess you will have to write the decoder function by yourself. Here is an implementation to get you started: def decode(file): while True: c = file.read(1) if c == "": # End of file break if c != "%": # Not an escape sequence yield c continue c = file.read(1) if c != "u": # One hex-byte yield chr(int(c + file.read(1), 16)) continue # Two hex-bytes yield chr(int(file.read(2), 16)) yield chr(int(file.read(2), 16)) Usage: input = open("/path/to/input-file", "r") output = open("/path/to/output-file", "wb") output.writelines(decode(input)) output.close() input.close() A: Here is a regex-based approach: # the replace function concatenates the two matches after # converting them from hex to ascii repfunc = lambda m: chr(int(m.group(1), 16))+chr(int(m.group(2), 16)) # the last parameter is the text you want to convert result = re.sub('%u(..)(..)', repfunc, '%u616f') print result gives ao
Unescape/unquote binary strings in (extended) url encoding in python
For analysis I'd have to unescape URL-encoded binary strings (non-printable characters most likely). The strings sadly come in the extended URL-encoding form, e.g. "%u616f". I want to store them in a file that then contains the raw binary values, e.g. 0x61 0x6f here. How do I get this into binary data in python? (urllib.unquote only handles the "%HH"-form)
[ "\nThe strings sadly come in the extended URL-encoding form, e.g. \"%u616f\"\n\nIncidentally that's not anything to do with URL-encoding. It's an arbitrary made-up format produced by the JavaScript escape() function and pretty much nothing else. If you can, the best thing to do would be to change the JavaScript to use the encodeURIComponent function instead. This will give you a proper, standard URL-encoded UTF-8 string.\n\ne.g. \"%u616f\". I want to store them in a file that then contains the raw binary values, eg. 0x61 0x6f here.\n\nAre you sure 0x61 0x6f (the letters \"ao\") is the byte stream you want to store? That would imply UTF-16BE encoding; are you treating all your strings that way?\nNormally you'd want to turn the input into Unicode then write it out using an appropriate encoding, such as UTF-8 or UTF-16LE. Here's a quick way of doing it, relying on the hack of making Python read '%u1234' as the string-escaped format u'\\u1234':\n>>> ex= 'hello %e9 %u616f'\n>>> ex.replace('%u', r'\\u').replace('%', r'\\x').decode('unicode-escape')\nu'hello \\xe9 \\u616f'\n\n>>> print _\nhello é 慯\n\n>>> _.encode('utf-8')\n'hello \\xc2\\xa0 \\xe6\\x85\\xaf'\n\n", "I guess you will have to write the decoder function by yourself. Here is an implementation to get you started:\ndef decode(file):\n while True:\n c = file.read(1)\n if c == \"\":\n # End of file\n break\n if c != \"%\":\n # Not an escape sequence\n yield c\n continue\n c = file.read(1)\n if c != \"u\":\n # One hex-byte\n yield chr(int(c + file.read(1), 16))\n continue\n # Two hex-bytes\n yield chr(int(file.read(2), 16))\n yield chr(int(file.read(2), 16))\n\nUsage:\ninput = open(\"/path/to/input-file\", \"r\")\noutput = open(\"/path/to/output-file\", \"wb\")\noutput.writelines(decode(input))\noutput.close()\ninput.close()\n\n", "Here is a regex-based approach:\n# the replace function concatenates the two matches after \n# converting them from hex to ascii\nrepfunc = lambda m: chr(int(m.group(1), 16))+chr(int(m.group(2), 16))\n\n# the last parameter is the text you want to convert\nresult = re.sub('%u(..)(..)', repfunc, '%u616f')\nprint result\n\ngives\nao\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "binary", "python", "urlencode" ]
stackoverflow_0001298319_binary_python_urlencode.txt
Q: What regex can I use to capture groups from this string? Assume the following strings: A01B100 A01.B100 A01 A01............................B100 ( whatever between A and B ) The thing is, the numbers should be \d+, and in all of the strings A will always be present, while B may not. A will always be followed by one or more digits, and so will B, if present. What regex could I use to capture A and B's digit? I have the following regex: (A(\d+)).*?(B?(\d+)?) but this only works for the first and the third case. A: Must A precede B? Assuming yes. Can B appear more than once? Assuming no. Can B appear except as part of a B-number group? Assuming no. Then, A\d+.*?(B\d+)? using the lazy .*? or A\d+[^B]*(B\d+)? which is more efficient but requires that B be a single character. EDIT: Upon further reflection, I have parenthesized the patterns in a less-than-perfect way. The following patterns should require fewer assumptions: A\d+(.*?B\d+)? A\d+([^B]*B\d+)? A: (?ms)^A(\d+)(?:[^\n\r]*B(\d+))?$ Assuming one string per line: the [^\n\r]* matches any run of characters (except newlines) after Axx, meaning it could gobble an intermediate Byy before the last B: A01...B01...B23 would be matched, with 01 and 23 detected. A: A\d+.*(B\d+)? OK, so that provides something which passes all test cases... BUT it has some false positives. A\d+(.*B\d+)? It seems other characters should only appear if B(whatever) is after them, so use the above instead. #perl test case hackup @array = ('A01B100', 'A01.B100', 'A01', 'A01............................B100', 'A01FAIL', 'NEVER'); for (@array) { print "$_\n" if $_ =~ /^A\d+(.*B\d+)?$/; } A: import re m = re.match(r"A(?P<d1>\d+)\.*(B(?P<d2>\d+))?", "A01.B100") print m.groupdict()
What regex can I use to capture groups from this string?
Assume the following strings: A01B100 A01.B100 A01 A01............................B100 ( whatever between A and B ) The thing is, the numbers should be \d+, and in all of the strings A will always be present, while B may not. A will always be followed by one or more digits, and so will B, if present. What regex could I use to capture A and B's digit? I have the following regex: (A(\d+)).*?(B?(\d+)?) but this only works for the first and the third case.
[ "\nMust A precede B? Assuming yes.\nCan B appear more than once? Assuming no. \nCan B appear except as part of a B-number group? Assuming no.\n\nThen,\nA\\d+.*?(B\\d+)?\n\nusing the lazy .*? or\nA\\d+[^B]*(B\\d+)?\n\nwhich is more efficient but requires that B be a single character.\nEDIT: Upon further reflection, I have parenthesized the patterns in a less-than-perfect way. The following patterns should require fewer assumptions:\nA\\d+(.*?B\\d+)?\na\\d+([^B]*B\\d+)?\n\n", "(?ms)^A(\\d+)(?:[^\\n\\r]*B(\\d+))?$\n\nAssuming one string per line:\n\nthe [^\\n\\r]* is a non-greedy match for any characters (except newlines) after Axx, meaing it could gobble an intermediate Byy before the last B:\nA01...B01...B23\n\nwould be matched, with 01 and 23 detected.\n", "A\\d+.*(B\\d+)?\n\nOK, so that provides something which passes all test cases...\nBUT it has some false positives.\nA\\d+(.*B\\d+)?\n\nIt seems other characters should only appear if B(whatever) is after them, so use the above instead.\n#perl test case hackup\n@array = ('A01B100', 'A01.B100', 'A01', 'A01............................B100', 'A01FAIL', 'NEVER');\nfor (@array) {\nprint \"$_\\n\" if $_ =~ /^A\\d+(.*B\\d+)?$/;\n}\n\n", "import re\nm = re.match(r\"A(?P<d1>\\d+)\\.*(B(?P<d2>\\d+))?\", \"A01.B100\")\nprint m.groupdict()\n\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001301257_python_regex.txt
Q: mod_python problem? I have been working on a website using mod_python, python, and SQL Alchemy when I ran into a strange problem: When I query the database for all of the records, it returns the correct result set; however, when I refresh the page, it returns me a result set with that same result set appended to it. I get more result sets "stacked" on top of eachother as I refresh the page more. For example: First page load: 10 results Second page load: 20 results (two of each) Third page load: 30 results (three of each) etc... Is this some underlying problem with mod_python? I don't recall running into this when using mod_wsgi. A: Not that I've ever heard of, but it's impossible to tell without some code to look at. Maybe you initialised your result set list as a global, or shared member, and then appended results to it when the application was called without resetting it to empty? A classic way of re-using lists accidentally is to put one in a default argument value to a function. (The same could happen in mod_wsgi of course.) A: I don't know about any of the technologies you are using. However, before you think that might be a possible bug in the packages you are using, you have to consider one thing. Occam's razor. Basically, "when you have two competing theories that make exactly the same predictions, the simpler one is the better." Your two possible major theories here is that there is a bug in the components you are using (that many others use) or there is a bug in your code. Chances are (and I'm sorry) there is a bug in your code. I use this idea with my own code and every time there was a problem it did turn out to be my code. Hopefully others can direct you to the bug and you might want to post the problem code. You may not be clearing a result set or something -- a variable -- is being held on longer than you expect.
mod_python problem?
I have been working on a website using mod_python, python, and SQL Alchemy when I ran into a strange problem: When I query the database for all of the records, it returns the correct result set; however, when I refresh the page, it returns me a result set with that same result set appended to it. I get more result sets "stacked" on top of eachother as I refresh the page more. For example: First page load: 10 results Second page load: 20 results (two of each) Third page load: 30 results (three of each) etc... Is this some underlying problem with mod_python? I don't recall running into this when using mod_wsgi.
[ "Not that I've ever heard of, but it's impossible to tell without some code to look at.\nMaybe you initialised your result set list as a global, or shared member, and then appended results to it when the application was called without resetting it to empty? A classic way of re-using lists accidentally is to put one in a default argument value to a function. \n(The same could happen in mod_wsgi of course.)\n", "I don't know about any of the technologies you are using. However, before you think that might be a possible bug in the packages you are using, you have to consider one thing.\nOccam's razor.\nBasically, \"when you have two competing theories that make exactly the same predictions, the simpler one is the better.\"\nYour two possible major theories here is that there is a bug in the components you are using (that many others use) or there is a bug in your code. Chances are (and I'm sorry) there is a bug in your code. \nI use this idea with my own code and every time there was a problem it did turn out to be my code. \nHopefully others can direct you to the bug and you might want to post the problem code. You may not be clearing a result set or something -- a variable -- is being held on longer than you expect.\n" ]
[ 0, 0 ]
[]
[]
[ "mod_python", "python", "sqlalchemy" ]
stackoverflow_0001301000_mod_python_python_sqlalchemy.txt
Q: Beginner at testing Python code, need help! I don't do tests, but I'd like to start. I have some questions: Is it ok to use the unittest module for that? From what I understand, the unittest module will run any method starting with test. If I have a separate directory for the tests (consider a directory named tests), how would I import the code I'm testing? Do I need to use the imp module? Here's the directory structure: src/ tests/ A: Another good way to start tests with your Python code is to use the doctest module, whereby you include tests inside method and class docstrings. The neat bit is that these serve as code examples, and therefore, partial documentation. Extremely easy to do, too. A: It's fine to use unittest. This module will run methods beginning with test for classes which inherit from unittest.TestCase. There's no need to use the imp module - your test module would just import the code under test as normal. You might need to add the src directory to your path: import sys sys.path.append('../src') # On Linux - use r'..\src' for Windows This code would be in your test module(s) before any import of your modules. A better approach is to use the OS environment variable PYTHONPATH. (Windows) SET PYTHONPATH=path\to\module; python test.py (Linux) PYTHONPATH=path/to/module; python test.py An alternative to unittest is nose. A: I had a bit of a problem with the whole separate directory for tests issue. The way I solved it was by having a test runner in the source directory with the following code: import sys, os, re, unittest # Run all tests in t/ def regressionTest(): path = os.path.abspath(os.path.dirname(sys.argv[0])) + '/t' files = os.listdir(path) test = re.compile("^t_.+\.py$", re.IGNORECASE) files = filter(test.search, files) filenameToModuleName = lambda f: 't.'+os.path.splitext(f)[0] moduleNames = map(filenameToModuleName, files) modules = map(__import__, moduleNames) modules = map(lambda name: sys.modules[name], moduleNames) load = unittest.defaultTestLoader.loadTestsFromModule return unittest.TestSuite(map(load, modules)) suite = regressionTest() if __name__ == "__main__": unittest.TextTestRunner(verbosity=2).run(suite) Then I had a folder named t containing all my tests, named t_<something>.py. If you need help on getting started with unit testing in Python, I can recommend the official documentation and Dive Into Python among other things. A: As mentioned by Sebastian P., Mark Pilgrim's Dive Into Python has great chapters on unit testing and test-driven development using Python's unittest module. I used these chapters to get started with testing, myself. I wrote a blog post describing my approach to importing modules for testing. Note that it solves the shortcoming of Vinay Sajip's approach, which will not work if you call the testing module from anywhere but the directory in which it resides. A reader posted a nice solution in the comments of my blog post as well. S. Lott hints at a method using PYTHONPATH in Vinay's post; I hope he will expound on it.
Beginner at testing Python code, need help!
I don't do tests, but I'd like to start. I have some questions: Is it OK to use the unittest module for that? From what I understand, the unittest module will run any method starting with test. If I have a separate directory for the tests (consider a directory named tests), how would I import the code I'm testing? Do I need to use the imp module? Here's the directory structure: src/ tests/
[ "Another good way to start tests with your Python code is use the doctest module, whereby you include tests inside method and class comments. The neat bit is that these serve as code examples, and therefore, partial documentation. Extremely easy to do, too.\n", "It's fine to use unittest. This module will run methods beginning with test for classes which inherit from unittest.TestCase.\nThere's no need to use the imp module - your test module would just import the code under test as normal. You might need to add the src directory to your path:\nimport sys\nsys.path.append('../src') # OnLinux - use r'..\\src' for Windows\n\nThis code would be in your test module(s) before any import of your modules.\nA better approach is to use the OS environment variable PYTHONPATH.\n(Windows) SET PYTHONPATH=path\\to\\module; python test.py\n(Linux) PYTHONPATH=path/to/module; python test.py\nAn alternative to unittest is nose.\n", "I had a bit of problem with the whole seperate directory for tests issue. \nThe way I solved it was by having a test runner in the source directory with the following code:\nimport sys, os, re, unittest\n\n# Run all tests in t/\n\ndef regressionTest():\n path = os.path.abspath(os.path.dirname(sys.argv[0])) + '/t'\n files = os.listdir(path)\n test = re.compile(\"^t_.+\\.py$\", re.IGNORECASE)\n files = filter(test.search, files)\n filenameToModuleName = lambda f: 't.'+os.path.splitext(f)[0]\n moduleNames = map(filenameToModuleName, files)\n modules = map(__import__, moduleNames)\n modules = map(lambda name: sys.modules[name], moduleNames)\n load = unittest.defaultTestLoader.loadTestsFromModule\n return unittest.TestSuite(map(load, modules))\n\nsuite = regressionTest()\n\nif __name__ == \"__main__\":\n unittest.TextTestRunner(verbosity=2).run(suite)\n\nThen I had a folder named t containing all my tests, named t_<something>.py.\nIf you need help on getting started with unit testing in Python, I can recommend the official documentation and Dive Into Python among other things.\n", "As mentioned by Sebastian P., Mark Pilgrim's Dive Into Python has great chapters on unit testing and test-driven development using Python's unittest module. I used these chapters to get started with testing, myself.\nI wrote a blog post describing my approach to importing modules for testing. Note that it solves the shortcoming of Vinay Sajip's approach, which will not work if you call the testing module from anywhere but the directory in which it resides. A reader posted a nice solution in the comments of my blog post as well.\nS. Lott hints at a method using PYTHONPATH in Vinay's post; I hope he will expound on it.\n" ]
[ 5, 4, 2, 2 ]
[]
[]
[ "python", "testing" ]
stackoverflow_0001299672_python_testing.txt
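Note: the first answer recommends the doctest module without showing an example. A minimal sketch of a doctest-style test (the function here is hypothetical, standard library only):

def mean(values):
    """Return the arithmetic mean of a sequence.

    >>> mean([1, 2, 3])
    2.0
    """
    return sum(values) / float(len(values))

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs every example embedded in the docstrings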
Q: How do you enable auto-scrolling on GtkSourceView2? I am having a problem with GtkSourceView used from Python. Two major problems: 1) When a user types text into the GtkSourceView, and types past the bottom of the visible text, the GtkSourceView does not autoscroll to the user's cursor. This wouldn't be so bad, except: 2) The arrow keys, page up and page down keys, do not cause the GtkSourceView to scroll either. The mouse scrollbar does work on the GtkSourceView. Does anyone have knowledge/experience of this? My code is here http://launchpad.net/kabikaboo A: Ok I just figured this out. I was adding the GtkSourceView2 into a GtkScrolledWindow. Only, it was adding a ViewPort first via ScrolledWindow.add_with_viewport(). This disables part of the scrolling behavior via keyboard. Instead, use ScrolledWindow.add(), and the ViewPort is skipped and the GtkAdjustments take care of the scrolling!
How do you enable auto-scrolling on GtkSourceView2?
I am having a problem with GtkSourceView used from Python. Two major problems: 1) When a user types text into the GtkSourceView, and types past the bottom of the visible text, the GtkSourceView does not autoscroll to the user's cursor. This wouldn't be so bad, except: 2) The arrow keys, page up and page down keys, do not cause the GtkSourceView to scroll either. The mouse scrollbar does work on the GtkSourceView. Does anyone have knowledge/experience of this? My code is here http://launchpad.net/kabikaboo
[ "Ok I just figured this out.\nI was adding the GtkSourceView2 into a GtkScrolledWindow.\nOnly, it was adding a ViewPort first via ScrolledWindow.add_with_viewport().\nThis disables part of the scrolling behavior via keyboard.\nInstead, use ScrolledWindow.add(), and the ViewPort is skipped and the GtkAdjustments take care of the scrolling!\n" ]
[ 0 ]
[]
[]
[ "gtk", "python" ]
stackoverflow_0001250566_gtk_python.txt
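Note: a minimal sketch of the fix the answer describes, assuming PyGTK and the python-gtksourceview2 bindings are installed (the window setup around the add()/add_with_viewport() difference is incidental):

import gtk
import gtksourceview2  # assumed binding; gtk.TextView would behave the same way

buf = gtksourceview2.Buffer()
view = gtksourceview2.View()
view.set_buffer(buf)

sw = gtk.ScrolledWindow()
sw.set_policy(gtk.POLICY_AUTOMATIC, gtk.POLICY_AUTOMATIC)
sw.add(view)                  # the view scrolls natively; keyboard scrolling works
# sw.add_with_viewport(view)  # would wrap the view in a Viewport and break it

win = gtk.Window()
win.add(sw)
win.connect("destroy", gtk.main_quit)
win.show_all()
gtk.main()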
Q: django admin: company branches must manage only their records across many models One company with many branches across the world using the same app. Each branch's supervisor, signing into the same /admin, should see and be able to manage only their records across many models (blog, galleries, subscribed users, clients list, etc.). How to solve it best within Django? I need a flexible and reliable solution, not hacks. Never came across this task, so really have no idea how to do it for the moment. Thanks. A: There is a nice tutorial here on Django Admin. It includes customizing the Admin to add row-level permissions (which, as I understand it, is what you want).
django admin: company branches must manage only their records across many models
One company with many branches across the world using the same app. Each branch's supervisor, signing into the same /admin, should see and be able to manage only their records across many models (blog, galleries, subscribed users, clients list, etc.). How to solve it best within Django? I need a flexible and reliable solution, not hacks. Never came across this task, so really have no idea how to do it for the moment. Thanks.
[ "There is a nice tutorial here on Django Admin. It includes customizing the Admin to add row-level permissions (which, as i understand it, is what you want).\n" ]
[ 1 ]
[]
[]
[ "django", "django_admin", "personalization", "python" ]
stackoverflow_0001301757_django_django_admin_personalization_python.txt
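Note: a sketch of the row-level filtering the linked tutorial covers, using the ModelAdmin.queryset() hook of Django versions of that era (the BlogPost model and the branch/profile fields are hypothetical placeholders):

from django.contrib import admin
from myapp.models import BlogPost  # hypothetical model with a `branch` field

class BranchScopedAdmin(admin.ModelAdmin):
    def queryset(self, request):
        qs = super(BranchScopedAdmin, self).queryset(request)
        if request.user.is_superuser:
            return qs
        # assumes each supervisor's user profile records their branch
        return qs.filter(branch=request.user.get_profile().branch)

admin.site.register(BlogPost, BranchScopedAdmin)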
Q: Counting python method calls within another method I'm actually trying to do this in Java, but I'm in the process of teaching myself python and it made me wonder if there was an easy/clever way to do this with wrappers or something. I want to know how many times a specific method was called inside another method. For example: def foo(z): #do something return result def bar(x,y): #complicated algorithm/logic involving foo return foobar So for each call to bar with various parameters, I'd like to know how many times foo was called, perhaps with output like this: >>> print bar('xyz',3) foo was called 15 times [results here] >>> print bar('stuv',6) foo was called 23 times [other results here] edit: I realize I could just slap a counter inside bar and dump it when I return, but it would be cool if there was some magic you could do with wrappers to accomplish the same thing. It would also mean I could reuse the same wrappers somewhere else without having to modify any code inside the method. A: Sounds like almost the textbook example for decorators! def counted(fn): def wrapper(*args, **kwargs): wrapper.called += 1 return fn(*args, **kwargs) wrapper.called = 0 wrapper.__name__ = fn.__name__ return wrapper @counted def foo(): return >>> foo() >>> foo.called 1 You could even use another decorator to automate the recording of how many times a function is called inside another function: def counting(other): def decorator(fn): def wrapper(*args, **kwargs): other.called = 0 try: return fn(*args, **kwargs) finally: print '%s was called %i times' % (other.__name__, other.called) wrapper.__name__ = fn.__name__ return wrapper return decorator @counting(foo) def bar(): foo() foo() >>> bar() foo was called 2 times If foo or bar can end up calling themselves, though, you'd need a more complicated solution involving stacks to cope with the recursion. Then you're heading towards a full-on profiler... Possibly this wrapped decorator stuff, which tends to be used for magic, isn't the ideal place to be looking if you're still ‘teaching yourself Python’! This defines a decorator to do it: def count_calls(fn): def _counting(*args, **kwargs): _counting.calls += 1 return fn(*args, **kwargs) _counting.calls = 0 return _counting @count_calls def foo(x): return x def bar(y): foo(y) foo(y) bar(1) print foo.calls After your response - here's a way with a decorator factory... import inspect def make_decorators(): # Mutable shared storage... caller_L = [] callee_L = [] called_count = [0] def caller_decorator(caller): caller_L.append(caller) def counting_caller(*args, **kwargs): # Returning result here separate from the count report in case # the result needs to be used... result = caller(*args, **kwargs) print callee_L[0].__name__, \ 'was called', called_count[0], 'times' called_count[0] = 0 return result return counting_caller def callee_decorator(callee): callee_L.append(callee) def counting_callee(*args, **kwargs): # Next two lines are an alternative to # sys._getframe(1).f_code.co_name mentioned by Ned... current_frame = inspect.currentframe() caller_name = inspect.getouterframes(current_frame)[1][3] if caller_name == caller_L[0].__name__: called_count[0] += 1 return callee(*args, **kwargs) return counting_callee return caller_decorator, callee_decorator caller_decorator, callee_decorator = make_decorators() @callee_decorator def foo(z): #do something return ' foo result' @caller_decorator def bar(x,y): # complicated algorithm/logic simulation... for i in xrange(x+y): foo(i) foobar = 'some result other than the call count that you might use' return foobar bar(1,1) bar(1,2) bar(2,2) And here's the output (tested with Python 2.5.2): foo was called 2 times foo was called 3 times foo was called 4 times
Counting python method calls within another method
I'm actually trying to do this in Java, but I'm in the process of teaching myself python and it made me wonder if there was an easy/clever way to do this with wrappers or something. I want to know how many times a specific method was called inside another method. For example: def foo(z): #do something return result def bar(x,y): #complicated algorithm/logic involving foo return foobar So for each call to bar with various parameters, I'd like to know how many times foo was called, perhaps with output like this: >>> print bar('xyz',3) foo was called 15 times [results here] >>> print bar('stuv',6) foo was called 23 times [other results here] edit: I realize I could just slap a counter inside bar and dump it when I return, but it would be cool if there was some magic you could do with wrappers to accomplish the same thing. It would also mean I could reuse the same wrappers somewhere else without having to modify any code inside the method.
[ "Sounds like almost the textbook example for decorators!\ndef counted(fn):\n def wrapper(*args, **kwargs):\n wrapper.called += 1\n return fn(*args, **kwargs)\n wrapper.called = 0\n wrapper.__name__ = fn.__name__\n return wrapper\n\n@counted\ndef foo():\n return\n\n>>> foo()\n>>> foo.called\n1\n\nYou could even use another decorator to automate the recording of how many times a function is called inside another function:\ndef counting(other):\n def decorator(fn):\n def wrapper(*args, **kwargs):\n other.called = 0\n try:\n return fn(*args, **kwargs)\n finally:\n print '%s was called %i times' % (other.__name__, other.called)\n wrapper.__name__ = fn.__name__\n return wrapper\n return decorator\n\n@counting(foo)\ndef bar():\n foo()\n foo()\n\n>>> bar()\nfoo was called 2 times\n\nIf foo or bar can end up calling themselves, though, you'd need a more complicated solution involving stacks to cope with the recursion. Then you're heading towards a full-on profiler...\nPossibly this wrapped decorator stuff, which tends to be used for magic, isn't the ideal place to be looking if you're still ‘teaching yourself Python’!\n", "This defines a decorator to do it:\ndef count_calls(fn):\n def _counting(*args, **kwargs):\n _counting.calls += 1\n return fn(*args, **kwargs)\n _counting.calls = 0\n return _counting\n\n@count_calls\ndef foo(x):\n return x\n\ndef bar(y):\n foo(y)\n foo(y)\n\nbar(1)\nprint foo.calls\n\n", "After your response - here's a way with a decorator factory...\nimport inspect\n\ndef make_decorators():\n # Mutable shared storage...\n caller_L = []\n callee_L = []\n called_count = [0]\n def caller_decorator(caller):\n caller_L.append(caller)\n def counting_caller(*args, **kwargs):\n # Returning result here separate from the count report in case\n # the result needs to be used...\n result = caller(*args, **kwargs)\n print callee_L[0].__name__, \\\n 'was called', called_count[0], 'times'\n called_count[0] = 0\n return result\n return counting_caller\n\n def callee_decorator(callee):\n callee_L.append(callee)\n def counting_callee(*args, **kwargs):\n # Next two lines are an alternative to\n # sys._getframe(1).f_code.co_name mentioned by Ned...\n current_frame = inspect.currentframe()\n caller_name = inspect.getouterframes(current_frame)[1][3]\n if caller_name == caller_L[0].__name__:\n called_count[0] += 1\n return callee(*args, **kwargs)\n return counting_callee\n\n return caller_decorator, callee_decorator\n\ncaller_decorator, callee_decorator = make_decorators()\n\n@callee_decorator\ndef foo(z):\n #do something\n return ' foo result'\n\n@caller_decorator\ndef bar(x,y):\n # complicated algorithm/logic simulation...\n for i in xrange(x+y):\n foo(i)\n foobar = 'some result other than the call count that you might use'\n return foobar\n\n\nbar(1,1)\nbar(1,2)\nbar(2,2)\n\nAnd here's the output (tested with Python 2.5.2):\nfoo was called 2 times\nfoo was called 3 times\nfoo was called 4 times\n\n" ]
[ 22, 7, 2 ]
[]
[]
[ "profiling", "python" ]
stackoverflow_0001301735_profiling_python.txt
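Note: the answers above copy fn.__name__ onto the wrapper by hand; functools.wraps (available since Python 2.5) does this more completely. A sketch of the same counter using it:

import functools

def counted(fn):
    @functools.wraps(fn)          # preserves __name__, __doc__, etc.
    def wrapper(*args, **kwargs):
        wrapper.called += 1
        return fn(*args, **kwargs)
    wrapper.called = 0
    return wrapper

@counted
def foo():
    pass

foo()
foo()
print foo.called    # 2
print foo.__name__  # 'foo'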
Q: Fastest ways to key-wise add a list of dicts together in python Say I have a bunch of dictionaries a = {'x': 1.0, 'y': 0.5, 'z': 0.25 } b = {'w': 0.5, 'x': 0.2 } There's only two there, but the question is regarding an arbitrary amount. What's the fastest way to find the mean value for each key? The dicts are quite sparse, so there will be a lot of cases where lots of keys aren't present in various dicts. The result I'm looking for is a new dictionary which has all the keys and the mean values for each one. The values are always floats, I'm happy to dip into ctypes. The approach I have is slower than I'd like, possibly because in my case I'm using defaultdicts which means I'm actually initialising values even if they're not there. If this is the cause of the slowness I'm happy to refactor, just want to make sure I'm not missing anything obvious. Edit: I think I was misleading with what the result should be, if the value isn't present it should act as 0.0, so the result for the above example would be: {'w':0.25,'x':0.6,'y':0.25,'z':0.125} So the division is by the total number of unique keys. The main thing I'm wondering is if there's a sneaky way to divide the whole dict by the length in one step, or do the additions in one step. Basically a very fast vector addition and division. I've looked briefly at numpy arrays, but they don't seem to apply to dicts and if I converted the dicts to lists I'd have to remove the sparseness property (by explicitly setting absent values to 0). A: It may be proven through profiling that this isn't quite the fastest but... import collections a = {'x': 1.0, 'y': 0.5, 'z': 0.25 } b = {'w': 0.5, 'x': 0.2 } dicts = [a,b] totals = collections.defaultdict(list) avg = {} for D in dicts: for key,value in D.iteritems(): totals[key].append(value) for key,values in totals.iteritems(): avg[key] = sum(values) / len(values) I'm guessing that allowing Python to use the built-ins sum() and len() is going to gain some performance over calculating the mean as you see new values, but I could sure be wrong about that. This works: import collections data= [ {'x': 1.0, 'y': 0.5, 'z': 0.25 }, {'w': 0.5, 'x': 0.2 } ] tally = collections.defaultdict(lambda: (0.0, 0)) for d in data: for k,v in d.items(): sum, count = tally[k] tally[k] = (sum+v, count+1) results = {} for k, v in tally.items(): t = tally[k] results[k] = t[0]/t[1] print results I don't know if it's faster than yours, since you haven't posted your code. {'y': 0.5, 'x': 0.59999999999999998, 'z': 0.25, 'w': 0.5} I tried in tally to avoid storing all the values again, simply accumulating the sum and count I'd need to compute the average at the end. Often, the time bottleneck in a Python program is in the memory allocator, and using less memory can help a lot with speed. >>> def avg(items): ... return sum(items) / len(items) ... >>> hashes = [a, b] >>> dict([(k, avg([h.get(k) or 0 for h in hashes])) for k in set(sum((h.keys() for h in hashes), []))]) {'y': 0.25, 'x': 0.59999999999999998, 'z': 0.125, 'w': 0.25} Explanation: The set of keys in all of the hashes, no repeats. set(sum((h.keys() for h in hashes), [])) The average value for each key in the above set, using 0 if the value doesn't exist in a particular hash. (k, avg([h.get(k) or 0 for h in hashes])) It is possible that your bottleneck might be due to excessive memory use. Consider using iteritems to leverage the power of generators. Since you say your data is sparse, that will probably not be the most efficient. Consider this alternate usage of iterators: dicts = ... #Assume this is your dataset totals = {} lengths = {} means = {} for d in dicts: for key,value in d.iteritems(): totals.setdefault(key,0) lengths.setdefault(key,0) totals[key] += value lengths[key] += 1 for key,value in totals.iteritems(): means[key] = value / lengths[key] Here totals, lengths, and means are the only data structures you create. This ought to be fairly speedy, since it avoids having to create auxiliary lists and only loops through each dictionary exactly once per key it contains. Here's a second approach that I doubt will be an improvement in performance over the first, but it theoretically could, depending on your data and machine, since it will require less memory allocation: dicts = ... #Assume this is your dataset key_set = Set([]) for d in dicts: key_set.update(d.keys()) means = {} def get_total(dicts, key): vals = (dict[key] for dict in dicts if dict.has_key(key)) return sum(vals) def get_length(dicts, key): vals = (1 for dict in dicts if dict.has_key(key)) return sum(vals) def get_mean(dicts,key): return get_total(dicts,key)/get_length(dicts,key) for key in key_set: means[key] = get_mean(dicts,key) You do end up looping through all dictionaries twice for each key, but need no intermediate data structures other than the key_set. A: scipy.sparse supports sparse matrices -- the dok_matrix form seems reasonably suited to your needs (you'll have to use integer coordinates, though, so a separate pass will be needed to collect and put in any arbitrary but definite order the string keys you currently have). If you have a huge number of very large and sparse "arrays", the performance gains might possibly be worth the complications. It's simple but this could work: a = { 'x': 1.0, 'y': 0.5, 'z': 0.25 } b = { 'w': 0.5, 'x': 0.2 } ds = [a, b] result = {} for d in ds: for k, v in d.iteritems(): result[k] = v + result.get(k, 0) n = len(ds) result = dict((k, amt/n) for k, amt in result.iteritems()) print result I have no idea how it compares to your method since you didn't post any code.
Fastest ways to key-wise add a list of dicts together in python
Say I have a bunch of dictionaries a = {'x': 1.0, 'y': 0.5, 'z': 0.25 } b = {'w': 0.5, 'x': 0.2 } There's only two there, but the question is regarding an arbitrary amount. What's the fastest way to find the mean value for each key? The dicts are quite sparse, so there will be a lot of cases where lots of keys aren't present in various dicts. The result I'm looking for is a new dictionary which has all the keys and the mean values for each one. The values are always floats, I'm happy to dip into ctypes. The approach I have is slower than I'd like, possibly because in my case I'm using defaultdicts which means I'm actually initialising values even if they're not there. If this is the cause of the slowness I'm happy to refactor, just want to make sure I'm not missing anything obvious. Edit: I think I was misleading with what the result should be, if the value isn't present it should act as 0.0, so the result for the above example would be: {'w':0.25,'x':0.6,'y':0.25,'z':0.125} So the division is by the total number of unique keys. The main thing I'm wondering is if there's a sneaky way to divide the whole dict by the length in one step, or do the additions in one step. Basically a very fast vector addition and division. I've looked briefly at numpy arrays, but they don't seem to apply to dicts and if I converted the dicts to lists I'd have to remove the sparseness property (by explicitly setting absent values to 0).
[ "It may be proven through profiling that this isn't quite the fastest but...\nimport collections\n\na = {'x': 1.0, 'y': 0.5, 'z': 0.25 }\nb = {'w': 0.5, 'x': 0.2 }\ndicts = [a,b]\n\ntotals = collections.defaultdict(list)\navg = {}\n\nfor D in dicts:\n for key,value in D.iteritems():\n totals[key].append(value)\n\nfor key,values in totals.iteritems():\n avg[key] = sum(values) / len(values)\n\nI'm guessing that allowing Python to use the built-ins sum() and len() is going to gain some performance over calculating the mean as you see new values, but I could sure be wrong about that.\n", "This works:\nimport collections\n\ndata= [\n {'x': 1.0, 'y': 0.5, 'z': 0.25 },\n {'w': 0.5, 'x': 0.2 }\n ]\n\ntally = collections.defaultdict(lambda: (0.0, 0))\n\nfor d in data:\n for k,v in d.items():\n sum, count = tally[k]\n tally[k] = (sum+v, count+1)\n\nresults = {}\nfor k, v in tally.items():\n t = tally[k]\n results[k] = t[0]/t[1]\n\nprint results\n\nI don't know if it's faster than yours, since you haven't posted your code.\n{'y': 0.5, 'x': 0.59999999999999998, 'z': 0.25, 'w': 0.5}\n\nI tried in tally to avoid storing all the values again, simply accumulating the sum and count I'd need to compute the average at the end. Often, the time bottleneck in a Python program is in the memory allocator, and using less memory can help a lot with speed.\n", ">>> def avg(items):\n... return sum(items) / len(items)\n... \n>>> hashes = [a, b]\n>>> dict([(k, avg([h.get(k) or 0 for h in hashes])) for k in set(sum((h.keys() for h in hashes), []))])\n{'y': 0.25, 'x': 0.59999999999999998, 'z': 0.125, 'w': 0.25}\n\nExplanation:\n\nThe set of keys in all of the hashes, no repeats.\nset(sum((h.keys() for h in hashes), []))\n\nThe average value for each key in the above set, using 0 if the value doesn't exist in a particular hash.\n(k, avg([h.get(k) or 0 for h in hashes]))\n\n\n", "It is possible that your bottleneck might be due to excessive memory use. Consider using iteritems to leverage the power of generators.\nSince you say your data is sparse, that will probably not be the most efficient. Consider this alternate usage of iterators:\ndicts = ... #Assume this is your dataset\ntotals = {}\nlengths = {}\nmeans = {}\nfor d in dicts:\n for key,value in d.iteritems():\n totals.setdefault(key,0)\n lengths.setdefault(key,0)\n totals[key] += value\n length[key] += 1\nfor key,value in totals.iteritems():\n means[key] = value / lengths[key]\n\nHere totals, lengths, and means are the only data structures you create. This ought to be fairly speedy, since it avoids having to create auxiliary lists and only loops through each dictionary exactly once per key it contains.\nHere's a second approach that I doubt will be an improvement in performance over the first, but it theoretically could, depending on your data and machine, since it will require less memory allocation:\ndicts = ... 
#Assume this is your dataset\nkey_set = Set([])\nfor d in dicts: key_set.update(d.keys())\nmeans = {}\ndef get_total(dicts, key):\n vals = (dict[key] for dict in dicts if dict.has_key(key))\n return sum(vals)\ndef get_length(dicts, key):\n vals = (1 for dict in dicts if dict.has_key(key))\n return sum(vals)\ndef get_mean(dicts,key):\n return get_total(dicts,key)/get_length(dicts,key)\nfor key in key_set:\n means[key] = get_mean(dicts,key)\n\nYou do end up looping through all dictionaries twice for each key, but need no intermediate data structures other than the key_set.\n", "scipy.sparse supports sparse matrices -- the dok_matrix form seems reasonably suited to your needs (you'll have to use integer coordinates, though, so a separate pass will be needed to collect and put in any arbitrary but definite order the string keys you currently have). If you have a huge number of very large and sparse \"arrays\", the performance gains might possibly be worth the complications.\n", "It's simple but this could work:\na = { 'x': 1.0, 'y': 0.5, 'z': 0.25 }\nb = { 'w': 0.5, 'x': 0.2 }\n\nds = [a, b]\nresult = {}\n\nfor d in ds:\n for k, v in d.iteritems():\n result[k] = v + result.get(k, 0)\n\nn = len(ds)\nresult = dict((k, amt/n) for k, amt in result.iteritems())\n\nprint result\n\nI have no idea how it compares to your method since you didn't post any code.\n" ]
[ 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0001301149_dictionary_python.txt
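Note: on Python 2.7+ (newer than the answers above), collections.Counter does the key-wise addition directly, treating missing keys as 0 — a sketch matching the asker's expected output:

from collections import Counter  # Python 2.7+

dicts = [{'x': 1.0, 'y': 0.5, 'z': 0.25}, {'w': 0.5, 'x': 0.2}]

totals = Counter()
for d in dicts:
    totals.update(d)          # key-wise addition; absent keys act as 0.0

n = float(len(dicts))
means = dict((k, v / n) for k, v in totals.iteritems())
print means  # {'y': 0.25, 'x': 0.6, 'z': 0.125, 'w': 0.25} (float repr may vary)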
Q: unable to import libxml2mod from the python script File "/usr/local/lib/python2.5/site-packages/libxml2.py", line 1, in <module> import libxml2mod ImportError: /usr/local/lib/python2.5/site-packages/libxml2mod.so: undefined symbol:xmlTextReaderSetup >>> import libxml2mod >>> import libxml2 >>> At the Python prompt it works fine! Can anyone explain why my program is not working from the .py file, when the import works perfectly from the Python prompt? A: I can only suggest that your paths are different for some reason. Either that, or you are not using the same python interpreter in both cases. I have experienced this when I happen to have a couple of interpreters, and the wrong one is either default, or specified in the #! section of the script.
unable to import libxml2mod from the python script
File "/usr/local/lib/python2.5/site-packages/libxml2.py", line 1, in <module> import libxml2mod ImportError: /usr/local/lib/python2.5/site-packages/libxml2mod.so: undefined symbol:xmlTextReaderSetup >>> import libxml2mod >>> import libxml2 >>> on Python Prompt it works fine !! can anyone has idea why my program is not working from .py file as import is working perfect from python prompt.
[ "I can only suggest that your paths are different for some reason. Either that, or you are not using the same python interpreter in both cases.\nI have experienced this when I happen to have a couple of interpreters, and the wrong one is either default, or specified in the #! section of the script.\n" ]
[ 2 ]
[]
[]
[ "importerror", "python" ]
stackoverflow_0001301167_importerror_python.txt
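Note: to check the "two interpreters" theory from the answer, a quick diagnostic to run both at the interactive prompt and from the failing script, then compare the output:

import sys
print sys.executable  # which python binary is actually running?
print sys.version
print [p for p in sys.path if 'site-packages' in p]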
Q: Python PEP8 printing wrapped strings without indent There is probably an easy answer for this, just not sure how to tease it out of my searches. I adhere to PEP8 in my python code, and I'm currently using OptionParser for a script I'm writing. To prevent lines from going beyond a width of 80, I use the backslash where needed. For example: if __name__=='__main__': usage = '%prog [options]\nWithout any options, will display 10 random \ users of each type.' parser = OptionParser(usage) That indent after the backslash results in: ~$ ./er_usersearch -h Usage: er_usersearch [options] Without any options, will display 10 random users of each type. That gap after "random" bugs me. I could do: if __name__=='__main__': usage = '%prog [options]\nWithout any options, will display 10 random \ users of each type.' parser = OptionParser(usage) But that bugs me just as much. This seems silly: if __name__=='__main__': usage = ''.join(['%prog [options]\nWithout any options, will display', ' 10 random users of each type.']) parser = OptionParser(usage) There must be a better way? A: Use automatic string concatenation + implicit line continuation: long_string = ("Line 1 " "Line 2 " "Line 3 ") >>> long_string 'Line 1 Line 2 Line 3 ' This works: if __name__=='__main__': usage = ('%prog [options]\nWithout any options, will display 10 random ' 'users of each type.') parser = OptionParser(usage) Although I'd lay it out like this: if __name__=='__main__': usage = ('%prog [options]\n' 'Without any options, will display 10 random users ' 'of each type.') parser = OptionParser(usage) (So I start a new line when there's a \n in the string, as well as when I need to word wrap the source code.) try this: if __name__=='__main__': usage = '%prog [options]\nWithout any options, will display 10 random ' \ 'users of each type.' parser = OptionParser(usage)
Python PEP8 printing wrapped strings without indent
There is probably an easy answer for this, just not sure how to tease it out of my searches. I adhere to PEP8 in my python code, and I'm currently using OptionParser for a script I'm writing. To prevent lines from going beyond a width of 80, I use the backslash where needed. For example: if __name__=='__main__': usage = '%prog [options]\nWithout any options, will display 10 random \ users of each type.' parser = OptionParser(usage) That indent after the backslash results in: ~$ ./er_usersearch -h Usage: er_usersearch [options] Without any options, will display 10 random users of each type. That gap after "random" bugs me. I could do: if __name__=='__main__': usage = '%prog [options]\nWithout any options, will display 10 random \ users of each type.' parser = OptionParser(usage) But that bugs me just as much. This seems silly: if __name__=='__main__': usage = ''.join(['%prog [options]\nWithout any options, will display', ' 10 random users of each type.']) parser = OptionParser(usage) There must be a better way?
[ "Use automatic string concatenation + implicit line continuation:\nlong_string = (\"Line 1 \"\n \"Line 2 \"\n \"Line 3 \")\n\n\n>>> long_string\n'Line 1 Line 2 Line 3 '\n\n", "This works:\nif __name__=='__main__':\n usage = ('%prog [options]\\nWithout any options, will display 10 random '\n 'users of each type.')\n parser = OptionParser(usage)\n\nAlthough I'd lay it out like this:\nif __name__=='__main__':\n usage = ('%prog [options]\\n'\n 'Without any options, will display 10 random users '\n 'of each type.')\n parser = OptionParser(usage)\n\n(So I start a new line when there's a \\n in the string, as well as when I need to word wrap the source code.)\n", "try this:\nif __name__=='__main__':\n usage = '%prog [options]\\nWithout any options, will display 10 random ' \\\n 'users of each type.'\n parser = OptionParser(usage)\n\n" ]
[ 28, 3, 1 ]
[]
[]
[ "pep8", "python", "word_wrap" ]
stackoverflow_0001302364_pep8_python_word_wrap.txt
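Note: the parenthesized form works because adjacent string literals are concatenated at compile time, so there is no runtime join cost — a quick check with dis:

usage = ('%prog [options]\n'
         'Without any options, will display 10 random '
         'users of each type.')
print usage

import dis
dis.dis(lambda: 'Line 1 ' 'Line 2')  # a single LOAD_CONST: joined at compile time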
Q: Produce multiple files from a single file in python I have a file like below. Sequence A.1.1 Bacteria ATGCGCGATATAGGCCT ATTATGCGCGCGCGC Sequence A.1.2 Virus ATATATGCGCCGCGCGTA ATATATATGCGCGCCGGC Sequence B.1.21 Chimpanzee ATATAGCGCGCGCGCGAT ATATATATGCGCG Sequence C.21.4 Human ATATATATGCCGCGCG ATATAATATC I want to make separate files for sequences of category A, B and C from one single file. Kindly suggest some reading material for breaking this code. Thanks. The output should be three files, one for 'A', second file for Sequences with 'B' and third file for sequences with 'C'. A: It's not 100% clear what you want to do, but something like: currout = None seqname2file = dict() for line in open('thefilewhosenameyoudonottellus.txt'): if line.startswith('Sequence '): seqname = line[9] # A or B or C if seqname not in seqname2file: filename = 'outputfileforsequence_%s.txt' % seqname seqname2file[seqname] = open(filename, 'w') currout = seqname2file[seqname] currout.write(line) for f in seqname2file.values(): f.close() should get you pretty close -- if you want three separate files (one each for A, B and C) that among them contain all the lines from the input file, it's just about done except you'll probably need better filenames (but you don't let us in on the secret of what those might be;-), otherwise some tweaks should get it there. BTW, it always helps immensely (to help you more effectively rather than stumbling in the dark and guessing) if you also give examples of what output results you want for the input data example you give!-) A: I'm not sure exactly what you want the output to be, but it sounds like you need something like: #!/usr/bin/python # Open the input file fhIn = open("input_file.txt", "r") # Open the output files and store their handles in a dictionary fhOut = {} fhOut['A'] = open("sequence_a.txt", "w") fhOut['B'] = open("sequence_b.txt", "w") fhOut['C'] = open("sequence_c.txt", "w") # Create a regexp to find the line naming the sequence Matcher = re.compile(r'^Sequence (?P<sequence>[A-C])') # Iterate through each line in the file CurrentSequence = None for line in fhIn: # If the line is a sequence identifier... m = Matcher.match(line) if m is not None: # Select the appropriate sequence from the regexp match CurrentSequence = m.group('sequence') # Uncomment the following two lines to skip blank lines # elif len(line.strip()) == 0: # pass # Print out the line to the current sequence output file # (change to else if you don't want to print the sequence titles) if CurrentSequence is not None: fhOut[CurrentSequence].write(line) # Close all the file handles fhIn.close() fhOut['A'].close() fhOut['B'].close() fhOut['C'].close() Completely untested though...
Produce multiple files from a single file in python
I have a file like below. Sequence A.1.1 Bacteria ATGCGCGATATAGGCCT ATTATGCGCGCGCGC Sequence A.1.2 Virus ATATATGCGCCGCGCGTA ATATATATGCGCGCCGGC Sequence B.1.21 Chimpanzee ATATAGCGCGCGCGCGAT ATATATATGCGCG Sequence C.21.4 Human ATATATATGCCGCGCG ATATAATATC I want to make separate files for sequences of category A, B and C from one single file. Kindly suggest some reading material for breaking this problem down. Thanks. The output should be three files: one for 'A', a second file for sequences with 'B', and a third file for sequences with 'C'.
[ "It's not 100% clear what you want to do, but something like:\ncurrout = None\nseqname2file = dict()\n\nfor line in open('thefilewhosenameyoudonottellus.txt'):\n if line.startswith('Sequence '): \n seqname = line[9] # A or B or C\n if seqname not in seqname2file:\n filename = 'outputfileforsequence_%s.txt' % seqname\n seqname2file[seqname] = open(filename, 'w')\n currout = seqname2file[seqname]\n currout.write(line)\n\nfor f in seqname2file.values():\n f.close()\n\nshould get you pretty close -- if you want three separate files (one each for A, B and C) that among them contain all the lines from the input file, it's just about done except you'll probably need better filenames (but you don't let us in on the secret of what those might be;-), otherwise some tweaks should get it there.\nBTW, it always helps immensely (to help you more effectively rather than stumbling in the dark and guessing) if you also give examples of what output results you want for the input data example you give!-)\n", "I'm not sure exactly what you want the output to be, but it sounds like you need something like:\n#!/usr/bin/python\n\n# Open the input file\nfhIn = open(\"input_file.txt\", \"r\")\n\n# Open the output files and store their handles in a dictionary\nfhOut = {}\nfhOut['A'] = open(\"sequence_a.txt\", \"w\")\nfhOut['B'] = open(\"sequence_b.txt\", \"w\")\nfhOut['C'] = open(\"sequence_c.txt\", \"w\")\n\n# Create a regexp to find the line naming the sequence\nMatcher = re.compile(r'^Sequence (?P<sequence>[A-C])')\n\n# Iterate through each line in the file\nCurrentSequence = None\nfor line in fhIn:\n # If the line is a sequence identifier...\n m = Matcher.match(line)\n if m is not None:\n # Select the appropriate sequence from the regexp match\n CurrentSequence = m.group('sequence')\n # Uncomment the following two lines to skip blank lines\n # elif len(line.strip()) == 0:\n # pass\n # Print out the line to the current sequence output file\n # (change to else if you don't want to print the sequence titles)\n if CurrentSequence is not None:\n fhOut[CurrentSequence].write(line)\n\n# Close all the file handles\nfhIn.close()\nfhOut['A'].close()\nfhOut['B'].close()\nfhOut['C'].close()\n\nCompletely untested though...\n" ]
[ 2, 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0001302499_file_python.txt
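Note: a variant of the first answer that captures the category letter with a regex and closes all handles even on error (the file names here are hypothetical):

import re

header = re.compile(r'^Sequence ([A-Z])\.')
handles = {}
current = None
try:
    for line in open('input_file.txt'):
        m = header.match(line)
        if m:
            cat = m.group(1)
            if cat not in handles:
                handles[cat] = open('sequences_%s.txt' % cat, 'w')
            current = handles[cat]
        if current is not None:
            current.write(line)
finally:
    for f in handles.values():
        f.close()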
Q: python for firefox extensions? Can I use Python in Firefox extensions? Does it work? A: Yes, through an extension for Mozilla, Python Extension (pythonext). Originally hosted on mozdev, the PythonExt project has moved to Google Code; you can see it at PythonExt on Google Code.
python for firefox extensions?
Can I use Python in Firefox extensions? Does it work?
[ "Yes, through an extension for mozilla, Python Extension (pythonext).\nOriginally hosted in mozdev, PythonExt project have move to Google code, you can see it in PythonExt in Google code.\n" ]
[ 25 ]
[]
[]
[ "firefox", "firefox_addon", "plugins", "python" ]
stackoverflow_0001302567_firefox_firefox_addon_plugins_python.txt
Q: cutdown uuid further to make short string I need to generate unique record id for the given unique string. I tried using uuid format which seems to be good. But we feel that is lengthly. so we need to cutdown the uuid string 9f218a38-12cd-5942-b877-80adc0589315 to smaller. By removing '-' we can save 4 chars. What is the safest part to remove from uuid? We don't need universally unique id but we like to use uuid as a source but cut down strings. We need unique id specific to site/database (SQL Server/ADO.NET Data services). Any idea or sample from any language is fine Thanks in advance A: Why not instead just convert it to a base 64 string? You can cut it down to 22 characters that way. Storing UUID as base64 String A: If you are using MS-SQL you should probably just use the uniqueindentifier datatype, it is both compact (16 bytes) and since the SQL engine knows about it it can optimize indexes and queries using it. A: An UUID provides (almost) 128 bits of uniqueness. You may shorten it to 16 binary bytes, or 22 base64-encoded characters. I wouldn't recommend removing any part of a UUID, otherwise, it just loses its sense. UUIDs were designed so that all the 128 bits have meaning. If you want less than that, you should use some other schema. For example, if you could guarantee that only version 4 UUIDs are used, then you could take just the first 32 bits, or just the last 32 bits. You lose uniqueness, but you have pretty random numbers. Just avoid the bits that are fixed (version and variant). But if you can't guarantee that, you will have real problems. For version 1 UUIDs, the first bits will not be unique for UUIDs generated in the same day, and the last bits will not be unique for UUIDs generated in the same system. Even if you CRC the UUID, it is not guaranteed that you will have 16 or 32 bits of uniqueness. In this case, just use some other scheme. Generate a 32-bit random number using the system random number generator and use that as your unique ID. Don't rely on UUIDs if you intend on stripping its length. A: The UUID is 128 bits or 16 bytes. With no encoding, you could get it as low as 16 bytes. UUIDs are commonly written in hexadecimal, making them 32 byte readable strings. With other encodings, you get different results: base-64 turns 3 8-bit bytes into 4 6-bit characters, so 16 bytes of data becomes 22 characters long base-85 turns 4 8-bit bytes into 5 6.4-bit characters, so 16 bytes of data becomes 20 characters long It all depends on if you want readable strings and how standard/common an encoding you want to use. A: A UUID has 128 bits. Have you considered doing a CRC of it? That could get it down to 16 or 32 bits easily, and would use all the original information. If a CRC isn't good enough, you could always use the first few bytes of a proper hash (SHA256, for example). If you really want to just cut down the UUID, the format of it is described in RFC 4122. You should be able to figure out what parts your implementation doesn't need from that.
cutdown uuid further to make short string
I need to generate a unique record ID for a given unique string. I tried using the UUID format, which seems to be good, but we feel it is too lengthy, so we need to cut the UUID string 9f218a38-12cd-5942-b877-80adc0589315 down to something smaller. By removing '-' we can save 4 chars. What is the safest part to remove from a UUID? We don't need a universally unique ID, but we'd like to use a UUID as a source and cut the string down. We need a unique ID specific to the site/database (SQL Server/ADO.NET Data Services). Any idea or sample from any language is fine. Thanks in advance.
[ "Why not instead just convert it to a base 64 string? You can cut it down to 22 characters that way.\nStoring UUID as base64 String\n", "If you are using MS-SQL you should probably just use the uniqueindentifier datatype, it is both compact (16 bytes) and since the SQL engine knows about it it can optimize indexes and queries using it. \n", "An UUID provides (almost) 128 bits of uniqueness. You may shorten it to 16 binary bytes, or 22 base64-encoded characters. I wouldn't recommend removing any part of a UUID, otherwise, it just loses its sense. UUIDs were designed so that all the 128 bits have meaning. If you want less than that, you should use some other schema.\nFor example, if you could guarantee that only version 4 UUIDs are used, then you could take just the first 32 bits, or just the last 32 bits. You lose uniqueness, but you have pretty random numbers. Just avoid the bits that are fixed (version and variant).\nBut if you can't guarantee that, you will have real problems. For version 1 UUIDs, the first bits will not be unique for UUIDs generated in the same day, and the last bits will not be unique for UUIDs generated in the same system. Even if you CRC the UUID, it is not guaranteed that you will have 16 or 32 bits of uniqueness.\nIn this case, just use some other scheme. Generate a 32-bit random number using the system random number generator and use that as your unique ID. Don't rely on UUIDs if you intend on stripping its length.\n", "The UUID is 128 bits or 16 bytes. With no encoding, you could get it as low as 16 bytes. UUIDs are commonly written in hexadecimal, making them 32 byte readable strings. With other encodings, you get different results:\n\nbase-64 turns 3 8-bit bytes into 4 6-bit characters, so 16 bytes of data becomes 22 characters long\nbase-85 turns 4 8-bit bytes into 5 6.4-bit characters, so 16 bytes of data becomes 20 characters long\n\nIt all depends on if you want readable strings and how standard/common an encoding you want to use.\n", "A UUID has 128 bits. Have you considered doing a CRC of it? That could get it down to 16 or 32 bits easily, and would use all the original information. If a CRC isn't good enough, you could always use the first few bytes of a proper hash (SHA256, for example).\nIf you really want to just cut down the UUID, the format of it is described in RFC 4122. You should be able to figure out what parts your implementation doesn't need from that.\n" ]
[ 10, 3, 2, 2, 0 ]
[]
[]
[ "c#", "python", "string", "uniqueidentifier", "uuid" ]
stackoverflow_0001302057_c#_python_string_uniqueidentifier_uuid.txt
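Note: a sketch of the base64 idea from the first answer — 16 raw bytes become 22 URL-safe characters, and the transformation is reversible:

import uuid
import base64

u = uuid.uuid4()
short = base64.urlsafe_b64encode(u.bytes).rstrip('=')  # drop the fixed '==' padding
print u, '->', short, '(%d chars)' % len(short)        # 36 chars -> 22 chars

restored = uuid.UUID(bytes=base64.urlsafe_b64decode(short + '=='))
assert restored == u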
Q: What possible values does datetime.strptime() accept for %Z? Python's datetime.strptime() is documented as supporting a timezone in the %Z field. So, for example: In [1]: datetime.strptime('2009-08-19 14:20:36 UTC', "%Y-%m-%d %H:%M:%S %Z") Out[1]: datetime.datetime(2009, 8, 19, 14, 20, 36) However, "UTC" seems to be the only timezone I can get it to support: In [2]: datetime.strptime('2009-08-19 14:20:36 EDT', "%Y-%m-%d %H:%M:%S %Z") ValueError: time data '2009-08-19 14:20:36 EDT' does not match format '%Y-%m-%d %H:%M:%S %Z' In [3]: datetime.strptime('2009-08-19 14:20:36 America/Phoenix', "%Y-%m-%d %H:%M:%S %Z") ValueError: time data '2009-08-19 14:20:36 America/Phoenix' does not match format '%Y-%m-%d %H:%M:%S %Z' In [4]: datetime.strptime('2009-08-19 14:20:36 -0700', "%Y-%m-%d %H:%M:%S %Z") ValueError: time data '2009-08-19 14:20:36 -0700' does not match format '%Y-%m-%d %H:%M:%S %Z' What format is it expecting for %Z? Or, how do I represent a timezone other than UTC? A: I gather they are GMT, UTC, and whatever is listed in time.tzname. >>> for t in time.tzname: ... print t ... Eastern Standard Time Eastern Daylight Time >>> datetime.strptime('2009-08-19 14:20:36 Eastern Standard Time', "%Y-%m-%d %H:%M:%S %Z") datetime.datetime(2009, 8, 19, 14, 20, 36) >>> datetime.strptime('2009-08-19 14:20:36 UTC', "%Y-%m-%d %H:%M:%S %Z") datetime.datetime(2009, 8, 19, 14, 20, 36) >>> datetime.strptime('2009-08-19 14:20:36 GMT', "%Y-%m-%d %H:%M:%S %Z") datetime.datetime(2009, 8, 19, 14, 20, 36) These settings are machine-specific, of course, and yours will be different in all likelihood. A: This is from the time module, but I'm almost certain it applies to datetime: Support for the %Z directive is based on the values contained in tzname and whether daylight is true. Because of this, it is platform-specific except for recognizing UTC and GMT which are always known (and are considered to be non-daylight savings timezones). https://docs.python.org/library/time.html On my system: >>> import time >>> time.tzname ('PST', 'PDT') Using anything but these in datetime.strptime results in an exception. So, look to see what you have available on your machine.
What possible values does datetime.strptime() accept for %Z?
Python's datetime.strptime() is documented as supporting a timezone in the %Z field. So, for example: In [1]: datetime.strptime('2009-08-19 14:20:36 UTC', "%Y-%m-%d %H:%M:%S %Z") Out[1]: datetime.datetime(2009, 8, 19, 14, 20, 36) However, "UTC" seems to be the only timezone I can get it to support: In [2]: datetime.strptime('2009-08-19 14:20:36 EDT', "%Y-%m-%d %H:%M:%S %Z") ValueError: time data '2009-08-19 14:20:36 EDT' does not match format '%Y-%m-%d %H:%M:%S %Z' In [3]: datetime.strptime('2009-08-19 14:20:36 America/Phoenix', "%Y-%m-%d %H:%M:%S %Z") ValueError: time data '2009-08-19 14:20:36 America/Phoenix' does not match format '%Y-%m-%d %H:%M:%S %Z' In [4]: datetime.strptime('2009-08-19 14:20:36 -0700', "%Y-%m-%d %H:%M:%S %Z") ValueError: time data '2009-08-19 14:20:36 -0700' does not match format '%Y-%m-%d %H:%M:%S %Z' What format is it expecting for %Z? Or, how do I represent a timezone other than UTC?
[ "I gather they are GMT, UTC, and whatever is listed in time.tzname.\n>>> for t in time.tzname:\n... print t\n...\nEastern Standard Time\nEastern Daylight Time\n>>> datetime.strptime('2009-08-19 14:20:36 Eastern Standard Time', \"%Y-%m-%d %H:%M:%S %Z\")\ndatetime.datetime(2009, 8, 19, 14, 20, 36)\n>>> datetime.strptime('2009-08-19 14:20:36 UTC', \"%Y-%m-%d %H:%M:%S %Z\")\ndatetime.datetime(2009, 8, 19, 14, 20, 36)\n>>> datetime.strptime('2009-08-19 14:20:36 GMT', \"%Y-%m-%d %H:%M:%S %Z\")\ndatetime.datetime(2009, 8, 19, 14, 20, 36)\n\nThese settings are machine-specific, of course, and yours will be different in all likelihood.\n", "This is from the time module, but I'm almost certain it applies to datetime:\n\nSupport for the %Z directive is based\n on the values contained in tzname and\n whether daylight is true. Because of\n this, it is platform-specific except\n for recognizing UTC and GMT which are\n always known (and are considered to be\n non-daylight savings timezones).\n\nhttps://docs.python.org/library/time.html\nOn my system:\n>>> import time\n>>> time.tzname\n('PST', 'PDT')\n\nUsing anything but these in datetime.strptime results in an exception. So, look to see what you have available on your machine.\n" ]
[ 9, 4 ]
[]
[]
[ "datetime", "python" ]
stackoverflow_0001302701_datetime_python.txt
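Note: for the numeric-offset case ('-0700'), which %Z will not parse, the standard library can handle RFC 2822-style dates — a sketch (note the date format differs from the asker's):

import datetime
from email.utils import parsedate_tz, mktime_tz

parsed = parsedate_tz('Wed, 19 Aug 2009 14:20:36 -0700')
print parsed[-1]                                        # -25200: the offset in seconds
utc = datetime.datetime.utcfromtimestamp(mktime_tz(parsed))
print utc                                               # 2009-08-19 21:20:36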
Q: Django popup box Error?? Running development web server I have my Django site up and running, and everything works fine EXCEPT: When I first go to my site http://127.0.0.1:8000 A popup box comes up and says "The page at http://127.0.0.1:8000 says" And just sits there You have to hit OK before anything is displayed. What is going on here? A: You must have a Javascript alert box in your template somewhere.
Django popup box Error?? Running development web server
I have my Django site up and running, and everything works fine EXCEPT: when I first go to my site http://127.0.0.1:8000, a popup box comes up and says "The page at http://127.0.0.1:8000 says" and just sits there. You have to hit OK before anything is displayed. What is going on here?
[ "You must have a Javascript alert box in your template somewhere.\n" ]
[ 1 ]
[]
[]
[ "django", "python", "webserver" ]
stackoverflow_0001302040_django_python_webserver.txt
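Note: a quick way to hunt for the stray alert() across the project's templates (the directory name is hypothetical):

import os

for root, dirs, files in os.walk('templates'):
    for name in files:
        path = os.path.join(root, name)
        lineno = 0
        for line in open(path):
            lineno += 1
            if 'alert(' in line:
                print '%s:%d: %s' % (path, lineno, line.strip())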
Q: Convert a nested dataset to a flat dataset, while retaining enough data to convert it back to nested set Say I have a dataset like (1, 2, (3, 4), (5, 6), (7, 8, (9, 0))) I want to convert it to a (semi) flat representation like, ( (1, 2), (1, 2, 3, 4), (1, 2, 5, 6), (1, 2, 7, 8), (1, 2, 7, 8, 9, 0), ) If you use this, (taken from SO) def flatten(iterable): for i, item in enumerate(iterable): if hasattr(item, '__iter__'): for nested in flatten(item): yield nested else: yield item this will convert it to a list like (after iterating) [1, 2, 3, 4, 5, 6, 7, 8, 9] But I can't get the original from this representation, while I can get the original back from the first. (If every tuple has 2 elements only) A: This will give the example output. Don't know if that's really the best way of representing the model you want, though... def combineflatten(seq): items= tuple(item for item in seq if not isinstance(item, tuple)) yield items for item in seq: if isinstance(item, tuple): for yielded in combineflatten(item): yield items+yielded >>> tuple(combineflatten((1, 2, (3, 4), (5, 6), (7, 8, (9, 0))))) ((1, 2), (1, 2, 3, 4), (1, 2, 5, 6), (1, 2, 7, 8), (1, 2, 7, 8, 9, 0)) A: How about using a different "flat" representation, one which can be converted back: [1, 2, '(', 3, 4, ')', '(', 5, 6, ')', '(', 7, 8, '(', 9, 0, ')', ')']
Convert a nested dataset to a flat dataset, while retaining enough data to convert it back to nested set
Say I have a dataset like (1, 2, (3, 4), (5, 6), (7, 8, (9, 0))) I want to convert it to a (semi) flat representation like, ( (1, 2), (1, 2, 3, 4), (1, 2, 5, 6), (1, 2, 7, 8), (1, 2, 7, 8, 9, 0), ) If you use this, (taken from SO) def flatten(iterable): for i, item in enumerate(iterable): if hasattr(item, '__iter__'): for nested in flatten(item): yield nested else: yield item this will convert it to a list like (after iterating) [1, 2, 3, 4, 5, 6, 7, 8, 9] But I can't get the original from this representation, while I can get the original back from the first. (If every tuple has 2 elements only)
[ "This will give the example output. Don't know if that's really the best way of representing the model you want, though...\ndef combineflatten(seq):\n items= tuple(item for item in seq if not isinstance(item, tuple))\n yield items\n for item in seq:\n if isinstance(item, tuple):\n for yielded in combineflatten(item):\n yield items+yielded\n\n>>> tuple(combineflatten((1, 2, (3, 4), (5, 6), (7, 8, (9, 0)))))\n((1, 2), (1, 2, 3, 4), (1, 2, 5, 6), (1, 2, 7, 8), (1, 2, 7, 8, 9, 0))\n\n", "How about a using a different \"flat\" representation, one which can be converted back:\n[1, 2, '(', 3, 4, ')', '(', 5, 6, ')', '(', 7, 8, '(', 9, 0, ')', ')']\n\n" ]
[ 2, 0 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0001302653_algorithm_python.txt
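Note: the bracket-token representation in the second answer is easy to invert with a small stack-based parser — a sketch:

def unflatten(tokens):
    stack = [[]]
    for tok in tokens:
        if tok == '(':
            stack.append([])            # start a new nested tuple
        elif tok == ')':
            finished = tuple(stack.pop())
            stack[-1].append(finished)  # attach it to its parent
        else:
            stack[-1].append(tok)
    return tuple(stack[0])

flat = [1, 2, '(', 3, 4, ')', '(', 5, 6, ')', '(', 7, 8, '(', 9, 0, ')', ')']
print unflatten(flat)  # (1, 2, (3, 4), (5, 6), (7, 8, (9, 0)))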
Q: How to set initial size for a dictionary in Python? I'm putting around 4 million different keys into a Python dictionary. Creating this dictionary takes about 15 minutes and consumes about 4GB of memory on my machine. After the dictionary is fully created, querying the dictionary is fast. I suspect that dictionary creation is so resource consuming because the dictionary is very often rehashed (as it grows enormously). Is is possible to create a dictionary in Python with some initial size or bucket number? My dictionary points from a number to an object. class MyObject: def __init__(self): # some fields... d = {} d[i] = MyObject() # 4M times on different key... A: With performance issues it's always best to measure. Here are some timings: d = {} for i in xrange(4000000): d[i] = None # 722ms d = dict(itertools.izip(xrange(4000000), itertools.repeat(None))) # 634ms dict.fromkeys(xrange(4000000)) # 558ms s = set(xrange(4000000)) dict.fromkeys(s) # Not including set construction 353ms The last option doesn't do any resizing, it just copies the hashes from the set and increments references. As you can see, the resizing isn't taking a lot of time. It's probably your object creation that is slow. A: I tried : a = dict.fromkeys((range(4000000))) It creates a dictionary with 4 000 000 entries in about 3 seconds. After that, setting values are really fast. So I guess dict.fromkey is definitly the way to go. A: If you know C, you can take a look at dictobject.c and the Notes on Optimizing Dictionaries. There you'll notice the parameter PyDict_MINSIZE: PyDict_MINSIZE. Currently set to 8. This parameter is defined in dictobject.h. So you could change it when compiling Python but this probably is a bad idea. A: You can try to separate key hashing from the content filling with dict.fromkeys classmethod. It'll create a dict of a known size with all values defaulting to either None or a value of your choice. After that you could iterate over it to fill with the values. It'll help you to time the actual hashing of all keys. Not sure if you'd be able significantly increase the speed though. A: If your datas need/can be stored on disc perhaps you can store your datas in a BSDDB database or use Cpickle to load/store your dictionnary A: Do you initialize all keys with new "empty" instances of the same type? Is it not possible to write a defaultdict or something that will create the object when it is accessed?
How to set initial size for a dictionary in Python?
I'm putting around 4 million different keys into a Python dictionary. Creating this dictionary takes about 15 minutes and consumes about 4GB of memory on my machine. After the dictionary is fully created, querying the dictionary is fast. I suspect that dictionary creation is so resource-consuming because the dictionary is very often rehashed (as it grows enormously). Is it possible to create a dictionary in Python with some initial size or bucket number? My dictionary points from a number to an object. class MyObject: def __init__(self): # some fields... d = {} d[i] = MyObject() # 4M times on different key...
[ "With performance issues it's always best to measure. Here are some timings:\n d = {}\n for i in xrange(4000000):\n d[i] = None\n # 722ms\n\n d = dict(itertools.izip(xrange(4000000), itertools.repeat(None)))\n # 634ms\n\n dict.fromkeys(xrange(4000000))\n # 558ms\n\n s = set(xrange(4000000))\n dict.fromkeys(s)\n # Not including set construction 353ms\n\nThe last option doesn't do any resizing, it just copies the hashes from the set and increments references. As you can see, the resizing isn't taking a lot of time. It's probably your object creation that is slow. \n", "I tried :\na = dict.fromkeys((range(4000000)))\n\nIt creates a dictionary with 4 000 000 entries in about 3 seconds. After that, setting values are really fast. So I guess dict.fromkey is definitly the way to go.\n", "If you know C, you can take a look at dictobject.c and the Notes on Optimizing Dictionaries. There you'll notice the parameter PyDict_MINSIZE:\n\nPyDict_MINSIZE. Currently set to 8.\n\nThis parameter is defined in dictobject.h. So you could change it when compiling Python but this probably is a bad idea.\n", "You can try to separate key hashing from the content filling with dict.fromkeys classmethod. It'll create a dict of a known size with all values defaulting to either None or a value of your choice. After that you could iterate over it to fill with the values. It'll help you to time the actual hashing of all keys. Not sure if you'd be able significantly increase the speed though.\n", "If your datas need/can be stored on disc perhaps you can store your datas in a BSDDB database or use Cpickle to load/store your dictionnary\n", "Do you initialize all keys with new \"empty\" instances of the same type? Is it not possible to write a defaultdict or something that will create the object when it is accessed?\n" ]
[ 43, 11, 7, 4, 2, 1 ]
[]
[]
[ "dictionary", "performance", "python" ]
stackoverflow_0001298636_dictionary_performance_python.txt
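A quick way to reproduce the timings above on your own machine is the timeit module; the sketch below (Python 2.6+, scaled down to one million keys, absolute numbers will vary by hardware) reruns the three approaches from the first answer.
import timeit

setup = "import itertools"
statements = {
    "loop": "d = {}\nfor i in xrange(1000000): d[i] = None",
    "izip": "d = dict(itertools.izip(xrange(1000000), itertools.repeat(None)))",
    "fromkeys": "d = dict.fromkeys(xrange(1000000))",
}

for name, stmt in statements.items():
    # number=1: each run builds a fresh million-key dict from scratch
    print name, timeit.timeit(stmt, setup=setup, number=1)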
Q: Python / ADOX: 'The specified module could not be found.' (win32 extensions) I'm running pywin32 for python 2.5. I'm following the instructions for python ADO given at http://www.ecp.cc/pyado.html. Creating an ADODB.Recordset object works fine. But when I try to create an ADOX.Catalog object I get an error: >>> cat=win32com.client.Dispatch(r'ADOX.Catalog') Traceback (most recent call last): File "<interactive input>", line 1, in <module> File "C:\Python25\lib\site-packages\win32com\client\__init__.py", line 95, in Dispatch dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch,userName,clsctx) File "C:\Python25\lib\site-packages\win32com\client\dynamic.py", line 98, in _GetGoodDispatchAndUserName return (_GetGoodDispatch(IDispatch, clsctx), userName) File "C:\Python25\lib\site-packages\win32com\client\dynamic.py", line 78, in _GetGoodDispatch IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.IID_IDispatch) com_error: (-2147024770, 'The specified module could not be found.', None, None) Any ideas what I might be missing? A: Solution: even though ADOX was showing up in the COM browser as an available library, it wasn't "registered" properly. Following the instructions here, I executed the following at the Start->Run prompt: regsvr32 "C:\Program Files\Common Files\System\ado\msadox.dll" Note that this is on a WinXP SP2 machine. I guess the registry had become corrupt somehow? In any case, I'm new to Windows programming so my explanation may be off, but maybe the fix will help someone.
Python / ADOX: 'The specified module could not be found.' (win32 extensions)
I'm running pywin32 for python 2.5. I'm following the instructions for python ADO given at http://www.ecp.cc/pyado.html. Creating an ADODB.Recordset object works fine. But when I try to create an ADOX.Catalog object I get an error: >>> cat=win32com.client.Dispatch(r'ADOX.Catalog') Traceback (most recent call last): File "<interactive input>", line 1, in <module> File "C:\Python25\lib\site-packages\win32com\client\__init__.py", line 95, in Dispatch dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch,userName,clsctx) File "C:\Python25\lib\site-packages\win32com\client\dynamic.py", line 98, in _GetGoodDispatchAndUserName return (_GetGoodDispatch(IDispatch, clsctx), userName) File "C:\Python25\lib\site-packages\win32com\client\dynamic.py", line 78, in _GetGoodDispatch IDispatch = pythoncom.CoCreateInstance(IDispatch, None, clsctx, pythoncom.IID_IDispatch) com_error: (-2147024770, 'The specified module could not be found.', None, None) Any ideas what I might be missing?
[ "Solution: even though ADOX was showing up in the COM browser as an available library, it wasn't \"registered\" properly. Following the instructions here, I executed the following at the Start->Run prompt:\n\nregsvr32 \"C:\\Program Files\\Common Files\\System\\ado\\msadox.dll\"\n\nNote that this is on a WinXP SP2 machine. I guess the registry had become corrupt somehow? In any case, I'm new to Windows programming so my explanation may be off, but maybe the fix will help someone.\n" ]
[ 4 ]
[]
[]
[ "adodb", "adox", "python", "pywin32" ]
stackoverflow_0001290472_adodb_adox_python_pywin32.txt
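A hedged sketch tying the diagnosis and the fix together: try the Dispatch, and if the COM class fails to load, re-register the DLL and retry. The path matches the one in the answer and may differ on other machines; regsvr32 needs administrator rights.
import subprocess
import pywintypes
import win32com.client

MSADOX = r"C:\Program Files\Common Files\System\ado\msadox.dll"

def get_adox_catalog():
    try:
        return win32com.client.Dispatch("ADOX.Catalog")
    except pywintypes.com_error:
        # /s = silent; returns a nonzero exit status if registration fails
        subprocess.call(["regsvr32", "/s", MSADOX])
        return win32com.client.Dispatch("ADOX.Catalog")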
Q: Mutex in Python Twisted I'm using the Twisted framework, and am getting RPCs asynchronously. I have another function which does a task every 2 seconds, and sleeps in between. This is called through reactor.callInThread. These depend on shared resources, so I need some thread-safe way of accessing them. How does one go about using critical sections / mutexes / locks in twisted? A: Though you can use threads in twisted, the usual idiom with twisted is to do RPC asynchronously using a single thread. That's one of its advantages. The twisted framework will run the reactor and call your handler events when RPC results are ready for you. Then your code runs, and when your handler exits, control goes back to the reactor, which will call the next handler that has code ready. So even though a lot of things are going on in parallel, twisted ensures that only one of your functions is running at a time, so you shouldn't need any mutexing; just maintaining state variables so your callbacks know what context they are operating in is enough. If you do explicitly create threads and use them with the twisted framework running, you'd probably need something like Standard Python Mutex, though you'd need to be very careful not to ever have your main reactor callback thread waiting on a mutex for any length of time, as callbacks inside the reactor aren't supposed to block. A: Twisted lets you write event-driven code in a single thread. Multiple events can write to standard Python non-thread-safe data structures in a safe manner, and non-thread-safe data structures can be used as mutexes. If you do start using threads, then you have to worry about these things. But you don't have to use them. So, as commented: use task.LoopingCall or reactor.callLater for your task. Never call time.sleep(); let the reactor call your task at the right time (and do other work in between). Respond to your RPCs as they come. There won't be two threads running your code at once. However, you don't know the order in which your callbacks will be called. Once you relinquish control to a Deferred, application state may have changed by the time you get it back.
Mutex in Python Twisted
I'm using the Twisted framework, and am getting RPCs asynchronously. I have another function which does a task every 2 seconds, and sleeps in between. This is called through reactor.callInThread. These depend on shared resources, so I need some thread-safe way of accessing them. How does one go about using critical sections / mutexes / locks in twisted?
[ "Though you can use threads in twisted, the usual idiom with twisted is to do RPC asynchronously using a single thread. That's one of its advantages. The twisted framework will run the reactor and call your handler events when RPC results are ready for you. Then your code runs, and when your handler exits, control goes back to the reactor, which will call the next handler that has code ready. So even though a lot of things are going on in parallel, twisted ensures that only one of your functions is running at a time, so you shouldn't need any mutexing; just maintaining state variables so your callbacks know what context they are operating in is enough.\nIf you do explicitly create threads and use them with the twisted framework running, you'd probably need something like Standard Python Mutex, though you'd need to be very careful not to ever have your main reactor callback thread waiting on a mutex for any length of time, as callbacks inside the reactor aren't supposed to block.\n", "Twisted lets you write event-driven code in a single thread. Multiple events can write to standard Python non-thread-safe data structures in a safe manner, and non-thread-safe data structures can be used as mutexes. If you do start using threads, then you have to worry about these things. But you don't have to use them.\nSo, as commented: use task.LoopingCall or reactor.callLater for your task. Never call time.sleep(); let the reactor call your task at the right time (and do other work in between). Respond to your RPCs as they come.\nThere won't be two threads running your code at once. However, you don't know the order in which your callbacks will be called. Once you relinquish control to a Deferred, application state may have changed by the time you get it back.\n" ]
[ 2, 0 ]
[]
[]
[ "locking", "multithreading", "mutex", "python", "twisted" ]
stackoverflow_0001051652_locking_multithreading_mutex_python_twisted.txt
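A minimal sketch (names illustrative) of the idiom from the second answer: schedule the periodic task with task.LoopingCall instead of sleeping in a thread, so the shared state is only ever touched from the reactor thread and needs no lock.
from twisted.internet import reactor, task

shared_state = {"count": 0}

def periodic_job():
    # Runs in the reactor thread, just like the RPC handlers do,
    # so no mutex is needed around shared_state.
    shared_state["count"] += 1

loop = task.LoopingCall(periodic_job)
loop.start(2.0)  # call every 2 seconds
reactor.run()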
Q: Interface with remote computers using Python I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue). For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and who, but how would I get this info into a script for manipulation? Something like, import remote_info as ri ri.open("foo05.bar.edu") ri.who() Out[1]: hutchinson tty7 2009-08-19 13:32 (:0) hutchinson pts/1 2009-08-19 13:33 (:0.0) Similarly for things like cat /proc/cpuinfo to get the processor information of a node. A starting point would be really great. Thanks. A: Here's a simple, cheap solution to get you started from subprocess import * p = Popen('ssh servername who', shell=True, stdout=PIPE) p.wait() print p.stdout.readlines() returns (eg) ['usr pts/0 2009-08-19 16:03 (kakapo)\n', 'usr pts/1 2009-08-17 15:51 (kakapo)\n', 'usr pts/5 2009-08-17 17:00 (kakapo)\n'] and for cpuinfo: p = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE) A: I've been using Pexpect, which lets you ssh into machines, send commands, read the output, and react to it, with success. I even started an open-source project around it, Proxpect - which hasn't been updated in ages, but I digress... A: The pexpect module can help you interface with ssh. More or less, here is what your example would look like. child = pexpect.spawn('ssh servername') child.expect('Password:') child.sendline('ABCDEF') (output,status) = child.sendline('who') A: If your needs overgrow simple "ssh remote-host.example.org who", then there is an awesome python library, called RPyC. It has a so-called "classic" mode which allows you to almost transparently execute Python code over the network with several lines of code. Very useful tool for trusted environments. Here's an example from Wikipedia: import rpyc # assuming a classic server is running on 'hostname' conn = rpyc.classic.connect("hostname") # runs os.listdir() and os.stat() remotely, printing results locally def remote_ls(path): ros = conn.modules.os for filename in ros.listdir(path): stats = ros.stat(ros.path.join(path, filename)) print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename) remote_ls("/usr/bin") If you're interested, there's a good tutorial on their wiki. But, of course, if you're perfectly fine with ssh calls using Popen or just don't want to run a separate "RPyC" daemon, then this is definitely overkill. A: This covers the bases. Notice the use of sudo for things that needed more privileges. We configured sudo to allow those commands for that user without needing a password typed. Also, keep in mind that you should run ssh-agent to make this "make sense". But all in all, it works really well. Running deploy-control httpd configtest will check the apache configuration on all the remote servers. #!/usr/local/bin/python import subprocess import sys # The user@host: for the SourceURLs (NO TRAILING SLASH) RemoteUsers = [ "[email protected]", "[email protected]", ] ################################################################################################### # Global Variables Arg = None # Implicitly verified below in if/else Command = tuple(sys.argv[1:]) ResultList = [] ################################################################################################### for UH in RemoteUsers: print "-"*80 print "Running %s command on: %s" % (Command, UH) #---------------------------------------------------------------------------------------------- if Command == ('httpd', 'configtest'): CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest')) #---------------------------------------------------------------------------------------------- elif Command == ('httpd', 'graceful'): CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful')) #---------------------------------------------------------------------------------------------- elif Command == ('httpd', 'status'): CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status')) #---------------------------------------------------------------------------------------------- elif Command == ('disk', 'usage'): CommandResult = subprocess.call(('ssh', UH, 'df -h')) #---------------------------------------------------------------------------------------------- elif Command == ('uptime',): CommandResult = subprocess.call(('ssh', UH, 'uptime')) #---------------------------------------------------------------------------------------------- else: print print "#"*80 print print "Error: invalid command" print HelpAndExit() #---------------------------------------------------------------------------------------------- ResultList.append(CommandResult) print ################################################################################################### if any(ResultList): print "#"*80 print "#"*80 print "#"*80 print print "ERRORS FOUND. SEE ABOVE" print sys.exit(0) else: print "-"*80 print print "Looks OK!" print sys.exit(1) A: Fabric is a simple way to automate some simple tasks like this, the version I'm currently using allows you to wrap up commands like so: run('whoami', fail='ignore') you can specify config options (config.fab_user, config.fab_password) for each machine you need (if you want to automate username password handling). More info on Fabric here: http://www.nongnu.org/fab/ There is a new version which is more Pythonic - I'm not sure whether that is going to be better for you in this case... works fine for me at present...
Interface with remote computers using Python
I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue). For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and who, but how would I get this info into a script for manipulation? Something like, import remote_info as ri ri.open("foo05.bar.edu") ri.who() Out[1]: hutchinson tty7 2009-08-19 13:32 (:0) hutchinson pts/1 2009-08-19 13:33 (:0.0) Similarly for things like cat /proc/cpuinfo to get the processor information of a node. A starting point would be really great. Thanks.
[ "Here's a simple, cheap solution to get you started\nfrom subprocess import *\np = Popen('ssh servername who', shell=True, stdout=PIPE)\np.wait()\nprint p.stdout.readlines()\n\nreturns (eg)\n['usr pts/0 2009-08-19 16:03 (kakapo)\\n',\n 'usr pts/1 2009-08-17 15:51 (kakapo)\\n',\n 'usr pts/5 2009-08-17 17:00 (kakapo)\\n']\n\nand for cpuinfo:\np = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE)\n\n", "I've been using Pexpect, which lets you ssh into machines, send commands, read the output, and react to it, with success. I even started an open-source project around it, Proxpect - which hasn't been updated in ages, but I digress...\n", "The pexpect module can help you interface with ssh. More or less, here is what your example would look like.\nchild = pexpect.spawn('ssh servername')\nchild.expect('Password:')\nchild.sendline('ABCDEF')\n(output,status) = child.sendline('who')\n\n", "If your needs overgrow simple \"ssh remote-host.example.org who\", then there is an awesome python library, called RPyC. It has a so-called \"classic\" mode which allows you to almost transparently execute Python code over the network with several lines of code. Very useful tool for trusted environments.\nHere's an example from Wikipedia:\nimport rpyc\n# assuming a classic server is running on 'hostname'\nconn = rpyc.classic.connect(\"hostname\")\n\n# runs os.listdir() and os.stat() remotely, printing results locally\ndef remote_ls(path):\n ros = conn.modules.os\n for filename in ros.listdir(path):\n stats = ros.stat(ros.path.join(path, filename))\n print \"%d\\t%d\\t%s\" % (stats.st_size, stats.st_uid, filename)\n\nremote_ls(\"/usr/bin\")\n\nIf you're interested, there's a good tutorial on their wiki.\nBut, of course, if you're perfectly fine with ssh calls using Popen or just don't want to run a separate \"RPyC\" daemon, then this is definitely overkill.\n", "This covers the bases. Notice the use of sudo for things that needed more privileges. We configured sudo to allow those commands for that user without needing a password typed.\nAlso, keep in mind that you should run ssh-agent to make this \"make sense\". But all in all, it works really well.\nRunning deploy-control httpd configtest will check the apache configuration on all the remote servers.\n#!/usr/local/bin/python\n\nimport subprocess\nimport sys\n\n# The user@host: for the SourceURLs (NO TRAILING SLASH)\nRemoteUsers = [\n \"[email protected]\",\n \"[email protected]\",\n ]\n\n###################################################################################################\n# Global Variables\nArg = None\n\n\n# Implicitly verified below in if/else\nCommand = tuple(sys.argv[1:])\n\nResultList = []\n###################################################################################################\nfor UH in RemoteUsers:\n print \"-\"*80\n print \"Running %s command on: %s\" % (Command, UH)\n\n #----------------------------------------------------------------------------------------------\n if Command == ('httpd', 'configtest'):\n CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('httpd', 'graceful'):\n CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('httpd', 'status'):\n CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('disk', 'usage'):\n CommandResult = subprocess.call(('ssh', UH, 'df -h'))\n\n #----------------------------------------------------------------------------------------------\n elif Command == ('uptime',):\n CommandResult = subprocess.call(('ssh', UH, 'uptime'))\n\n #----------------------------------------------------------------------------------------------\n else:\n print\n print \"#\"*80\n print\n print \"Error: invalid command\"\n print\n HelpAndExit()\n\n #----------------------------------------------------------------------------------------------\n ResultList.append(CommandResult)\n print\n\n\n###################################################################################################\nif any(ResultList):\n print \"#\"*80\n print \"#\"*80\n print \"#\"*80\n print\n print \"ERRORS FOUND. SEE ABOVE\"\n print\n sys.exit(0)\n\nelse:\n print \"-\"*80\n print\n print \"Looks OK!\"\n print\n sys.exit(1)\n\n", "Fabric is a simple way to automate some simple tasks like this, the version I'm currently using allows you to wrap up commands like so:\nrun('whoami', fail='ignore')\n\nyou can specify config options (config.fab_user, config.fab_password) for each machine you need (if you want to automate username password handling).\nMore info on Fabric here:\nhttp://www.nongnu.org/fab/\nThere is a new version which is more Pythonic - I'm not sure whether that is going to be better for you in this case... works fine for me at present...\n" ]
[ 2, 2, 1, 1, 0, 0 ]
[]
[]
[ "monitoring", "networking", "python" ]
stackoverflow_0001303047_monitoring_networking_python.txt
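For completeness, a variant of the Popen recipe that passes an argument list instead of shell=True, which avoids quoting surprises; the hostname and commands are examples.
from subprocess import Popen, PIPE

def remote_run(host, *command):
    # ssh exits with the remote command's status, captured in returncode
    p = Popen(["ssh", host] + list(command), stdout=PIPE, stderr=PIPE)
    out, err = p.communicate()
    return p.returncode, out, err

rc, who_output, _ = remote_run("foo05.bar.edu", "who")
rc, cpuinfo, _ = remote_run("foo05.bar.edu", "cat", "/proc/cpuinfo")
print who_output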
Q: Django RSS Feed Problems I'm working on a blogging application, and trying to make just a simple RSS feed system function. However, I'm running into an odd bug that doesn't make a lot of sense to me. I understand what's likely going on, but I don't understand why. My RSS Feed class is below: class RSSFeed(Feed): title = settings.BLOG_NAME description = "Recent Posts" def items(self): return Story.objects.all().order_by('-created')[:10] def link(self, obj): return obj.get_absolute_url() However I received the following error (full stack trace at http://dpaste.com/82510/): AttributeError: 'NoneType' object has no attribute 'startswith' That leads me to believe that it's not receiving any objects whatsoever. However, I can drop to a shell and grab those Story objects, and I can iterate through them returning the absolute url without any problems. So it would seem both portions of the Feed work, just not when it's in feed form. Furthermore, I added some logging, and can confirm that the items function is never entered when visiting the feeds link. I'm hoping I'm just overlooking something simple. Thanks in advance for any/all help. A: Changing to: class RSSFeed(Feed): title = settings.BLOG_NAME link = "/blog/" description = "Recent Posts" def items(self): return Story.objects.all().order_by('-created')[:10] Fixed it. Not sure I totally understand it.. but whatev. :) A: Have you defined def get_absolute_url(self): in the model? Also, it's nice to if not obj: raise FeedDoesNotExist to avoid errors when the feed result is not present
Django RSS Feed Problems
I'm working on a blogging application, and trying to make just a simple RSS feed system function. However, I'm running into an odd bug that doesn't make a lot of sense to me. I understand what's likely going on, but I don't understand why. My RSS Feed class is below: class RSSFeed(Feed): title = settings.BLOG_NAME description = "Recent Posts" def items(self): return Story.objects.all().order_by('-created')[:10] def link(self, obj): return obj.get_absolute_url() However I received the following error (full stack trace at http://dpaste.com/82510/): AttributeError: 'NoneType' object has no attribute 'startswith' That leads me to believe that it's not receiving any objects whatsoever. However, I can drop to a shell and grab those Story objects, and I can iterate through them returning the absolute url without any problems. So it would seem both portions of the Feed work, just not when it's in feed form. Furthermore, I added some logging, and can confirm that the items function is never entered when visiting the feeds link. I'm hoping I'm just overlooking something simple. Thanks in advance for any/all help.
[ "Changing to:\nclass RSSFeed(Feed):\n title = settings.BLOG_NAME\n link = \"/blog/\"\n description = \"Recent Posts\"\n\n def items(self):\n return Story.objects.all().order_by('-created')[:10]\n\nFixed it. Not sure I totally understand it.. but whatev. :)\n", "Have you defined\ndef get_absolute_url(self):\n\nin the model?\nAlso, it's nice to \nif not obj:\n raise FeedDoesNotExist\n\nto avoid errors when the feed result is not present\n" ]
[ 4, 1 ]
[]
[]
[ "django", "django_rss", "python" ]
stackoverflow_0001297426_django_django_rss_python.txt
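A hypothetical Story model showing the get_absolute_url the second answer asks about -- without it, the per-item link in the feed comes back as None, which matches the 'startswith' traceback. Field names are illustrative.
from django.db import models

class Story(models.Model):
    title = models.CharField(max_length=200)
    slug = models.SlugField(unique=True)
    created = models.DateTimeField(auto_now_add=True)

    def get_absolute_url(self):
        # a plain path works; reverse() on a named URL is the other option
        return "/blog/%s/" % self.slug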
Q: Python thread dump Is there a way to get a thread dump from a running Python process? Similar to kill -3 on a Java process. A: I haven't seen anything built-in, but I have seen a solution here which can be exposed via an http console. The solution iterates over all threads and outputs the stack.
Python thread dump
Is there a way to get a thread dump from a running Python process? Similar to kill -3 on a Java process.
[ "I haven't seen anything built-in, but I have seen a solution here which can be exposed via an http console. The solution iterates over all threads and outputs the stack.\n" ]
[ 5 ]
[]
[]
[ "python" ]
stackoverflow_0001302991_python.txt
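A minimal in-process sketch of the same idea using sys._current_frames() (available since Python 2.5), which is what such solutions typically iterate over:
import sys
import traceback

def dump_threads():
    # maps each thread id to its topmost frame at this instant
    for thread_id, frame in sys._current_frames().items():
        print "Thread %s:" % thread_id
        traceback.print_stack(frame)

dump_threads()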
Q: Shortest hash in python to name cache files What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save cache files for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application). So, if there is one collision we lose one cache file, but it is the collected savings of caching all objects that makes the application startup much faster, so it does not matter much. Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance? Edit: Use case is like this: The data loader gets a new instance of a data-carrying object. Unique types have unique repr, so if a cache file for hash(repr(obj)) exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match I notice. So if we don't have cache or have a false match, I instead init obj (reloading its data). Conclusions (?) The str hash in python may be good enough, I was only worried about its collision resistance. But if I can hash 2**16 objects with it, it's going to be more than good enough. I found out how to take a hex hash (from any hash source) and store it compactly with base64: # 'h' is a string of hex digits bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2)) hashstr = base64.urlsafe_b64encode(bytes).rstrip("=") A: The birthday paradox applies: given a good hash function, the expected number of hashes before a collision occurs is about sqrt(N), where N is the number of different values that the hash function can take. (The wikipedia entry I've pointed to gives the exact formula). So, for example, if you want to use no more than 32 bits, your collision worries are serious for around 64K objects (i.e., 2**16 objects -- the square root of the 2**32 different values your hash function can take). How many objects do you expect to have, as an order of magnitude? Since you mention that a collision is a minor annoyance, I recommend you aim for a hash length that's roughly the square of the number of objects you'll have, or a bit less but not MUCH less than that. You want to make a filename - is that on a case-sensitive filesystem, as typical on Unix, or do you have to cater for case-insensitive systems too? This matters because you aim for short filenames, but the number of bits per character you can use to represent your hash as a filename changes dramatically on case-sensitive vs insensitive systems. On a case-sensitive system, you can use the standard library's base64 module (I recommend the "urlsafe" version of the encoding, i.e. this function, as avoiding '/' characters that could be present in plain base64 is important in Unix filenames). This gives you 6 usable bits per character, much better than the 4 bits/char in hex. Even on a case-insensitive system, you can still do better than hex -- use base64.b32encode and get 5 bits per character. These functions take and return strings; use the struct module to turn numbers into strings if your chosen hash function generates numbers. If you do have a few tens of thousands of objects I think you'll be fine with builtin hash (32 bits, so 6-7 characters depending on your chosen encoding). For a million objects you'd want 40 bits or so (7 or 8 characters) -- you can fold (xor, don't truncate;-) a sha256 down to a long with a reasonable number of bits, say 128 or so, and use the % operator to cut it further to your desired length before encoding. A: The builtin hash function of strings is fairly collision free, and also fairly short. It has 2**32 values, so it is fairly unlikely that you encounter collisions (if you use its abs value, it will have only 2**31 values). You have been asking for the shortest hash function. That would certainly be def hash(s): return 0 but I guess you didn't really mean it that way... A: You can make any hash you like shorter by simply truncating it. md5 is always 32 hex digits, but an arbitrary substring of it (or any other hash) has the proper qualities of a hash: equal values produce equal hashes, and the values are spread around a bunch. A: I'm sure that there's a CRC32 implementation in Python, but that may be too short (8 hex digits). On the upside, it's very quick. Found it, binascii.crc32 A: If you do have a collision, how are you going to tell that it actually happened? If I were you, I would use hashlib to sha1() the repr(), and then just get a limited substring of it (first 16 characters, for example). Unless you are talking about huge numbers of these objects, I would suggest that you just use the full hash. Then the opportunity for collision is so, so, so, so small, that you will never live to see it happen (likely). Also, if you are dealing with that many files, I'm guessing that your caching technique should be adjusted to accommodate it. A: We use hashlib.sha1.hexdigest(), which produces even longer strings, for cache objects with good success. Nobody is actually looking at cache files anyway. A: Considering your use case, if you don't have your heart set on using separate cache files and you are not too far down that development path, you might consider using the shelve module. This will give you a persistent dictionary (stored in a single dbm file) in which you store your objects. Pickling/unpickling is performed transparently, and you don't have to concern yourself with hashing, collisions, file I/O, etc. For the shelve dictionary keys, you would just use repr(obj) and let shelve deal with stashing your objects for you. A simple example: import shelve cache = shelve.open('cache') t = (1,2,3) i = 10 cache[repr(t)] = t cache[repr(i)] = i print cache # {'(1, 2, 3)': (1, 2, 3), '10': 10} cache.close() cache = shelve.open('cache') print cache #>>> {'(1, 2, 3)': (1, 2, 3), '10': 10} print cache[repr(10)] #>>> 10 A: Short hashes mean you may have the same hash for two different files. The same may happen for big hashes too, but it's way more rare. Maybe these file names should vary based on other references, like microtime (unless these files may be created too quickly).
Shortest hash in python to name cache files
What is the shortest hash (in filename-usable form, like a hexdigest) available in python? My application wants to save cache files for some objects. The objects must have unique repr() so they are used to 'seed' the filename. I want to produce a possibly unique filename for each object (not that many). They should not collide, but if they do my app will simply lack cache for that object (and will have to reindex that object's data, a minor cost for the application). So, if there is one collision we lose one cache file, but it is the collected savings of caching all objects that makes the application startup much faster, so it does not matter much. Right now I'm actually using abs(hash(repr(obj))); that's right, the string hash! Haven't found any collisions yet, but I would like to have a better hash function. hashlib.md5 is available in the python library, but the hexdigest is really long if put in a filename. Alternatives, with reasonable collision resistance? Edit: Use case is like this: The data loader gets a new instance of a data-carrying object. Unique types have unique repr, so if a cache file for hash(repr(obj)) exists, I unpickle that cache file and replace obj with the unpickled object. If there was a collision and the cache was a false match I notice. So if we don't have cache or have a false match, I instead init obj (reloading its data). Conclusions (?) The str hash in python may be good enough, I was only worried about its collision resistance. But if I can hash 2**16 objects with it, it's going to be more than good enough. I found out how to take a hex hash (from any hash source) and store it compactly with base64: # 'h' is a string of hex digits bytes = "".join(chr(int(h[i:i+2], 16)) for i in xrange(0, len(h), 2)) hashstr = base64.urlsafe_b64encode(bytes).rstrip("=")
[ "The birthday paradox applies: given a good hash function, the expected number of hashes before a collision occurs is about sqrt(N), where N is the number of different values that the hash function can take. (The wikipedia entry I've pointed to gives the exact formula). So, for example, if you want to use no more than 32 bits, your collision worries are serious for around 64K objects (i.e., 2**16 objects -- the square root of the 2**32 different values your hash function can take). How many objects do you expect to have, as an order of magnitude?\nSince you mention that a collision is a minor annoyance, I recommend you aim for a hash length that's roughly the square of the number of objects you'll have, or a bit less but not MUCH less than that.\nYou want to make a filename - is that on a case-sensitive filesystem, as typical on Unix, or do you have to cater for case-insensitive systems too? This matters because you aim for short filenames, but the number of bits per character you can use to represent your hash as a filename changes dramatically on case-sensitive vs insensitive systems.\nOn a case-sensitive system, you can use the standard library's base64 module (I recommend the \"urlsafe\" version of the encoding, i.e. this function, as avoiding '/' characters that could be present in plain base64 is important in Unix filenames). This gives you 6 usable bits per character, much better than the 4 bits/char in hex.\nEven on a case-insensitive system, you can still do better than hex -- use base64.b32encode and get 5 bits per character.\nThese functions take and return strings; use the struct module to turn numbers into strings if your chosen hash function generates numbers.\nIf you do have a few tens of thousands of objects I think you'll be fine with builtin hash (32 bits, so 6-7 characters depending on your chosen encoding). For a million objects you'd want 40 bits or so (7 or 8 characters) -- you can fold (xor, don't truncate;-) a sha256 down to a long with a reasonable number of bits, say 128 or so, and use the % operator to cut it further to your desired length before encoding.\n", "The builtin hash function of strings is fairly collision free, and also fairly short. It has 2**32 values, so it is fairly unlikely that you encounter collisions (if you use its abs value, it will have only 2**31 values).\nYou have been asking for the shortest hash function. That would certainly be\ndef hash(s):\n return 0\n\nbut I guess you didn't really mean it that way...\n", "You can make any hash you like shorter by simply truncating it. md5 is always 32 hex digits, but an arbitrary substring of it (or any other hash) has the proper qualities of a hash: equal values produce equal hashes, and the values are spread around a bunch.\n", "I'm sure that there's a CRC32 implementation in Python, but that may be too short (8 hex digits). On the upside, it's very quick.\nFound it, binascii.crc32\n", "If you do have a collision, how are you going to tell that it actually happened?\nIf I were you, I would use hashlib to sha1() the repr(), and then just get a limited substring of it (first 16 characters, for example).\nUnless you are talking about huge numbers of these objects, I would suggest that you just use the full hash.\nThen the opportunity for collision is so, so, so, so small, that you will never live to see it happen (likely).\nAlso, if you are dealing with that many files, I'm guessing that your caching technique should be adjusted to accommodate it.\n", "We use hashlib.sha1.hexdigest(), which produces even longer strings, for cache objects with good success. Nobody is actually looking at cache files anyway.\n", "Considering your use case, if you don't have your heart set on using separate cache files and you are not too far down that development path, you might consider using the shelve module.\nThis will give you a persistent dictionary (stored in a single dbm file) in which you store your objects. Pickling/unpickling is performed transparently, and you don't have to concern yourself with hashing, collisions, file I/O, etc.\nFor the shelve dictionary keys, you would just use repr(obj) and let shelve deal with stashing your objects for you. A simple example:\nimport shelve\ncache = shelve.open('cache')\nt = (1,2,3)\ni = 10\ncache[repr(t)] = t\ncache[repr(i)] = i\nprint cache\n# {'(1, 2, 3)': (1, 2, 3), '10': 10}\ncache.close()\n\ncache = shelve.open('cache')\nprint cache\n#>>> {'(1, 2, 3)': (1, 2, 3), '10': 10}\nprint cache[repr(10)]\n#>>> 10\n\n", "Short hashes mean you may have the same hash for two different files. The same may happen for big hashes too, but it's way more rare.\nMaybe these file names should vary based on other references, like microtime (unless these files may be created too quickly).\n" ]
[ 38, 27, 8, 4, 1, 1, 1, 0 ]
[]
[]
[ "hash", "python" ]
stackoverflow_0001303021_hash_python.txt
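Putting the suggestions together, a sketch that sha1s the repr, keeps the first 8 bytes (64 bits -- comfortable for roughly 2**32 objects by the birthday bound, and truncation preserves the hash qualities per the third answer), and encodes them filename-safely; the byte count is a tunable assumption.
import base64
import hashlib

def cache_name(obj, nbytes=8):
    digest = hashlib.sha1(repr(obj)).digest()[:nbytes]
    # urlsafe alphabet avoids '/' in filenames; strip '=' padding
    return base64.urlsafe_b64encode(digest).rstrip("=")

print cache_name((1, 2, 3))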
Q: setting option in config file using SafeConfigParser I'm trying to set an option (xdebug.profiler_enable) in my php.ini file using python's ConfigParser object. Here is the code: the section in the php.ini file I'm trying to modify [xdebug] ;XDEBUG SETTINGS ;turn on the profiler? xdebug.profiler_enable=0 xdebug.profiler_append=1 xdebug.profiler_enable_trigger=0 xdebug.trace_output_name="%R" xdebug.profiler_output_dir="/home/made_up_user/www/cachegrind/" Python import sys import ConfigParser if __name__ == '__main__': phpIniLocation = "/home/made_up_user/Desktop/phpinicopy.ini"; phpIni = ConfigParser.RawConfigParser(); phpIni.read(phpIniLocation); xdebugSetting = phpIni.getboolean("xdebug", "xdebug.profiler_enable"); if xdebugSetting: phpIni.set("xdebug", "xdebug.profiler_enable", "0"); else: phpIni.set("xdebug", "xdebug.profiler_enable", "1"); Environment: Ubuntu 9.04, python 2.6 Everything SEEMS to be working fine. The xdebugSetting variable returns the option's boolean value correctly, I can parse through the section and retrieve each of the options' correct values, and the set method doesn't throw any exceptions, but when I check the file, the options have not been changed. I have used RawConfigParser, ConfigParser, and SafeConfigParser all with the same result. The script runs with root's permissions. Is there something I'm missing? How do I get the set method to work? A: phpIni.write(open(phpIniLocation, 'w')) docs.
setting option in config file using SafeConfigParser
I'm trying to set an option (xdebug.profiler_enable) in my php.ini file using python's ConfigParser object. Here is the code: the section in the php.ini file I'm trying to modify [xdebug] ;XDEBUG SETTINGS ;turn on the profiler? xdebug.profiler_enable=0 xdebug.profiler_append=1 xdebug.profiler_enable_trigger=0 xdebug.trace_output_name="%R" xdebug.profiler_output_dir="/home/made_up_user/www/cachegrind/" Python import sys import ConfigParser if __name__ == '__main__': phpIniLocation = "/home/made_up_user/Desktop/phpinicopy.ini"; phpIni = ConfigParser.RawConfigParser(); phpIni.read(phpIniLocation); xdebugSetting = phpIni.getboolean("xdebug", "xdebug.profiler_enable"); if xdebugSetting: phpIni.set("xdebug", "xdebug.profiler_enable", "0"); else: phpIni.set("xdebug", "xdebug.profiler_enable", "1"); Environment: Ubuntu 9.04, python 2.6 Everything SEEMS to be working fine. The xdebugSetting variable returns the option's boolean value correctly, I can parse through the section and retrieve each of the options' correct values, and the set method doesn't throw any exceptions, but when I check the file, the options have not been changed. I have used RawConfigParser, ConfigParser, and SafeConfigParser all with the same result. The script runs with root's permissions. Is there something I'm missing? How do I get the set method to work?
[ "phpIni.write(open(phpIniLocation, 'w'))\n\ndocs.\n" ]
[ 2 ]
[]
[]
[ "file_io", "linux", "python", "ubuntu" ]
stackoverflow_0001303697_file_io_linux_python_ubuntu.txt
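The missing step in context -- read, modify, then write the parser back to disk (paths as in the question). Note that ConfigParser rewrites the whole file and drops comments, which may matter for a php.ini.
import ConfigParser

phpIniLocation = "/home/made_up_user/Desktop/phpinicopy.ini"
phpIni = ConfigParser.RawConfigParser()
phpIni.read(phpIniLocation)

current = phpIni.getboolean("xdebug", "xdebug.profiler_enable")
phpIni.set("xdebug", "xdebug.profiler_enable", "0" if current else "1")

iniFile = open(phpIniLocation, "w")
try:
    phpIni.write(iniFile)  # set() only changes the in-memory object
finally:
    iniFile.close()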
Q: How can I convert a URL query string into a list of tuples using Python? I am struggling to convert a url to a nested tuple. # Convert this string str = 'http://somesite.com/?foo=bar&key=val' # to a tuple like this: [(u'foo', u'bar'), (u'key', u'val')] I assume I need to be doing something like: url = 'http://somesite.com/?foo=bar&key=val' url = url.split('?') get = () for param in url[1].split('&'): get = get + param.split('=') What am I doing wrong? Thanks! A: I believe you are looking for the urlparse module. This module defines a standard interface to break Uniform Resource Locator (URL) strings up in components (addressing scheme, network location, path etc.), to combine the components back into a URL string, and to convert a “relative URL” to an absolute URL given a “base URL.” Here is an example: from urlparse import urlparse, parse_qsl url = 'http://somesite.com/?foo=bar&key=val' print parse_qsl(urlparse(url)[4]) Output: [('foo', 'bar'), ('key', 'val')] In this example I first use the urlparse function to parse the entire URL, then I use the parse_qsl function to break the querystring (the fifth element returned from urlparse) into a list of tuples. A: Andrew's answer was really informative and helpful. A less adept way to grab those params would be with a regular expression--something like this: import re re_param = re.compile(r'(?P<key>\w+)=(?P<value>\w+)') url = 'http://somesite.com/?foo=bar&key=val' params_list = re_param.findall(url) Also, in your code it looks like you're trying to concatenate a list and tuple-- for param in url[1].split('&'): get = get + param.split('=') You created get as a tuple, but str.split returns a list. Maybe this would fix your code: for param in url[1].split('&'): get = get + tuple(param.split('='))
How can I convert a URL query string into a list of tuples using Python?
I am struggling to convert a url to a nested tuple. # Convert this string str = 'http://somesite.com/?foo=bar&key=val' # to a tuple like this: [(u'foo', u'bar'), (u'key', u'val')] I assume I need to be doing something like: url = 'http://somesite.com/?foo=bar&key=val' url = url.split('?') get = () for param in url[1].split('&'): get = get + param.split('=') What am I doing wrong? Thanks!
[ "I believe you are looking for the urlparse module.\n\nThis module defines a standard\n interface to break Uniform Resource\n Locator (URL) strings up in components\n (addressing scheme, network location,\n path etc.), to combine the components\n back into a URL string, and to convert\n a “relative URL” to an absolute URL\n given a “base URL.”\n\nHere is an example:\nfrom urlparse import urlparse, parse_qsl\n\nurl = 'http://somesite.com/?foo=bar&key=val'\nprint parse_qsl(urlparse(url)[4])\n\nOutput:\n[('foo', 'bar'), ('key', 'val')]\n\nIn this example I first use the urlparse function to parse the entire URL then I use the parse_qsl function to break the querystring (the fifth element returned from urlparse) into a list of tuples.\n", "Andrew's answer was really informative and helpful. A less adept way to grab those params would be with a regular expression--something like this:\nimport re\nre_param = re.compile(r'(?P<key>w\\+)=(?P<value>w\\+)')\n\nurl = 'http://somesite.com/?foo=bar&key=val''\nparams_list = re_param.findall(url)\n\nAlso, in your code it looks like you're trying to concatenate a list and tuple--\nfor param in url[1].split('&'):\n get = get + param.split('=')\n\nYou created get as a tuple, but str.split returns a list. Maybe this would fix your code:\nfor param in url[1].split('&'):\n get = get + tuple(param.split('='))\n\n" ]
[ "I believe you are looking for the urlparse module.\n\nThis module defines a standard\n interface to break Uniform Resource\n Locator (URL) strings up in components\n (addressing scheme, network location,\n path etc.), to combine the components\n back into a URL string, and to convert\n a “relative URL” to an absolute URL\n given a “base URL.”\n\nHere is an example:\nfrom urlparse import urlparse, parse_qsl\n\nurl = 'http://somesite.com/?foo=bar&key=val'\nprint parse_qsl(urlparse(url)[4])\n\nOutput:\n[('foo', 'bar'), ('key', 'val')]\n\nIn this example I first use the urlparse function to parse the entire URL, then I use the parse_qsl function to break the querystring (the fifth element returned from urlparse) into a list of tuples.\n", "Andrew's answer was really informative and helpful. A less adept way to grab those params would be with a regular expression--something like this:\nimport re\nre_param = re.compile(r'(?P<key>\\w+)=(?P<value>\\w+)')\n\nurl = 'http://somesite.com/?foo=bar&key=val'\nparams_list = re_param.findall(url)\n\nAlso, in your code it looks like you're trying to concatenate a list and tuple--\nfor param in url[1].split('&'):\n get = get + param.split('=')\n\nYou created get as a tuple, but str.split returns a list. Maybe this would fix your code:\nfor param in url[1].split('&'):\n get = get + tuple(param.split('='))\n\n" ]
[]
[]
[ "parsing", "python", "url" ]
stackoverflow_0001302688_parsing_python_url.txt
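If a dict is handier than a list of tuples, parse_qs from the same module groups repeated keys into lists (Python 2.6+; on earlier versions the same functions live in the cgi module). The URL is illustrative:
from urlparse import urlparse, parse_qs

url = 'http://somesite.com/?foo=bar&key=val&key=val2'
print parse_qs(urlparse(url).query)
# {'foo': ['bar'], 'key': ['val', 'val2']}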
Q: how to install new packages with python 3.1.1? I've tried to install pip on windows, but it's not working: giving me ImportError: No module named pkg_resources easy_install doesn't have a version for 3.1 or so, just 2.5, and should be replaced by pip. Is there an easy way to install it on windows? A: setuptools doesn't quite work on Python 3.1 yet. Try installing packages with regular distutils, or use binary packages (.exe, .msi) provided by the package author.
how to install new packages with python 3.1.1?
I've tried to install pip on windows, but it's not working: giving me ImportError: No module named pkg_resources easy_install doesn't have a version for 3.1 or so, just 2.5, and should be replaced by pip. Is there an easy way to install it on windows?
[ "setuptools doesn't quite work on Python 3.1 yet. Try installing packages with regular distutils, or use binary packages (.exe, .msi) provided by the package author.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0001304638_python.txt
Q: User-specific model in Django I have a model containing items, which has many different fields. There is another model which assigns a set of these fields to each user using an m2m relation. I want to achieve that, in the end, every user has access to a defined set of fields of the item model, and he only sees these fields in views, he can only edit these fields, etc. Is there any generic way to set this up? A: One way to do this would be to break the Item model up into the parts that are individually assignable to a user. If you have fixed user types (admin, customer, team etc.) who can always see the same set of fields, these parts would be whole groups of fields. If it's very dynamic and you want to be able to set up individual fields for each user, each field is a part of its own. That way, you would have a meta-Item which consists solely of an Id that the parts can refer to. This holds together the parts. Then, you would map a user not to the Item but to the parts and reconstruct the item view from the common Id of the parts. A: A second approach would be to not include the filtering in the model layer. I.e., you leave the mapping on the model layer as it is and retrieve the full set of item fields for each user. Then you pass the items through a filter that implements the rules. Which approach is better for you depends on how you want to filter. If it's fixed types of users, I would probably implement a rules-based post-processor; if it's very dynamic, I would suggest the approach from my earlier answer. Another reason to put the filtering rules in the model would be if you want to reuse the model in applications that couldn't reuse your filter engine (for example if you have applications in different languages sharing the same database).
User-specific model in Django
I have a model containing items, which has many different fields. There is another model which assigns a set of these fields to each user using an m2m relation. I want to achieve that, in the end, every user has access to a defined set of fields of the item model, and he only sees these fields in views, he can only edit these fields, etc. Is there any generic way to set this up?
[ "One way to do this would be to break the Item model up into the parts that are individually assignable to a user. If you have fixed user types (admin, customer, team etc.) who can always see the same set of fields, these parts would be whole groups of fields. If it's very dynamic and you want to be able to set up individual fields for each user, each field is a part of its own.\nThat way, you would have a meta-Item which consists solely of an Id that the parts can refer to. This holds together the parts. Then, you would map a user not to the Item but to the parts and reconstruct the item view from the common Id of the parts.\n", "A second approach would be to not include the filtering in the model layer. I.e., you leave the mapping on the model layer as it is and retrieve the full set of item fields for each user. Then you pass the items through a filter that implements the rules.\nWhich approach is better for you depends on how you want to filter. If it's fixed types of users, I would probably implement a rules-based post-processor; if it's very dynamic, I would suggest the approach from my earlier answer. Another reason to put the filtering rules in the model would be if you want to reuse the model in applications that couldn't reuse your filter engine (for example if you have applications in different languages sharing the same database).\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001304608_django_django_models_python.txt
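A rough sketch (all names hypothetical) of the mapping the question describes: per-user field sets stored by field name on an m2m relation, enforced by building the ModelForm dynamically so only the user's fields are rendered and editable.
from django import forms
from django.db import models
from django.contrib.auth.models import User

class Item(models.Model):
    name = models.CharField(max_length=100)
    price = models.DecimalField(max_digits=8, decimal_places=2)
    notes = models.TextField()

class FieldPermission(models.Model):
    field_name = models.CharField(max_length=100)

class UserFieldSet(models.Model):
    user = models.ForeignKey(User)
    fields = models.ManyToManyField(FieldPermission)

def item_form_for(user):
    allowed = UserFieldSet.objects.get(user=user) \
                          .fields.values_list("field_name", flat=True)

    class ItemForm(forms.ModelForm):
        class Meta:
            model = Item
            fields = list(allowed)  # only this user's fields appear
    return ItemForm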
Q: How to populate a list with items.count() from a queryset sorted by a datetime field I had a hard time formulating the title, so please edit it if you have a better one :) I'm trying to display some statistics using the pygooglechart. And I am using Django to get the database items out of the database. The database items have a datetime field which I want to "sort on". What I really want is to populate a list like this. data = [10, 12, 51, 50] Where each list item is the number (count) of database items within an hour. So let's say I do a query that gets all items in the last 72 hours, I want to collect the count of each hour into a list item. Anybody have a good way to do this? A: Assuming you're running Django 1.1 or a fairly recent checkout, you can use the new aggregation features. Something like: counts = MyModel.objects.values('datetimefield').annotate(Count('datetimefield')) This actually gets you a list of dictionaries: [{'datetimefield':<date1>, 'datetimefield__count':<count1>}, {'datetimefield':<date2>, 'datetimefield__count':<count2>}, ...] but it should be fairly easy to write a list comprehension to get the format you want. Edited after comment: If you're on 1.0.2, the most efficient thing to do is to fall back to raw SQL. cursor = connection.cursor() cursor.execute( "SELECT COUNT(0) FROM `mymodel_table` " "GROUP BY `mydatetimefield`;" ) counts = cursor.fetchall() A: django aggregation
How to populate a list with items.count() from a queryset sorted by a datetime field
I had a hard time formulating the title, so please edit it if you have a better one :) I'm trying to display some statistics using the pygooglechart. And I am using Django to get the database items out of the database. The database items have a datetime field which I want to "sort on". What I really want is to populate a list like this. data = [10, 12, 51, 50] Where each list item is the number (count) of database items within an hour. So let's say I do a query that gets all items in the last 72 hours, I want to collect the count of each hour into a list item. Anybody have a good way to do this?
[ "Assuming you're running Django 1.1 or a fairly recent checkout, you can use the new aggregation features. Something like:\ncounts = MyModel.objects.values('datetimefield').annotate(Count('datetimefield'))\n\nThis actually gets you a list of dictionaries:\n[{'datetimefield':<date1>, 'datetimefield__count':<count1>},\n {'datetimefield':<date2>, 'datetimefield__count':<count2>}, ...]\n\nbut it should be fairly easy to write a list comprehension to get the format you want.\nEdited after comment: If you're on 1.0.2, the most efficient thing to do is to fall back to raw SQL.\ncursor = connection.cursor()\ncursor.execute(\n \"SELECT COUNT(0) FROM `mymodel_table` \"\n \"GROUP BY `mydatetimefield`;\"\n)\ncounts = cursor.fetchall()\n\n", "django aggregation \n" ]
[ 2, 0 ]
[]
[]
[ "django", "pygooglechart", "python" ]
stackoverflow_0001302999_django_pygooglechart_python.txt
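A sketch (model and field names assumed) of turning that into the flat list the question wants -- one count per hour over the last 72 hours, bucketing in Python so it stays database-agnostic:
from datetime import datetime, timedelta
from collections import defaultdict

since = datetime.now() - timedelta(hours=72)
buckets = defaultdict(int)
for dt in MyModel.objects.filter(created__gte=since) \
                         .values_list('created', flat=True):
    # truncate each timestamp to its hour
    buckets[dt.replace(minute=0, second=0, microsecond=0)] += 1

start = since.replace(minute=0, second=0, microsecond=0)
# 73 hourly buckets cover the window's edges; missing hours become 0
data = [buckets.get(start + timedelta(hours=h), 0) for h in range(73)]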
Q: python 3.1 with pydev I am now moving to eclipse for my python development. I have pydev installed but it is showing grammar support up to python version 3.0. My question is: can I use python 3.1 with 3.0 grammar? Has the grammar changed from version 3.0 to 3.1? I am using eclipse 3.4.2 and pydev 1.4.7 A: The grammar hasn't changed; some modules have.
python 3.1 with pydev
I am now moving to eclipse for my python development. I have pydev installed but it is showing grammar support up to python version 3.0. My question is: can I use python 3.1 with 3.0 grammar? Has the grammar changed from version 3.0 to 3.1? I am using eclipse 3.4.2 and pydev 1.4.7
[ "grammar hasn't changed, some modules have.\n" ]
[ "The grammar hasn't changed; some modules have.\n" ]
[]
[]
[ "eclipse", "pydev", "python", "python_3.x" ]
stackoverflow_0001305218_eclipse_pydev_python_python_3.x.txt
Q: Django: do I need to restart Apache when deploying? I just noted an annoying factor: Django requires either a restart of the server or CGI access to work. The first option is not feasible if you don't have access to the Apache server process. The second, as far as I know, is detrimental to performance, and in general the idea of running a CGI makes me uncomfortable. I also recently saw a presentation titled "why I hate Django". Although I did not really share most of the speaker's (a Flickr guy) points, this fact of re-starting the server sounded very annoying. I would like to know your motivated experience in this regard. Should I continue working with Django and use it as a CGI, or favor another Python framework? Is the CGI option that bad, and should I be concerned about it, or is it a viable option (for performance and scalability)?
Django: do I need to restart Apache when deploying?
I just noted an annoying factor: Django requires either a restart of the server or CGI access to work. The first option is not feasible if you don't have access to the Apache server process. The second, as far as I know, is detrimental to performance, and in general the idea of running a CGI makes me uncomfortable. I also recently saw a presentation titled "why I hate Django". Although I did not really share most of the speaker's (a Flickr guy) points, this fact of re-starting the server sounded very annoying. I would like to know your motivated experience in this regard. Should I continue working with Django and use it as a CGI, or favor another Python framework? Is the CGI option that bad, and should I be concerned about it, or is it a viable option (for performance and scalability)?
[ "Use the WSGI standard, through mod_wsgi. You don't have to restart Apache, merely update the mtime on the .wsgi file. \n", "I usually don't restart the server, but force-reload the configuration. On an Ubuntu Hardy server, that is\nsudo /etc/init.d/apache2 force-reload\n\nand it's done almost immediately.\n", "For how to deal with source code reloading when using Apache/mod_wsgi, read:\nhttp://code.google.com/p/modwsgi/wiki/ReloadingSourceCode\nhttp://blog.dscpl.com.au/2008/12/using-modwsgi-when-developing-django.html\nhttp://blog.dscpl.com.au/2009/02/source-code-reloading-with-modwsgi-on.html\nDocumentation is more useful when it is read. ;-)\n" ]
[ 6, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001302411_django_python.txt
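The "touch" from the first answer, as a one-liner to run on the server after deploying (the path is an example; the mtime-triggered reload applies to mod_wsgi daemon mode, per the linked docs):
import os
os.utime('/path/to/django.wsgi', None)  # bump mtime to now; mod_wsgi reloads the app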
Q: How to delete entries in a dictionary with a given flag in python? I have a dictionary, let's call it myDict, in Python that contains a set of similar dictionaries which all have the entry "turned_on : True" or "turned_on : False". I want to remove all the entries in myDict that are off, i.e. where "turned_on : False". In Ruby I would do something like this: myDict.delete_if { |id,dict| not dict[:turned_on] } How should I do this in Python? A: You mean like this? myDict = {"id1" : {"turned_on": True}, "id2" : {"turned_on": False}} result = dict((a, b) for a, b in myDict.items() if b["turned_on"]) output: {'id1': {'turned_on': True}} A: Straight-forward way: def delete_if_not(predicate_key, some_dict): for key, subdict in some_dict.items(): if not subdict.get(predicate_key, True): del some_dict[key] Testing: mydict = { 'test1': { 'turned_on': True, 'other_data': 'foo', }, 'test2': { 'turned_on': False, 'other_data': 'bar', }, } delete_if_not('turned_on', mydict) print mydict The other answers on this page so far create another dict. They don't delete the keys in your actual dict. A: It's not clear what you want, but my guess is: myDict = {i: j for i, j in myDict.items() if j['turned_on']} or for older versions of python: myDict = dict((i, j) for i, j in myDict.iteritems() if j['turned_on']) A: d = { 'id1':{'turned_on':True}, 'id2':{'turned_on':False}} dict((i,j) for i, j in d.items() if j['turned_on'])
How to delete entries in a dictionary with a given flag in python?
I have a dictionary, let's call it myDict, in Python that contains a set of similar dictionaries which all have the entry "turned_on : True" or "turned_on : False". I want to remove all the entries in myDict that are off, i.e. where "turned_on : False". In Ruby I would do something like this: myDict.delete_if { |id,dict| not dict[:turned_on] } How should I do this in Python?
[ "You mean like this?\nmyDict = {\"id1\" : {\"turned_on\": True}, \"id2\" : {\"turned_on\": False}}\nresult = dict((a, b) for a, b in myDict.items() if b[\"turned_on\"])\n\noutput:\n{'id1': {'turned_on': True}}\n\n", "Straight-forward way:\ndef delete_if_not(predicate_key, some_dict):\n for key, subdict in some_dict.items():\n if not subdict.get(predicate_key, True):\n del some_dict[key]\n\nTesting:\nmydict = {\n 'test1': {\n 'turned_on': True,\n 'other_data': 'foo',\n },\n 'test2': {\n 'turned_on': False,\n 'other_data': 'bar',\n },\n }\ndelete_if_not('turned_on', mydict)\nprint mydict\n\nThe other answers on this page so far create another dict. They don't delete the keys in your actual dict.\n", "It's not clear what you want, but my guess is:\nmyDict = {i: j for i, j in myDict.items() if j['turned_on']}\n\nor for older versions of python:\nmyDict = dict((i, j) for i, j in myDict.iteritems() if j['turned_on'])\n\n", "d = { 'id1':{'turned_on':True}, 'id2':{'turned_on':False}}\ndict((i,j) for i, j in d.items() if j['turned_on'])\n\n" ]
[ 7, 5, 3, 1 ]
[]
[]
[ "dictionary", "python", "ruby" ]
stackoverflow_0001305437_dictionary_python_ruby.txt
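A small addition to the in-place answer above: in Python 3, dict.items() returns a live view, so deleting while iterating over it raises a RuntimeError. A minimal version-independent sketch, assuming the same "turned_on" layout as the question:

    myDict = {"id1": {"turned_on": True}, "id2": {"turned_on": False}}
    for key in list(myDict.keys()):      # copy the keys before mutating
        if not myDict[key]["turned_on"]:
            del myDict[key]
    print(myDict)                        # {'id1': {'turned_on': True}}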
Q: Python japanese module is not found I run the following Python script. pygame2exe.py ImportError: No module named japanese What's wrong? Does anyone know a solution? A: The script makes use of Japanese encoding # -*- coding: sjis -*- [...] args.append('japanese,encodings'); It's a shame because it could use UTF-8, which works out of the box. You can't run this script unless you install the japanese module. I can't find any reference to it on the web, and I can read in the code: # make standalone, needs at least pygame-1.5.3 and py2exe-0.3.1 # fixed for py2exe-0.6.x by RyoN3 at 03/15/2006 If you haven't installed the latest versions of pygame and py2exe, I would start with that since they may embed the module you need. A: To add to e-satis' explanation, the "japanese" module is provided by the Japan PUG, but I don't think you've actually needed it since around Python 2.2. I believe that all the Japanese codecs are included in a standard Python install these days. I certainly don't use this module, and I handle SJIS in my programs just fine. So I think you could just get rid of the forced import, and do fine. That is, delete these lines: args.append('-p') args.append('japanese,encodings') # forcibly include the JapaneseCodec Since you don't have the "japanese" module on your system, if the program runs OK on your system, then the frozen version should be OK without this module. However, I would recommend using Unicode throughout instead of byte strings, and if you insist on byte strings, I'd at least put them in UTF-8.
Python japanese module is not found
I run the following Python script. pygame2exe.py ImportError: No module named japanese What's wrong? Does anyone know a solution?
[ "The script makes use of japanese encoding\n# -*- coding: sjis -*-\n\n[...]\n\nargs.append('japanese,encodings');\n\nIt's a shame cause it could use UTF-8 that works out of the box. \nYou can't run this script unless you install the japanese module. I can't find any reference of it on the web, and I can read in the code :\n# make standalone, needs at least pygame-1.5.3 and py2exe-0.3.1\n# fixed for py2exe-0.6.x by RyoN3 at 03/15/2006\n\nIf you haven't installed the last version of pygame and py2exe, I would start by that since they may embed the module you need.\n", "To add to e-satis' explanation, the \"japanese\" module is provided by the Japan PUG, but I don't think you've actually needed it since around Python 2.2. I believe that all the Japanese codecs are included in a standard Python install these days. I certainly don't use this module, and I handle SJIS in my programs just fine.\nSo I think you could just get rid if the forced import, and do fine. That is, delete these lines:\nargs.append('-p')\nargs.append('japanese,encodings') # JapaneseCodecを強制的に含める\n\nSince you don't have the \"japanese\" module on your system, if the program runs OK on your system, then the frozen version should be OK without this module.\nHowever, I would recommend using Unicode throughout instead of byte strings, and if you insist on byte strings, I'd at least put them in UTF-8.\n" ]
[ 1, 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0001305042_pygame_python.txt
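A quick way to verify the second answer's claim that the Japanese codecs ship with Python (so the standalone "japanese" package is unnecessary) is to round-trip a string through the built-in Shift-JIS codec; a sketch for Python 2.x:

    # u'\u65e5\u672c\u8a9e' is the word "Japanese" written in Japanese
    text = u'\u65e5\u672c\u8a9e'
    encoded = text.encode('shift_jis')           # built-in codec, no extra module
    print encoded.decode('shift_jis') == text    # True on a standard install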
Q: upload file with Python Mechanize When I run the following script: from mechanize import Browser br = Browser() br.open(url) br.select_form(name="edit_form") br['file'] = 'file.txt' br.submit() I get: ValueError: value attribute is readonly And I still get the same error when I add: br.form.set_all_readonly(False) So, how can I use Python Mechanize to interact with an HTML form to upload a file? Richard A: This is how to do it properly with Mechanize: br.form.add_file(open(filename), 'text/plain', filename) A: twill is built on mechanize and makes scripting web forms a breeze. See python-www-macro. >>> from twill import commands >>> print commands.formfile.__doc__ >> formfile <form> <field> <filename> [ <content_type> ] Upload a file via an "upload file" form field. >>>
upload file with Python Mechanize
When I run the following script: from mechanize import Browser br = Browser() br.open(url) br.select_form(name="edit_form") br['file'] = 'file.txt' br.submit() I get: ValueError: value attribute is readonly And I still get the same error when I add: br.form.set_all_readonly(False) So, how can I use Python Mechanize to interact with an HTML form to upload a file? Richard
[ "This is how to do it properly with Mechanize:\nbr.form.add_file(open(filename), 'text/plain', filename)\n\n", "twill is built on mechanize and makes scripting web forms a breeze. See python-www-macro.\n>>> from twill import commands\n>>> print commands.formfile.__doc__\n\n>> formfile <form> <field> <filename> [ <content_type> ]\n\nUpload a file via an \"upload file\" form field.\n\n>>> \n\n" ]
[ 18, 2 ]
[]
[]
[ "file", "forms", "mechanize", "python", "upload" ]
stackoverflow_0001299855_file_forms_mechanize_python_upload.txt
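Putting the accepted answer back into the question's script, a minimal sketch (the form name "edit_form" and the control name "file" are taken from the question; url is assumed to point at the page holding the form):

    from mechanize import Browser

    br = Browser()
    br.open(url)
    br.select_form(name="edit_form")
    # add_file(file_object, content_type, filename, name=...); pass name=
    # so the file lands in the right control if the form has several
    br.form.add_file(open('file.txt'), 'text/plain', 'file.txt', name='file')
    response = br.submit()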
Q: Difference between accessing an instance attribute and a class attribute I have a Python class class pytest: i = 34 def func(self): return "hello world" When I access pytest.i, I get 34. I can also do this another way: a = pytest() a.i This gives 34 as well. If I try to access the (non-existing) pytest.j, I get Traceback (most recent call last): File "<pyshell#6>", line 1, in <module> pytest.j AttributeError: class pytest has no attribute 'j' while when I try a.j, the error is Traceback (most recent call last): File "<pyshell#8>", line 1, in <module> a.j AttributeError: pytest instance has no attribute 'j' So my question is: What exactly happens in the two cases and what is the difference? A: No, these are two different things. In Python, everything is an object. Classes are objects, functions are objects and instances are objects. Since everything is an object, everything behaves in a similar way. In your case, you create a class instance (== an object with the type "Class") with the name "pytest". That object has two attributes: i and func. i is an instance of "Integer" or "Number", func is an instance of "Function". When you use "pytest.j", you tell Python "look up the object pytest and when you have it, look up j". "pytest" is a class instance but that doesn't matter. When you create an instance of "pytest" (== an object with the type "pytest"), then you have an object which has "defaults". In your case, a is an instance of pytest which means that anything that can't be found in a will be searched in pytest, next. So a.j means: "Look in a. When it's not there, also look in pytest". But j doesn't exist and Python now has to give you a meaningful error message. It could say "class pytest has no attribute 'j'". This would be correct but meaningless: You would have to figure out yourself that you tried to access j via a. It would be confusing. Guido won't have that. Therefore, Python uses a different error message. Since it does not always have the name of the instance (a), the designers decided to use the type instead, so you get "pytest instance...". A: To summarize, there are two types of variables associated with classes and objects: class variables and instance variables. Class variables are associated with classes, but instance variables are associated with objects. Here's an example: class TestClass: classVar = 0 def __init__(self): self.instanceVar = 0 classVar is a class variable associated with the class TestClass. instanceVar is an instance variable associated with objects of the type TestClass. print(TestClass.classVar) # prints 0 instance1 = TestClass() # creates new instance of TestClass instance2 = TestClass() # creates another new instance of TestClass instance1 and instance2 share classVar because they're both objects of the type TestClass. print(instance1.classVar) # prints 0 TestClass.classVar = 1 print(instance1.classVar) # prints 1 print(instance2.classVar) # prints 1 However, they both have copies of instanceVar because it is an instance variable associated with individual instances, not the class. print(instance1.instanceVar) # prints 0 print(TestClass.instanceVar) # error! instanceVar is not a class variable instance1.instanceVar = 1 print(instance1.instanceVar) # prints 1 print(instance2.instanceVar) # prints 0 As Aaron said, if you try to access an instance variable, Python first checks the instance variables of that object, then the class variables of the object's type. Class variables function as default values for instance variables.
Difference between accessing an instance attribute and a class attribute
I have a Python class class pytest: i = 34 def func(self): return "hello world" When I access pytest.i, I get 34. I can also do this another way: a = pytest() a.i This gives 34 as well. If I try to access the (non-existing) pytest.j, I get Traceback (most recent call last): File "<pyshell#6>", line 1, in <module> pytest.j AttributeError: class pytest has no attribute 'j' while when I try a.j, the error is Traceback (most recent call last): File "<pyshell#8>", line 1, in <module> a.j AttributeError: pytest instance has no attribute 'j' So my question is: What exactly happens in the two cases and what is the difference?
[ "No, these are two different things.\nIn Python, everything is an object. Classes are objects, functions are objects and instances are objects. Since everything is an object, everything behaves in a similar way. In your case, you create a class instance (== an object with the type \"Class\") with the name \"pytest\". That object has two attributes: i and fuc. i is an instance of \"Integer\" or \"Number\", fuc is an instance of \"Function\".\nWhen you use \"pytest.j\", you tell python \"look up the object pytest and when you have it, look up i\". \"pytest\" is a class instance but that doesn't matter.\nWhen you create an instance of \"pytest\" (== an object with the type \"pytest\"), then you have an object which has \"defaults\". In your case, a is an instance of pytest which means that anything that can't be found in a will be searched in pytest, next.\nSo a.j means: \"Look in a. When it's not there, also look in pytest\". But j doesn't exist and Python now has to give you a meaningful error message. It could say \"class pytest has no attribute 'j'\". This would be correct but meaningless: You would have to figure out yourself that you tried to access j via a. It would be confusing. Guido won't have that.\nTherefore, python uses a different error message. Since it does not always have the name of the instance (a), the designers decided to use the type instead, so you get \"pytest instance...\".\n", "To summarize, there are two types of variables associated with classes and objects: class variables and instance variables. Class variables are associated with classes, but instance variables are associated with objects. Here's an example:\nclass TestClass:\n classVar = 0\n def __init__(self):\n self.instanceVar = 0\n\nclassVar is a class variable associated with the class TestClass. instanceVar is an instance variable associated with objects of the type TestClass.\nprint(TestClass.classVar) # prints 0\ninstance1 = TestClass() # creates new instance of TestClass\ninstance2 = TestClass() # creates another new instance of TestClass\n\ninstance1 and instance2 share classVar because they're both objects of the type TestClass. \nprint(instance1.classVar) # prints 0\nTestClass.classVar = 1\nprint(instance1.classVar) # prints 1\nprint(instance2.classVar) # prints 1\n\nHowever, they both have copies of instanceVar because it is an instance variable associated with individual instances, not the class.\nprint(instance1.instanceVar) # prints 0\nprint(TestClass.instanceVar) # error! instanceVar is not a class variable\ninstance1.instanceVar = 1\nprint(instance1.instanceVar) # prints 1\nprint(instance2.instanceVar) # prints 0\n\nAs Aaron said, if you try to access an instance variable, Python first checks the instance variables of that object, then the class variables of the object's type. Class variables function as default values for instance variables.\n" ]
[ 7, 0 ]
[]
[]
[ "class", "instance", "python" ]
stackoverflow_0001304868_class_instance_python.txt
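The lookup order described in the answers is easy to see interactively: an instance attribute shadows the class attribute of the same name until it is deleted. A small sketch reusing the question's class:

    class pytest:
        i = 34

    a = pytest()
    print a.i            # 34 -- not found on the instance, so the class is searched
    a.i = 50             # creates an instance attribute that shadows pytest.i
    print a.i, pytest.i  # 50 34
    del a.i              # remove the shadow
    print a.i            # 34 -- lookup falls through to the class again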
Q: Dynamic list slicing Good day code knights, I have a tricky problem that I cannot see a simple solution for. And the history of humankind states that there is a simple solution for everything (excluding buying presents) Here is the problem: I need an algorithm that takes multidimensional lists and a filter dictionary, processes them and returns lists based on the filters. For example: Bathymetry ('x', 'y')=(182, 149) #notation for (dimensions)=(size) Chl ('time', 'z', 'y', 'x')=(4, 31, 149, 182) filters {'x':(0,20), 'y':(3), 'z':(1,2), time:()} #no filter stands for all values Would return: readFrom.variables['Bathymetry'][0:21, 3] readFrom.variables['Chl'][:, 1:3, 3, 0:21] I was thinking about a for loop for the dimensions, reading the filters from the filter list but I cannot get my head around actually passing the attributes to the slicing machine. Any help much appreciated. A: I'm not sure I understood your question. But I think the slice object is what you are looking for: First, instead of an empty tuple, use None to include all values in time filters= {'x':(0,20), 'y':(3), 'z':(1,2), 'time':None} Then build a slice dictionary like this: d = dict( (k, slice(*v) if isinstance(v, tuple) else slice(v)) for k, v in filters.iteritems() ) Here is the output: { 'y': slice(None, 3, None), 'x': slice(0, 20, None), 'z': slice(1, 2, None), 'time': slice(None, None, None) } Then you can use the slice objects to extract the range from the list A: Something like the following should work: def doit(nam, filters): alldims = [] for dimname in getDimNames(nam): filt = filters.get(dimname, ()) howmany = len(filt) if howmany == 0: sliciflt = slice(None) elif howmany == 1: sliciflt = filt[0] elif howmany in (2, 3): sliciflt = slice(*filt) else: raise RuntimeError("%d items in slice for dim %r (%r)!" % (howmany, dimname, nam)) alldims.append(sliciflt) return readFrom.variables[nam][tuple(alldims)]
Dynamic list slicing
Good day code knights, I have a tricky problem that I cannot see a simple solution for. And the history of humankind states that there is a simple solution for everything (excluding buying presents) Here is the problem: I need an algorithm that takes multidimensional lists and a filter dictionary, processes them and returns lists based on the filters. For example: Bathymetry ('x', 'y')=(182, 149) #notation for (dimensions)=(size) Chl ('time', 'z', 'y', 'x')=(4, 31, 149, 182) filters {'x':(0,20), 'y':(3), 'z':(1,2), time:()} #no filter stands for all values Would return: readFrom.variables['Bathymetry'][0:21, 3] readFrom.variables['Chl'][:, 1:3, 3, 0:21] I was thinking about a for loop for the dimensions, reading the filters from the filter list but I cannot get my head around actually passing the attributes to the slicing machine. Any help much appreciated.
[ "I'm not sure I understood your question. But I think the slice object is what you are looking for:\nFirst instead of an empty tuple use None to include all values in time\nfilters= {'x':(0,20), 'y':(3), 'z':(1,2), 'time':None}\n\nThen build a slice dictionary like this:\nd = dict(\n (k, slice(*v) if isinstance(v, tuple) else slice(v))\n for k, v in filters.iteritems()\n )\n\nHere is the output:\n{\n 'y': slice(None, 3, None),\n 'x': slice(0, 20, None),\n 'z': slice(1, 2, None),\n 'time': slice(None, None, None)\n}\n\nThen you can use the slice objects to extract the range from the list\n", "Something like the following should work:\ndef doit(nam, filters):\n alldims = []\n for dimname in getDimNames(nam):\n filt = filters.get(dimname, ())\n howmany = len(filt)\n if howmany == 0:\n sliciflt = slice()\n elif howmany == 1:\n sliciflt = filt[0]\n elif howmany in (2, 3):\n sliciflt = slice(*filt)\n else:\n raise RuntimeError(\"%d items in slice for dim %r (%r)!\"\n % (howmany, dimname, nam))\n alldims.append(sliciflt)\n\n\nreturn readFrom.variables[nam][tuple(alldims)]\n\n" ]
[ 3, 2 ]
[]
[]
[ "algorithm", "list", "multidimensional_array", "python" ]
stackoverflow_0001307019_algorithm_list_multidimensional_array_python.txt
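Combining the two answers into one sketch: build one slice or index per dimension name, then index with the tuple. Note the filters here use (3,) rather than (3) so every value really is a tuple; readFrom is assumed to be a netCDF-style object as in the question:

    filters = {'x': (0, 20), 'y': (3,), 'z': (1, 2), 'time': None}

    def make_index(spec):
        if spec is None:
            return slice(None)   # no filter: keep the whole dimension
        if len(spec) == 1:
            return spec[0]       # single value: plain index
        return slice(*spec)      # (start, stop) or (start, stop, step)

    dims = ('time', 'z', 'y', 'x')   # dimension order of 'Chl'
    index = tuple(make_index(filters.get(d)) for d in dims)
    # chl = readFrom.variables['Chl'][index]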
Q: Extracting the To: header from an attachment of an email I am using python to open an email on the server (POP3). Each email has an attachment which is a forwarded email itself. I need to get the "To:" address out of the attachment. I am using python to try and help me learn the language and I'm not that good yet ! The code I have already is this import poplib, email, mimetypes oPop = poplib.POP3( 'xx.xxx.xx.xx' ) oPop.user( '[email protected]' ) oPop.pass_( 'xxxxxx' ) (iNumMessages, iTotalSize ) = oPop.stat() for thisNum in range(1, iNumMessages + 1): (server_msg, body, octets) = oPop.retr(thisNum) sMail = "\n".join( body ) oMsg = email.message_from_string( sMail ) # now what ?? I understand that I have the email as an instance of the email class but I'm not sure how to get to the attachment I know that using sData = 'To' if sData in oMsg: print sData + "", oMsg[sData] gets me the 'To:' header from the main message but how do I get that from the attachment ? I've tried for part in oMsg.walk(): oAttach = part.get_payload(1) But I'm not sure what to do with the oAttach object. I tried turning it into a string and then passing it to oMsgAttach = email.message_from_string( oAttach ) But that does nothing. I'm a little overwhelmed by the python docs and need some help. Thanks in advance. A: Without having an email in my inbox that is representative, it's difficult to work this one through (I've never used poplib). Having said that, some things that might help from my little bit of investigation: First of all, make lots of use of the command line interface to python and the dir() and help() functions: these can tell you lots about what's coming out. You can always insert help(oAttach), dir(oAttach) and print oAttach in your code to get an idea of what's going on as it loops round. If you're typing it into the command line interface line-by-line, it's even easier in this case. What I think you need to do is to go through each attachment and work out what it is. For a conventional email attachment, it's probably base64 encoded, so something like this might help: #!/usr/bin/python import poplib, email, mimetypes # Do everything you've done in the first code block of your question # ... # ... import base64 for part in oMsg.walk(): # I've removed the '1' from the argument as I think you always get the # first entry (in my test, it was the third iteration that did it). # However, I could be wrong... oAttach = part.get_payload() # Decode the base64 encoded attachment oContent = base64.b64decode(oAttach) # then maybe...? oMsgAttach = email.message_from_string(oContent) Note that you probably need to check oAttach in each case to check that it looks like a message. When you've got your sMail variable, print it out to the screen. Then you can look for something like Content-Transfer-Encoding: base64 in there, which will give you a clue to how the attachment is encoded. As I said, I've not used any of the poplib, email or mimetypes modules, so I'm not sure whether that'll help, but I thought it might point you in the right direction.
Extracting the To: header from an attachment of an email
I am using python to open an email on the server (POP3). Each email has an attachment which is a forwarded email itself. I need to get the "To:" address out of the attachment. I am using python to try and help me learn the language and I'm not that good yet ! The code I have already is this import poplib, email, mimetypes oPop = poplib.POP3( 'xx.xxx.xx.xx' ) oPop.user( '[email protected]' ) oPop.pass_( 'xxxxxx' ) (iNumMessages, iTotalSize ) = oPop.stat() for thisNum in range(1, iNumMessages + 1): (server_msg, body, octets) = oPop.retr(thisNum) sMail = "\n".join( body ) oMsg = email.message_from_string( sMail ) # now what ?? I understand that I have the email as an instance of the email class but I'm not sure how to get to the attachment I know that using sData = 'To' if sData in oMsg: print sData + "", oMsg[sData] gets me the 'To:' header from the main message but how do I get that from the attachment ? I've tried for part in oMsg.walk(): oAttach = part.get_payload(1) But I'm not sure what to do with the oAttach object. I tried turning it into a string and then passing it to oMsgAttach = email.message_from_string( oAttach ) But that does nothing. I'm a little overwhelmed by the python docs and need some help. Thanks in advance.
[ "Without having an email in my inbox that is representative, it's difficult to work this one through (I've never used poplib). Having said that, some things that might help from my little bit of investigation:\nFirst of all, make lots of use of the command line interface to python and the dir() and help() functions: these can tell you lots about what's coming out. You can always insert help(oAttach), dir(oAttach) and print oAttach in your code to get an idea of what's going on as it loops round. If you're typing it into the command line interface line-by-line, it's even easier in this case.\nWhat I think you need to do is to go through each attachment and work out what it is. For a conventional email attachment, it's probably base64 encoded, so something like this might help:\n#!/usr/bin/python\nimport poplib, email, mimetypes\n\n# Do everything you've done in the first code block of your question\n# ...\n# ...\n\nimport base64\nfor part in oMsg.walk():\n # I've removed the '1' from the argument as I think you always get the\n # the first entry (in my test, it was the third iteration that did it).\n # However, I could be wrong...\n oAttach = part.get_payload()\n # Decode the base64 encoded attachment\n oContent = b64decode(oAttach)\n # then maybe...?\n oMsgAttach = email.message_from_string(oContent)\n\nNote that you probably need to check oAttach in each case to check that it looks like a message. When you've got your sMail variable, print it out to the screen. Then you can look for something like Content-Transfer-Encoding: base64 in there, which will give you a clue to how the attachment is encoded.\nAs I said, I've not used any of the poplib, email or mimetypes modules, so I'm not sure whether that'll help, but I thought it might point you in the right direction.\n" ]
[ 1 ]
[]
[]
[ "email", "mime", "pop3", "python" ]
stackoverflow_0001306026_email_mime_pop3_python.txt
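If the forwarded mail is attached in the usual way, its part has the content type message/rfc822 and the email package has already parsed it, so no base64 handling is needed; the payload of such a part is a list containing one Message object. A sketch reusing the question's variables:

    import email

    oMsg = email.message_from_string(sMail)
    for part in oMsg.walk():
        if part.get_content_type() == 'message/rfc822':
            attached = part.get_payload(0)   # the embedded Message object
            print attached['To']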
Q: How do I programmatically pull lists/arrays of (itunes urls to) apps in the iphone app store? I'd like to know how to programmatically pull lists of apps from the iphone app store. I'd code this in python (via the google app engine) or in an iphone app. My goal would be to select maybe 5 of them and present them to the user. (for instance a top 5 kind of thing, or advanced filtering or queries) A: Unfortunately the only API that seems to be around for Apple's app store is a commercial offering from ABTO; nobody seems to have developed a free one. I'm afraid you'll have to resort to "screen scraping" -- urlget things, use beautifulsoup or the like for interpreting the HTML you get, and be ready to fix breakages whenever Apple tweaks their formats &c. It seems Apple has no interest in making such a thing available to developers (although as far as I can tell they're not actively fighting against it either, they appear to just not care).
How do I programmatically pull lists/arrays of (itunes urls to) apps in the iphone app store?
I'd like to know how to programmatically pull lists of apps from the iphone app store. I'd code this in python (via the google app engine) or in an iphone app. My goal would be to select maybe 5 of them and present them to the user. (for instance a top 5 kind of thing, or advanced filtering or queries)
[ "Unfortunately the only API that seems to be around for Apple's app store is a commercial offering from ABTO; nobody seems to have developed a free one. I'm afraid you'll have to resort to \"screen scraping\" -- urlget things, use beautifulsoup or the like for interpreting the HTML you get, and be ready to fix breakages whenever Apple tweaks their formats &c. It seems Apple has no interest in making such a thing available to developers (although as far as I can't tell they're not actively fighting against it either, they appear to just not care).\n" ]
[ 1 ]
[]
[]
[ "app_store", "arrays", "google_app_engine", "iphone", "python" ]
stackoverflow_0001307322_app_store_arrays_google_app_engine_iphone_python.txt
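For the screen-scraping route the answer suggests, a generic sketch with urllib and BeautifulSoup 3.x (the page URL and the link pattern are assumptions; Apple's markup changes, so expect to adjust the filter):

    import urllib
    from BeautifulSoup import BeautifulSoup

    html = urllib.urlopen(url).read()   # url: an App Store listing page (hypothetical)
    soup = BeautifulSoup(html)
    for a in soup.findAll('a', href=True):
        if 'itunes.apple.com' in a['href']:
            print a['href'], a.string   # iTunes link and its anchor text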
Q: Python question regarding a server listener I wrote a plug-in for the JetBrains tool TeamCity. It is pretty much just a server listener that listens for a build being triggered and outputs some text files with information about different builds like what triggered it, how many changes there were, etc. After I finished that I wrote a python script that could input info into TeamCity while the server is running and kick off a build. I would like to be able to get the artifacts for that build after the build has run, but the problem is I don't know how long it takes each build to run. Sometimes it is 30 seconds, other times 30 minutes. So I am kicking off the build with this line in python. urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>') After the build runs (some undetermined amount of time) I would like to use this line to get my text file. urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>') Again the problem is I don't know how long to wait before executing the second line. Usually in Java I would do a second thread of sorts that sleeps for a certain amount of time and waits for the build to be done. I am not sure how to do this in python. So if anyone has an idea of either how to do this in python OR a better way to do this I would appreciate it. If I need to explain myself better please let me know. Grant- A: Unless you get notified by having the build server contact you, the only way to do it is to poll. You can either spawn a thread as indicated in other comments, or just have your main script sleep and poll. Something like: import time, urllib wait=True while wait: url=urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>') if url.getcode()!=404: wait=False else: time.sleep(60) As an alternative, you could use CherryPy. Then, when the build is done, you could have curl or wget connect to the listening CherryPy server and trigger your app to download the url. You could also use xmlrpclib to do something similar. A: Python actually has a threading system that is fairly similar to Java, so you should be able to use that without much trouble. But if all you need to do is wait for some predetermined amount of time, use import time time.sleep(300) # sleep for 300 seconds
Python question regarding a server listener
I wrote a plug-in for the JetBrains tool TeamCity. It is pretty much just a server listener that listens for a build being triggered and outputs some text files with information about different builds like what triggered it, how many changes there were, etc. After I finished that I wrote a python script that could input info into TeamCity while the server is running and kick off a build. I would like to be able to get the artifacts for that build after the build has run, but the problem is I don't know how long it takes each build to run. Sometimes it is 30 seconds, other times 30 minutes. So I am kicking off the build with this line in python. urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>') After the build runs (some undetermined amount of time) I would like to use this line to get my text file. urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>') Again the problem is I don't know how long to wait before executing the second line. Usually in Java I would do a second thread of sorts that sleeps for a certain amount of time and waits for the build to be done. I am not sure how to do this in python. So if anyone has an idea of either how to do this in python OR a better way to do this I would appreciate it. If I need to explain myself better please let me know. Grant-
[ "Unless you get get notified by having the build server contact you, the only way to do it is to poll. You can either spawn a thread as indicated in other comments, you just have your main script sleep and poll.\nSomething like:\nwait=True\nwhile wait:\n url=urllib.urlopen('http://'+username+':'+password+'@localhost/httpAuth/action.html?add2Queue='+btid+'&system.name=<btid>&system.value=<'+btid+'>&system.name=<buildNumber>&system.value=<'+buildNumber+'>')\n if url.getcode()!=404:\n wait=False\n else:\n time.sleep(60)\n\nAs an alternative, you could use CherryPy. Then, when the build is done, you could have curl or wget connect to the listening CherryPy server and trigger your app to download the url.\nYou could also use xmlrpclib to do something similar.\n", "Python actually has a threading system that is fairly similar to Java, so you should be able to use that without much trouble.\nBut if all you need to do is wait for some predetermined amount of time, use\nimport time\ntime.sleep(300) # sleep for 300 seconds\n\n" ]
[ 2, 0 ]
[]
[]
[ "multithreading", "plugins", "python", "sleep", "teamcity" ]
stackoverflow_0001307371_multithreading_plugins_python_sleep_teamcity.txt
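For the Java-style "second thread" the question mentions, threading.Timer runs a function after a delay without blocking the main script; check_build_done and download_artifacts below are placeholders for whatever test and fetch you use:

    import threading

    def poll_build():
        if check_build_done():       # placeholder: e.g. probe the artifact URL
            download_artifacts()     # placeholder: fetch the text file
        else:
            threading.Timer(60, poll_build).start()   # try again in 60 s

    threading.Timer(60, poll_build).start()           # first check after 60 s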
Q: Python logging SMTPHandler - handling offline SMTP server I have set up the logging module for my new python script. I have two handlers, one sending stuff to a file, and one for email alerts. The SMTPHandler is set up to mail anything at the ERROR level or above. Everything works great, unless the SMTP connection fails. If the SMTP server does not respond or authentication fails (it requires SMTP auth), then the whole script dies. I am fairly new to python, so I am trying to figure out how to capture the exception that the SMTPHandler is raising so that any problems sending the log message via email won't bring down my entire script. Since I am also writing errors to a log file, if the SMTP alert fails, I just want to keep going, not halt anything. If I need a "try:" statement, would it go around the logging.handlers.SMTPHandler setup, or around the individual calls to my_logger.error()?
Python logging SMTPHandler - handling offline SMTP server
I have set up the logging module for my new python script. I have two handlers, one sending stuff to a file, and one for email alerts. The SMTPHandler is set up to mail anything at the ERROR level or above. Everything works great, unless the SMTP connection fails. If the SMTP server does not respond or authentication fails (it requires SMTP auth), then the whole script dies. I am fairly new to python, so I am trying to figure out how to capture the exception that the SMTPHandler is raising so that any problems sending the log message via email won't bring down my entire script. Since I am also writing errors to a log file, if the SMTP alert fails, I just want to keep going, not halt anything. If I need a "try:" statement, would it go around the logging.handlers.SMTPHandler setup, or around the individual calls to my_logger.error()?
[ "Exceptions which occur during logging should not stop your script, though they may cause a traceback to be printed to sys.stderr. In order to prevent this printout, do the following:\nlogging.raiseExceptions = 0\n\nThis is not the default (because in development you typically want to know about failures) but in production, raiseExceptions should not be set.\nYou should find that the SMTPHandler will attempt a re-connection the next time an ERROR (or higher) is logged.\nlogging does pass through SystemExit and KeyboardInterrupt exceptions, but all others should be handled so that they do not cause termination of an application which uses logging. If you find that this is not the case, please post specific details of the exceptions which are getting through and causing your script to terminate, and about your version of Python/operating system. Bear in mind that the script may appear to hang if there is a network timeout which is causing the handler to block (e.g. if a DNS lookup takes a long time, or if the SMTP connection takes a long time).\n", "You probably need to do both. To figure this out, I suggest to install a local mail server and use that. This way, you can shut it down while your script runs and note down the error message.\nTo keep the code maintainable, you should extends SMTPHandler in such a way that you can handle the exceptions in a single place (instead of wrapping every logger call with try-except).\n" ]
[ 5, 0 ]
[]
[]
[ "handler", "logging", "python" ]
stackoverflow_0001304593_handler_logging_python.txt
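To centralize the failure handling as the second answer suggests, one sketch is to subclass SMTPHandler and override handleError, which the logging machinery calls whenever emit fails (by default it prints a traceback when logging.raiseExceptions is true):

    import logging.handlers

    class QuietSMTPHandler(logging.handlers.SMTPHandler):
        def handleError(self, record):
            # SMTP delivery failed; stay silent so the script keeps running
            # (the file handler still receives the record independently)
            pass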
Q: In Django how to show a list of objects by year I have these models: class Year(models.Model): name = models.CharField(max_length=15) date = models.DateField() class Period(models.Model): name = models.CharField(max_length=15) date = models.DateField() class Notice(models.Model): year = models.ForeignKey(Year) period = models.ForeignKey(Period, blank=True, null=True) text = models.TextField() order = models.PositiveSmallIntegerField(default=1) And in my view I would like to show all the Notices ordered by year and period but I don't want to repeat the year and/or the period for a notice if they are the same as the previous. I would like this kind of result: 1932-1940 Mid-summer Text Lorem ipsum from notice 1 ... Text Lorem ipsum from notice 2 ... September Text Lorem ipsum from notice 3 ... 1950 January Text Lorem ipsum from notice 4 ... etc. I found a solution by looping over all the rows to build nested lists like this: years = [('1932-1940', [ ('Mid-summer', [Notice1, Notice2]), ('September', [Notice3]) ]), ('1950', [ ('January', [Notice4]) ]) ] Here's the code in the view: years = [] year = [] period = [] prev_year = '' prev_period = '' for notice in Notice.objects.all(): if notice.year != prev_year: prev_year = notice.year year = [] years.append((prev_year.name, year)) prev_period = '' if notice.period != prev_period: prev_period = notice.period period = [] if prev_period: name = prev_period.name else: name = None year.append((name, period)) period.append(notice) But this is slow and inelegant. What's the good way to do this ? If I could set some variables inside the template I could only iterate over all the notices and only print the year and period by checking if they are the same as the previous one. But it's not possible to set some temp variables in templates. A: Luckily Django has some built-in template tags that will help you. Probably the main one you want is regroup: {% regroup notices by year as year_list %} {% for year in year_list %} <h2>{{ year.grouper }}</h2> <ul> {% for notice in year.list %} <li>{{ notice.text }}</li> {% endfor %} </ul> {% endfor %} There's also {% ifchanged %}, which can help with looping over lists when one value stays the same. A: Your problem would not actually be that hard if not for the unfortunate denormalization you have going on. I'm going to answer your question ignoring the Year class, because I don't understand how that logic relates to the period logic. Most simply, in the template, put: {% for period in periods %} {{ period.name }} {% for notice in period.notice_set.all %} {{ notice.text }} {% endfor %} {% endfor %} Now that completely leaves out ordering, so if you like, you could define in your Period model: def order_notices(self): return self.notice_set.order_by('order') Then use {% for period in periods %} {{ period.name }} {% for notice in period.order_notices %} {{ notice.text }} {% endfor %} {% endfor %} If you must use years, I strongly suggest defining a method in the Year model of the form def ordered_periods(self): ... #your logic goes here ... #should return an iterable (list or queryset) of periods Then in your template: {% for year in years %} {{ year.name }} {% for period in year.ordered_periods %} {{ period.name }} {% for notice in period.order_notices %} {{ notice.text }} {% endfor %} {% endfor %} {% endfor %} EDIT: Please keep in mind that the secret to all this success is that the template can only call methods that don't take any arguments (except self).
In Django how to show a list of objects by year
I have these models: class Year(models.Model): name = models.CharField(max_length=15) date = models.DateField() class Period(models.Model): name = models.CharField(max_length=15) date = models.DateField() class Notice(models.Model): year = models.ForeignKey(Year) period = models.ForeignKey(Period, blank=True, null=True) text = models.TextField() order = models.PositiveSmallIntegerField(default=1) And in my view I would like to show all the Notices ordered by year and period but I don't want to repeat the year and/or the period for a notice if they are the same as the previous. I would like this kind of result: 1932-1940 Mid-summer Text Lorem ipsum from notice 1 ... Text Lorem ipsum from notice 2 ... September Text Lorem ipsum from notice 3 ... 1950 January Text Lorem ipsum from notice 4 ... etc. I found a solution by looping over all the rows to build nested lists like this: years = [('1932-1940', [ ('Mid-summer', [Notice1, Notice2]), ('September', [Notice3]) ]), ('1950', [ ('January', [Notice4]) ]) ] Here's the code in the view: years = [] year = [] period = [] prev_year = '' prev_period = '' for notice in Notice.objects.all(): if notice.year != prev_year: prev_year = notice.year year = [] years.append((prev_year.name, year)) prev_period = '' if notice.period != prev_period: prev_period = notice.period period = [] if prev_period: name = prev_period.name else: name = None year.append((name, period)) period.append(notice) But this is slow and inelegant. What's the good way to do this ? If I could set some variables inside the template I could only iterate over all the notices and only print the year and period by checking if they are the same as the previous one. But it's not possible to set some temp variables in templates.
[ "Luckily Django has some built-in template tags that will help you. Probably the main one you want is regroup:\n{% regroup notices by year as year_list %}\n\n\n{% for year in year_list %}\n <h2>{{ year.grouper }}<h2>\n\n <ul>\n {% for notice in year.list %}\n <li>{{ notice.text }}</li>\n {% endfor %}\n </ul>\n{% endfor %}\n\nThere's also {% ifchanged %}, which can help with looping over lists when one value stays the same.\n", "Your problem would not actually be that hard if not for the unfortunate denormalization you have going on. I'm going to answer your question ignoring the Year class, because I don't understand how that logic relates to the period logic.\nMost simply, in the template, put:\n{% for period in periods %}\n period.name\n {% for notice in period.notice_set.all %}\n notice.text\n {% endfor %}\n{% endfor %}\n\nNow that completely leaves out ordering, so if you like, you could define in your Period model:\ndef order_notices(self):\n return self.notice_set.order_by('order')\n\nThen use\n{% for period in periods %}\n period.name\n {% for notice in period.order_notices %}\n notice.text\n {% endfor %}\n{% endfor %}\n\nIf you must use years, I strongly suggest defining a method in the Year model of the form\ndef ordered_periods(self):\n ... #your logic goes here\n ... #should return an iterable (list or queryset) or periods\n\nThen in your template:\n{% for year in years %}\n year.name\n {% for period in year.ordered_periods %}\n period.name\n {% for notice in period.order_notices %}\n notice.text\n {% endfor %}\n {% endfor %}\n\nEDIT:\nPlease keep in mind that the secret to all this success is that the template can only call methods that don't take any arguments (except self).\n" ]
[ 2, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001308169_django_python.txt
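One detail worth noting about the regroup answer: regroup only groups adjacent items, so the queryset must already be ordered by the grouping attribute. A sketch of the matching view, assuming the models from the question:

    from django.shortcuts import render_to_response
    from myapp.models import Notice   # app label assumed

    def notice_list(request):
        notices = Notice.objects.order_by('year__date', 'period__date', 'order')
        return render_to_response('notices.html', {'notices': notices})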
Q: Calling Python from Objective-C I'm developing a Python/ObjC application and I need to call some methods in my Python classes from ObjC. I've tried several things with no success. How can I call a Python method from Objective-C? My Python classes are being instantiated in Interface Builder. How can I call a method from that instance? A: Use PyObjC. It is included with Leopard & later. >>> from Foundation import * >>> a = NSArray.arrayWithObjects_("a", "b", "c", None) >>> a ( a, b, c ) >>> a[1] 'b' >>> a.objectAtIndex_(1) 'b' >>> type(a) <objective-c class NSCFArray at 0x7fff708bc178> It even works with iPython: In [1]: from Foundation import * In [2]: a = NSBundle.allFrameworks() In [3]: ?a Type: NSCFArray Base Class: <objective-c class NSCFArray at 0x1002adf40> To call from Objective-C into Python, the easiest way is to: declare an abstract superclass in Objective-C that contains the API you want to call create stub implementations of the methods in the class's @implementation subclass the class in Python and provide concrete implementations create a factory method on the abstract superclass that creates concrete subclass instances I.e. @interface Abstract : NSObject - (unsigned int) foo: (NSString *) aBar; + newConcrete; @end @implementation Abstract - (unsigned int) foo: (NSString *) aBar { return 42; } + newConcrete { return [[NSClassFromString(@"Concrete") new] autorelease]; } @end ..... class Concrete(Abstract): def foo_(self, s): return s.length() ..... x = [Abstract newConcrete]; [x foo: @"bar"];
Calling Python from Objective-C
I'm developing a Python/ObjC application and I need to call some methods in my Python classes from ObjC. I've tried several things with no success. How can I call a Python method from Objective-C? My Python classes are being instantiated in Interface Builder. How can I call a method from that instance?
[ "Use PyObjC.\nIt is included with Leopard & later.\n>>> from Foundation import *\n>>> a = NSArray.arrayWithObjects_(\"a\", \"b\", \"c\", None)\n>>> a\n(\n a,\n b,\n c\n)\n>>> a[1]\n'b'\n>>> a.objectAtIndex_(1)\n'b'\n>>> type(a)\n<objective-c class NSCFArray at 0x7fff708bc178>\n\nIt even works with iPython:\nIn [1]: from Foundation import *\n\nIn [2]: a = NSBundle.allFrameworks()\n\nIn [3]: ?a\nType: NSCFArray\nBase Class: <objective-c class NSCFArray at 0x1002adf40>\n\n`\nTo call from Objective-C into Python, the easiest way is to:\n\ndeclare an abstract superclass in Objective-C that contains the API you want to call\ncreate stub implementations of the methods in the class's @implementation\nsubclass the class in Python and provide concrete implementations\ncreate a factory method on the abstract superclass that creates concrete subclass instances\n\nI.e.\n@interface Abstract : NSObject\n- (unsigned int) foo: (NSString *) aBar;\n+ newConcrete;\n@end\n\n@implementation Abstract\n- (unsigned int) foo: (NSString *) aBar { return 42; }\n+ newConcrete { return [[NSClassFromString(@\"MyConcrete\") new] autorelease]; }\n@end\n\n.....\n\nclass Concrete(Abstract):\n def foo_(self, s): return s.length()\n\n.....\n\nx = [Abstract newFoo];\n[x foo: @\"bar\"];\n\n" ]
[ 17 ]
[]
[]
[ "cocoa", "objective_c", "python" ]
stackoverflow_0001308079_cocoa_objective_c_python.txt
Q: How can I add a Widget or a Region to a Status Icon in PyGTK This is my first question in StackOverflow, so I will try to explain myself the best I can. I made a small app trying to emulate the Windows Procrastination Killer application, using pygtk and pygame for the sound alerts. Here is a video of my little app running http://www.youtube.com/watch?v=FmE-QPA9p-8 My Issue is that I want to get a widget in the tray icon area, and not just the plain icon. Something like an icon and a label, to make a counter, or at least extend the icon size to put more information in the status icon. So my Questions would be: How can I resize the status icon? for example to show an icon 44x22 pixels How can I add a Widget, Region, or something else instead of the status icon Here is the code that I use to get the status icon. self.status_icon = gtk.StatusIcon() self.status_icon.set_from_file(STATUS_ICON_FILE) self.status_icon.set_tooltip("Switch, a procrastination killer app") self.status_icon.connect("activate", self.on_toggle_status_trayicon) self.status_icon.connect("popup-menu", lambda i, b, a: self.status_menu.popup( None, None, gtk.status_icon_position_menu, b, a, self.status_icon)) I am packaging the app for ubuntu as soon as I find a name :), that maybe would be my third question. 3: How do I name my app? A: GTK+ doesn't support arbitrary widgets in the notification area, because these don't work well in Windows. You probably want to write a panel applet instead -- here's a tutorial for panel applets in PyGTK.
How can I add a Widget or a Region to a Status Icon in PyGTK
This is my first question in StackOverflow, so I will try to explain myself the best I can. I made a small app trying to emulate the Windows Procrastination Killer application, using pygtk and pygame for the sound alerts. Here is a video of my little app running http://www.youtube.com/watch?v=FmE-QPA9p-8 My Issue is that I want to get a widget in the tray icon area, and not just the plain icon. Something like an icon and a label, to make a counter, or at least extend the icon size to put more information in the status icon. So my Questions would be: How can I resize the status icon? for example to show an icon 44x22 pixels How can I add a Widget, Region, or something else instead of the status icon Here is the code that I use to get the status icon. self.status_icon = gtk.StatusIcon() self.status_icon.set_from_file(STATUS_ICON_FILE) self.status_icon.set_tooltip("Switch, a procrastination killer app") self.status_icon.connect("activate", self.on_toggle_status_trayicon) self.status_icon.connect("popup-menu", lambda i, b, a: self.status_menu.popup( None, None, gtk.status_icon_position_menu, b, a, self.status_icon)) I am packaging the app for ubuntu as soon as I find a name :), that maybe would be my third question. 3: How do I name my app?
[ "GTK+ doesn't support arbitrary widgets in the notification area, because these don't work well in Windows. You probably want to write a panel applet instead -- here's a tutorial for panel applets in PyGTK.\n" ]
[ 2 ]
[]
[]
[ "gtk", "pygtk", "python", "tray", "trayicon" ]
stackoverflow_0001308679_gtk_pygtk_python_tray_trayicon.txt
Q: Python egg: where is it installed? I'm trying to install py-appscript on the mac using 'sudo easy_install appscript'. The command runs and I get a message saying "Installed /Library/Python/..../appscript=0.20.0-py2.5-macosx-10.5-i386.egg". However, when I run a tool that requires this (osaglue) I get an error that py-appscript isn't installed. My guess is that it's installed in some location that's not in my path, or that I need to do something more to install it. Any ideas? The exact text I see when running easy_install is: Processing appscript-0.20.0-py2.5-macosx-10.5-i386.egg appscript 0.20.0 is already the active version in easy-install.pth Installed /Library/Python/2.5/site-packages/appscript-0.20.0-py2.5-macosx-10.5-i386.egg Processing dependencies for appscript==0.20.0 Finished processing dependencies for appscript==0.20.0 and the error when trying to run osaglue is: Sorry, py-appscript 0.18.1 or later is required to use osaglue. Please see the following page for more information: http://appscript.sourceforge.net/py-appscript/install.html Please install py-appscript and run this command again. A: You can find all your available packages via sys.path. Start the Python shell and type in this code: import sys print sys.path A: What does which python return? Update: Ok, so go into /usr/bin and do a ls -l | grep python. Does /usr/bin/python link to the OSX installation? If it does not, I had the same problem a while back. easy_install is installing the egg for the python package that came with OSX, but you're using a different version to execute code when you call python. I believe my solution was to reinstall setuptools and tell it to place the eggs in the location for the python distro that I wanted to use. A: Do you have more than one version of Python installed on your machine? (Apple install their own copy of Python in /System, with third-party modules at /Library/Python, by default.) If so, chances are, easy_install is installing appscript for one Python, while osaglue is using another. That said, if you're looking to generate glue code for objc-appscript, the simplest approach is to use ASDictionary (http://appscript.sourceforge.net/tools.html) to generate the glue directly; no need to mess around installing py-appscript and other dependencies.
Python egg: where is it installed?
I'm trying to install py-appscript on the mac using 'sudo easy_install appscript'. The command runs and I get a message saying "Installed /Library/Python/..../appscript=0.20.0-py2.5-macosx-10.5-i386.egg". However, when I run a tool that requires this (osaglue) I get an error that py-appscript isn't installed. My guess is that it's installed in some location that's not in my path, or that I need to do something more to install it. Any ideas? The exact text I see when running easy_install is: Processing appscript-0.20.0-py2.5-macosx-10.5-i386.egg appscript 0.20.0 is already the active version in easy-install.pth Installed /Library/Python/2.5/site-packages/appscript-0.20.0-py2.5-macosx-10.5-i386.egg Processing dependencies for appscript==0.20.0 Finished processing dependencies for appscript==0.20.0 and the error when trying to run osaglue is: Sorry, py-appscript 0.18.1 or later is required to use osaglue. Please see the following page for more information: http://appscript.sourceforge.net/py-appscript/install.html Please install py-appscript and run this command again.
[ "You can found all your avaiable packages in the sys.path. Start the pythonshell and type in this code:\nimport sys\nprint sys.path\n\n", "What does which python return?\nUpdate: Ok, so go into /usr/bin and do a ls -l | grep python. Does /usr/bin/python link to the OSX installation?\nIf it does not, I had the same problem a while back. easy_install is installing the egg for the python package that came with OSX, but you're using a different version to execute code when you call python. I believe my solution was to reinstall setuptools and tell it to place the eggs in the location for the python distro that I wanted to use.\n", "Do you have more than one version of Python installed on your machine? (Apple install their own copy of Python in /System, with third-party modules at /Library/Python, by default.) If so, chances are, easy_install is installing appscript for one Python, while osaglue is using another.\nThat said, if you're looking to generate glue code for objc-appscript, the simplest approach is to use ASDictionary (http://appscript.sourceforge.net/tools.html) to generate the glue directly; no need to mess around installing py-appscript and other dependencies.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "macos", "python" ]
stackoverflow_0001304122_macos_python.txt
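A quick check for the "two Pythons" theory in the last two answers: ask the interpreter you actually run where it lives and where it found the package:

    import sys
    print sys.executable      # which Python binary is running
    import appscript
    print appscript.__file__  # where that Python found appscript

If the appscript path is not under the same interpreter's site-packages, easy_install targeted a different Python.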