content             stringlengths (85 to 101k)
title               stringlengths (0 to 150)
question            stringlengths (15 to 48k)
answers             list
answers_scores      list
non_answers         list
non_answers_scores  list
tags                list
name                stringlengths (35 to 137)
Q: Change Flash source by Python I have a Flash file with a big image library inside; is there a way to manipulate this content with Python? A: Python Flash Tools is intended to be a level up from the Ming SWF library and a step down from a Flash GUI; see http://pyswftools.sourceforge.net/
Change Flash source by Python
I have a Flash file with a big image library inside; is there a way to manipulate this content with Python?
[ "Python Flash Tools is intended to be a level up from the Ming SWF library and a step down from a Flash GUI; see http://pyswftools.sourceforge.net/\n" ]
[ 0 ]
[]
[]
[ "flash", "python" ]
stackoverflow_0001171170_flash_python.txt
Q: How to solve this complex recursive problem, pyramid point system I'm trying to program a pyramid-like score system for an ARG game and have come up with a problem. When users get into the game they start a new "pyramid", but if one starts the game with a referral code from another player they become a child of this user and then kick points up the ladder. The issue here is not the point calculation, I've gotten that right with some good help from you guys, but if a user gets more points than its parent, they should switch places in the ladder, so that a user's parent becomes its child and so on. The Python code I have now doesn't work properly, and I don't really know why. def verify_parents(user): """ This is a recursive function that checks to see if any of the parents should bump up because they got more points than its parent. """ from rottenetter.accounts.models import get_score try: parent = user.parent except: return False else: # If this players score is greater than its parent if get_score(user) > get_score(parent): # change parent user.parent = parent.parent user.save() # change parent's parent to current profile parent.parent = user parent.save() verify_parents(parent) In my mind this should work: if a user has a parent, check to see if the user got more points than its parent; if so, set the user's parent to be the parent's parent, and set the parent's parent to be the user, and then they have switched places. And after that, call the same function with the parent as a target so that it can check itself, and then continue up the ladder. But this doesn't always work; in some cases people aren't bumped to the right position for some reason. Edit: When one user steps up or down a step in the ladder, its children move with it so they still relate to the same parent, unless they too get more points and step up. So it should be unnecessary to do anything with the children, shouldn't it? A: I think your problem could be that you aren't setting the child(s) of profile to now have parent as it's/their parent, unless children with parents can't also be parents in your system (which I do not believe to be the case). Alternatively (or possibly together with the previous), you may want to just do parent = profile.parent rather than parent = profile.parent.get_profile(). EDIT: And I see that you did indeed switch to something of that second form, though using user instead of profile. Anyway, since each user can have multiple children (as you stated in a comment on another answer), you might want to keep track of each user's child from within that user's object. Something like... parent = user.parent user.parent = parent.parent parent.parent = user children = user.children for child in children: child.parent = parent user.children = parent.children for child in user.children: if child is user: child = parent child.parent = user parent.children = children for child in user.parent.children: if child is parent: child = user This probably needs refactoring, though, and I may have missed something. A: [correction according to your comment] If the result of the "move" never intends to change the topology of the tree(i.e. when X becomes the parent of its old parent Y, it gives all its children to Y), then the simplest might be to decouple the notion "pyramid nodes" from the "users", with one-to-one relationship. Then if the 'swap' operation does not change the topology of the tree, you would only need to swap the mapping between 'nodeA <=> userA' and 'nodeB <=> userB' so they become 'nodeA <=> userB' and 'nodeB <=> userA'. Since the topology of the tree does not change, this would automagically take care of the children having a correct parent. The downside, of course, is that you no longer can directly find out the 'parent' from a user record and would need to go via the nodes. No code as I am not sure of your application details - but hopefully if this is applicable it should be easy enough to turn into code.
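A minimal sketch of the node/user decoupling the second answer describes; the Node class and its attributes are made up for illustration, not taken from the asker's models:

    class Node(object):
        def __init__(self, parent=None):
            self.parent = parent   # tree topology: fixed, never changes
            self.user = None       # which user currently occupies this node

    def swap_users(node_a, node_b):
        # Swap the two occupants; child nodes keep their parent node,
        # so the children automatically stay under the right slot.
        node_a.user, node_b.user = node_b.user, node_a.user

    def parent_user(node):
        # Finding a user's parent now goes via the nodes, as the answer warns.
        if node.parent is not None:
            return node.parent.user
        return None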
How to solve this complex recursive problem, pyramid point system
I'm trying to program a pyramid-like score system for an ARG game and have come up with a problem. When users get into the game they start a new "pyramid", but if one starts the game with a referral code from another player they become a child of this user and then kick points up the ladder. The issue here is not the point calculation, I've gotten that right with some good help from you guys, but if a user gets more points than its parent, they should switch places in the ladder, so that a user's parent becomes its child and so on. The Python code I have now doesn't work properly, and I don't really know why. def verify_parents(user): """ This is a recursive function that checks to see if any of the parents should bump up because they got more points than its parent. """ from rottenetter.accounts.models import get_score try: parent = user.parent except: return False else: # If this players score is greater than its parent if get_score(user) > get_score(parent): # change parent user.parent = parent.parent user.save() # change parent's parent to current profile parent.parent = user parent.save() verify_parents(parent) In my mind this should work: if a user has a parent, check to see if the user got more points than its parent; if so, set the user's parent to be the parent's parent, and set the parent's parent to be the user, and then they have switched places. And after that, call the same function with the parent as a target so that it can check itself, and then continue up the ladder. But this doesn't always work; in some cases people aren't bumped to the right position for some reason. Edit: When one user steps up or down a step in the ladder, its children move with it so they still relate to the same parent, unless they too get more points and step up. So it should be unnecessary to do anything with the children, shouldn't it?
[ "I think your problem could be that you aren't setting the child(s) of profile to now have parent as it's/their parent, unless children with parents can't also be parents in your system (which I do not believe to be the case).\nAlternatively (or possibly together with the previous), you may want to just do parent = profile.parent rather than parent = profile.parent.get_profile().\nEDIT: And I see that you did indeed switch to something of that second form, though using user instead of profile.\nAnyway, since each user can have multiple children (as you stated in a comment on another answer), you might want to keep track of each user's child from within that user's object. Something like...\nparent = user.parent\nuser.parent = parent.parent\nparent.parent = user\n\nchildren = user.children\n\nfor child in children:\n child.parent = parent\n\nuser.children = parent.children\n\nfor child in user.children:\n if child is user:\n child = parent\n\n child.parent = user\n\nparent.children = children\n\nfor child in user.parent.children:\n if child is parent:\n child = user\n\nThis probably needs refactoring, though, and I may have missed something.\n", "[correction according to your comment]\nIf the result of the \"move\" never intends to change the topology of the tree(i.e. when X becomes the parent of its old parent Y, it gives all its children to Y), then the simplest might be to decouple the notion \"pyramid nodes\" from the \"users\", with one-to-one relationship. Then if the 'swap' operation does not change the topology of the tree, you would only need to swap the mapping between 'nodeA <=> userA' and 'nodeB <=> userB' so they become 'nodeA <=> userB' and 'nodeB <=> userA'.\nSince the topology of the tree does not change, this would automagically take care of the children having a correct parent. The downside, of course, is that you no longer can directly find out the 'parent' from a user record and would need to go via the nodes. \nNo code as I am not sure of your application details - but hopefully if this is applicable it should be easy enough to turn into code.\n" ]
[ 1, 1 ]
[]
[]
[ "django", "python", "recursion" ]
stackoverflow_0001171926_django_python_recursion.txt
Q: How can I run a telnet command from a Python GUI? How can I run a telnet command from a Python GUI? A: Sounds like you need telnetlib A: You may also be interested in Scapy, though it may be too low-level for what you want.
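If telnetlib is the right fit, a minimal Python 2 session sketch looks like this; the host, credentials, and prompt strings are placeholders you would adapt:

    import telnetlib

    tn = telnetlib.Telnet('example.com', 23)   # placeholder host/port
    tn.read_until('login: ')                   # wait for the login prompt
    tn.write('myuser\n')                       # placeholder credentials
    tn.read_until('Password: ')
    tn.write('mypassword\n')
    tn.write('ls\n')                           # run a command
    tn.write('exit\n')
    print tn.read_all()                        # dump everything the server sent

The GUI part is independent: call code like this from a button handler and display the returned text in a widget.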
How can I run a telnet command from a Python GUI?
How can I run a telnet command from a Python GUI?
[ "Sounds like you need telnetlib \n", "You may also be interested in Scapy, though it may be too low-level for what you want.\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001170713_python.txt
Q: django - circular import problem when executing a command I'm developing a django application. Modules of importance to my problem are given below: globals.py --> contains constants that are used throughout the application. SITE_NAME and SITE_DOMAIN are two of those and are used to fill some strings. Here is how I define them: from django.contrib.sites.models import Site ... SITE_DOMAIN = Site.objects.get_current().domain SITE_NAME = Site.objects.get_current().name models.py --> models live inside this module; imports some constants from globals.py some_command.py --> a command that also imports some constants from globals.py. When executed, the command imports a constant from globals.py and runs into a circular import problem: inside globals.py, get_current() from the sites app is called, and the sites app in turn imports models.py, which has imports from globals.py as well. EDIT: The application runs flawlessly, without encountering this circular import issue. Importing globals.py from the shell brings no problems. Even the command can be executed from the shell without calling manage.py. So why does manage.py some_command fail due to a circular import? Thanks in advance. A: Is there any particular reason you need to store SITE_DOMAIN and SITE_NAME in globals.py? These are already available directly from the sites framework. According to the docs, the site object is cached the first time you access it, so importing it and using it there directly doesn't hurt.
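A hedged sketch of what the answer suggests: drop the module-level constants and resolve the current site lazily at call time, so importing globals.py no longer touches the sites app (the function names are illustrative):

    from django.contrib.sites.models import Site

    def site_domain():
        # Resolved when called, not at import time; the sites framework
        # caches the Site object after the first lookup anyway.
        return Site.objects.get_current().domain

    def site_name():
        return Site.objects.get_current().name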
django - circular import problem when executing a command
I'm developing a django application. Modules of importance to my problem are given below: globals.py --> contains constants that are used throughout the application. SITE_NAME and SITE_DOMAIN are two of those and are used to fill some strings. Here is how I define them: from django.contrib.sites.models import Site ... SITE_DOMAIN = Site.objects.get_current().domain SITE_NAME = Site.objects.get_current().name models.py --> models live inside this module; imports some constants from globals.py some_command.py --> a command that also imports some constants from globals.py. When executed, the command imports a constant from globals.py and runs into a circular import problem: inside globals.py, get_current() from the sites app is called, and the sites app in turn imports models.py, which has imports from globals.py as well. EDIT: The application runs flawlessly, without encountering this circular import issue. Importing globals.py from the shell brings no problems. Even the command can be executed from the shell without calling manage.py. So why does manage.py some_command fail due to a circular import? Thanks in advance.
[ "Is there any particular reason you need to store SITE_DOMAIN and SITE_NAME in globals.py? These are already available directly from the sites framework.\nAccording to the docs, the site object is cached the first time you access it, so importing it and using it there directly doesn't hurt.\n" ]
[ 1 ]
[]
[]
[ "circular_reference", "django", "import", "python" ]
stackoverflow_0001172386_circular_reference_django_import_python.txt
Q: Django-admin: How to display a link to an object's info page instead of the edit form in the records change list? I am customizing Django-admin for an application I am working on. So far the customization is working fine, and I have added some views. But I am wondering how to change the record links in the change_list display so they show an info page instead of the change form. In this blog post: http://www.theotherblog.com/Articles/2009/06/02/extending-the-django-admin-interface/ Tom said: "You can add images or links in the listings view by defining a function then adding my_func.allow_tags = True", which I didn't fully understand. Right now I have a profile function; when I click on a member in the records list I want to display it (or add another button called "Profile"), and I also want to add a link for every member (Edit: redirecting me to the edit form for this member). How can I achieve that? A: If I understand your question right you want to add your own link to the listing view, and you want that link to point to some info page you have created. To do that, create a function to return the link HTML in your Admin object. Then use that function in your list. Like this: class ModelAdmin(admin.ModelAdmin): def view_link(self): return u"<a href='view/%d/'>View</a>" % self.id view_link.short_description = '' view_link.allow_tags = True list_display = ('id', view_link) A: Take a look at: http://docs.djangoproject.com/en/dev/ref/contrib/admin/, ModelAdmin.list_display part, it says: A string representing an attribute on the model. This behaves almost the same as the callable, but self in this context is the model instance. Here's a full model example: class Person(models.Model): first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) color_code = models.CharField(max_length=6) def colored_name(self): return '<span style="color: #%s;">%s %s</span>' % (self.color_code, self.first_name, self.last_name) colored_name.allow_tags = True class PersonAdmin(admin.ModelAdmin): list_display = ('first_name', 'last_name', 'colored_name') So I guess, if you add these two methods to Person def get_absolute_url(self): return '/profiles/%s/' % (self.id) def profile_link(self): return '<a href="%s">%s</a>' % (self.get_absolute_url(), self.username) profile_link.allow_tags = True and changes PersonAdmin to class PersonAdmin(admin.ModelAdmin): list_display = ('first_name', 'last_name', 'colored_name', 'profile_link') Then you done
Django-admin: How to display a link to an object's info page instead of the edit form in the records change list?
I am customizing Django-admin for an application I am working on. So far the customization is working fine, and I have added some views. But I am wondering how to change the record links in the change_list display so they show an info page instead of the change form. In this blog post: http://www.theotherblog.com/Articles/2009/06/02/extending-the-django-admin-interface/ Tom said: "You can add images or links in the listings view by defining a function then adding my_func.allow_tags = True", which I didn't fully understand. Right now I have a profile function; when I click on a member in the records list I want to display it (or add another button called "Profile"), and I also want to add a link for every member (Edit: redirecting me to the edit form for this member). How can I achieve that?
[ "If I understand your question right you want to add your own link to the listing view, and you want that link to point to some info page you have created.\nTo do that, create a function to return the link HTML in your Admin object. Then use that function in your list. Like this:\nclass ModelAdmin(admin.ModelAdmin):\n def view_link(self):\n return u\"<a href='view/%d/'>View</a>\" % self.id\n view_link.short_description = ''\n view_link.allow_tags = True\n list_display = ('id', view_link)\n\n", "Take a look at: http://docs.djangoproject.com/en/dev/ref/contrib/admin/, ModelAdmin.list_display part, it says: A string representing an attribute on the model. This behaves almost the same as the callable, but self in this context is the model instance. Here's a full model example:\nclass Person(models.Model):\n first_name = models.CharField(max_length=50)\n last_name = models.CharField(max_length=50)\n color_code = models.CharField(max_length=6)\n\ndef colored_name(self):\n return '<span style=\"color: #%s;\">%s %s</span>' % (self.color_code, self.first_name, self.last_name)\ncolored_name.allow_tags = True\n\nclass PersonAdmin(admin.ModelAdmin):\n list_display = ('first_name', 'last_name', 'colored_name')\n\nSo I guess, if you add these two methods to Person\ndef get_absolute_url(self):\n return '/profiles/%s/' % (self.id)\n\ndef profile_link(self):\n return '<a href=\"%s\">%s</a>' % (self.get_absolute_url(), self.username)\nprofile_link.allow_tags = True\n\nand changes PersonAdmin to\nclass PersonAdmin(admin.ModelAdmin):\n list_display = ('first_name', 'last_name', 'colored_name', 'profile_link')\n\nThen you done\n" ]
[ 23, 10 ]
[]
[]
[ "admin", "django", "django_admin", "python" ]
stackoverflow_0001172584_admin_django_django_admin_python.txt
Q: Tokenizing blocks of code in Python I have this string: [a [a b] [c e f] d] and I want a list like this lst[0] = "a" lst[1] = "a b" lst[2] = "c e f" lst[3] = "d" My current implementation that I don't think is elegant/pythonic is two recursive functions (one splitting with '[' and the other with ']' ) but I am sure it can be done using list comprehensions or regular expressions (but I can't figure out a sane way to do it). Any ideas? A: Actually this really isn't a recursive data structure, note that a and d are in separate lists. You're just splitting the string over the bracket characters and getting rid of some white space. I'm sure somebody can find something cleaner, but if you want a one-liner something like the following should get you close: parse_str = '[a [a b] [c e f] d]' lst = [s.strip() for s in re.split('[\[\]]', parse_str) if s.strip()] >>>lst ['a', 'a b', 'c e f', 'd'] A: Well, if it's a recursive data structure you're going to need a recursive function to cleanly navigate it. But Python does have a tokenizer library which might be useful: http://docs.python.org/library/tokenize.html A: If it's a recursive data structure, then recursion is good to traverse it. However, parsing the string to create the structure does not need to be recursive. One alternative way I would do it is iterative: origString = "[a [a b] [c [x z] d e] f]".split(" ") stack = [] for element in origString: if element[0] == "[": newLevel = [ element[1:] ] stack.append(newLevel) elif element[-1] == "]": stack[-1].append(element[0:-1]) finished = stack.pop() if len(stack) != 0: stack[-1].append(finished) else: root = finished else: stack[-1].append(element) print root Of course, this can probably be improved, and it will create lists of lists of lists of ... of strings, which isn't exactly what your example wanted. However, it does handle arbitrary depth of the tree.
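A self-contained version of the first answer's one-liner, with the import re it needs added; the expected output is shown in a comment:

    import re

    parse_str = '[a [a b] [c e f] d]'
    lst = [s.strip() for s in re.split(r'[\[\]]', parse_str) if s.strip()]
    print lst   # ['a', 'a b', 'c e f', 'd']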
Tokenizing blocks of code in Python
I have this string: [a [a b] [c e f] d] and I want a list like this lst[0] = "a" lst[1] = "a b" lst[2] = "c e f" lst[3] = "d" My current implementation that I don't think is elegant/pythonic is two recursive functions (one splitting with '[' and the other with ']' ) but I am sure it can be done using list comprehensions or regular expressions (but I can't figure out a sane way to do it). Any ideas?
[ "Actually this really isn't a recursive data structure, note that a and d are in separate lists. You're just splitting the string over the bracket characters and getting rid of some white space.\nI'm sure somebody can find something cleaner, but if you want a one-liner something like the following should get you close:\nparse_str = '[a [a b] [c e f] d]'\nlst = [s.strip() for s in re.split('[\\[\\]]', parse_str) if s.strip()]\n\n>>>lst\n['a', 'a b', 'c e f', 'd']\n\n", "Well, if it's a recursive data structure you're going to need a recursive function to cleanly navigate it.\nBut Python does have a tokenizer library which might be useful:\nhttp://docs.python.org/library/tokenize.html\n", "If it's a recursive data structure, then recursion is good to traverse it. However, parsing the string to create the structure does not need to be recursive. One alternative way I would do it is iterative:\norigString = \"[a [a b] [c [x z] d e] f]\".split(\" \")\nstack = []\nfor element in origString:\n if element[0] == \"[\":\n newLevel = [ element[1:] ]\n stack.append(newLevel)\n elif element[-1] == \"]\":\n stack[-1].append(element[0:-1])\n finished = stack.pop()\n if len(stack) != 0:\n stack[-1].append(finished)\n else:\n root = finished\n else:\n stack[-1].append(element)\nprint root\n\nOf course, this can probably be improved, and it will create lists of lists of lists of ... of strings, which isn't exactly what your example wanted. However, it does handle arbitrary depth of the tree.\n" ]
[ 4, 1, 1 ]
[]
[]
[ "list_comprehension", "python", "regex", "tokenize" ]
stackoverflow_0001172738_list_comprehension_python_regex_tokenize.txt
Q: Is there a non-GPL Python Library for reading ID3 information from an mp3? I have found many GPL licensed libraries for reading information from mp3s in Python. Are there any non GPL libraries? A: pytagger is using a BSD license. A: There's Stagger (new BSD license), pure Python 3. A: You could use GStreamer (LGPL), but that might be a bit overkill if you only want the metadata and no playback.
Is there a non-GPL Python Library for reading ID3 information from an mp3?
I have found many GPL licensed libraries for reading information from mp3s in Python. Are there any non GPL libraries?
[ "pytagger is using a BSD license.\n", "There's Stagger (new BSD license), pure Python 3.\n", "You could use GStreamer (LGPL), but that might be a bit overkill if you only want the metadata and no playback.\n" ]
[ 3, 2, 0 ]
[]
[]
[ "gpl", "id3", "licensing", "mp3", "python" ]
stackoverflow_0001173025_gpl_id3_licensing_mp3_python.txt
Q: Python file at GAE I have added a Python file to Google App Engine. How do I send a request to this file? Does this file need to be executed explicitly? A: Your app.yaml file decides which Python script to run, depending on the request URL. See examples in the Google docs. You can even use a regexp.
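A minimal app.yaml sketch for the old Python runtime the answer refers to; the application id and script name are placeholders:

    application: myapp        # placeholder app id
    version: 1
    runtime: python
    api_version: 1

    handlers:
    - url: /.*                # a regexp, as the answer notes
      script: main.py         # the Python file that handles matching requests

Requests whose URL matches the pattern are routed to main.py; the file is never executed explicitly.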
Python file at GAE
I have added a Python file to Google App Engine. How do I send a request to this file? Does this file need to be executed explicitly?
[ "Your app.yaml file decides which Python script to run, depending on the request URL.\nSee examples in the Google docs. You can even use a regexp.\n" ]
[ 2 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001172725_google_app_engine_python.txt
Q: Python Server Pages Implementations I've been a PHP developer for quite awhile, and I've heard good things about using Python for web scripting. After a bit of research, I found mod_python, which integrates with Apache to allow Python Server Pages, which seem very similar to the PHP pages I'm used to. I also found a mod_wsgi which looks similar. I was wondering which implementation the good people of Stack Overflow would recommend for someone who wants good integration with Apache and MySQL and similar functionality to PHP. A: I believe mod_wsgi is the preferred option to mod_python: http://code.google.com/p/modwsgi/ Some performance benchmarks seem to suggest that mod_wsgi performs much better also. http://code.google.com/p/modwsgi/wiki/PerformanceEstimates
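For orientation, the whole contract mod_wsgi expects from your code is a single callable; a minimal, hedged example:

    def application(environ, start_response):
        # mod_wsgi looks for a callable named 'application' by default
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello from Python behind Apache\n']

PHP-style page templating is then layered on top by frameworks, rather than built into the module itself.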
Python Server Pages Implementations
I've been a PHP developer for quite awhile, and I've heard good things about using Python for web scripting. After a bit of research, I found mod_python, which integrates with Apache to allow Python Server Pages, which seem very similar to the PHP pages I'm used to. I also found a mod_wsgi which looks similar. I was wondering which implementation the good people of Stack Overflow would recommend for someone who wants good integration with Apache and MySQL and similar functionality to PHP.
[ "I believe mod_wsgi is the preferred option to mod_python:\nhttp://code.google.com/p/modwsgi/\nSome performance benchmarks seem to suggest that mod_wsgi performs much better also. \nhttp://code.google.com/p/modwsgi/wiki/PerformanceEstimates\n" ]
[ 3 ]
[]
[]
[ "apache", "mod_python", "php", "python" ]
stackoverflow_0001173184_apache_mod_python_php_python.txt
Q: Randomness in Jython When using (pseudo) random numbers in Jython, would it be more efficient to use the Python random module or Java's random class? A: Python's version is much faster in a simple test on my Mac: jython -m timeit -s "import random" "random.random()" 1000000 loops, best of 3: 0.266 usec per loop vs jython -m timeit -s "import java.util.Random; random=java.util.Random()" "random.nextDouble()" 1000000 loops, best of 3: 1.65 usec per loop Jython version 2.5b3 and Java version 1.5.0_19. A: Java's Random class uses (and indeed must use by Java's specs) a linear congruential algorithm, while Python's uses Mersenne Twister. Mersenne guarantees extremely high quality (though not crypto quality!) random numbers and a ridiculously long period (53-bit precision floats, period 2**19937-1); linear congruential generators have well-known issues. If you don't really care about the random numbers' quality, and only care about speed, LCG is however likely to be faster exactly because it's less sophisticated.
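Both generators are reachable from the same Jython script, so you can benchmark them against your own workload; a small sketch (Jython only, since java.util is not importable from CPython):

    import random                                 # Python's Mersenne Twister
    from java.util import Random as JavaRandom    # Java's LCG

    py_rng = random.Random()
    java_rng = JavaRandom()
    print py_rng.random(), java_rng.nextDouble()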
Randomness in Jython
When using (pseudo) random numbers in Jython, would it be more efficient to use the Python random module or Java's random class?
[ "Python's version is much faster in a simple test on my Mac:\njython -m timeit -s \"import random\" \"random.random()\"\n\n1000000 loops, best of 3: 0.266 usec per loop\nvs\n jython -m timeit -s \"import java.util.Random; random=java.util.Random()\" \"random.nextDouble()\"\n\n1000000 loops, best of 3: 1.65 usec per loop\nJython version 2.5b3 and Java version 1.5.0_19.\n", "Java's Random class uses (and indeed must use by Java's specs) a linear congruential algorithm, while Python's uses Mersenne Twister. Mersenne guarantees extremely high quality (though not crypto quality!) random numbers and a ridiculously long period (53-bit precision floats, period 2**19937-1); linear congruential generators have well-known issues. If you don't really care about the random numbers' quality, and only care about speed, LCG is however likely to be faster exactly because it's less sophisticated.\n" ]
[ 9, 4 ]
[]
[]
[ "java", "jython", "python", "random" ]
stackoverflow_0001173520_java_jython_python_random.txt
Q: Does Python unittest report errors immediately? Does Python's unittest module always report errors in strict correspondence to the execution order of the lines in the code tested? Do errors create the possibility of unexpected changes to the code's variables? I was baffled by a KeyError reported by unittest. The line itself looks okay. On the last line before execution halted, debugging prints of the key requested and the dictionary showed the key was in the dictionary. The key referenced in the KeyError was a different key, but that too seemed to be in the dictionary. I inserted a counter variable into the outer loop to print the number of outer iterations just before the error line (inside an inner loop), and these do not output in the expected sequence. They come out something like 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 2, 2, 2 -- when I would expect something like 0, 0, 0, 1, 1, 1, 2, 2, 2. And debugging prints of internal data show unexpected changes from one loop to the next. Code (with many debugging lines): def onSave(screen_data): counter = 0 for table, flds_dct in self.target_tables.items(): print 'TABLE %s' % table print 'FIELDS: %s' % flds_dct['fields'] tbl_screen_data = {} for fld in flds_dct['fields']: print 'LOOP TOP' print 'FIELD: %' % fld print 'SCREEN DATA: %s' % screen_data print 'COUNTER: %s' % counter print 'SCREEN DATA OUTPUT: %s' % screen_data[fld] tbl_screen_data[fld] = screen_data[fld] print 'LOOP BOTTOM' self.tables[table].addEntry(tbl_screen_data) counter =+ 1 print 'OUTER LOOP BOTTOM' Just before the error, this outputs: TABLE: questions FIELDS: ['whatIsYourQuest', 'whatIsYourName', 'whatIsTheAirSpeedOfSwallow'] LOOP TOP FIELD: whatIsYourQuest SCREEN DATA: {'whatIsYourQuest': 'grail', 'whatIsYourName': 'arthur', 'whatIsYourFavouriteColour': 'blue', 'whatIsTheAirSpeedOfSwallow': 'african or european?', 'whatIsCapitalOfAssyria': 'Nineveh'} COUNTER: 1 SCREEN DATA OUTPUT: grail LOOP BOTTOM OUTER LOOP BOTTOM But then execution stops and I get this error message: line 100, in writeData print 'SCREEN DATA OUTPUT: %s screen_data[fld] KeyError: 'whatIsCapitalOfAssyria' But the error is attributed to a line that has already printed its output, and stops execution after the output of several lines after the line with the error. As I mentioned above, further debugging shows that the contents of screen_data are changed over the iterations of the loop. Crucially, the dictionary passed in has no key 'whatIsCapitalOfAssyria'. The absence of that key was the cause of the error. At some point the code asked the screen_data dictionary 'whatIsCapitalOfAssyria', which it couldn't answer, and so of course it was thrown from the Bridge of Death, err, failed. BUT it was kind of hard to see that when the screen_data object output in debugging lines does have the key; and the error condition reported isn't raised until after execution of many more lines, which confuses inspection of the values local to the error. So how does unittest handle code errors? What am I doing wrong here? How should I be using it to avoid this sort of thing? EDIT: It might help if I added that the method tested triggers calls on a number of other methods, which trigger calls themselves. I think all of those are reasonably well tested, but perhaps the number of interconnected calls matters. A: I think you're seeing the error at the NEXT leg of your for loop, compared to the one with which you see all the output -- try changing the plain print to print>>stderr, statements so that buffering and possible suppression of output is not a risk.
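The check the answer proposes, as drop-in replacements for the print statements inside the posted loop (Python 2 syntax; counter and screen_data come from the question's function):

    import sys

    # stderr is unbuffered, so these lines appear in true execution order
    print >>sys.stderr, 'COUNTER: %s' % counter
    print >>sys.stderr, 'SCREEN DATA: %s' % screen_data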
Does Python unittest report errors immediately?
Does Python's unittest module always report errors in strict correspondence to the execution order of the lines in the code tested? Do errors create the possibility of unexpected changes to the code's variables? I was baffled by a KeyError reported by unittest. The line itself looks okay. On the last line before execution halted, debugging prints of the key requested and the dictionary showed the key was in the dictionary. The key referenced in the KeyError was a different key, but that too seemed to be in the dictionary. I inserted a counter variable into the outer loop to print the number of outer iterations just before the error line (inside an inner loop), and these do not output in the expected sequence. They come out something like 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 2, 2, 2 -- when I would expect something like 0, 0, 0, 1, 1, 1, 2, 2, 2. And debugging prints of internal data show unexpected changes from one loop to the next. Code (with many debugging lines): def onSave(screen_data): counter = 0 for table, flds_dct in self.target_tables.items(): print 'TABLE %s' % table print 'FIELDS: %s' % flds_dct['fields'] tbl_screen_data = {} for fld in flds_dct['fields']: print 'LOOP TOP' print 'FIELD: %' % fld print 'SCREEN DATA: %s' % screen_data print 'COUNTER: %s' % counter print 'SCREEN DATA OUTPUT: %s' % screen_data[fld] tbl_screen_data[fld] = screen_data[fld] print 'LOOP BOTTOM' self.tables[table].addEntry(tbl_screen_data) counter =+ 1 print 'OUTER LOOP BOTTOM' Just before the error, this outputs: TABLE: questions FIELDS: ['whatIsYourQuest', 'whatIsYourName', 'whatIsTheAirSpeedOfSwallow'] LOOP TOP FIELD: whatIsYourQuest SCREEN DATA: {'whatIsYourQuest': 'grail', 'whatIsYourName': 'arthur', 'whatIsYourFavouriteColour': 'blue', 'whatIsTheAirSpeedOfSwallow': 'african or european?', 'whatIsCapitalOfAssyria': 'Nineveh'} COUNTER: 1 SCREEN DATA OUTPUT: grail LOOP BOTTOM OUTER LOOP BOTTOM But then execution stops and I get this error message: line 100, in writeData print 'SCREEN DATA OUTPUT: %s screen_data[fld] KeyError: 'whatIsCapitalOfAssyria' But the error is attributed to a line that has already printed its output, and stops execution after the output of several lines after the line with the error. As I mentioned above, further debugging shows that the contents of screen_data are changed over the iterations of the loop. Crucially, the dictionary passed in has no key 'whatIsCapitalOfAssyria'. The absence of that key was the cause of the error. At some point the code asked the screen_data dictionary 'whatIsCapitalOfAssyria', which it couldn't answer, and so of course it was thrown from the Bridge of Death, err, failed. BUT it was kind of hard to see that when the screen_data object output in debugging lines does have the key; and the error condition reported isn't raised until after execution of many more lines, which confuses inspection of the values local to the error. So how does unittest handle code errors? What am I doing wrong here? How should I be using it to avoid this sort of thing? EDIT: It might help if I added that the method tested triggers calls on a number of other methods, which trigger calls themselves. I think all of those are reasonably well tested, but perhaps the number of interconnected calls matters.
[ "I think you're seeing the error at the NEXT leg of your for loop, compared to the one with which you see all the output -- try changing the plain print to print>>stderr, statements so that buffering and possible suppression of output is not a risk.\n" ]
[ 1 ]
[]
[]
[ "python", "testing", "unit_testing" ]
stackoverflow_0001173310_python_testing_unit_testing.txt
Q: python namespace hierarchy above object For example, if this code were contained in a module called some_module class C: class C2: def g(self): @printNamespaceAbove def f(): pass then printNamespaceAbove would be defined so that this code would output something like [some_module,C,C2,g] A: There is no way to make this code, as presented, have any output -- the body of g (including the decorator you'd like to do the printing) simply DOESN'T execute until g is called. I assume you do not literally intend for "this code" on its own to output anything, but rather intend to add a call such as C.C2().g() [which will actually do the output]. There's not really a very efficient way to do this -- you (well, the decorator;-) must start at the module level (which you can identify through the globals of f, the decorator's argument: its name is f.func_globals['__name__'] and via its name you can look it up in sys.modules), then you must walk down every possible chain of names until you locate your calling function (e.g. via the inspect module in the standard library). Note also that nested functions are a particular headache in several corner cases.
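A hedged sketch of just the first step the answer describes, resolving the module; walking the name chains down to the enclosing method is the hard, corner-case-ridden part and is left out:

    import sys

    def printNamespaceAbove(f):
        # The decorated function's globals identify its module (Python 2 attribute)
        module_name = f.func_globals['__name__']
        module = sys.modules[module_name]
        print [module_name]   # e.g. ['some_module']; C, C2, g still need inspect
        return f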
python namespace hierarchy above object
For example, if this code were contained in a module called some_module class C: class C2: def g(self): @printNamespaceAbove def f(): pass then printNamespaceAbove would be defined so that this code would output something like [some_module,C,C2,g]
[ "There is no way to make this code, as presented, have any output -- the body of g (including the decorator you'd like to do the printing) simply DOESN'T execute until g is called. I assume you do not literally intend for \"this code\" on its own to output anything, but rather intend to add a call such as C.C2().g() [which will actually do the output].\nThere's not really a very efficient way to do this -- you (well, the decorator;-) must start at the module level (which you can identify through the globals of f, the decorator's argument: its name is f.func_globals['__name__'] and via its name you can look it up in sys.modules), then you must walk down every possible chain of names until you locate your calling function (e.g. via the inspect module in the standard library). Note also that nested functions are a particular headache in several corner cases.\n" ]
[ 2 ]
[]
[]
[ "namespaces", "python" ]
stackoverflow_0001173401_namespaces_python.txt
Q: What is a basic example of single inheritance using the super() keyword in Python? Let's say I have the following classes set up: class Foo: def __init__(self, frob, frotz): self.frobnicate = frob self.frotz = frotz class Bar: def __init__(self, frob, frizzle): self.frobnicate = frob self.frotz = 34 self.frazzle = frizzle How can I (if I can at all) use super() in this context to eliminate the duplicate code? A: Assuming you want class Bar to set the value 34 within its constructor, this would work: class Foo(object): def __init__(self, frob, frotz): self.frobnicate = frob self.frotz = frotz class Bar(Foo): def __init__(self, frob, frizzle): super(Bar, self).__init__(frob, frizzle) self.frotz = 34 self.frazzle = frizzle bar = Bar(1,2) print "frobnicate:", bar.frobnicate print "frotz:", bar.frotz print "frazzle:", bar.frazzle However, super introduces its own complications. See e.g. super considered harmful. For completeness, here's the equivalent version without super. class Foo(object): def __init__(self, frob, frotz): self.frobnicate = frob self.frotz = frotz class Bar(Foo): def __init__(self, frob, frizzle): Foo.__init__(self, frob, frizzle) self.frotz = 34 self.frazzle = frizzle bar = Bar(1,2) print "frobnicate:", bar.frobnicate print "frotz:", bar.frotz print "frazzle:", bar.frazzle A: In Python >=3.0, like this: class Foo(): def __init__(self, frob, frotz): self.frobnicate = frob self.frotz = frotz class Bar(Foo): def __init__(self, frob, frizzle): super().__init__(frob, 34) self.frazzle = frizzle Read more here: http://docs.python.org/3.1/library/functions.html#super EDIT: As said in another answer, sometimes just using Foo.__init__(self, frob, 34) can be the better solution. (For instance, when working with certain forms of multiple inheritance.)
What is a basic example of single inheritance using the super() keyword in Python?
Let's say I have the following classes set up: class Foo: def __init__(self, frob, frotz): self.frobnicate = frob self.frotz = frotz class Bar: def __init__(self, frob, frizzle): self.frobnicate = frob self.frotz = 34 self.frazzle = frizzle How can I (if I can at all) use super() in this context to eliminate the duplicate code?
[ "Assuming you want class Bar to set the value 34 within its constructor, this would work:\nclass Foo(object):\n def __init__(self, frob, frotz):\n self.frobnicate = frob\n self.frotz = frotz\n\nclass Bar(Foo):\n def __init__(self, frob, frizzle):\n super(Bar, self).__init__(frob, frizzle)\n self.frotz = 34\n self.frazzle = frizzle\n\n\nbar = Bar(1,2)\nprint \"frobnicate:\", bar.frobnicate\nprint \"frotz:\", bar.frotz\nprint \"frazzle:\", bar.frazzle\n\nHowever, super introduces its own complications. See e.g. super considered harmful. For completeness, here's the equivalent version without super.\nclass Foo(object):\n def __init__(self, frob, frotz):\n self.frobnicate = frob\n self.frotz = frotz\n\nclass Bar(Foo):\n def __init__(self, frob, frizzle):\n Foo.__init__(self, frob, frizzle)\n self.frotz = 34\n self.frazzle = frizzle\n\n\nbar = Bar(1,2)\nprint \"frobnicate:\", bar.frobnicate\nprint \"frotz:\", bar.frotz\nprint \"frazzle:\", bar.frazzle\n\n", "In Python >=3.0, like this:\nclass Foo():\n def __init__(self, frob, frotz):\n self.frobnicate = frob\n self.frotz = frotz\n\nclass Bar(Foo):\n def __init__(self, frob, frizzle):\n super().__init__(frob, 34)\n self.frazzle = frizzle\n\nRead more here: http://docs.python.org/3.1/library/functions.html#super\nEDIT: As said in another answer, sometimes just using Foo.__init__(self, frob, 34) can be the better solution. (For instance, when working with certain forms of multiple inheritance.)\n" ]
[ 29, 26 ]
[]
[]
[ "constructor", "inheritance", "python", "super" ]
stackoverflow_0001173992_constructor_inheritance_python_super.txt
Q: django comments: how to prevent form errors from redirecting the user to the preview page? Currently, django.contrib.comments sends the user to the preview page if there is any error on the form. I am using comments in the context of a blog and I would much rather that the user stayed on the page they were on if something went wrong with the submission. As far as I can tell though, this is hard-coded in django.contrib.comments.views.comments.post_comment: # If there are errors or if we requested a preview show the comment if form.errors or preview: template_list = [ "comments/%s_%s_preview.html" % tuple(str(model._meta).split(".")), "comments/%s_preview.html" % model._meta.app_label, "comments/preview.html", ] return render_to_response( template_list, { "comment" : form.data.get("comment", ""), "form" : form, "next": next, }, RequestContext(request, {}) ) Is there any way that I can change this behavior without changing the source code to django.contrib.comments? Any pointer would be appreciated... Thanks! A: Looks like you have two real options: Write your own view. Possibly copy that view's code to get started. Patch that view to take an extra parameter, such as 'preview_on_errors' which defaults to True but can be overridden. Contribute the patch back to Django so other people can benefit from it. A: Yes! There's now a way to customize the Comments app. Good luck!
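A hedged sketch of the first option, wrapping the stock view instead of copying it; treating a 200 response as "the preview/error template was rendered" is an assumption to verify, and note this naive wrapper discards the form's error messages:

    from django.contrib.comments.views.comments import post_comment
    from django.http import HttpResponseRedirect

    def post_comment_stay_on_page(request, next=None):
        response = post_comment(request, next=next)
        if response.status_code == 200:
            # stock view rendered the preview page; send the user back instead
            return HttpResponseRedirect(request.POST.get('next', '/'))
        return response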
django comments: how to prevent form errors from redirecting the user to the preview page?
Currently, django.contrib.comments sends the user to the preview page if there is any error on the form. I am using comments in the context of a blog and I would much rather that the user stayed on the page they were on if something went wrong with the submission. As far as I can tell though, this is hard-coded in django.contrib.comments.views.comments.post_comment: # If there are errors or if we requested a preview show the comment if form.errors or preview: template_list = [ "comments/%s_%s_preview.html" % tuple(str(model._meta).split(".")), "comments/%s_preview.html" % model._meta.app_label, "comments/preview.html", ] return render_to_response( template_list, { "comment" : form.data.get("comment", ""), "form" : form, "next": next, }, RequestContext(request, {}) ) Is there any way that I can change this behavior without changing the source code to django.contrib.comments? Any pointer would be appreciated... Thanks!
[ "Looks like you have two real options:\n\nWrite your own view. Possibly copy that view's code to get started.\nPatch that view to take an extra parameter, such as 'preview_on_errors' which defaults to True but can be overridden. Contribute the patch back to Django so other people can benefit from it.\n\n", "Yes! There's now a way to customize the Comments app. Good luck!\n" ]
[ 3, 0 ]
[]
[]
[ "django", "django_contrib", "python" ]
stackoverflow_0001174140_django_django_contrib_python.txt
Q: To calculate the sum of numbers in a list by Python My data 466.67 465.56 464.44 463.33 462.22 461.11 460.00 458.89 ... I run sum(/tmp/1,0) in Python and get an error. How can I calculate the sum of the values in Python? A: f=open('/tmp/1') print sum(map(float,f)) A: sum(float(i) for i in open('/tmp/1'))
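A slightly more defensive variant of the answers (skips blank lines; the file path comes from the question):

    total = 0.0
    for line in open('/tmp/1'):
        line = line.strip()
        if line:                 # ignore blank lines
            total += float(line)
    print total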
To calculate the sum of numbers in a list by Python
My data 466.67 465.56 464.44 463.33 462.22 461.11 460.00 458.89 ... I run sum(/tmp/1,0) in Python and get an error. How can I calculate the sum of the values in Python?
[ "f=open('/tmp/1')\nprint sum(map(float,f))\n\n", "sum(float(i) for i in open('/tmp/1'))\n\n" ]
[ 13, 11 ]
[]
[]
[ "python" ]
stackoverflow_0001174435_python.txt
Q: appengine: cached reference property? How can I cache a Reference Property in Google App Engine? For example, let's say I have the following models: class Many(db.Model): few = db.ReferenceProperty(Few) class Few(db.Model): year = db.IntegerProperty() Then I create many Many's that point to only one Few: one_few = Few.get_or_insert(year=2009) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Now, if I want to iterate over all the Many's, reading their few value, I would do this: for many in Many.all().fetch(1000): print "%s" % many.few.year The question is: Will each access to many.few trigger a database lookup? If yes, is it possible to cache somewhere, as only one lookup should be enough to bring the same entity every time? As noted in one comment: I know about memcache, but I'm not sure how I can "inject it" when I'm calling the other entity through a reference. In any case memcache wouldn't be useful, as I need caching within an execution, not between them. Using memcache wouldn't help optimizing this call. A: The first time you dereference any reference property, the entity is fetched - even if you'd previously fetched the same entity associated with a different reference property. This involves a datastore get operation, which isn't as expensive as a query, but is still worth avoiding if you can. There's a good module that adds seamless caching of entities available here. It works at a lower level of the datastore, and will cache all datastore gets, not just dereferencing ReferenceProperties. If you want to resolve a bunch of reference properties at once, there's another way: You can retrieve all the keys and fetch the entities in a single round trip, like so: keys = [MyModel.ref.get_value_for_datastore(x) for x in referers] referees = db.get(keys) Finally, I've written a library that monkeypatches the db module to locally cache entities on a per-request basis (no memcache involved). It's available, here. One warning, though: It's got unit tests, but it's not widely used, so it could be broken. A: The question is: Will each access to many.few trigger a database lookup? Yes. Not sure if its 1 or 2 calls If yes, is it possible to cache somewhere, as only one lookup should be enough to bring the same entity every time? You should be able to use the memcache repository to do this. This is in the google.appengine.api.memcache package. Details for memcache are in http://code.google.com/appengine/docs/python/memcache/usingmemcache.html
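A hedged sketch of the per-request caching idea from the first answer, built from the two calls it confirms (get_value_for_datastore and db.get); the cache dict is illustrative and lives only for one request:

    few_cache = {}   # maps datastore Key -> Few entity

    def cached_few(many):
        key = Many.few.get_value_for_datastore(many)  # the Key, without a fetch
        if key not in few_cache:
            few_cache[key] = db.get(key)              # at most one get per Few
        return few_cache[key]

    for many in Many.all().fetch(1000):
        print "%s" % cached_few(many).year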
appengine: cached reference property?
How can I cache a Reference Property in Google App Engine? For example, let's say I have the following models: class Many(db.Model): few = db.ReferenceProperty(Few) class Few(db.Model): year = db.IntegerProperty() Then I create many Many's that point to only one Few: one_few = Few.get_or_insert(year=2009) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Many.get_or_insert(few=one_few) Now, if I want to iterate over all the Many's, reading their few value, I would do this: for many in Many.all().fetch(1000): print "%s" % many.few.year The question is: Will each access to many.few trigger a database lookup? If yes, is it possible to cache somewhere, as only one lookup should be enough to bring the same entity every time? As noted in one comment: I know about memcache, but I'm not sure how I can "inject it" when I'm calling the other entity through a reference. In any case memcache wouldn't be useful, as I need caching within an execution, not between them. Using memcache wouldn't help optimizing this call.
[ "The first time you dereference any reference property, the entity is fetched - even if you'd previously fetched the same entity associated with a different reference property. This involves a datastore get operation, which isn't as expensive as a query, but is still worth avoiding if you can.\nThere's a good module that adds seamless caching of entities available here. It works at a lower level of the datastore, and will cache all datastore gets, not just dereferencing ReferenceProperties.\nIf you want to resolve a bunch of reference properties at once, there's another way: You can retrieve all the keys and fetch the entities in a single round trip, like so:\nkeys = [MyModel.ref.get_value_for_datastore(x) for x in referers]\nreferees = db.get(keys)\n\nFinally, I've written a library that monkeypatches the db module to locally cache entities on a per-request basis (no memcache involved). It's available, here. One warning, though: It's got unit tests, but it's not widely used, so it could be broken.\n", "The question is:\n\nWill each access to many.few trigger a database lookup? Yes. Not sure if its 1 or 2 calls\nIf yes, is it possible to cache somewhere, as only one lookup should be enough to bring the same entity every time? You should be able to use the memcache repository to do this. This is in the google.appengine.api.memcache package.\n\nDetails for memcache are in http://code.google.com/appengine/docs/python/memcache/usingmemcache.html\n" ]
[ 8, 1 ]
[]
[]
[ "database", "google_app_engine", "google_cloud_datastore", "performance", "python" ]
stackoverflow_0001174075_database_google_app_engine_google_cloud_datastore_performance_python.txt
Q: Wxpython: Positioning a menu under a toolbar button I have a CheckLabelTool in a wx.ToolBar and I want a menu to popup directly beneath it on mouse click. I'm trying to get the location of the tool so I can set the position of the menu, but everything I've tried (GetEventObject, GetPosition, etc) gives me the position of the toolbar, so consequently the menu pops under the toolbar, but very far from the associated tool. Any suggestions? I need the tool to have toggle and bitmap capability, but I'm not fixed on CheckLabelTool if there's something else that would work better. Thanks! A: Read the section on the PopupMenu method on wxpython.org: "Pops up the given menu at the specified coordinates, relative to this window, and returns control when the user has dismissed the menu. If a menu item is selected, the corresponding menu event is generated and will be processed as usual. If the default position is given then the current position of the mouse cursor will be used." You need to bind to the EVT_MENU event of your check tool. Once the tool button is checked, you can pop the menu up. If you don't specify the location of the popup, it will use the current position of the mouse, which is what you want. If you want the menu to pop up at a pre-determined location that is independent of the mouse, you can get the screen location of the toolbar and add an offset Let's look at code: [Edit: To show how to compute the position of any point on a tool, I have modified the code to compute and display various points on the tool bar once you click a tool. The menu appears on the lower right corner of the clicked button. It works for me on Windows. I'm curious to know if it doesn't behave on other platforms.] import wx class ViewApp(wx.App): def OnInit(self): self.frame = ToolFrame(None, -1, "Test App") self.frame.Show(True) return True class MyPopupMenu(wx.Menu): def __init__(self, parent): wx.Menu.__init__(self) self.parent = parent minimize = wx.MenuItem(self, wx.NewId(), 'Minimize') self.AppendItem(minimize) self.Bind(wx.EVT_MENU, self.OnMinimize, id=minimize.GetId()) def OnMinimize(self, event): self.parent.Iconize() class ToolFrame(wx.Frame): def __init__(self, parent, id, title): wx.Frame.__init__(self, parent, id, title, size=(350, 250)) self.toolbar = self.CreateToolBar() self.tool_id = wx.NewId() for i in range(3): tool_id = wx.NewId() self.toolbar.AddCheckLabelTool(tool_id, 'Tool', wx.EmptyBitmap(10,10)) self.toolbar.Bind(wx.EVT_MENU, self.OnTool, id=tool_id) self.toolbar.Realize() self.Centre() self.Show() def OnTool(self, event): if event.IsChecked(): # Get the position of the toolbar relative to # the frame. This will be the upper left corner of the first tool bar_pos = self.toolbar.GetScreenPosition()-self.GetScreenPosition() # This is the position of the tool along the tool bar (1st, 2nd, 3rd, etc...) tool_index = self.toolbar.GetToolPos(event.GetId()) # Get the size of the tool tool_size = self.toolbar.GetToolSize() # This is the upper left corner of the clicked tool upper_left_pos = (bar_pos[0]+tool_size[0]*tool_index, bar_pos[1]) # Menu position will be in the lower right corner lower_right_pos = (bar_pos[0]+tool_size[0]*(tool_index+1), bar_pos[1]+tool_size[1]) # Show upper left corner of first tool in black dc = wx.WindowDC(self) dc.SetPen(wx.Pen("BLACK", 4)) dc.DrawCircle(bar_pos[0], bar_pos[1], 4) # Show upper left corner of this tool in blue dc.SetPen(wx.Pen("BLUE", 4)) dc.DrawCircle(upper_left_pos[0], upper_left_pos[1], 4) # Show lower right corner of this tool in green dc.SetPen(wx.Pen("GREEN", 4)) dc.DrawCircle(lower_right_pos[0], lower_right_pos[1], 4) # Correct for the position of the tool bar menu_pos = (lower_right_pos[0]-bar_pos[0],lower_right_pos[1]-bar_pos[1]) # Pop up the menu self.PopupMenu(MyPopupMenu(self), menu_pos) if __name__ == "__main__": app = ViewApp(0) app.MainLoop() Parts of this code come from here.
Wxpython: Positioning a menu under a toolbar button
I have a CheckLabelTool in a wx.ToolBar and I want a menu to popup directly beneath it on mouse click. I'm trying to get the location of the tool so I can set the position of the menu, but everything I've tried (GetEventObject, GetPosition, etc) gives me the position of the toolbar, so consequently the menu pops under the toolbar, but very far from the associated tool. Any suggestions? I need the tool to have toggle and bitmap capability, but I'm not fixed on CheckLabelTool if there's something else that would work better. Thanks!
[ "Read the section on the PopupMenu method on wxpython.org:\n\n\"Pops up the given menu at the\n specified coordinates, relative to\n this window, and returns control when\n the user has dismissed the menu. If a\n menu item is selected, the\n corresponding menu event is generated\n and will be processed as usual. If the\n default position is given then the\n current position of the mouse cursor\n will be used.\"\n\nYou need to bind to the EVT_MENU event of your check tool. Once the tool button is checked, you can pop the menu up. If you don't specify the location of the popup, it will use the current position of the mouse, which is what you want.\nIf you want the menu to pop up at a pre-determined location that is independent of the mouse, you can get the screen location of the toolbar and add an offset \nLet's look at code:\n[Edit: To show how to compute the position of any point on a tool, I have modified the code to compute and display various points on the tool bar once you click a tool. The menu appears on the lower right corner of the clicked button. It works for me on Windows. I'm curious to know if it doesn't behave on other platforms.] \nimport wx\n\nclass ViewApp(wx.App):\n def OnInit(self):\n self.frame = ToolFrame(None, -1, \"Test App\") \n self.frame.Show(True)\n return True \n\nclass MyPopupMenu(wx.Menu):\n def __init__(self, parent):\n wx.Menu.__init__(self)\n\n self.parent = parent\n\n minimize = wx.MenuItem(self, wx.NewId(), 'Minimize')\n self.AppendItem(minimize)\n self.Bind(wx.EVT_MENU, self.OnMinimize, id=minimize.GetId())\n\n def OnMinimize(self, event):\n self.parent.Iconize()\n\nclass ToolFrame(wx.Frame):\n def __init__(self, parent, id, title):\n wx.Frame.__init__(self, parent, id, title, size=(350, 250))\n\n self.toolbar = self.CreateToolBar()\n self.tool_id = wx.NewId()\n for i in range(3):\n tool_id = wx.NewId()\n self.toolbar.AddCheckLabelTool(tool_id, 'Tool', wx.EmptyBitmap(10,10))\n self.toolbar.Bind(wx.EVT_MENU, self.OnTool, id=tool_id)\n self.toolbar.Realize()\n self.Centre()\n self.Show()\n\n def OnTool(self, event):\n if event.IsChecked():\n # Get the position of the toolbar relative to\n # the frame. This will be the upper left corner of the first tool\n bar_pos = self.toolbar.GetScreenPosition()-self.GetScreenPosition()\n\n # This is the position of the tool along the tool bar (1st, 2nd, 3rd, etc...)\n tool_index = self.toolbar.GetToolPos(event.GetId())\n\n # Get the size of the tool\n tool_size = self.toolbar.GetToolSize()\n\n # This is the upper left corner of the clicked tool\n upper_left_pos = (bar_pos[0]+tool_size[0]*tool_index, bar_pos[1])\n\n # Menu position will be in the lower right corner\n lower_right_pos = (bar_pos[0]+tool_size[0]*(tool_index+1), bar_pos[1]+tool_size[1])\n\n # Show upper left corner of first tool in black\n dc = wx.WindowDC(self)\n dc.SetPen(wx.Pen(\"BLACK\", 4))\n dc.DrawCircle(bar_pos[0], bar_pos[1], 4) \n\n # Show upper left corner of this tool in blue\n dc.SetPen(wx.Pen(\"BLUE\", 4))\n dc.DrawCircle(upper_left_pos[0], upper_left_pos[1], 4) \n\n # Show lower right corner of this tool in green\n dc.SetPen(wx.Pen(\"GREEN\", 4))\n dc.DrawCircle(lower_right_pos[0], lower_right_pos[1], 4) \n\n # Correct for the position of the tool bar\n menu_pos = (lower_right_pos[0]-bar_pos[0],lower_right_pos[1]-bar_pos[1])\n\n # Pop up the menu\n self.PopupMenu(MyPopupMenu(self), menu_pos)\n\nif __name__ == \"__main__\": \n app = ViewApp(0)\n app.MainLoop()\n\nParts of this code come from here.\n" ]
[ 6 ]
[]
[]
[ "menu", "python", "toolbar", "wxwidgets" ]
stackoverflow_0001173642_menu_python_toolbar_wxwidgets.txt
Q: How to increase connection pool size for Twisted? I'm using Twisted 8.1.0 as socket server engine. Reactor - epoll. Database server is MySQL 5.0.67. OS - Ubuntu Linux 8.10 32-bit in /etc/mysql/my.cnf : max_connections = 1000 in source code: adbapi.ConnectionPool("MySQLdb", ..., use_unicode=True, charset='utf8', cp_min=3, cp_max=700, cp_noisy=False) But in reality I can see only 200 (or less) open connections (SHOW PROCESSLIST) when application is running under heavy load. It is not enough for my app :( As I see this is limit for the thread pool. Any ideas? A: As you suspect, this is probably a threading issue. cp_max sets an upper limit for the number of threads in the thread pool, however, your process is very likely running out of memory well below this limit, in your case around 200 threads. Because each thread has its own stack, the total memory being used by your process hits the system limit and no more threads can be created. You can check this by adjusting the stack size ulimit setting (I'm using bash) prior to running your program, i.e. $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited max nice (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 32750 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 max rt priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 32750 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited You can see that the default stack size is 10240K on my machine and I have found that I can create about 300 threads with this setting. Adjusting the stack size down to 1024K (using ulimit -s 1024) I can create about 3000 threads. You can get some idea about the thread creation limits on your system using this script: from thread import start_new_thread from time import sleep def sleeper(): try: while 1: sleep(10000) except: if running: raise def test(): global running n = 0 running = True try: while 1: start_new_thread(sleeper, ()) n += 1 except Exception, e: running = False print 'Exception raised:', e print 'Biggest number of threads:', n if __name__ == '__main__': test() Whether this solves your problem will depend on the memory requirements of the ConnectionPool threads.
How to increase connection pool size for Twisted?
I'm using Twisted 8.1.0 as socket server engine. Reactor - epoll. Database server is MySQL 5.0.67. OS - Ubuntu Linux 8.10 32-bit in /etc/mysql/my.cnf : max_connections = 1000 in source code: adbapi.ConnectionPool("MySQLdb", ..., use_unicode=True, charset='utf8', cp_min=3, cp_max=700, cp_noisy=False) But in reality I can see only 200 (or less) open connections (SHOW PROCESSLIST) when application is running under heavy load. It is not enough for my app :( As I see this is limit for the thread pool. Any ideas?
[ "As you suspect, this is probably a threading issue. cp_max sets an upper limit for the number of threads in the thread pool, however, your process is very likely running out of memory well below this limit, in your case around 200 threads. Because each thread has its own stack, the total memory being used by your process hits the system limit and no more threads can be created.\nYou can check this by adjusting the stack size ulimit setting (I'm using bash) prior to running your program, i.e.\n$ ulimit -a\ncore file size (blocks, -c) 0\ndata seg size (kbytes, -d) unlimited\nmax nice (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 32750\nmax locked memory (kbytes, -l) 32\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 1024\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nmax rt priority (-r) 0\nstack size (kbytes, -s) 10240\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 32750\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\n\nYou can see that the default stack size is 10240K on my machine and I have found that I can create about 300 threads with this setting. Adjusting the stack size down to 1024K (using ulimit -s 1024) I can create about 3000 threads.\nYou can get some idea about the thread creation limits on you system using this script:\nfrom thread import start_new_thread\nfrom time import sleep\n\ndef sleeper():\n try:\n while 1:\n sleep(10000)\n except:\n if running: raise\n\ndef test():\n global running\n n = 0\n running = True\n try:\n while 1:\n start_new_thread(sleeper, ())\n n += 1\n except Exception, e:\n running = False\n print 'Exception raised:', e\n print 'Biggest number of threads:', n\n\nif __name__ == '__main__':\n test()\n\nWhether this solves your problem will depend on the memory requirements of the ConnectionPool threads.\n" ]
[ 8 ]
[]
[]
[ "connection_pooling", "python", "twisted" ]
stackoverflow_0001171519_connection_pooling_python_twisted.txt
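A minimal, untested sketch of the stack-size remedy described in the answer above, applied from inside the process instead of via ulimit; the 512 KB stack and the connection parameters are illustrative assumptions, and threading.stack_size() must run before the first worker thread is created:

    import threading
    threading.stack_size(512 * 1024)  # smaller per-thread stacks => more pool threads fit in memory

    from twisted.enterprise import adbapi

    # Same pool as in the question; with 512 KB stacks, cp_max=700 threads need
    # roughly 350 MB of stack address space instead of ~7 GB at the 10240K default.
    dbpool = adbapi.ConnectionPool(
        "MySQLdb", host="localhost", db="test", user="u", passwd="p",
        use_unicode=True, charset="utf8",
        cp_min=3, cp_max=700, cp_noisy=False)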
Q: How to refactor this Python code? class MainPage(webapp.RequestHandler): def get(self): user = users.get_current_user() tasks_query = Task.all() tasks = tasks_query.fetch(1000) if user: url = users.create_logout_url(self.request.uri) else: url = users.create_login_url(self.request.uri) template_values = { 'tasks': tasks, 'url': url } path = os.path.join(os.path.dirname(__file__), 'index.html') self.response.out.write(template.render(path, template_values)) class Gadget(webapp.RequestHandler): def get(self): user = users.get_current_user() tasks_query = Task.all() tasks = tasks_query.fetch(1000) if user: url = users.create_logout_url(self.request.uri) else: url = users.create_login_url(self.request.uri) template_values = { 'tasks': tasks, 'url': url } path = os.path.join(os.path.dirname(__file__), 'gadget.xml') self.response.out.write(template.render(path, template_values)) A: Really it depends on what you expect to be common between the two classes in future. The purpose of refactoring is to identify common abstractions, not to minimise the number of lines of code. That said, assuming the two requests are expected to differ only in the template: class TaskListPage(webapp.RequestHandler): def get(self): user = users.get_current_user() tasks_query = Task.all() tasks = tasks_query.fetch(1000) if user: url = users.create_logout_url(self.request.uri) else: url = users.create_login_url(self.request.uri) template_values = { 'tasks': tasks, 'url': url } path = os.path.join(os.path.dirname(__file__), self.template_name()) self.response.out.write(template.render(path, template_values)) class MainPage(TaskListPage): def template_name(self): return 'index.html' class Gadget(TaskListPage): def template_name(self): return 'gadget.xml' A: Refactor for what purposes? Are you getting errors, want to do something else, or...? Assuming the proper imports and url dispatching around this, I don't see anything here that has to be refactored for app engine -- so, don't keep us guessing!-) A: Since both classes are identical except for one string ('index.html' vs 'gadget.xml') would it be possible to make one a subclass of the other and have that one string as a class constant in both? A: Make it the same class, and use a GET or POST parameter to decide which template to render.
How to refactor this Python code?
class MainPage(webapp.RequestHandler): def get(self): user = users.get_current_user() tasks_query = Task.all() tasks = tasks_query.fetch(1000) if user: url = users.create_logout_url(self.request.uri) else: url = users.create_login_url(self.request.uri) template_values = { 'tasks': tasks, 'url': url } path = os.path.join(os.path.dirname(__file__), 'index.html') self.response.out.write(template.render(path, template_values)) class Gadget(webapp.RequestHandler): def get(self): user = users.get_current_user() tasks_query = Task.all() tasks = tasks_query.fetch(1000) if user: url = users.create_logout_url(self.request.uri) else: url = users.create_login_url(self.request.uri) template_values = { 'tasks': tasks, 'url': url } path = os.path.join(os.path.dirname(__file__), 'gadget.xml') self.response.out.write(template.render(path, template_values))
[ "Really it depends on what you expect to be common between the two classes in future. The purpose of refactoring is to identify common abstractions, not to minimise the number of lines of code.\nThat said, assuming the two requests are expected to differ only in the template:\nclass TaskListPage(webapp.RequestHandler):\n def get(self):\n user = users.get_current_user()\n tasks_query = Task.all()\n tasks = tasks_query.fetch(1000)\n if user:\n url = users.create_logout_url(self.request.uri)\n else:\n url = users.create_login_url(self.request.uri)\n template_values = {\n 'tasks': tasks,\n 'url': url\n }\n path = os.path.join(os.path.dirname(__file__), self.template_name())\n self.response.out.write(template.render(path, template_values))\n\nclass MainPage(TaskListPage):\n def template_name(self):\n return 'index.html'\n\nclass Gadget(TaskListPage):\n def template_name(self):\n return 'gadget.xml'\n\n", "Refactor for what purposes? Are you getting errors, want to do something else, or...? Assuming the proper imports and url dispatching around this, I don't see anything here that has to be refactored for app engine -- so, don't keep us guessing!-)\n", "Since both classes are identical except for one string ('index.html' vs 'gadget.xml') would it be possible to make one a subclass of the other and have that one string as a class constant in both?\n", "Make it the same class, and use a GET or POST parameter to decide which template to render.\n" ]
[ 6, 1, 1, 1 ]
[]
[]
[ "google_app_engine", "python", "refactoring" ]
stackoverflow_0001175043_google_app_engine_python_refactoring.txt
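A rough sketch of the last answer's suggestion -- one handler that picks its template from a request parameter; it assumes the same imports and Task model as the question's code, and the 'view' parameter name and whitelist dict are my own invention:

    class TaskListPage(webapp.RequestHandler):
        # Whitelist, so clients cannot request arbitrary template files.
        TEMPLATES = {'page': 'index.html', 'gadget': 'gadget.xml'}

        def get(self):
            user = users.get_current_user()
            tasks = Task.all().fetch(1000)
            if user:
                url = users.create_logout_url(self.request.uri)
            else:
                url = users.create_login_url(self.request.uri)
            name = self.TEMPLATES.get(self.request.get('view'), 'index.html')
            path = os.path.join(os.path.dirname(__file__), name)
            self.response.out.write(template.render(path, {'tasks': tasks, 'url': url}))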
Q: python sqlalchemy parallel operation Hi, I have a multi-threading program in which all threads will operate on an Oracle DB. So, can SQLAlchemy support parallel operation on Oracle? Thanks! A: OCI (oracle client interface) has a parameter OCI_THREADED which has the effect of connections being mutexed, such that concurrent access via multiple threads is safe. This is likely the setting the document you saw was referring to. cx_oracle, which is essentially a Python->OCI bridge, provides access to this setting in its connection function using the keyword argument "threaded", described at http://cx-oracle.sourceforge.net/html/module.html#cx_Oracle.connect . The docs state that it is False by default due to its resulting in a "10-15% performance penalty", though no source is given for this information (and performance stats should always be viewed suspiciously as a rule). As far as SQLAlchemy, the cx_oracle dialect provided with SQLAlchemy sets this value to True by default, with the option to set it back to False when setting up the engine via create_engine() - so at that level there's no issue. But beyond that, SQLAlchemy's recommended usage patterns (i.e. one Session per thread, keeping connections local to a pool where they are checked out by a function as needed) prevent concurrent access to a connection in any case. So you can likely turn off the "threaded" setting on create_engine() and enjoy the possibly-tangible performance increases provided regular usage patterns are followed. A: As long as each concurrent thread has its own session you should be fine. Trying to use one shared session is where you'll get into trouble.
python sqlalchemy parallel operation
Hi, I have a multi-threading program in which all threads will operate on an Oracle DB. So, can SQLAlchemy support parallel operation on Oracle? Thanks!
[ "OCI (oracle client interface) has a parameter OCI_THREADED which has the effect of connections being mutexed, such that concurrent access via multiple threads is safe. This is likely the setting the document you saw was referring to.\ncx_oracle, which is essentially a Python->OCI bridge, provides access to this setting in its connection function using the keyword argument \"threaded\", described at http://cx-oracle.sourceforge.net/html/module.html#cx_Oracle.connect . The docs state that it is False by default due to its resulting in a \"10-15% performance penalty\", though no source is given for this information (and performance stats should always be viewed suspiciously as a rule).\nAs far as SQLAlchemy, the cx_oracle dialect provided with SQLAlchemy sets this value to True by default, with the option to set it back to False when setting up the engine via create_engine() - so at that level there's no issue. \nBut beyond that, SQLAlchemy's recommended usage patterns (i.e. one Session per thread, keeping connections local to a pool where they are checked out by a function as needed) prevent concurrent access to a connection in any case. So you can likely turn off the \"threaded\" setting on create_engine() and enjoy the possibly-tangible performance increases provided regular usage patterns are followed.\n", "As long as each concurrent thread has it's own session you should be fine. Trying to use one shared session is where you'll get into trouble.\n" ]
[ 4, 1 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0001117538_python_sqlalchemy.txt
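To make the one-session-per-thread advice concrete, a hedged sketch; the connection URL is a placeholder and the explicit "threaded" flag merely restates the dialect's default:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker, scoped_session

    engine = create_engine('oracle://user:password@tnsname',
                           connect_args={'threaded': True})  # already True by default in the cx_oracle dialect

    Session = scoped_session(sessionmaker(bind=engine))

    def worker():
        session = Session()   # each thread transparently gets its own session
        try:
            session.execute("SELECT 1 FROM DUAL")
        finally:
            Session.remove()  # release the thread's session and its connection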
Q: Using map() to get number of times list elements exist in a string in Python I'm trying to get the number of times each item in a list is in a string in Python: paragraph = "I eat bananas and a banana" def tester(x): return len(re.findall(x,paragraph)) map(tester, ['banana', 'loganberry', 'passion fruit']) Returns [2, 0, 0] What I'd like to do however is extend this so I can feed the paragraph value into the map() function. Right now, the tester() function has paragraph hardcoded. Does anybody have a way to do this (perhaps make an n-length list of paragraph values)? Any other ideas here? Keep in mind that each of the array values will have a weight at some point in the future - hence the need to keep the values in a list rather than crunching them all together. UPDATE: The paragraph will often be 20K and the list will often have 200+ members. My thinking is that map operates in parallel - so it will be much more efficient than any serial methods. A: A closure would be a quick solution: paragraph = "I eat bananas and a banana" def tester(s): def f(x): return len(re.findall(x,s)) return f print map(tester(paragraph), ['banana', 'loganberry', 'passion fruit']) A: targets = ['banana', 'loganberry', 'passion fruit'] paragraph = "I eat bananas and a banana" print [paragraph.count(target) for target in targets] No idea why you would use map() here. A: I know you didn't ask for list comprehension, but here it is anyway: paragraph = "I eat bananas and a banana" words = ['banana', 'loganberry', 'passion fruit'] [len(re.findall(word, paragraph)) for word in words] This returns [2, 0, 0] as well. A: This is basically just going out of your way to avoid a list comprehension, but if you like functional style programming, then you'll like functools.partial. >>> from functools import partial >>> def counter(text, paragraph): return len(re.findall(text, paragraph)) >>> tester = partial(counter, paragraph="I eat bananas and a banana") >>> map(tester, ['banana', 'loganberry', 'passion fruit']) [2, 0, 0] A: For Q query words of average length L bytes on large texts of size T bytes, you need something that's NOT O(QLT). You need a DFA-style approach which can give you O(T) ... after setup costs. If your query set is rather static, then the setup cost can be ignored. E.g. http://en.wikipedia.org/wiki/Aho-Corasick_algorithm which points to a C-extension for Python: http://hkn.eecs.berkeley.edu/~dyoo/python/ahocorasick/ A: Here's a response to the movement of the goalposts ("I probably need the regex because I'll need word delimiters in the near future"): This method parses the text once to obtain a list of all the "words". Each word is looked up in a dictionary of the target words, and if it is a target word it is counted. The time taken is O(P) + O(T) where P is the size of the paragraph and T is the number of target words. All other solutions to date (including the currently accepted solution) except my Aho-Corasick solution are O(PT). def counts_all(targets, paragraph, word_regex=r"\w+"): tally = dict((target, 0) for target in targets) for word in re.findall(word_regex, paragraph): if word in tally: tally[word] += 1 return [tally[target] for target in targets] def counts_iter(targets, paragraph, word_regex=r"\w+"): tally = dict((target, 0) for target in targets) for matchobj in re.finditer(word_regex, paragraph): word = matchobj.group() if word in tally: tally[word] += 1 return [tally[target] for target in targets] The finditer version is a strawman -- it's much slower than the findall version. Here's the currently accepted solution expressed in a standardised form and augmented with word delimiters: def currently_accepted_solution_augmented(targets, paragraph): def tester(s): def f(x): return len(re.findall(r"\b" + x + r"\b", s)) return f return map(tester(paragraph), targets) which goes overboard on closures and could be reduced to: # acknowledgement: # this is structurally the same as one of hughdbrown's benchmark functions def currently_accepted_solution_augmented_without_extra_closure(targets, paragraph): def tester(x): return len(re.findall(r"\b" + x + r"\b", paragraph)) return map(tester, targets) All variations on the currently accepted solution are O(PT). Unlike the currently accepted solution, the regex search with word delimiters is not equivalent to a simple paragraph.find(target). Because the re engine doesn't use the "fast search" in this case, adding the word delimiters changes it from slow to very slow. A: Here's my version. paragraph = "I eat bananas and a banana" def tester(paragraph, x): return len(re.findall(x,paragraph)) print lambda paragraph: map( lambda x: tester(paragraph, x) , ['banana', 'loganberry', 'passion fruit'] )(paragraph)
Using map() to get number of times list elements exist in a string in Python
I'm trying to get the number of times each item in a list is in a string in Python: paragraph = "I eat bananas and a banana" def tester(x): return len(re.findall(x,paragraph)) map(tester, ['banana', 'loganberry', 'passion fruit']) Returns [2, 0, 0] What I'd like to do however is extend this so I can feed the paragraph value into the map() function. Right now, the tester() function has paragraph hardcoded. Does anybody have a way to do this (perhaps make an n-length list of paragraph values)? Any other ideas here? Keep in mind that each of the array values will have a weight at some point in the future - hence the need to keep the values in a list rather than crunching them all together. UPDATE: The paragraph will often be 20K and the list will often have 200+ members. My thinking is that map operates in parallel - so it will be much more efficient than any serial methods.
[ "A closure would be a quick solution:\nparagraph = \"I eat bananas and a banana\"\n\ndef tester(s): \n def f(x):\n return len(re.findall(x,s))\n return f\n\nprint map(tester(paragraph), ['banana', 'loganberry', 'passion fruit'])\n\n", "targets = ['banana', 'loganberry', 'passion fruit']\nparagraph = \"I eat bananas and a banana\"\n\nprint [paragraph.count(target) for target in targets]\n\nNo idea why you would use map() here.\n", "I know you didn't ask for list comprehension, but here it is anyway:\nparagraph = \"I eat bananas and a banana\"\nwords = ['banana', 'loganberry', 'passion fruit']\n[len(re.findall(word, paragraph)) for word in words]\n\nThis returns\n [2, 0, 0]\nas well.\n", "This is basically just going out of your way to avoid a list comprehension, but if you like functional style programming, then you'll like functools.partial.\n>>> from functools import partial\n>>> def counter(text, paragraph):\n return len(re.findall(text, paragraph))\n\n>>> tester = partial(counter, paragraph=\"I eat bananas and a banana\")\n>>> map(tester, ['banana', 'loganberry', 'passion fruit'])\n[2, 0, 0]\n\n", "For Q query words of average length L bytes on large texts of size T bytes, you need something that's NOT O(QLT). You need a DFA-style approach which can give you O(T) ... after setup costs. If your query set is rather static, then the setup cost can be ignored.\nE.g. http://en.wikipedia.org/wiki/Aho-Corasick_algorithm\nwhich points to a C-extension for Python:\nhttp://hkn.eecs.berkeley.edu/~dyoo/python/ahocorasick/\n", "Here's a response to the movement of the goalposts (\"I probably need the regex because I'll need word delimiters in the near future\"):\nThis method parses the text once to obtain a list of all the \"words\". Each word is looked up in a dictionary of the target words, and if it is a target word it is counted. The time taken is O(P) + O(T) where P is the size of the paragraph and T is the number of target words. All other solutions to date (including the currently accepted solution) except my Aho-Corasick solution are O(PT).\ndef counts_all(targets, paragraph, word_regex=r\"\\w+\"):\n tally = dict((target, 0) for target in targets)\n for word in re.findall(word_regex, paragraph):\n if word in tally:\n tally[word] += 1\n return [tally[target] for target in targets]\n\ndef counts_iter(targets, paragraph, word_regex=r\"\\w+\"):\n tally = dict((target, 0) for target in targets)\n for matchobj in re.finditer(word_regex, paragraph):\n word = matchobj.group()\n if word in tally:\n tally[word] += 1\n return [tally[target] for target in targets] \n\nThe finditer version is a strawman -- it's much slower than the findall version.\nHere's the currently accepted solution expressed in a standardised form and augmented with word delimiters:\ndef currently_accepted_solution_augmented(targets, paragraph):\n def tester(s): \n def f(x):\n return len(re.findall(r\"\\b\" + x + r\"\\b\", s))\n return f\n return map(tester(paragraph), targets)\n\nwhich goes overboard on closures and could be reduced to:\n# acknowledgement:\n# this is structurally the same as one of hughdbrown's benchmark functions\ndef currently_accepted_solution_augmented_without_extra_closure(targets, paragraph):\n def tester(x):\n return len(re.findall(r\"\\b\" + x + r\"\\b\", paragraph))\n return map(tester, targets)\n\nAll variations on the currently accepted solution are O(PT). Unlike the currently accepted solution, the regex search with word delimiters is not equivalent to a simple paragraph.find(target). 
Because the re engine doesn't use the \"fast search\" in this case, adding the word delimiters changes it fron slow to very slow.\n", "Here's my version. \nparagraph = \"I eat bananas and a banana\"\n\ndef tester(paragraph, x): return len(re.findall(x,paragraph))\n\nprint lambda paragraph: map(\n lambda x: tester(paragraph, x) , ['banana', 'loganberry', 'passion fruit']\n )(paragraph)\n\n" ]
[ 8, 3, 2, 2, 1, 1, 0 ]
[]
[]
[ "mapreduce", "python", "regex" ]
stackoverflow_0001168517_mapreduce_python_regex.txt
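Since the question says each target will eventually carry a weight, here is one hedged way to bolt weights onto the dictionary-tally approach from the later answers; the weight values are made up, and note that a \w+ token regex can never match a multi-word target like 'passion fruit':

    import re

    def weighted_counts(weights, paragraph, word_regex=r"\w+"):
        tally = dict((target, 0) for target in weights)
        for word in re.findall(word_regex, paragraph):
            if word in tally:
                tally[word] += 1  # O(1) lookup, so the whole pass stays O(P) + O(T)
        return dict((t, tally[t] * w) for t, w in weights.items())

    weights = {'banana': 1.0, 'loganberry': 2.5, 'passion fruit': 4.0}
    print weighted_counts(weights, "I eat bananas and a banana")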
Q: Google Wave Sandbox Is anyone developing robots and/or gadgets for Google Wave? I have been a part of the sandbox development for a few days and I was interested in seeing what others have thought about the Google Wave APIs. I was also wondering what everyone has been working on. Please share your opinions and comments! A: Go to Google Wave developers and read the blogs, forums and all your questions will be answered including a recent post for a gallery of Wave apps. You will also find other developers to play in the sandbox with. A: I haven't tried the gadgets, but from the little I've looked at them, they seem pretty straight-forward. They're implemented in a template-ish way and you can easily keep states in them, allowing more complex things such as RSVP lists and even games. Robots are what I'm most interested in, and well, all I can say is that they're really easy to develop! Like barely any effort at all! Heck, I'll code one for you right here: import waveapi.events import waveapi.robot def OnBlipSubmitted(properties, context): # Get the blip that was just submitted. blip = context.GetBlipById(properties['blipId']) # Respond to the blip (i.e. create a child blip) blip.CreateChild().GetDocument().SetText('That\'s so funny!') def OnRobotAdded(properties, context): # Add a message to the end of the wavelet. wavelet = context.GetRootWavelet() wavelet.CreateBlip().GetDocument().SetText('Heeeeey everybody!') if __name__ == '__main__': # Register the robot. bot = waveapi.robot.Robot( 'The Annoying Bot', image_url='http://example.com/annoying-image.gif', version='1.0', profile_url='http://example.com/') bot.RegisterHandler(waveapi.events.BLIP_SUBMITTED, OnBlipSubmitted) bot.RegisterHandler(waveapi.events.WAVELET_SELF_ADDED, OnRobotAdded) bot.Run() Right now I'm working on a Google App Engine project that's going to be a collaborative text adventure game. For this game I made a bot that lets you play it on Wave. It uses Wave's threading of blips to let you branch the game at any point etc. For more info, have a look at the Google Code project page (scroll down a little bit for a screenshot.) A: I have been working on Gadgets, using the Wave API. It's pretty easy to work with. For the most part, you can use javascript inside an XML file. You just need to have the proper tags for the XML file. Below is a sample of what a Gadget would look like, this particular gadget retrieves the top headlines from Slashdot and displays them at the top of the Wave. You can learn more about Gadgets here and here. alt text http://www.m1cr0sux0r.com/xml.jpg
Google Wave Sandbox
Is anyone developing robots and/or gadgets for Google Wave? I have been a part of the sandbox development for a few days and I was interested in seeing what others have thought about the Google Wave APIs. I was also wondering what everyone has been working on. Please share your opinions and comments!
[ "Go to Google Wave developers and read the blogs, forums and all your questions will be answered including a recent post for a gallery of Wave apps. You will also find other developers to play in the sandbox with.\n", "I haven't tried the gadgets, but from the little I've looked at them, they seem pretty straight-forward. They're implemented in a template-ish way and you can easily keep states in them, allowing more complex things such as RSVP lists and even games.\nRobots are what I'm most interested in, and well, all I can say is that they're really easy to develop! Like barely any effort at all! Heck, I'll code one for you right here:\nimport waveapi.events\nimport waveapi.robot\n\ndef OnBlipSubmitted(properties, context):\n # Get the blip that was just submitted.\n blip = context.GetBlipById(properties['blipId'])\n # Respond to the blip (i.e. create a child blip)\n blip.CreateChild().GetDocument().SetText('That\\'s so funny!')\n\ndef OnRobotAdded(properties, context):\n # Add a message to the end of the wavelet.\n wavelet = context.GetRootWavelet()\n wavelet.CreateBlip().GetDocument().SetText('Heeeeey everybody!')\n\nif __name__ == '__main__':\n # Register the robot.\n bot = waveapi.robot.Robot(\n 'The Annoying Bot',\n image_url='http://example.com/annoying-image.gif',\n version='1.0',\n profile_url='http://example.com/')\n bot.RegisterHandler(waveapi.events.BLIP_SUBMITTED, OnBlipSubmitted)\n bot.RegisterHandler(waveapi.events.WAVELET_SELF_ADDED, OnRobotAdded)\n bot.Run()\n\nRight now I'm working on a Google App Engine project that's going to be a collaborative text adventure game. For this game I made a bot that lets you play it on Wave. It uses Wave's threading of blips to let you branch the game at any point etc. For more info, have a look at the Google Code project page (scroll down a little bit for a screenshot.)\n", "I have been working on Gadgets, using the Wave API. It's pretty easy to work with. For the most part, you can use javascript inside an XML file. You just need to have the proper tags for the XML file. Below is a sample of what a Gadget would look like, this particular gadget retrieves the top headlines from Slashdot and displays them at the top of the Wave. You can learn more about Gadgets here and here.\nalt text http://www.m1cr0sux0r.com/xml.jpg\n" ]
[ 2, 2, 2 ]
[]
[]
[ "google_app_engine", "google_wave", "java", "python" ]
stackoverflow_0001161660_google_app_engine_google_wave_java_python.txt
Q: Python equivalent of PropertyUtilsBean I was wondering, is there a Python equivalent to Apache commons' PropertyUtilsBean? Edit: For example, I'd like to be able to make this assignment x.y[2].z = v given "y[2].z" as a string. Please note, I'm asking just because I'd like to not reinvent the wheel :) A: Do you mean something like setattr? From its docstring: setattr(object, name, value) Set a named attribute on an object; setattr(x, 'y', v) is equivalent to ``x.y = v''. A: Why do you need such a thing when there's exec?
Python equivalent of PropertyUtilsBean
I was wondering, is there a Python equivalent to Apache commons' PropertyUtilsBean? Edit: For example, I'd like to be able to make this assignment x.y[2].z = v given "y[2].z" as a string. Please note, I'm asking just because I'd like to not reinvent the wheel :)
[ "Do you mean something like setattr?\nFrom its docstring:\n\nsetattr(object, name, value)\n\nSet a named attribute on an object;\n setattr(x, 'y', v) is equivalent to\n ``x.y = v''.\n\n", "Why do you need such a thing when there's exec?\n" ]
[ 1, 1 ]
[]
[]
[ "apache_commons", "python" ]
stackoverflow_0001176139_apache_commons_python.txt
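If exec or bare setattr feel too blunt, below is a small sketch of parsing the "y[2].z" path by hand; the supported grammar (dotted names with integer subscripts) is an assumption about what the Java PropertyUtilsBean syntax covers:

    import re

    _TOKEN = re.compile(r'(\w+)((?:\[\d+\])*)')

    def set_path(obj, path, value):
        # set_path(x, "y[2].z", v)  is equivalent to  x.y[2].z = v
        parts = path.split('.')
        for i, part in enumerate(parts):
            name, subs = _TOKEN.match(part).groups()
            indices = [int(n) for n in re.findall(r'\d+', subs)]
            last = (i == len(parts) - 1)
            if last and not indices:
                setattr(obj, name, value)
                return
            obj = getattr(obj, name)
            for j, idx in enumerate(indices):
                if last and j == len(indices) - 1:
                    obj[idx] = value
                    return
                obj = obj[idx]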
Q: Return files only from specific folder I wrote a function in Python that must return a file from a specific folder and all its subfolders. The file name is taken from a function parameter: def ReturnFile(fileName): return open("C:\\folder\\" + fileName,"r") But as fileName you can pass, for example: "..\\Windows\\passwords.txt" or some unicode symbols for dots. How to fix it? Some RegExp maybe? A: The os.path.normpath function normalizes a given path by resolving things like "..". Then you can check if the resulting path is in the expected directory: def ReturnFile(fileName): norm = os.path.abspath("C:\\folder\\" + fileName) if not norm.startswith("C:\\folder\\"): raise Exception("Invalid filename specified") return open(norm,"r") A: How about this: import os _BASE_PATH= "C:\\folder\\" def return_file(file_name): "Return File Object from Base Path named `file_name`" file_name = os.path.normpath(file_name).split(os.path.sep)[-1] return(open(_BASE_PATH+file_name))
Return files only from specific folder
I wrote a function in Python that must return a file from a specific folder and all its subfolders. The file name is taken from a function parameter: def ReturnFile(fileName): return open("C:\\folder\\" + fileName,"r") But as fileName you can pass, for example: "..\\Windows\\passwords.txt" or some unicode symbols for dots. How to fix it? Some RegExp maybe?
[ "The os.path.normpath function normalizes a given path py resolving things like \"..\". Then you can check if the resulting path is in the expected directory:\ndef ReturnFile(fileName)\n norm = os.path.abspath(\"C:\\\\folder\\\\\" + fileName)\n if not norm.startswith(\"C:\\\\folder\\\\\"):\n raise Exception(\"Invalid filename specified\")\n return open(norm,\"r\")\n\n", "How about this:\nimport os\n\n_BASE_PATH= \"C:\\\\folder\\\\\"\n\ndef return_file(file_name):\n \"Return File Object from Base Path named `file_name`\"\n os.path.normpath(file_name).split(os.path.sep)[-1]\n return(open(_BASE_PATH+file_name))\n\n" ]
[ 4, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001176624_python.txt
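A hedged variant of the first answer that also resolves symlinks; the base directory comes from the question, and appending os.sep to the prefix check stops sibling directories such as C:\folder2 from slipping through:

    import os

    BASE = os.path.realpath("C:\\folder")

    def return_file(file_name):
        full = os.path.realpath(os.path.join(BASE, file_name))
        if not full.startswith(BASE + os.sep):  # resolved path must stay inside BASE
            raise ValueError("invalid filename: %r" % file_name)
        return open(full, "r")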
Q: How do I add items to a gtk.ComboBox created through glade at runtime? I'm using Glade 3 to create a GtkBuilder file for a PyGTK app I'm working on. It's for managing bandwidth, so I have a gtk.ComboBox for selecting the network interface to track. How do I add strings to the ComboBox at runtime? This is what I have so far: self.tracked_interface = builder.get_object("tracked_interface") self.iface_list_store = gtk.ListStore(gobject.TYPE_STRING) self.iface_list_store.append(["hello, "]) self.iface_list_store.append(["world."]) self.tracked_interface.set_model(self.iface_list_store) self.tracked_interface.set_active(0) But the ComboBox remains empty. I tried RTFM'ing, but just came away more confused, if anything. Cheers. A: Hey, I actually get to answer my own question! You have to add gtk.CellRendererText into there for it to actually render: self.iface_list_store = gtk.ListStore(gobject.TYPE_STRING) self.iface_list_store.append(["hello, "]) self.iface_list_store.append(["world."]) self.tracked_interface.set_model(self.iface_list_store) self.tracked_interface.set_active(0) # And here's the new stuff: cell = gtk.CellRendererText() self.tracked_interface.pack_start(cell, True) self.tracked_interface.add_attribute(cell, "text", 0) Retrieved from, of course, the PyGTK FAQ. Corrected example thanks to Joe McBride A: Or you could just create and insert the combo box yourself using gtk.combo_box_new_text(). Then you'll be able to use gtk shortcuts to append, insert, prepend and remove text. combo = gtk.combo_box_new_text() combo.append_text('hello') combo.append_text('world') combo.set_active(0) box = builder.get_object('some-box') box.pack_start(combo, False, False)
How do I add items to a gtk.ComboBox created through glade at runtime?
I'm using Glade 3 to create a GtkBuilder file for a PyGTK app I'm working on. It's for managing bandwidth, so I have a gtk.ComboBox for selecting the network interface to track. How do I add strings to the ComboBox at runtime? This is what I have so far: self.tracked_interface = builder.get_object("tracked_interface") self.iface_list_store = gtk.ListStore(gobject.TYPE_STRING) self.iface_list_store.append(["hello, "]) self.iface_list_store.append(["world."]) self.tracked_interface.set_model(self.iface_list_store) self.tracked_interface.set_active(0) But the ComboBox remains empty. I tried RTFM'ing, but just came away more confused, if anything. Cheers.
[ "Hey, I actually get to answer my own question!\nYou have to add gtk.CellRendererText into there for it to actually render:\nself.iface_list_store = gtk.ListStore(gobject.TYPE_STRING)\nself.iface_list_store.append([\"hello, \"])\nself.iface_list_store.append([\"world.\"])\nself.tracked_interface.set_model(self.iface_list_store)\nself.tracked_interface.set_active(0)\n# And here's the new stuff:\ncell = gtk.CellRendererText()\nself.tracked_interface.pack_start(cell, True)\nself.tracked_interface.add_attribute(cell, \"text\", 0)\n\nRetrieved from, of course, the PyGTK FAQ.\nCorrected example thanks to Joe McBride\n", "Or you could just create and insert the combo box yourself using gtk.combo_box_new_text(). Then you'll be able to use gtk shortcuts to append, insert, prepend and remove text.\ncombo = gtk.combo_box_new_text()\ncombo.append_text('hello')\ncombo.append_text('world')\ncombo.set_active(0)\n\nbox = builder.get_object('some-box')\nbox.pack_start(combo, False, False)\n\n" ]
[ 6, 6 ]
[]
[]
[ "gtk", "pygtk", "python" ]
stackoverflow_0001176748_gtk_pygtk_python.txt
Q: Python- about file-handle limits on OS Hi, I wrote a program in Python, and when I open too many temp files I get an exception: Too many open files ... Then I figured out that the Windows OS or C runtime has file-handle limits, so I altered my program to use StringIO(), but I still don't know whether StringIO is also limited? A: Python's StringIO does not use OS file handles, so it won't be limited in the same way. StringIO will be limited by available virtual memory, but you've probably got heaps of available memory. Normally the OS allows a single process to open thousands of files before running into the limit, so if your program is running out of file handles you might be forgetting to close them. Unless you're intending to open thousands of files and really have just run out, of course.
Python- about file-handle limits on OS
Hi, I wrote a program in Python, and when I open too many temp files I get an exception: Too many open files ... Then I figured out that the Windows OS or C runtime has file-handle limits, so I altered my program to use StringIO(), but I still don't know whether StringIO is also limited?
[ "Python's StringIO does not use OS file handles, so it won't be limited in the same way. StringIO will be limited by available virtual memory, but you've probably got heaps of available memory.\nNormally the OS allows a single process to open thousands of files before running into the limit, so if your program is running out of file handles you might be forgetting to close them. Unless you're intending to open thousands of files and really have just run out, of course.\n" ]
[ 7 ]
[]
[]
[ "python" ]
stackoverflow_0001177230_python.txt
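If the exception comes from leaked handles rather than a genuinely huge working set, closing files deterministically is usually the fix; a minimal sketch (the file names are placeholders):

    from __future__ import with_statement  # needed on Python 2.5
    from cStringIO import StringIO

    def read_all(paths):
        chunks = []
        for path in paths:
            with open(path) as f:          # handle is closed when the block exits
                chunks.append(f.read())
        return StringIO("".join(chunks))   # purely in-memory, consumes no OS handle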
Q: Why is there module search path instead of typing the directory name + typing the file name? Is there an advantage? What is it? A: So that everyone doesn't need to have exactly the same file structure on their hard drive? import C:\Python\lib\module\ probably wouldn't work too well on my Mac... Edit: Also, what the heck are you talking about with the working directory? You can certainly use modules outside the working directory, as long as they're on the PYTHONPATH.
Why is there module search path instead of typing the directory name + typing the file name?
Is there an advantage? What is it?
[ "So that everyone doesn't need to have exactly the same file structure on their hard drive? import C:\\Python\\lib\\module\\ probably wouldn't work too well on my Mac...\nEdit: Also, what the heck are you talking about with the working directory? You can certainly use modules outside the working directory, as long as they're on the PYTHONPATH.\n" ]
[ 6 ]
[]
[]
[ "import", "module_search_path", "python" ]
stackoverflow_0001177513_import_module_search_path_python.txt
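The search path the answer refers to is just a runtime list, which is why nobody has to share a directory layout; a tiny illustration (the appended directory is a placeholder):

    import sys

    print sys.path                   # interpreter defaults plus PYTHONPATH entries
    sys.path.append('/opt/mylibs')   # modules placed here are now importable
    # import mymodule                # hypothetical module living in /opt/mylibs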
Q: Profiling Python Scripts running on Mod_wsgi How can I profile a python script running on mod_wsgi on apache I would like to use cProfile but it seems it requires me to invoke a function manually. Is there a way to enable cProfile globally and have it keep on logging results. A: You need to wrap you wsgi application function inside another function that just calls your function using cProfile and use that as the application. Or you can reuse existing WSGI middleware to do that for you, for example repoze.profile does pretty much what you seem to want. A: Here is the WSGI profile middleware for WHIFF (currently only available from the mercurial repository): profile.py. That should get you started. If you want to modify it to run outside of the WHIFF context change the line gateway.putResource(env, resourcePath, report) to something like file("/tmp/profile.txt", "w").write(report)
Profiling Python Scripts running on Mod_wsgi
How can I profile a Python script running on mod_wsgi on Apache? I would like to use cProfile, but it seems it requires me to invoke a function manually. Is there a way to enable cProfile globally and have it keep on logging results?
[ "You need to wrap you wsgi application function inside another function that just calls your function using cProfile and use that as the application. Or you can reuse existing WSGI middleware to do that for you, for example repoze.profile does pretty much what you seem to want.\n", "Here is the WSGI profile middleware for WHIFF (currently only available from the mercurial repository):\nprofile.py. That should get you started. If you want to modify it to run outside of the WHIFF context change the line\n gateway.putResource(env, resourcePath, report)\n\nto something like\n file(\"/tmp/profile.txt\", \"w\").write(report)\n\n" ]
[ 9, 0 ]
[]
[]
[ "profiling", "python", "wsgi" ]
stackoverflow_0001169833_profiling_python_wsgi.txt
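A bare-bones sketch of the wrap-the-application idea from the first answer; the log path, stats count and the stand-in app are all assumptions, and note it profiles the call that builds the response iterable, not its later iteration:

    import cProfile, pstats, StringIO

    def profiling_middleware(app, logpath='/tmp/wsgi-profile.log'):
        def wrapper(environ, start_response):
            profiler = cProfile.Profile()
            result = profiler.runcall(app, environ, start_response)
            stream = StringIO.StringIO()
            pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(20)
            open(logpath, 'a').write(stream.getvalue())  # append this request's top 20 entries
            return result
        return wrapper

    def demo_app(environ, start_response):  # stand-in for your real WSGI app
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['hello']

    application = profiling_middleware(demo_app)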
Q: Mismatch between MySQL and Python I know the mismatch between Object Oriented Technology and the Relational Technology, generally here. But I do not know the mismatch between MySQL and Python, and other tools, not just ORMs, to deal with the issue, missing in the latter article. Questions: How is the problem dealt between MySQL and Python? Does App Engine's non-SQL makes Python work better together? Are there some general tools, perhaps ORM, to deal with mismatches? What are non-standard ways to deal with the problem? Could you say that the nonSQL is a tool to make the object-oriented world of Python match the Relational world? Or does the new design totally avoid the problem? A: ORM is the standard solution for making the object-oriented world of Python match the Relational world of MySQL. There are at least 3 popular ORM components. SQLAlchemy SQLObject Django's ORM. A: As was once said on comp.lang.python ORM's are like morphine -- it can save you pain if you are really hurting, but if you use it regularly you will end up with really big problems. It's not hard to build relatively low level interfaces between a relational database and an object model. It's extremely hard to migrate an automated ORM mapping to a new design after the fact. Only immature programmers try to simplify things that are not hard without looking ahead to the possible consequences that are extremely hard. The google app engine mini-rdb-with-some-restrictions-removed is nice because it only automates extremely simple stuff and forces you to think about the table layout without pretending that it can all be done automatically.
Mismatch between MySQL and Python
I know the mismatch between Object Oriented Technology and the Relational Technology, generally here. But I do not know the mismatch between MySQL and Python, and other tools, not just ORMs, to deal with the issue, missing in the latter article. Questions: How is the problem dealt with between MySQL and Python? Does App Engine's non-SQL make Python work better together? Are there some general tools, perhaps ORM, to deal with mismatches? What are non-standard ways to deal with the problem? Could you say that the nonSQL is a tool to make the object-oriented world of Python match the Relational world? Or does the new design totally avoid the problem?
[ "ORM is the standard solution for making the object-oriented world of Python match the Relational world of MySQL.\nThere are at least 3 popular ORM components.\n\nSQLAlchemy\nSQLObject\nDjango's ORM.\n\n", "As was once said on comp.lang.python ORM's are like morphine -- it can save you pain if you are really hurting, but if you use it regularly you will end up with really big problems.\nIt's not hard to build relatively low level interfaces between a relational database and an object model. It's extremely hard to migrate an automated ORM mapping to a new design after the fact. Only immature programmers try to simplify things that are not hard without looking ahead to the possible consequences that are extremely hard.\nThe google app engine mini-rdb-with-some-restrictions-removed is nice because it\nonly automates extremely simple stuff and forces you to think about the table layout\nwithout pretending that it can all be done automatically.\n" ]
[ 3, 1 ]
[]
[]
[ "google_app_engine", "mismatch", "mysql", "python" ]
stackoverflow_0001172790_google_app_engine_mismatch_mysql_python.txt
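To ground the ORM answer, a minimal classical-mapping sketch in the SQLAlchemy of that era; the connection string and table layout are invented:

    from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData
    from sqlalchemy.orm import mapper, sessionmaker

    engine = create_engine('mysql://user:password@localhost/testdb')
    metadata = MetaData()
    users_table = Table('users', metadata,
                        Column('id', Integer, primary_key=True),
                        Column('name', String(50)))
    metadata.create_all(engine)  # issue CREATE TABLE if it does not exist

    class User(object):          # plain Python object on one side...
        def __init__(self, name):
            self.name = name

    mapper(User, users_table)    # ...mapped onto a relational table on the other

    Session = sessionmaker(bind=engine)
    session = Session()
    session.add(User('alice'))
    session.commit()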
Q: Decoding double encoded utf8 in Python I've got a problem with strings that I get from one of my clients over xmlrpc. He sends me utf8 strings that are encoded twice :( so when I get them in Python I have a unicode object that has to be decoded one more time, but obviously Python doesn't allow that. I've notified my client; however, I need to do a quick workaround for now before he fixes it. Raw string from tcp dump: <string>Rafa\xc3\x85\xc2\x82</string> this is converted into: u'Rafa\xc5\x82' The best we get is: eval(repr(u'Rafa\xc5\x82')[1:]).decode("utf8") This results in the correct string, which is: u'Rafa\u0142' This works, however it is ugly as hell and cannot be used in production code. If anyone knows how to fix this problem in a more suitable way, please write. Thanks, Chris A: >>> s = u'Rafa\xc5\x82' >>> s.encode('raw_unicode_escape').decode('utf-8') u'Rafa\u0142' >>> A: Yow, that was fun! >>> original = "Rafa\xc3\x85\xc2\x82" >>> first_decode = original.decode('utf-8') >>> as_chars = ''.join([chr(ord(x)) for x in first_decode]) >>> result = as_chars.decode('utf-8') >>> result u'Rafa\u0142' So you do the first decode, getting a Unicode string where each character is actually a UTF-8 byte value. You go via the integer value of each of those characters to get back to a genuine UTF-8 string, which you then decode as normal. A: >>> weird = u'Rafa\xc5\x82' >>> weird.encode('latin1').decode('utf8') u'Rafa\u0142' >>> latin1 is just an abbreviation for Richie's nuts'n'bolts method. It is very curious that the seriously under-described raw_unicode_escape codec gives the same result as latin1 in this case. Do they always give the same result? If so, why have such a codec? If not, it would be preferable to know for sure exactly how the OP's client did the transformation from 'Rafa\xc5\x82' to u'Rafa\xc5\x82' and then to reverse that process exactly -- otherwise we might come unstuck if different data crops up before the double encoding is fixed.
Decoding double encoded utf8 in Python
I've got a problem with strings that I get from one of my clients over xmlrpc. He sends me utf8 strings that are encoded twice :( so when I get them in Python I have a unicode object that has to be decoded one more time, but obviously Python doesn't allow that. I've notified my client; however, I need to do a quick workaround for now before he fixes it. Raw string from tcp dump: <string>Rafa\xc3\x85\xc2\x82</string> this is converted into: u'Rafa\xc5\x82' The best we get is: eval(repr(u'Rafa\xc5\x82')[1:]).decode("utf8") This results in the correct string, which is: u'Rafa\u0142' This works, however it is ugly as hell and cannot be used in production code. If anyone knows how to fix this problem in a more suitable way, please write. Thanks, Chris
[ "\n>>> s = u'Rafa\\xc5\\x82'\n>>> s.encode('raw_unicode_escape').decode('utf-8')\nu'Rafa\\u0142'\n>>>\n\n", "Yow, that was fun!\n>>> original = \"Rafa\\xc3\\x85\\xc2\\x82\"\n>>> first_decode = original.decode('utf-8')\n>>> as_chars = ''.join([chr(ord(x)) for x in first_decode])\n>>> result = as_chars.decode('utf-8')\n>>> result\nu'Rafa\\u0142'\n\nSo you do the first decode, getting a Unicode string where each character is actually a UTF-8 byte value. You go via the integer value of each of those characters to get back to a genuine UTF-8 string, which you then decode as normal.\n", ">>> weird = u'Rafa\\xc5\\x82'\n>>> weird.encode('latin1').decode('utf8')\nu'Rafa\\u0142'\n>>>\n\nlatin1 is just an abbreviation for Richie's nuts'n'bolts method.\nIt is very curious that the seriously under-described raw_unicode_escape codec gives the same result as latin1 in this case. Do they always give the same result? If so, why have such a codec? If not, it would preferable to know for sure exactly how the OP's client did the transformation from 'Rafa\\xc5\\x82' to u'Rafa\\xc5\\x82' and then to reverse that process exactly -- otherwise we might come unstuck if different data crops up before the double encoding is fixed.\n" ]
[ 48, 4, 2 ]
[]
[]
[ "decode", "python", "string", "utf_8" ]
stackoverflow_0001177316_decode_python_string_utf_8.txt
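For production use, the accepted latin1 round-trip can be wrapped with a guard so strings that are not double-encoded pass through untouched; this helper is my own sketch, not from the answers:

    def fix_double_utf8(s):
        try:
            return s.encode('latin1').decode('utf8')
        except (UnicodeEncodeError, UnicodeDecodeError):
            return s  # not double-encoded after all; leave it alone

    print repr(fix_double_utf8(u'Rafa\xc5\x82'))  # -> u'Rafa\u0142'
    print repr(fix_double_utf8(u'Rafa\u0142'))    # already correct, returned unchanged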
Q: Python file read problem file_read = open("/var/www/rajaneesh/file/_config.php", "r") contents = file_read.read() print contents file_read.close() The output is empty, but in that file all contents are there. Please help me with how to read and replace a string in _config.php. A: Usually, when there are such kinds of issues, it is very useful to start the interactive shell and analyze all commands. For instance, it could be that the file does not exist (see comment from freiksenet) or you do not have privileges to it, or it is locked by another process. If you execute the script in some system (like a web server, as the path could suggest), the exception could go to a log - or simply be swallowed by other components in the system. On the contrary, if you execute it in the interactive shell, you can immediately see what the problem was, and eventually inspect the object (by using help(), dir() or the module inspect). By the way, this is also a good method for developing a script - just by tinkering around with the concept in the shell, then putting it all together. While we are here, I strongly suggest the usage of IPython. It is an evolution of the standard shell, with powerful aids for introspection (just press tab, or put a question mark after an object). Unfortunately in the latest weeks the site has often not been available, but there are good chances you already have it installed on your system. A: I copied your code onto my own system, and changed the filename so that it works on my system. Also, I changed the indenting (putting everything at the same level) from what shows in your question. With those changes, the code worked fine. Thus, I think it's something else specific to your system that we probably cannot solve here (easily). A: Would it be possible that you don't have read access to the file you are trying to open?
Python file read problem
file_read = open("/var/www/rajaneesh/file/_config.php", "r") contents = file_read.read() print contents file_read.close() The output is empty, but in that file all contents are there. Please help me with how to read and replace a string in _config.php.
[ "Usually, when there is such kind of issues, it is very useful to start the interactive shell and analyze all commands.\nFor instance, it could be that the file does not exists (see comment from freiksenet) or you do not have privileges to it, or it is locked by another process.\nIf you execute the script in some system (like a web server, as the path could suggest), the exception could go to a log - or simply be swallowed by other components in the system.\nOn the contrary, if you execute it in the interactive shell, you can immediately see what the problem was, and eventually inspect the object (by using help(), dir() or the module inspect). By the way, this is also a good method for developing a script - just by tinkering around with the concept in the shell, then putting altogether.\nWhile we are here, I strongly suggest you usage of IPython. It is an evolution of the standard shell, with powerful aids for introspection (just press tab, or a put a question mark after an object). Unfortunately in the latest weeks the site is not often not available, but there are good chances you already have it installed on your system.\n", "I copied your code onto my own system, and changed the filename so that it works on my system. Also, I changed the indenting (putting everything at the same level) from what shows in your question. With those changes, the code worked fine.\nThus, I think it's something else specific to your system that we probably cannot solve here (easily).\n", "Would it be possible that you don't have read access to the file you are trying to open?\n" ]
[ 4, 2, 0 ]
[]
[]
[ "file_io", "python" ]
stackoverflow_0001176988_file_io_python.txt
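Since the question ultimately asks how to read and replace a string in the config file, a short sketch of the usual read/modify/rewrite cycle; the path comes from the question, while the searched and replacement values are placeholders:

    path = "/var/www/rajaneesh/file/_config.php"

    f = open(path, "r")
    contents = f.read()
    f.close()

    contents = contents.replace("old_value", "new_value")  # substitute your strings

    f = open(path, "w")  # rewrites the whole file
    f.write(contents)
    f.close()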
Q: how to search for specific file type with yahoo search API? Does anyone know if there is some parameter available for programmatic search on yahoo allowing to restrict results so only links to files of specific type will be returned (like PDF for example)? It's possible to do that in GUI, but how to make it happen through API? I'd very much appreciate a sample code in Python, but any other solutions might be helpful as well. A: Yes, there is: http://developer.yahoo.com/search/boss/boss_guide/Web_Search.html#id356163 A: Thank you. I found myself that something like this works OK (file type is the first argument, and query is the second): format = sys.argv[1] query = " ".join(sys.argv[2:]) srch = create_search("Web", app_id, query=query, format=format) A: Here's what I do for this sort of thing. It exposes more of the parameters so you can tune it to your needs. This should print out the first ten PDFs URLs from the query "resume" [mine's not one of them ;) ]. You can download those URLs however you like. The json dictionary that gets returned from the query is a little gross, but this should get you started. Be aware that in real code you will need to check whether some of the keys in the dictionary exist. When there are no results, this code will probably throw an exception. The link that Tiago provided is good for knowing what values are supported for the "type" parameter. from yos.crawl import rest APPID="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" base_url = "http://boss.yahooapis.com/ysearch/%s/v%d/%s?start=%d&count=%d&type=%s" + "&appid=" + APPID querystr="resume" start=0 count=10 type="pdf" search_url = base_url % ("web", 1, querystr, start, count, type) json_result = rest.load_json(search_url) for url in [recs['url'] for recs in json_result['ysearchresponse']['resultset_web']]: print url
how to search for specific file type with yahoo search API?
Does anyone know if there is some parameter available for programmatic search on yahoo allowing to restrict results so only links to files of specific type will be returned (like PDF for example)? It's possible to do that in GUI, but how to make it happen through API? I'd very much appreciate a sample code in Python, but any other solutions might be helpful as well.
[ "Yes, there is:\nhttp://developer.yahoo.com/search/boss/boss_guide/Web_Search.html#id356163\n", "Thank you.\nI found myself that something like this works OK (file type is the first argument, and query is the second):\nformat = sys.argv[1]\nquery = \" \".join(sys.argv[2:])\nsrch = create_search(\"Web\", app_id, query=query, format=format)\n", "Here's what I do for this sort of thing. It exposes more of the parameters so you can tune it to your needs. This should print out the first ten PDFs URLs from the query \"resume\" [mine's not one of them ;) ]. You can download those URLs however you like.\nThe json dictionary that gets returned from the query is a little gross, but this should get you started. Be aware that in real code you will need to check whether some of the keys in the dictionary exist. When there are no results, this code will probably throw an exception. \nThe link that Tiago provided is good for knowing what values are supported for the \"type\" parameter. \nfrom yos.crawl import rest\nAPPID=\"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\"\nbase_url = \"http://boss.yahooapis.com/ysearch/%s/v%d/%s?start=%d&count=%d&type=%s\" + \"&appid=\" + APPID\nquerystr=\"resume\"\nstart=0\ncount=10\ntype=\"pdf\"\nsearch_url = base_url % (\"web\", 1, querystr, start, count, type)\njson_result = rest.load_json(search_url)\nfor url in [recs['url'] for recs in json_result['ysearchresponse']['resultset_web']]:\n print url\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "yahoo_api", "yahoo_search" ]
stackoverflow_0000522781_python_yahoo_api_yahoo_search.txt
Q: python distutils win32 version question So you can use distutils to create a file, such as PIL-1.1.6.win32-py2.5.exe which you can run and use to easily install something. However, the installation requires user input to proceed (you have to click 'OK' three times). I want to create an easily installable windows version that you can just run as a cmd line program, that doesn't require input from the user. Is this possible? Do these .exe files do it already, but you need to pass them a magic cmd line argument to work? A: See this post which describes an idea to modify the stub installer like this: It also mentions another alternative: use setup.py bdist_msi instead, which will produce an msi package, that can be installed unattended A: You get the executable by running "setup.py bdist_wininst". You can have something simpler by running "setup.py bdist_dumb". This will produce a .zip file which, unzipped at the root of the drive where Python is installed, provided that it's installed in the same directory as the machine you've build it, will install the library. Now I don't know if there is an unzip command-line utility under Windows that can be used to do that; I usually have Cygwin installed on all my Windows boxes, but it may prove quite simple to just ship it with the .zip. A: I have done this before using a simple batch file to call setuptools' install script passing the egg file path as an argument to it. The only trouble is that you need to ensure that script is in the PATH, which it might not be. Assuming Python itself is in the PATH you can try something like this in a python script you would distribute with your egg (call it install.py or somesuch). import sys from pkg_resources import load_entry_point def install_egg(egg_file_path): sys.argv[1] = egg_file_path easy_install = load_entry_point( 'setuptools==0.6c9', 'console_scripts', 'easy_install' ) easy_install() if __name__ == "__main__": install_egg("PIL-1.1.6.win32-py2.5.egg") Essentially this does the same as the "easy_install.py" script. You're locating the entrypoint for that script, and setting up sys.argv so that the first argument points at your egg file. It should do the rest for you. HTH
python distutils win32 version question
So you can use distutils to create a file, such as PIL-1.1.6.win32-py2.5.exe which you can run and use to easily install something. However, the installation requires user input to proceed (you have to click 'OK' three times). I want to create an easily installable windows version that you can just run as a cmd line program, that doesn't require input from the user. Is this possible? Do these .exe files do it already, but you need to pass them a magic cmd line argument to work?
[ "See this post which describes an idea to modify the stub installer like this:\nIt also mentions another alternative: use setup.py bdist_msi instead, which will produce an msi package, that can be installed unattended\n", "You get the executable by running \"setup.py bdist_wininst\". You can have something simpler by running \"setup.py bdist_dumb\". This will produce a .zip file which, unzipped at the root of the drive where Python is installed, provided that it's installed in the same directory as the machine you've build it, will install the library.\nNow I don't know if there is an unzip command-line utility under Windows that can be used to do that; I usually have Cygwin installed on all my Windows boxes, but it may prove quite simple to just ship it with the .zip.\n", "I have done this before using a simple batch file to call setuptools' install script passing the egg file path as an argument to it. The only trouble is that you need to ensure that script is in the PATH, which it might not be.\nAssuming Python itself is in the PATH you can try something like this in a python script you would distribute with your egg (call it install.py or somesuch).\nimport sys\nfrom pkg_resources import load_entry_point\n\ndef install_egg(egg_file_path):\n sys.argv[1] = egg_file_path\n easy_install = load_entry_point(\n 'setuptools==0.6c9', \n 'console_scripts', \n 'easy_install'\n )\n easy_install()\n\nif __name__ == \"__main__\":\n install_egg(\"PIL-1.1.6.win32-py2.5.egg\")\n\nEssentially this does the same as the \"easy_install.py\" script. You're locating the entrypoint for that script, and setting up sys.argv so that the first argument points at your egg file. It should do the rest for you.\nHTH\n" ]
[ 1, 0, 0 ]
[]
[]
[ "distutils", "installation", "python", "windows_installer" ]
stackoverflow_0001166503_distutils_installation_python_windows_installer.txt
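A sketch of the unattended bdist_msi route from the first answer; the exact .msi file name under dist\ depends on the package name, version and Python version, and msiexec's /qn switch suppresses all installer UI:

    python setup.py bdist_msi
    msiexec /i dist\MyPackage-1.0.win32.msi /qn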
Q: How can I create bound methods with type()? I am dynamically generating a function and assigning it to a class. This is a simple/minimal example of what I am trying to achieve: def echo(obj): print obj.hello class Foo(object): hello = "Hello World" spam = type("Spam", (Foo, ), {"echo":echo}) spam.echo() Results in this error Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: unbound method echo() must be called with Spam instance as first argument (got nothing instead) I know if I used the @staticmethod decorator that I can pass spam in as a parameter to echo, but that is not possible for me in my use case. How would I get the echo function to be bound to Spam and access self? Is it possible at all? A: So far, you only have created a class. You also need to create objects, i.e. instances of that class: Spam = type("Spam", (Foo, ), {"echo":echo}) spam = Spam() spam.echo() If you really want this to be a method on the class, rather than an instance method, wrap it with classmethod (instead of staticmethod).
How can I create bound methods with type()?
I am dynamically generating a function and assigning it to a class. This is a simple/minimal example of what I am trying to achieve: def echo(obj): print obj.hello class Foo(object): hello = "Hello World" spam = type("Spam", (Foo, ), {"echo":echo}) spam.echo() Results in this error Traceback (most recent call last): File "<input>", line 1, in <module> TypeError: unbound method echo() must be called with Spam instance as first argument (got nothing instead) I know if I used the @staticmethod decorator that I can pass spam in as a parameter to echo, but that is not possible for me in my use case. How would I get the echo function to be bound to Spam and access self? Is it possible at all?
[ "So far, you only have created a class. You also need to create objects, i.e. instances of that class:\nSpam = type(\"Spam\", (Foo, ), {\"echo\":echo})\nspam = Spam()\nspam.echo()\n\nIf you really want this to be a method on the class, rather than an instance method, wrap it with classmethod (instead of staticmethod).\n" ]
[ 8 ]
[]
[]
[ "python" ]
stackoverflow_0001178337_python.txt
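Two concrete variants of the fix, as a sketch: instantiate the generated class, or wrap echo in classmethod so the class itself is passed as the first argument:

    def echo(obj):
        print obj.hello

    class Foo(object):
        hello = "Hello World"

    Spam = type("Spam", (Foo,), {"echo": echo})
    Spam().echo()                # instance method: prints Hello World

    Spam2 = type("Spam2", (Foo,), {"echo": classmethod(echo)})
    Spam2.echo()                 # obj is the class itself here; same output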
Q: How to store arbitrary number of fields in django model? I'm new to python/django. I need to store an arbitrary number of fields in a django model. I'm wondering if django has something that takes care of this. Typically, I would store some XML in a column to do this. Does django offer some classes that makes this easy to do whether it be XML or some other(better) method? Thanks, Pete A: There are a lot of approaches to solve this problem, and depending on your situation any of them might work. You could certainly use a TextField to store XML or JSON or any other form of text. In combination with Python's pickle feature you can do some neater stuff. You might look at the Django Pickle Field definition on DjangoSnippets: http://www.djangosnippets.org/snippets/513/ That allows you to dump Python dictionaries into fields and does some manipulation so that when you reference those fields you can get easy access to the dictionaries without any need to re-parse XML or anything. I imagine you could also explore writing a custom field definition that would do a similar thing for other serialization formats, although I'm not sure how useful that would be. Or you could simply refactor your model to take advantage of ManyToMany fields. You can create a model for a generic key,value pair, then on your primary model you would have a M2M reference to that generic key,value model. In that way you could leverage more of the Django ORM to reference data, etc. A: There is a XML field available: http://docs.djangoproject.com/en/dev/ref/models/fields/#xmlfield But it would force you to do extra parsing on the resulting query. (Which I think you'll have to do to some degree...) I've considered just dumping a list, unicode(mycolumnlist), into a single char field and having just a set number of indexed charfields after that like: class DumbFlexModel(models.Model): available_fields = models.CharField() field1 = models.CharField() field2 = models.CharField() field3 = models.CharField() ... That way you could at least perform a contains query on available_fields to filter results to only those with the field your trying to get vaules for, but then the position is arbitrary, so you'd still have to go through each result and process available_fields get get the position of the value. Or maybe dumping a serialized (pickle.dumps()) list of dictionaries? I'm interested in other suggestions.
How to store arbitrary number of fields in django model?
I'm new to python/django. I need to store an arbitrary number of fields in a django model. I'm wondering if django has something that takes care of this. Typically, I would store some XML in a column to do this. Does django offer some classes that make this easy to do, whether it be XML or some other (better) method? Thanks, Pete
[ "There are a lot of approaches to solve this problem, and depending on your situation any of them might work. You could certainly use a TextField to store XML or JSON or any other form of text. In combination with Python's pickle feature you can do some neater stuff.\nYou might look at the Django Pickle Field definition on DjangoSnippets:\nhttp://www.djangosnippets.org/snippets/513/\nThat allows you to dump Python dictionaries into fields and does some manipulation so that when you reference those fields you can get easy access to the dictionaries without any need to re-parse XML or anything.\nI imagine you could also explore writing a custom field definition that would do a similar thing for other serialization formats, although I'm not sure how useful that would be.\nOr you could simply refactor your model to take advantage of ManyToMany fields. You can create a model for a generic key,value pair, then on your primary model you would have a M2M reference to that generic key,value model. In that way you could leverage more of the Django ORM to reference data, etc. \n", "There is a XML field available:\nhttp://docs.djangoproject.com/en/dev/ref/models/fields/#xmlfield\nBut it would force you to do extra parsing on the resulting query.\n(Which I think you'll have to do to some degree...)\nI've considered just dumping a list, unicode(mycolumnlist), into a single char field and having just a set number of indexed charfields after that like:\nclass DumbFlexModel(models.Model):\n available_fields = models.CharField()\n field1 = models.CharField()\n field2 = models.CharField()\n field3 = models.CharField()\n ...\n\nThat way you could at least perform a contains query on available_fields to filter results to only those with the field your trying to get vaules for, but then the position is arbitrary, so you'd still have to go through each result and process available_fields get get the position of the value.\nOr maybe dumping a serialized (pickle.dumps()) list of dictionaries?\nI'm interested in other suggestions.\n" ]
[ 11, 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001178551_django_django_models_python.txt
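A minimal sketch of the many-to-many key/value approach from the first answer; all model and field names below are invented for illustration:

    from django.db import models

    class Attribute(models.Model):
        key = models.CharField(max_length=100)
        value = models.TextField()

    class Record(models.Model):
        name = models.CharField(max_length=100)
        attributes = models.ManyToManyField(Attribute)

    # usage: any number of fields can hang off one record
    # r = Record.objects.create(name='camera-1')
    # r.attributes.create(key='resolution', value='640x480')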
Q: using pyunit on a network thread I am tasked with writing unit tests for a suite of networked software written in python. Writing units for message builders and other static methods is very simple, but I've hit a wall when it comes to writing a tests for network looped threads. For example: The server it connects to could be on any port, and I want to be able to test the ability to connect to numerous ports (in sequence, not parallel) without actually having to run numerous servers. What is a good way to approach this? Perhaps make server construction and destruction part of the test? Something tells me there must a simpler answer that evades me. I have to imagine there are methods for unit testing networked threads, but I can't seem to find any. A: I would try to introduce a factory into your existing code that purports to create socket objects. Then in a test pass in a mock factory which creates mock sockets which just pretend they've connected to a server (or not for error cases, which you also want to test, don't you?) and log the message traffic to prove that your code has used the right ports to connect to the right types of servers. Try not to use threads just yet, to simplify testing. A: It depends on how your network software is layered and how detailed you want your tests to be, but it's certainly feasible in some scenarios to make server setup and tear-down part of the test. For example, when I was working on the Python logging package (before it became part of Python), I had a test (I didn't use pyunit/unittest - it was just an ad-hoc script) which fired up (in one test) four servers to listen on TCP, UDP, HTTP and HTTP/SOAP ports, and then sent network traffic to them. If you're interested, the distribution is here and the relevant test script in the archive to look at is log_test.py. The Python logging package has of course come some way since then, but the old package is still around for use with versions of Python < 2.3 and >= 1.5.2. A: I've some test cases that run a server in the setUp and close it in the tearDown. I don't know if it is very elegant way to do it but it works of for me. I am happy to have it and it helps me a lot. If the server init is very long, an alternative would be to automate it with ant. ant would run/stop the server before/after executing the tests. See here for very interesting tutorial about ant and python A: You would need to create mock sockets. The exact way to do that would depend on how you create sockets and creating a socket generator would be a good idea. You can also use a mocking library like pymox to make your life easier. It can also possibly eliminate the need to create a socket generator just for the sole purpose of testing. Using pymox, you would do something like this: def test_connect(self): m = mox.Mox() m.StubOutWithMock(socket, 'socket') socket_mock = m.MockAnything() m.socket.socket(socket.AF_INET, socket.SOCK_STREAM).AndReturn(socket_mock) socket_mock.connect(('test_server1', 80)) socket_mock.connect(('test_server2', 81)) socket_mock.connect(('test_server3', 82)) m.ReplayAll() code_to_be_tested() m.VerifyAll() m.UnsetStubs()
using pyunit on a network thread
I am tasked with writing unit tests for a suite of networked software written in python. Writing units for message builders and other static methods is very simple, but I've hit a wall when it comes to writing tests for network looped threads. For example: The server it connects to could be on any port, and I want to be able to test the ability to connect to numerous ports (in sequence, not parallel) without actually having to run numerous servers. What is a good way to approach this? Perhaps make server construction and destruction part of the test? Something tells me there must be a simpler answer that evades me. I have to imagine there are methods for unit testing networked threads, but I can't seem to find any.
[ "I would try to introduce a factory into your existing code that purports to create socket objects. Then in a test pass in a mock factory which creates mock sockets which just pretend they've connected to a server (or not for error cases, which you also want to test, don't you?) and log the message traffic to prove that your code has used the right ports to connect to the right types of servers.\nTry not to use threads just yet, to simplify testing.\n", "It depends on how your network software is layered and how detailed you want your tests to be, but it's certainly feasible in some scenarios to make server setup and tear-down part of the test. For example, when I was working on the Python logging package (before it became part of Python), I had a test (I didn't use pyunit/unittest - it was just an ad-hoc script) which fired up (in one test) four servers to listen on TCP, UDP, HTTP and HTTP/SOAP ports, and then sent network traffic to them. If you're interested, the distribution is here and the relevant test script in the archive to look at is log_test.py. The Python logging package has of course come some way since then, but the old package is still around for use with versions of Python < 2.3 and >= 1.5.2.\n", "I've some test cases that run a server in the setUp and close it in the tearDown. I don't know if it is very elegant way to do it but it works of for me.\nI am happy to have it and it helps me a lot. \nIf the server init is very long, an alternative would be to automate it with ant. ant would run/stop the server before/after executing the tests.\nSee here for very interesting tutorial about ant and python \n", "You would need to create mock sockets. The exact way to do that would depend on how you create sockets and creating a socket generator would be a good idea. You can also use a mocking library like pymox to make your life easier. It can also possibly eliminate the need to create a socket generator just for the sole purpose of testing.\nUsing pymox, you would do something like this:\ndef test_connect(self):\n m = mox.Mox()\n m.StubOutWithMock(socket, 'socket')\n socket_mock = m.MockAnything()\n m.socket.socket(socket.AF_INET, socket.SOCK_STREAM).AndReturn(socket_mock)\n socket_mock.connect(('test_server1', 80))\n socket_mock.connect(('test_server2', 81))\n socket_mock.connect(('test_server3', 82))\n\n m.ReplayAll()\n code_to_be_tested()\n m.VerifyAll()\n m.UnsetStubs()\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "networking", "python", "python_unittest", "unit_testing" ]
stackoverflow_0001173767_networking_python_python_unittest_unit_testing.txt
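When a live socket really is needed, binding a throwaway server to port 0 lets the OS pick a free port, so numerous ports can be exercised without running numerous servers; a Python 2 sketch in which EchoHandler stands in for the real camera protocol:

    import socket
    import threading
    import unittest
    import SocketServer

    class EchoHandler(SocketServer.BaseRequestHandler):
        def handle(self):
            self.request.sendall(self.request.recv(1024))

    class ConnectTest(unittest.TestCase):
        def setUp(self):
            self.server = SocketServer.TCPServer(('127.0.0.1', 0), EchoHandler)
            self.port = self.server.server_address[1]  # port chosen by the OS
            t = threading.Thread(target=self.server.serve_forever)
            t.setDaemon(True)
            t.start()

        def tearDown(self):
            self.server.shutdown()
            self.server.server_close()

        def test_connect(self):
            s = socket.create_connection(('127.0.0.1', self.port))
            s.sendall('ping')
            self.assertEqual(s.recv(1024), 'ping')
            s.close()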
Q: Suggestions for python assert function I'm using assert multiple times throughout multiple scripts, I was wondering if anyone has any suggestions on a better way to achieve this instead of the functions I have created below. def assert_validation(expected, actual, type='', message=''): if type == '==': assert expected == actual, 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '!=': assert expected != actual, 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '<=': assert expected <= actual, 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '>=': assert expected >= actual, 'Expected: %s, Actual: %s, %s' %(expected, actual, message) def assert_str_validation(expected, actual, type='', message=''): if type == '==': assert str(expected) == str(actual), 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '!=': assert str(expected) != str(actual), 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '<=': assert str(expected) <= str(actual), 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '>=': assert str(expected) >= str(actual), 'Expected: %s, Actual: %s, %s' %(expected, actual, message) A: Well this is certainly shorter... can you really not just use assert expected == actual or whatever in the scripts themselves? def assert_validation(expected, actual, type='', message='', trans=(lambda x: x)): m = { '==': (lambda e, a: e == a), '!=': (lambda e, a: e != a), '<=': (lambda e, a: e <= a), '>=': (lambda e, a: e >= a), } assert m[type](trans(expected), trans(actual)), 'Expected: %s, Actual: %s, %s' % (expected, actual, message) def assert_str_validation(expected, actual, type='', message=''): assert_validation(expected, actual, type, message, trans=str)
Suggestions for python assert function
I'm using assert multiple times throughout multiple scripts, I was wondering if anyone has any suggestions on a better way to achieve this instead of the functions I have created below. def assert_validation(expected, actual, type='', message=''): if type == '==': assert expected == actual, 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '!=': assert expected != actual, 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '<=': assert expected <= actual, 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '>=': assert expected >= actual, 'Expected: %s, Actual: %s, %s' %(expected, actual, message) def assert_str_validation(expected, actual, type='', message=''): if type == '==': assert str(expected) == str(actual), 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '!=': assert str(expected) != str(actual), 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '<=': assert str(expected) <= str(actual), 'Expected: %s, Actual: %s, %s' %(expected, actual, message) elif type == '>=': assert str(expected) >= str(actual), 'Expected: %s, Actual: %s, %s' %(expected, actual, message)
[ "Well this is certainly shorter... can you really not just use assert expected == actual or whatever in the scripts themselves?\ndef assert_validation(expected, actual, type='', message='', trans=(lambda x: x)):\n m = { '==': (lambda e, a: e == a),\n '!=': (lambda e, a: e != a),\n '<=': (lambda e, a: e <= a),\n '>=': (lambda e, a: e >= a), }\n assert m[type](trans(expected), trans(actual)), 'Expected: %s, Actual: %s, %s' % (expected, actual, message)\n\ndef assert_str_validation(expected, actual, type='', message=''):\n assert_validation(expected, actual, type, message, trans=str)\n\n" ]
[ 11 ]
[]
[]
[ "assert", "python" ]
stackoverflow_0001179096_assert_python.txt
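The lambda table in the answer can also come straight from the operator module, which avoids hand-written comparison lambdas; a sketch:

    import operator

    OPS = {'==': operator.eq, '!=': operator.ne,
           '<=': operator.le, '>=': operator.ge}

    def assert_validation(expected, actual, type='', message='', trans=lambda x: x):
        assert OPS[type](trans(expected), trans(actual)), \
            'Expected: %s, Actual: %s, %s' % (expected, actual, message)

    def assert_str_validation(expected, actual, type='', message=''):
        assert_validation(expected, actual, type, message, trans=str)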
Q: In python when passing arguments what does ** before an argument do? From reading this example and from my slim knowledge of Python it must be a shortcut for converting an array to a dictionary or something? class hello: def GET(self, name): return render.hello(name=name) # Another way: #return render.hello(**locals()) A: In python f(**d) passes the values in the dictionary d as keyword parameters to the function f. Similarly f(*a) passes the values from the array a as positional parameters. As an example: def f(count, msg): for i in range(count): print msg Calling this function with **d or *a: >>> d = {'count': 2, 'msg': "abc"} >>> f(**d) abc abc >>> a = [1, "xyz"] >>> f(*a) xyz A: It "unpacks" an dictionary as an argument list. ie: def somefunction(keyword1, anotherkeyword): pass it could be called as somefunction(keyword1=something, anotherkeyword=something) or as di = {'keyword1' : 'something', anotherkeyword : 'something'} somefunction(**di) A: From the Python docuemntation, 5.3.4: If any keyword argument does not correspond to a formal parameter name, a TypeError exception is raised, unless a formal parameter using the syntax **identifier is present; in this case, that formal parameter receives a dictionary containing the excess keyword arguments (using the keywords as keys and the argument values as corresponding values), or a (new) empty dictionary if there were no excess keyword arguments. This is also used for the power operator, in a different context. A: **local() passes the dictionary corresponding to the local namespace of the caller. When passing a function with ** a dictionary is passed, this allows variable length argument lists.
In python when passing arguments what does ** before an argument do?
From reading this example and from my slim knowledge of Python it must be a shortcut for converting an array to a dictionary or something? class hello: def GET(self, name): return render.hello(name=name) # Another way: #return render.hello(**locals())
[ "In python f(**d) passes the values in the dictionary d as keyword parameters to the function f. Similarly f(*a) passes the values from the array a as positional parameters.\nAs an example:\ndef f(count, msg):\n for i in range(count):\n print msg\n\nCalling this function with **d or *a:\n>>> d = {'count': 2, 'msg': \"abc\"}\n>>> f(**d)\nabc\nabc\n>>> a = [1, \"xyz\"]\n>>> f(*a)\nxyz\n\n", "It \"unpacks\" an dictionary as an argument list.\nie:\ndef somefunction(keyword1, anotherkeyword):\n pass\n\nit could be called as\nsomefunction(keyword1=something, anotherkeyword=something)\nor as\ndi = {'keyword1' : 'something', anotherkeyword : 'something'}\nsomefunction(**di)\n\n", "From the Python docuemntation, 5.3.4:\n\nIf any keyword argument does not correspond to a formal parameter name, a TypeError exception is raised, unless a formal parameter using the syntax **identifier is present; in this case, that formal parameter receives a dictionary containing the excess keyword arguments (using the keywords as keys and the argument values as corresponding values), or a (new) empty dictionary if there were no excess keyword arguments.\n\nThis is also used for the power operator, in a different context.\n", "**local() passes the dictionary corresponding to the local namespace of the caller. When passing a function with ** a dictionary is passed, this allows variable length argument lists.\n" ]
[ 11, 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001179223_python.txt
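The mirror image on the defining side makes the answers concrete: * and ** in a def collect extra arguments instead of unpacking them. A quick sketch:

    def f(*args, **kwargs):
        print args    # tuple of extra positional arguments
        print kwargs  # dict of extra keyword arguments

    f(1, 2, a=3)      # prints (1, 2) and then {'a': 3}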
Q: What does keyword CONSTRAINT do in this CREATE TABLE statement I'm learning how to use sqlite3 with python. The example in the text book I am following is a database where each Country record has a Region, Country, and Population. The book says: The following snippet uses the CONSTRAINT keyword to specify that no two entries in the table being created will ever have the same values for region and country: >>> cur.execute(''' CREATE TABLE PopByCountry( Region TEXT NOT NULL, Country TEXT NOT NULL, Population INTEGER NOT NULL, CONSTRAINT Country_Key PRIMARY KEY (Region, Country)) ''') Please could you explain what CONSTRAINT Country_Key does here. If I remove it, the PRIMARY KEY statement alone seems to ensure that each country has a unique name for that region. A: Country_key is simply giving a name to the constraint. If you do not do this the name will be generated for you. This is useful when there are several constraints on the table and you need to drop one of them. As an example for dropping the constraint: ALTER TABLE PopByCountry DROP CONSTRAINT Country_Key A: If you omit CONSTRAINT Contry_Key from the statement, SQL server will generate a name for your PRIMARY KEY constraint for you (the PRIMARY KEY is a type of constraint). By specifically putting CONSTRAINT in the query you are essentially specifying a name for your primary key constraint.
What does keyword CONSTRAINT do in this CREATE TABLE statement
I'm learning how to use sqlite3 with python. The example in the text book I am following is a database where each Country record has a Region, Country, and Population. The book says: The following snippet uses the CONSTRAINT keyword to specify that no two entries in the table being created will ever have the same values for region and country: >>> cur.execute(''' CREATE TABLE PopByCountry( Region TEXT NOT NULL, Country TEXT NOT NULL, Population INTEGER NOT NULL, CONSTRAINT Country_Key PRIMARY KEY (Region, Country)) ''') Please could you explain what CONSTRAINT Country_Key does here. If I remove it, the PRIMARY KEY statement alone seems to ensure that each country has a unique name for that region.
[ "Country_key is simply giving a name to the constraint. If you do not do this the name will be generated for you. This is useful when there are several constraints on the table and you need to drop one of them.\nAs an example for dropping the constraint:\nALTER TABLE PopByCountry DROP CONSTRAINT Country_Key\n\n", "If you omit CONSTRAINT Contry_Key from the statement, SQL server will generate a name for your PRIMARY KEY constraint for you (the PRIMARY KEY is a type of constraint).\nBy specifically putting CONSTRAINT in the query you are essentially specifying a name for your primary key constraint.\n" ]
[ 16, 3 ]
[]
[]
[ "python", "sql" ]
stackoverflow_0001179352_python_sql.txt
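The constraint can be watched doing its job from Python: inserting the same (Region, Country) pair twice raises sqlite3.IntegrityError. A sketch against an in-memory database:

    import sqlite3

    con = sqlite3.connect(':memory:')
    cur = con.cursor()
    cur.execute('''CREATE TABLE PopByCountry(
        Region TEXT NOT NULL,
        Country TEXT NOT NULL,
        Population INTEGER NOT NULL,
        CONSTRAINT Country_Key PRIMARY KEY (Region, Country))''')
    cur.execute("INSERT INTO PopByCountry VALUES ('Asia', 'India', 1148000000)")
    try:
        cur.execute("INSERT INTO PopByCountry VALUES ('Asia', 'India', 1)")
    except sqlite3.IntegrityError, e:
        print 'duplicate rejected:', e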
Q: Overriding inherited behavior I am using Multi-table inheritance for an object, and I need to limit the choices of the parent object foreign key references to only the rules that apply the child system. from schedule.models import Event, Rule class AirShowRule(Rule): """ Inheritance of the schedule.Rule """ rule_type = models.TextField(default='onAir') class AirShow(Event): station = models.ForeignKey(Station) image = models.ImageField(upload_to='images/airshow', null=True, blank= True) thumb_image = models.ImageField(upload_to='images/airshow', null=True, blank= True) Now, in the admin, I only want the AirShowRule(s) to be the choices for an AirShow(Event). What I get is all the rules that are in the schedule.event system. I am inheriting from django-schedule found at http://code.google.com/p/django-schedule/ A: I looked into the structure of the classes listed, and you should add this: class AirShow(Event): ... your stuff... rule = models.ForeignKey(AirShowRule, null = True, blank = True, verbose_name="VERBOSE NAME", help_text="HELP TEXT") that should get everything straight (changed to "AirShowRule" from "Rule") you should also make sure that you implement AirShowRule more completely as I imagine you aren't overriding rule_type, and if you are, I don't think it'll do all that you want *see: models.py:23 ...this line was taken from models.py:103 with the modification of the arguments: verbose___name & help_text (probably optional, but I'll leave that for you to inspect) Be aware that I haven't used these modules before, but this should give you a push to keep going :)
Overriding inherited behavior
I am using Multi-table inheritance for an object, and I need to limit the choices of the parent object foreign key references to only the rules that apply to the child system. from schedule.models import Event, Rule class AirShowRule(Rule): """ Inheritance of the schedule.Rule """ rule_type = models.TextField(default='onAir') class AirShow(Event): station = models.ForeignKey(Station) image = models.ImageField(upload_to='images/airshow', null=True, blank=True) thumb_image = models.ImageField(upload_to='images/airshow', null=True, blank=True) Now, in the admin, I only want the AirShowRule(s) to be the choices for an AirShow(Event). What I get is all the rules that are in the schedule.event system. I am inheriting from django-schedule found at http://code.google.com/p/django-schedule/
[ "I looked into the structure of the classes listed, and you should add this:\nclass AirShow(Event):\n ... your stuff...\n rule = models.ForeignKey(AirShowRule, null = True, blank = True,\n verbose_name=\"VERBOSE NAME\", help_text=\"HELP TEXT\")\n\nthat should get everything straight (changed to \"AirShowRule\" from \"Rule\")\nyou should also make sure that you implement AirShowRule more completely as I imagine you aren't overriding rule_type, and if you are, I don't think it'll do all that you want\n*see: models.py:23\n...this line was taken from models.py:103 with the modification of the arguments: verbose___name & help_text (probably optional, but I'll leave that for you to inspect)\nBe aware that I haven't used these modules before, but this should give you a push to keep going :)\n" ]
[ 1 ]
[]
[]
[ "django_models", "python" ]
stackoverflow_0001179213_django_models_python.txt
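If the foreign key to Rule is declared on the parent Event model, redeclaring it on the child may not be possible, so another option is to narrow the choices in the admin itself; a hedged sketch (the field name 'rule' is assumed from the schedule app):

    from django.contrib import admin

    class AirShowAdmin(admin.ModelAdmin):
        def formfield_for_foreignkey(self, db_field, request, **kwargs):
            if db_field.name == 'rule':
                kwargs['queryset'] = AirShowRule.objects.all()
            return super(AirShowAdmin, self).formfield_for_foreignkey(
                db_field, request, **kwargs)

    admin.site.register(AirShow, AirShowAdmin)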
Q: exit failed script run (python) I have seen several questions about exiting a script after a task is successfully completed, but is there a way to do the same for a script which has failed? I am writing a testing script which just checks that a camera is functioning correctly. If the first test fails it is more than likely that the following tests will also fail; therefore, I want the first failure to invoke an exit and provide output to screen letting me know that there was an error. I hope this is enough information; let me know if more details are required to help me. A: Are you just looking for the exit() function? import sys if 1 < 0: print >> sys.stderr, "Something is seriously wrong." sys.exit(1) The (optional) parameter of exit() is the return code the script will return to the shell. Usually values different than 0 signal an error. A: You can use sys.exit() to exit. However, if any code higher up catches the SystemExit exception, it won't exit. A: You can raise exceptions to identify error conditions. Your top-level code can catch those exceptions and handle them appropriately. You can use sys.exit to exit. E.g., in Python 2.x: import sys class CameraInitializationError(StandardError): pass def camera_test_1(): pass def camera_test_2(): raise CameraInitializationError('Failed to initialize camera') if __name__ == '__main__': try: camera_test_1() camera_test_2() print 'Camera successfully initialized' except CameraInitializationError, e: print >>sys.stderr, 'ERROR: %s' % e sys.exit(1) A: You want to check the return code from the c++ program you are running, and exit if it indicates failure. In the code below, /bin/false and /bin/true are programs that exit with error and success codes, respectively. Replace them with your own program. import os import sys status = os.system('/bin/true') if status != 0: # Failure occurred, exit. print 'true returned error' sys.exit(1) status = os.system('/bin/false') if status != 0: # Failure occurred, exit. print 'false returned error' sys.exit(1) This assumes that the program you're running exits with zero on success, nonzero on failure.
exit failed script run (python)
I have seen several questions about exiting a script after a task is successfully completed, but is there a way to do the same for a script which has failed? I am writing a testing script which just checks that a camera is functioning correctly. If the first test fails it is more than likely that the following tests will also fail; therefore, I want the first failure to invoke an exit and provide output to screen letting me know that there was an error. I hope this is enough information; let me know if more details are required to help me.
[ "Are you just looking for the exit() function?\nimport sys\n\nif 1 < 0:\n print >> sys.stderr, \"Something is seriously wrong.\"\n sys.exit(1)\n\nThe (optional) parameter of exit() is the return code the script will return to the shell. Usually values different than 0 signal an error.\n", "You can use sys.exit() to exit. However, if any code higher up catches the SystemExit exception, it won't exit.\n", "You can raise exceptions to identify error conditions. Your top-level code can catch those exceptions and handle them appropriately. You can use sys.exit to exit. E.g., in Python 2.x:\nimport sys\n\nclass CameraInitializationError(StandardError):\n pass\n\ndef camera_test_1():\n pass\n\ndef camera_test_2():\n raise CameraInitializationError('Failed to initialize camera')\n\nif __name__ == '__main__':\n try:\n camera_test_1()\n camera_test_2()\n print 'Camera successfully initialized'\n except CameraInitializationError, e:\n print >>sys.stderr, 'ERROR: %s' % e\n sys.exit(1)\n\n", "You want to check the return code from the c++ program you are running, and exit if it indicates failure. In the code below, /bin/false and /bin/true are programs that exit with error and success codes, respectively. Replace them with your own program.\nimport os\nimport sys\n\nstatus = os.system('/bin/true')\nif status != 0:\n # Failure occurred, exit.\n print 'true returned error'\n sys.exit(1)\n\nstatus = os.system('/bin/false')\nif status != 0:\n # Failure occurred, exit.\n print 'false returned error'\n sys.exit(1)\n\nThis assumes that the program you're running exits with zero on success, nonzero on failure.\n" ]
[ 25, 5, 1, 0 ]
[]
[]
[ "exception", "exit", "python" ]
stackoverflow_0001178989_exception_exit_python.txt
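For the camera-testing scenario in the question, a small helper keeps each check to one line and stops on the first failure; open_camera and grab_frame are hypothetical stand-ins for the real checks:

    import sys

    def check(ok, message):
        if not ok:
            print >> sys.stderr, 'FAIL:', message
            sys.exit(1)

    check(open_camera(), 'camera failed to initialize')  # hypothetical call
    check(grab_frame(), 'camera returned no frame')      # hypothetical call
    print 'all camera checks passed'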
Q: How to generate pdf with epydoc? I am considering epydoc for the documentation of one module. It looks ok to me and is working fine when I am generating html document. I would like to try to generate the documenation in the pdf format. I've just modified the 'output' setting in my config file. Unfortunately, epydoc fails when generating the pdf file. The error is "Error: Error reading pstat file: [Errno 2] No such file or directory: 'profile.out'" it generates some tex files. I think that maybe I am missing latex but I am not very familliar with tex and latex. More over I am working on Windows. What should be the next steps for making epydoc generating pdf file? Thanks in advance for your help A: I suppose that you used an existing conf file. If you have a closer look inside, you will see an option pstat: profile.out. This options says that the file profile.out will be used to generate the call graph (see doc). # The name of one or more pstat files (generated by the profile # or hotshot module). These are used to generate call graphs. pstat: profile.out You need to generate this file, by using the profileor hotspotmodule. For example, you can run your module by python -m "profile" -o profile.out mymodule.py (it is also possible to use hotspot or cProfile, that is much much faster than profile) This should works (I hope so)
How to generate pdf with epydoc?
I am considering epydoc for the documentation of one module. It looks ok to me and is working fine when I am generating html documentation. I would like to try to generate the documentation in the pdf format. I've just modified the 'output' setting in my config file. Unfortunately, epydoc fails when generating the pdf file. The error is "Error: Error reading pstat file: [Errno 2] No such file or directory: 'profile.out'" and it generates some tex files. I think that maybe I am missing latex, but I am not very familiar with tex and latex. Moreover, I am working on Windows. What should be the next steps for making epydoc generate the pdf file? Thanks in advance for your help
[ "I suppose that you used an existing conf file.\nIf you have a closer look inside, you will see an option pstat: profile.out. This options says that the file profile.out will be used to generate the call graph (see doc).\n# The name of one or more pstat files (generated by the profile\n# or hotshot module). These are used to generate call graphs.\npstat: profile.out\n\nYou need to generate this file, by using the profileor hotspotmodule. For example, you can run your module by\npython -m \"profile\" -o profile.out mymodule.py\n\n(it is also possible to use hotspot or cProfile, that is much much faster than profile)\nThis should works (I hope so)\n" ]
[ 3 ]
[]
[]
[ "documentation", "epydoc", "latex", "python" ]
stackoverflow_0001176407_documentation_epydoc_latex_python.txt
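Alternatively, the pstat option can simply be removed from the config file if no call graphs are wanted; a minimal config sketch (PDF output also needs a working LaTeX installation, e.g. MiKTeX on Windows):

    [epydoc]
    modules: mymodule
    output: pdf
    # no pstat option here, so epydoc will not look for profile.out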
Q: Python error: int argument required What am I doing wrong here? i = 0 cursor.execute("insert into core_room (order) values (%i)", (int(i)) Error: int argument required The database field is an int(11), but I think the %i is generating the error. Update: Here's a more thorough example: time = datetime.datetime.now() floor = 0 i = 0 try: booster_cursor.execute('insert into core_room (extern_id, name, order, unit_id, created, updated) values (%s, %s, %s, %s, %s, %s)', (row[0], row[0], i, floor, time, time,)) except Exception, e: print e Error: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'order, unit_id, created, updated) values ('99', '99', '235', '12', '2009-07-24 1' at line 1") A: Two things. First, use %s and not %i. Second, parameters must be in a tuple - so you need (i,) (with comma after i). Also, ORDER is a keyword, and should be escaped if you're using it as field name. A: I believe the second argument to execute() is expected to be an iterable. IF this is the case you need to change: (int(i)) to: (int(i),) to make it into a tuple. A: You should be using ? instead of %i probably. And you're missing a parenthesis. cursor.execute("insert into core_room (order) values (?)", (int(i),))
Python error: int argument required
What am I doing wrong here? i = 0 cursor.execute("insert into core_room (order) values (%i)", (int(i)) Error: int argument required The database field is an int(11), but I think the %i is generating the error. Update: Here's a more thorough example: time = datetime.datetime.now() floor = 0 i = 0 try: booster_cursor.execute('insert into core_room (extern_id, name, order, unit_id, created, updated) values (%s, %s, %s, %s, %s, %s)', (row[0], row[0], i, floor, time, time,)) except Exception, e: print e Error: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'order, unit_id, created, updated) values ('99', '99', '235', '12', '2009-07-24 1' at line 1")
[ "Two things. First, use %s and not %i. Second, parameters must be in a tuple - so you need (i,) (with comma after i).\nAlso, ORDER is a keyword, and should be escaped if you're using it as field name.\n", "I believe the second argument to execute() is expected to be an iterable. IF this is the case you need to change:\n(int(i))\n\nto:\n(int(i),)\n\nto make it into a tuple.\n", "You should be using ? instead of %i probably. And you're missing a parenthesis.\ncursor.execute(\"insert into core_room (order) values (?)\", (int(i),))\n\n" ]
[ 4, 1, 1 ]
[]
[]
[ "mysql", "mysql_error_1064", "python" ]
stackoverflow_0001180673_mysql_mysql_error_1064_python.txt
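Combining the fixes from the answers, %s placeholders plus backtick-quoting the reserved word order (MySQL syntax), the failing insert from the question becomes this sketch:

    booster_cursor.execute(
        'insert into core_room '
        '(extern_id, name, `order`, unit_id, created, updated) '
        'values (%s, %s, %s, %s, %s, %s)',
        (row[0], row[0], i, floor, time, time))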
Q: Best way to sort 1M records in Python I have a service that runs that takes a list of about 1,000,000 dictionaries and does the following myHashTable = {} myLists = { 'hits':{}, 'misses':{}, 'total':{} } sorted = { 'hits':[], 'misses':[], 'total':[] } for item in myList: id = item.pop('id') myHashTable[id] = item for k, v in item.iteritems(): myLists[k][id] = v So, if I had the following list of dictionaries: [ {'id':'id1', 'hits':200, 'misses':300, 'total':400}, {'id':'id2', 'hits':300, 'misses':100, 'total':500}, {'id':'id3', 'hits':100, 'misses':400, 'total':600} ] I end up with myHashTable = { 'id1': {'hits':200, 'misses':300, 'total':400}, 'id2': {'hits':300, 'misses':100, 'total':500}, 'id3': {'hits':100, 'misses':400, 'total':600} } and myLists = { 'hits': {'id1':200, 'id2':300, 'id3':100}, 'misses': {'id1':300, 'id2':100, 'id3':400}, 'total': {'id1':400, 'id2':500, 'id3':600} } I then need to sort all of the data in each of the myLists dictionaries. What I am doing currently is something like the following: def doSort(key): sorted[key] = sorted(myLists[key].items(), key=operator.itemgetter(1), reverse=True) which would yield, in the case of misses: [('id3', 400), ('id1', 300), ('id2', 200)] This works great when I have up to 100,000 records or so, but with 1,000,000 it is taking at least 5 - 10 minutes to sort each with a total of 16 (my original list of dictionaries actually has 17 fields including id which is popped) * EDIT * This service is a ThreadingTCPServer which has a method allowing a client to connect and add new data. The new data may include new records (meaning dictionaries with unique 'id's to what is already in memory) or modified records (meaning the same 'id' with different data for the other key value pairs) So, once this is running I would pass in [ {'id':'id1', 'hits':205, 'misses':305, 'total':480}, {'id':'id4', 'hits':30, 'misses':40, 'total':60}, {'id':'id5', 'hits':50, 'misses':90, 'total':20} ] I have been using dictionaries to store the data so that I don't end up with duplicates. After the dictionaries are updated with the new/modified data I resort each of them. * END EDIT * So, what is the best way for me to sort these? Is there a better method? A: You may find this related answer from Guido: Sorting a million 32-bit integers in 2MB of RAM using Python A: What you really want is an ordered container, instead of an unordered one. That would implicitly sort the results as they're inserted. The standard data structure for this is a tree. However, there doesn't seem to be one of these in Python. I can't explain that; this is a core, fundamental data type in any language. Python's dict and set are both unordered containers, which map to the basic data structure of a hash table. It should definitely have an optimized tree data structure; there are many things you can do with them that are impossible with a hash table, and they're quite tricky to implement well, so people generally don't want to be doing it themselves. (There's also nothing mapping to a linked list, which also should be a core data type. No, a deque is not equivalent.) I don't have an existing ordered container implementation to point you to (and it should probably be implemented natively, not in Python), but hopefully this will point you in the right direction. A good tree implementation should support iterating across a range by value ("iterate all values from [2,100] in order"), find next/prev value from any other node in O(1), efficient range extraction ("delete all values in [2,100] and return them in a new tree"), etc. If anyone has a well-optimized data structure like this for Python, I'd love to know about it. (Not all operations fit nicely in Python's data model; for example, to get next/prev value from another value, you need a reference to a node, not the value itself.) A: If you have a fixed number of fields, use tuples instead of dictionaries. Place the field you want to sort on in first position, and just use mylist.sort() A: Others have provided some excellent advices, try them out. As a general advice, in situations like that you need to profile your code. Know exactly where most of the time is spent. Bottlenecks hide well, in places you least expect them to be. If there is a lot of number crunching involved then a JIT compiler like the (now-dead) psyco might also help. When processing takes minutes or hours 2x speed-up really counts. http://docs.python.org/library/profile.html http://www.vrplumber.com/programming/runsnakerun/ http://psyco.sourceforge.net/ A: This seems to be pretty fast. raw= [ {'id':'id1', 'hits':200, 'misses':300, 'total':400}, {'id':'id2', 'hits':300, 'misses':100, 'total':500}, {'id':'id3', 'hits':100, 'misses':400, 'total':600} ] hits= [ (r['hits'],r['id']) for r in raw ] hits.sort() misses = [ (r['misses'],r['id']) for r in raw ] misses.sort() total = [ (r['total'],r['id']) for r in raw ] total.sort() Yes, it makes three passes through the raw data. I think it's faster than pulling out the data in one pass. A: Instead of trying to keep your list ordered, maybe you can get by with a heap queue. It lets you push any item, keeping the 'smallest' one at h[0], and popping this item (and 'bubbling' the next smallest) is an O(nlogn) operation. so, just ask yourself: do i need the whole list ordered all the time? : use an ordered structure (like Zope's BTree package, as mentioned by Ealdwulf) or the whole list ordered but only after a day's work of random insertions?: use sort like you're doing, or like S.Lott's answer or just a few 'smallest' items at any moment? : use heapq A: sorted(myLists[key], key=mylists[key].get, reverse=True) should save you some time, though not a lot. A: I would look into using a different sorting algorithm. Something like a Merge Sort might work. Break the list up into smaller lists and sort them individually. Then loop. Pseudo code: list1 = [] // sorted separately list2 = [] // sorted separately // Recombine sorted lists result = [] while (list1.hasMoreElements || list2.hasMoreElements): if (! list1.hasMoreElements): result.addAll(list2) break elseif (! list2.hasMoreElements): result.AddAll(list1) break if (list1.peek < list2.peek): result.add(list1.pop) else: result.add(list2.pop) A: Glenn Maynard is correct that a sorted mapping would be appropriate here. This is one for python: http://wiki.zope.org/ZODB/guide/node6.html#SECTION000630000000000000000 A: I've done some quick profiling of both the original way and SLott's proposal. In neither case does it take 5-10 minutes per field. The actual sorting is not the problem. It looks like most of the time is spent in slinging data around and transforming it. Also, my memory usage is skyrocketing - my python is over 350 megs of ram! are you sure you're not using up all your ram and paging to disk? Even with my crappy 3 year old power saving processor laptop, I am seeing results way less than 5-10 minutes per key sorted for a million items. What I can't explain is the variability in the actual sort() calls. I know python sort is extra good at sorting partially sorted lists, so maybe his list is getting partially sorted in the transform from the raw data to the list to be sorted. Here's the results for slott's method: done creating data done transform. elapsed: 16.5160000324 sorting one key slott's way takes 1.29699993134 here's the code to get those results: starttransform = time.time() hits= [ (r['hits'],r['id']) for r in myList ] endtransform = time.time() print "done transform. elapsed: " + str(endtransform - starttransform) hits.sort() endslottsort = time.time() print "sorting one key slott's way takes " + str(endslottsort - endtransform) Now the results for the original method, or at least a close version with some instrumentation added: done creating data done transform. elapsed: 8.125 about to get stuff to be sorted done getting data. elapsed time: 37.5939998627 about to sort key hits done sorting on key <hits> elapsed time: 5.54699993134 Here's the code: for k, v in myLists.iteritems(): time1 = time.time() print "about to get stuff to be sorted " tobesorted = myLists[k].items() time2 = time.time() print "done getting data. elapsed time: " + str(time2-time1) print "about to sort key " + str(k) mysorted[k] = tobesorted.sort( key=itemgetter(1)) time3 = time.time() print "done sorting on key <" + str(k) + "> elapsed time: " + str(time3-time2)
Best way to sort 1M records in Python
I have a service that runs that takes a list of about 1,000,000 dictionaries and does the following myHashTable = {} myLists = { 'hits':{}, 'misses':{}, 'total':{} } sorted = { 'hits':[], 'misses':[], 'total':[] } for item in myList: id = item.pop('id') myHashTable[id] = item for k, v in item.iteritems(): myLists[k][id] = v So, if I had the following list of dictionaries: [ {'id':'id1', 'hits':200, 'misses':300, 'total':400}, {'id':'id2', 'hits':300, 'misses':100, 'total':500}, {'id':'id3', 'hits':100, 'misses':400, 'total':600} ] I end up with myHashTable = { 'id1': {'hits':200, 'misses':300, 'total':400}, 'id2': {'hits':300, 'misses':100, 'total':500}, 'id3': {'hits':100, 'misses':400, 'total':600} } and myLists = { 'hits': {'id1':200, 'id2':300, 'id3':100}, 'misses': {'id1':300, 'id2':100, 'id3':400}, 'total': {'id1':400, 'id2':500, 'id3':600} } I then need to sort all of the data in each of the myLists dictionaries. What I am doing currently is something like the following: def doSort(key): sorted[key] = sorted(myLists[key].items(), key=operator.itemgetter(1), reverse=True) which would yield, in the case of misses: [('id3', 400), ('id1', 300), ('id2', 200)] This works great when I have up to 100,000 records or so, but with 1,000,000 it is taking at least 5 - 10 minutes to sort each with a total of 16 (my original list of dictionaries actually has 17 fields including id which is popped) * EDIT * This service is a ThreadingTCPServer which has a method allowing a client to connect and add new data. The new data may include new records (meaning dictionaries with unique 'id's to what is already in memory) or modified records (meaning the same 'id' with different data for the other key value pairs) So, once this is running I would pass in [ {'id':'id1', 'hits':205, 'misses':305, 'total':480}, {'id':'id4', 'hits':30, 'misses':40, 'total':60}, {'id':'id5', 'hits':50, 'misses':90, 'total':20} ] I have been using dictionaries to store the data so that I don't end up with duplicates. After the dictionaries are updated with the new/modified data I resort each of them. * END EDIT * So, what is the best way for me to sort these? Is there a better method?
[ "You may find this related answer from Guido: Sorting a million 32-bit integers in 2MB of RAM using Python\n", "What you really want is an ordered container, instead of an unordered one. That would implicitly sort the results as they're inserted. The standard data structure for this is a tree.\nHowever, there doesn't seem to be one of these in Python. I can't explain that; this is a core, fundamental data type in any language. Python's dict and set are both unordered containers, which map to the basic data structure of a hash table. It should definitely have an optimized tree data structure; there are many things you can do with them that are impossible with a hash table, and they're quite tricky to implement well, so people generally don't want to be doing it themselves.\n(There's also nothing mapping to a linked list, which also should be a core data type. No, a deque is not equivalent.)\nI don't have an existing ordered container implementation to point you to (and it should probably be implemented natively, not in Python), but hopefully this will point you in the right direction.\nA good tree implementation should support iterating across a range by value (\"iterate all values from [2,100] in order\"), find next/prev value from any other node in O(1), efficient range extraction (\"delete all values in [2,100] and return them in a new tree\"), etc. If anyone has a well-optimized data structure like this for Python, I'd love to know about it. (Not all operations fit nicely in Python's data model; for example, to get next/prev value from another value, you need a reference to a node, not the value itself.)\n", "If you have a fixed number of fields, use tuples instead of dictionaries. Place the field you want to sort on in first position, and just use mylist.sort()\n", "Others have provided some excellent advices, try them out. \nAs a general advice, in situations like that you need to profile your code. Know exactly where most of the time is spent. Bottlenecks hide well, in places you least expect them to be.\nIf there is a lot of number crunching involved then a JIT compiler like the (now-dead) psyco might also help. When processing takes minutes or hours 2x speed-up really counts.\n\nhttp://docs.python.org/library/profile.html\nhttp://www.vrplumber.com/programming/runsnakerun/ \nhttp://psyco.sourceforge.net/\n\n", "This seems to be pretty fast.\nraw= [ {'id':'id1', 'hits':200, 'misses':300, 'total':400},\n {'id':'id2', 'hits':300, 'misses':100, 'total':500},\n {'id':'id3', 'hits':100, 'misses':400, 'total':600}\n]\n\nhits= [ (r['hits'],r['id']) for r in raw ]\nhits.sort()\n\nmisses = [ (r['misses'],r['id']) for r in raw ]\nmisses.sort()\n\ntotal = [ (r['total'],r['id']) for r in raw ]\ntotal.sort()\n\nYes, it makes three passes through the raw data. I think it's faster than pulling out the data in one pass.\n", "Instead of trying to keep your list ordered, maybe you can get by with a heap queue. It lets you push any item, keeping the 'smallest' one at h[0], and popping this item (and 'bubbling' the next smallest) is an O(nlogn) operation.\nso, just ask yourself: \n\ndo i need the whole list ordered all the time? : use an ordered structure (like Zope's BTree package, as mentioned by Ealdwulf)\nor the whole list ordered but only after a day's work of random insertions?: use sort like you're doing, or like S.Lott's answer\nor just a few 'smallest' items at any moment? 
: use heapq\n\n", "sorted(myLists[key], key=mylists[key].get, reverse=True)\n\nshould save you some time, though not a lot.\n", "I would look into using a different sorting algorithm. Something like a Merge Sort might work. Break the list up into smaller lists and sort them individually. Then loop.\nPseudo code:\nlist1 = [] // sorted separately\nlist2 = [] // sorted separately\n\n// Recombine sorted lists\nresult = []\nwhile (list1.hasMoreElements || list2.hasMoreElements):\n if (! list1.hasMoreElements):\n result.addAll(list2)\n break\n elseif (! list2.hasMoreElements):\n result.AddAll(list1)\n break\n\n if (list1.peek < list2.peek):\n result.add(list1.pop)\n else:\n result.add(list2.pop)\n\n", "Glenn Maynard is correct that a sorted mapping would be appropriate here. This is one for python: http://wiki.zope.org/ZODB/guide/node6.html#SECTION000630000000000000000\n", "I've done some quick profiling of both the original way and SLott's proposal. In neither case does it take 5-10 minutes per field. The actual sorting is not the problem. It looks like most of the time is spent in slinging data around and transforming it. Also, my memory usage is skyrocketing - my python is over 350 megs of ram! are you sure you're not using up all your ram and paging to disk? Even with my crappy 3 year old power saving processor laptop, I am seeing results way less than 5-10 minutes per key sorted for a million items. What I can't explain is the variability in the actual sort() calls. I know python sort is extra good at sorting partially sorted lists, so maybe his list is getting partially sorted in the transform from the raw data to the list to be sorted.\nHere's the results for slott's method:\ndone creating data\ndone transform. elapsed: 16.5160000324\nsorting one key slott's way takes 1.29699993134\n\nhere's the code to get those results:\nstarttransform = time.time()\nhits= [ (r['hits'],r['id']) for r in myList ]\nendtransform = time.time()\nprint \"done transform. elapsed: \" + str(endtransform - starttransform)\nhits.sort()\nendslottsort = time.time()\nprint \"sorting one key slott's way takes \" + str(endslottsort - endtransform)\n\nNow the results for the original method, or at least a close version with some instrumentation added:\ndone creating data\ndone transform. elapsed: 8.125\nabout to get stuff to be sorted \ndone getting data. elapsed time: 37.5939998627\nabout to sort key hits\ndone sorting on key <hits> elapsed time: 5.54699993134\n\nHere's the code:\nfor k, v in myLists.iteritems():\n time1 = time.time()\n print \"about to get stuff to be sorted \"\n tobesorted = myLists[k].items()\n time2 = time.time()\n print \"done getting data. elapsed time: \" + str(time2-time1)\n print \"about to sort key \" + str(k) \n mysorted[k] = tobesorted.sort( key=itemgetter(1))\n time3 = time.time()\n print \"done sorting on key <\" + str(k) + \"> elapsed time: \" + str(time3-time2)\n\n" ]
[ 13, 4, 1, 1, 1, 1, 0, 0, 0, 0 ]
[ "Honestly, the best way is to not use Python. If performance is a major concern for this, use a faster language.\n" ]
[ -5 ]
[ "python" ]
stackoverflow_0001180240_python.txt
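If only the top of each ranking is ever shown, the full resort after every update can be skipped; heapq.nlargest pulls the N best (id, value) pairs without sorting the whole dictionary. A sketch:

    import heapq
    from operator import itemgetter

    top_misses = heapq.nlargest(100, myLists['misses'].iteritems(),
                                key=itemgetter(1))
    # top_misses is a list of (id, value) pairs, highest value first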
Q: one liner for conditionally replacing dictionary values Is there a better way to express this using list comprehension? Or any other way of expressing this in one line? I want to replace each value in the original dictionary with a corresponding value in the col dictionary, or leave it unchanged if its not in the col dictionary. col = {'1':3.5, '6':4.7} original = {'1':3, '2':1, '3':5, '4':2, '5':3, '6':4} for entry in col.iteritems(): original[entry[0]] = entry[1] A: I believe update is what you want. update([other]) Update the dictionary with the key/value pairs from other, overwriting existing keys. Return None. Code: original.update(col[user]) A simple test: user = "user" matrix = { "user" : { "a" : "b", "c" : "d", "e" : "f", }, } col = { "user" : { "a" : "b_2", "c" : "d_2", }, } original.update(col[user]) print(original) Output {'a': 'b_2', 'c': 'd_2', 'e': 'f'}
one liner for conditionally replacing dictionary values
Is there a better way to express this using list comprehension? Or any other way of expressing this in one line? I want to replace each value in the original dictionary with a corresponding value in the col dictionary, or leave it unchanged if it's not in the col dictionary. col = {'1':3.5, '6':4.7} original = {'1':3, '2':1, '3':5, '4':2, '5':3, '6':4} for entry in col.iteritems(): original[entry[0]] = entry[1]
[ "I believe update is what you want.\n\nupdate([other])\nUpdate the dictionary with the key/value pairs from other, overwriting existing keys.\n Return None.\n\nCode:\noriginal.update(col[user])\n\nA simple test:\nuser = \"user\"\n\nmatrix = {\n \"user\" : {\n \"a\" : \"b\",\n \"c\" : \"d\",\n \"e\" : \"f\",\n },\n}\n\ncol = {\n \"user\" : {\n \"a\" : \"b_2\",\n \"c\" : \"d_2\",\n },\n}\n\noriginal.update(col[user])\n\nprint(original)\n\nOutput\n{'a': 'b_2', 'c': 'd_2', 'e': 'f'}\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "list_comprehension", "python", "refactoring" ]
stackoverflow_0001180846_dictionary_list_comprehension_python_refactoring.txt
Q: Composite pattern for GTD app This is a continuation of one of my previous questions Here are my classes. #Project class class Project: def __init__(self, name, children=[]): self.name = name self.children = children #add object def add(self, object): self.children.append(object) #get list of all actions def actions(self): a = [] for c in self.children: if isinstance(c, Action): a.append(c.name) return a #get specific action def action(self, name): for c in self.children: if isinstance(c, Action): if name == c.name: return c #get list of all projects def projects(self): p = [] for c in self.children: if isinstance(c, Project): p.append(c.name) return p #get specific project def project(self, name): for c in self.children: if isinstance(c, Project): if name == c.name: return c #Action class class Action: def __init__(self, name): self.name = name self.done = False def mark_done(self): self.done = True Here's the trouble I'm having. If I build a big project with several small projects, I want to see what the projects are or the actions for the current project, however I'm getting all of them in the tree. Here's the test code I'm using (note that I purposely chose several different ways to add projects and actions to test to make sure different ways work). life = Project("life") playguitar = Action("Play guitar") life.add(Project("Get Married")) wife = Project("Find wife") wife.add(Action("Date")) wife.add(Action("Propose")) wife.add(Action("Plan wedding")) life.project("Get Married").add(wife) life.add(Project("Have kids")) life.project("Have kids").add(Action("Bang wife")) life.project("Have kids").add(Action("Get wife pregnant")) life.project("Have kids").add(Project("Suffer through pregnancy")) life.project("Have kids").project("Suffer through pregnancy").add(Action("Drink")) life.project("Have kids").project("Suffer through pregnancy").add(playguitar) life.add(Project("Retire")) life.project("Retire").add(playguitar) life should have a few projects in it, with a few projects inside of those. The structure amounts to something like this (where indents are projects and -'s are actions) Life Get Married Find wife - Date - Propose - Plan wedding Have kids - Bang wife - Get wife pregnant Suffer through pregnancy - Drink - Play guitar Retire - Play guitar What I'm finding is that life.actions() is returning every action in the tree when it should return none. life.projects() is returning every project, even sub projects, when I only want 'Get Married', 'Have kids', and 'Retire'. What is it that I'm doing wrong? A: The problem is with your initialization of Projects: __init__(self, name, children=[]): You only get one list, which is shared by all Projects you create without passing a value for children. See here for an explanation. You want to instead make the default None, and initialize an empty list whenever the value is None. __init__(self, name, children=None): if children is None: children = []
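A minimal sketch of the pitfall the answer describes: the default list is created once, so every Project built without an explicit children argument shares it (classes trimmed to the relevant lines):

class Project:
    def __init__(self, name, children=[]):  # BUG: one shared default list
        self.name = name
        self.children = children

a = Project("a")
b = Project("b")
a.children.append("child of a")
print(b.children)  # ['child of a'] -- b sees a's child

class FixedProject:
    def __init__(self, name, children=None):
        self.name = name
        self.children = [] if children is None else children

c = FixedProject("c")
d = FixedProject("d")
c.children.append("child of c")
print(d.children)  # [] -- each instance gets its own list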
Composite pattern for GTD app
This is a continuation of one of my previous questions Here are my classes. #Project class class Project: def __init__(self, name, children=[]): self.name = name self.children = children #add object def add(self, object): self.children.append(object) #get list of all actions def actions(self): a = [] for c in self.children: if isinstance(c, Action): a.append(c.name) return a #get specific action def action(self, name): for c in self.children: if isinstance(c, Action): if name == c.name: return c #get list of all projects def projects(self): p = [] for c in self.children: if isinstance(c, Project): p.append(c.name) return p #get specific project def project(self, name): for c in self.children: if isinstance(c, Project): if name == c.name: return c #Action class class Action: def __init__(self, name): self.name = name self.done = False def mark_done(self): self.done = True Here's the trouble I'm having. If I build a big project with several small projects, I want to see what the projects are or the actions for the current project, however I'm getting all of them in the tree. Here's the test code I'm using (note that I purposely chose several different ways to add projects and actions to test to make sure different ways work). life = Project("life") playguitar = Action("Play guitar") life.add(Project("Get Married")) wife = Project("Find wife") wife.add(Action("Date")) wife.add(Action("Propose")) wife.add(Action("Plan wedding")) life.project("Get Married").add(wife) life.add(Project("Have kids")) life.project("Have kids").add(Action("Bang wife")) life.project("Have kids").add(Action("Get wife pregnant")) life.project("Have kids").add(Project("Suffer through pregnancy")) life.project("Have kids").project("Suffer through pregnancy").add(Action("Drink")) life.project("Have kids").project("Suffer through pregnancy").add(playguitar) life.add(Project("Retire")) life.project("Retire").add(playguitar) life should have a few projects in it, with a few projects inside of those. The structure amounts to something like this (where indents are projects and -'s are actions) Life Get Married Find wife - Date - Propose - Plan wedding Have kids - Bang wife - Get wife pregnant Suffer through pregnancy - Drink - Play guitar Retire - Play guitar What I'm finding is that life.actions() is returning every action in the tree when it should return none. life.projects() is returning every project, even sub projects, when I only want 'Get Married', 'Have kids', and 'Retire'. What is it that I'm doing wrong?
[ "The problem is with your initialization of Projects:\n __init__(self, name, children=[]):\n\nYou only get one list, which is shared by all Projects you create without passing a value for children. See here for an explanation. You want to instead make the default None, and initialize an empty list whenever the value is None. \n __init__(self, name, children=None):\n if children is None:\n children = []\n\n" ]
[ 5 ]
[]
[]
[ "composite", "gtd", "python", "recursion" ]
stackoverflow_0001180876_composite_gtd_python_recursion.txt
Q: Python Popen difficulties: File not found I'm trying to use python to run a program. from subprocess import Popen sa_proc = Popen(['C:\\sa\\sa.exe','--?']) Running this small snippet gives the error: WindowsError: [Error 2] The system cannot find the file specified The program exists and I have copied and pasted directly from explorer the absolute path to the exe. I have tried other things and have found that if I put the EXE in the source folder with the python script and use './sa.exe' then it works. The only thing I can think of is that I'm running the python script (and python) from a separate partition (F:). Any ideas? Thanks A: As the docs say, "On Windows: the Popen class uses CreateProcess() to execute the child program, which operates on strings. If args is a sequence, it will be converted to a string using the list2cmdline() method.". Maybe that method is messing things up, so why not try the simpler approach of: sa_proc = Popen('C:\\sa\\sa.exe --?') If this still fails, then: what's os.environ['COMSPEC'] just before you try this? What happens if you add , shell=True to Popen's arguments? Edit: it turned out to be a case of simple misspelling, as 'sa' was actually the program spelled SpamAssassin -- double s twice -- and what the OP was writing was spamassasin -- one double s but a single one the second time. A: You may not have permission to execute C:\sa\sa.exe. Have you tried running the program manually?
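A small sketch for narrowing this kind of failure down before calling Popen: confirm the path really exists, and use a raw string so backslashes cannot be mangled (the path is the asker's; the rest is standard library):

import os
from subprocess import Popen

exe = r'C:\sa\sa.exe'  # raw string: backslashes are taken literally

if not os.path.isfile(exe):
    # If this branch is hit, Error 2 is about the path itself, not Popen.
    print "no such file:", exe
else:
    sa_proc = Popen([exe, '--?'])
    sa_proc.wait()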
Python Popen difficulties: File not found
I'm trying to use python to run a program. from subprocess import Popen sa_proc = Popen(['C:\\sa\\sa.exe','--?']) Running this small snippet gives the error: WindowsError: [Error 2] The system cannot find the file specified The program exists and I have copied and pasted directly from explorer the absolute path to the exe. I have tried other things and have found that if I put the EXE in the source folder with the python script and use './sa.exe' then it works. The only thing I can think of is that I'm running the python script (and python) from a separate partition (F:). Any ideas? Thanks
[ "As the docs say, \"On Windows: the Popen class uses CreateProcess() to execute the child program, which operates on strings. If args is a sequence, it will be converted to a string using the list2cmdline() method.\". Maybe that method is messing things up, so why not try the simpler approach of:\nsa_proc = Popen('C:\\\\sa\\\\sa.exe --?')\n\nIf this still fails, then: what's os.environ['COMSPEC'] just before you try this? What happens if you add , shell=True to Popen's arguments?\nEdit: turns out apparently to be a case of simple mis-spellling, as 'sa' was actually the program spelled SpamAssassin -- double s twice -- and what the OP was writing was spamassasin -- one double s but a single one the second time.\n", "You may not have permission to execute C:\\sa\\sa.exe. Have you tried running the program manually?\n" ]
[ 8, 0 ]
[]
[]
[ "popen", "python", "subprocess" ]
stackoverflow_0001180592_popen_python_subprocess.txt
Q: How can you check if a key is currently pressed using Tkinter in Python? Is there any way to detect which keys are currently pressed using Tkinter? I don't want to have to use extra libraries if possible. I can already detect when keys are pressed, but I want to be able to check at any time what keys are pressed down at the moment. A: I think you need to keep track of events about keys getting pressed and released (maintaining your own set of "currently pressed" keys) -- I believe Tk doesn't keep track of that for you (and Tkinter really adds little on top of Tk, it's mostly a direct interface to it).
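Since the answer describes the bookkeeping without code, here is a minimal sketch of it: bind the press and release events once and keep a set of the keysyms currently held down (untested sketch; note that OS key auto-repeat can generate extra press/release pairs on some platforms):

import Tkinter as tk  # 'tkinter' on Python 3

pressed = set()  # keysyms currently held down

def on_press(event):
    pressed.add(event.keysym)

def on_release(event):
    pressed.discard(event.keysym)

root = tk.Tk()
root.bind('<KeyPress>', on_press)
root.bind('<KeyRelease>', on_release)

def poll():
    # At any moment you can ask whether a given key is down.
    if 'space' in pressed:
        print 'space is held'
    root.after(100, poll)  # check again in 100 ms

poll()
root.mainloop()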
How can you check if a key is currently pressed using Tkinter in Python?
Is there any way to detect which keys are currently pressed using Tkinter? I don't want to have to use extra libraries if possible. I can already detect when keys are pressed, but I want to be able to check at any time what keys are pressed down at the moment.
[ "I think you need to keep track of events about keys getting pressed and released (maintaining your own set of \"currently pressed\" keys) -- I believe Tk doesn't keep track of that for you (and Tkinter really adds little on top of Tk, it's mostly a direct interface to it).\n" ]
[ 4 ]
[]
[]
[ "keylistener", "python", "tkinter" ]
stackoverflow_0001181027_keylistener_python_tkinter.txt
Q: limit output from a sort method if my views code is: arttags = sorted(arttags, key=operator.attrgetter('date_added'), reverse=True) what is the argument that will limit the result to 50 tags? I'm assuming this: .... limit=50) is incorrect. more complete code follows: videoarttags = Media.objects.order_by('date_added').filter(topic__exact='art') audioarttags = Audio.objects.order_by('date_added').filter(topic__exact='art') conarttags = Concert.objects.order_by('date_added').filter(topic__exact='art') arttags = list(chain(videoarttags, audioarttags, conarttags)) arttags = sorted(arttags, key=operator.attrgetter('date_added'), reverse=True) how do I incorporate itertools.islice(sorted(...), 50)? A: what about heapq.nlargest: Return a list with the n largest elements from the dataset defined by iterable. key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in the iterable: key=str.lower. Equivalent to: sorted(iterable, key=key, reverse=True)[:n] >>> from heapq import nlargest >>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0] >>> nlargest(3, data) [9, 8, 7] A: You'll probably find that a slice works for you: arttags = sorted(arttags, key=operator.attrgetter('date_added'), reverse=True)[:50] A: The general idea of what you want is a take, I believe. From the itertools documentation: def take(n, iterable): "Return first n items of the iterable as a list" return list(islice(iterable, n)) A: I think I was pretty much barking up the wrong tree. What I was trying to accomplish was actually very simple using a template filter (slice) which I didn't know I could do. The code was as follows: {% for arttag in arttags|slice:":50" %} Yes, I feel pretty stupid, but I'm glad I got it done :-) A: You might also want to add [:50] to each of the objects.order_by.filter calls. Doing that will mean you only ever have to sort 150 items in-memory in Python instead of possibly many more.
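Tying the answers back to the code in the question, a sketch that combines them: slice each queryset in the database, then keep only the 50 newest overall with heapq.nlargest (model names are the asker's; '-date_added' assumes newest-first is wanted):

import heapq
from itertools import chain
from operator import attrgetter

# Ask the database for at most 50 of each, newest first, so at most
# 150 objects are ever held in memory.
videoarttags = Media.objects.filter(topic__exact='art').order_by('-date_added')[:50]
audioarttags = Audio.objects.filter(topic__exact='art').order_by('-date_added')[:50]
conarttags = Concert.objects.filter(topic__exact='art').order_by('-date_added')[:50]

# nlargest(50, ...) == sorted(..., reverse=True)[:50], but cheaper.
arttags = heapq.nlargest(
    50,
    chain(videoarttags, audioarttags, conarttags),
    key=attrgetter('date_added'),
)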
limit output from a sort method
if my views code is: arttags = sorted(arttags, key=operator.attrgetter('date_added'), reverse=True) what is the argument that will limit the result to 50 tags? I'm assuming this: .... limit=50) is incorrect. more complete code follows: videoarttags = Media.objects.order_by('date_added').filter(topic__exact='art') audioarttags = Audio.objects.order_by('date_added').filter(topic__exact='art') conarttags = Concert.objects.order_by('date_added').filter(topic__exact='art') arttags = list(chain(videoarttags, audioarttags, conarttags)) arttags = sorted(arttags, key=operator.attrgetter('date_added'), reverse=True) how do I incorporate itertools.islice(sorted(...), 50)?
[ "what about heapq.nlargest:\nReturn a list with the n largest elements from the dataset defined by iterable.key, if provided, specifies a function of one argument that is used to extract a comparison key from each element in the iterable: key=str.lower Equivalent to: sorted(iterable, key=key, reverse=True)[:n]\n>>> from heapq import nlargest\n>>> data = [1, 3, 5, 7, 9, 2, 4, 6, 8, 0]\n>>> nlargest(3, data)\n[9, 8, 7]\n\n", "You'll probably find that a slice works for you:\narttags = sorted(arttags, key=operator.attrgetter('date_added'), reverse=True)[:50]\n\n", "The general idea of what you want is a take, I believe. From the itertools documentation:\ndef take(n, iterable):\n \"Return first n items of the iterable as a list\"\n return list(islice(iterable, n))\n\n", "I think I was pretty much barking up the wrong tree. What I was trying to accomplish was actually very simple using a template filter (slice) which I didn't know I could do.\nThe code was as follows:\n{% for arttag in arttags|slice:\":50\" %}\n\nYes, I feel pretty stupid, but I'm glad I got it done :-) \n", "You might also want to add [:50] to each of the objects.order_by.filter calls. Doing that will mean you only ever have to sort 150 items in-memory in Python instead of possibly many more.\n" ]
[ 4, 3, 0, 0, 0 ]
[]
[]
[ "django", "python", "python_itertools" ]
stackoverflow_0001162142_django_python_python_itertools.txt
Q: emulating LiveHTTPheader in server side script or javascript? I ran into this problem when scraping sites with heavy usage of JavaScript to obfuscate its data. For example, "a href="javascript:void(0)" onClick="grabData(23)"> VIEW DETAILS This href attribute reveals no information about the actual URL. You'd have to manually look at and examine the grabData() JavaScript function to get a clue. OR The old school way is manually opening up the Live HTTP headers add-on for Firefox, and monitoring the POST parameters, which reveals the actual URL being POSTed. So I'm wondering, is there a way to capture the POST parameters in a server side script or JavaScript, as Live HTTP headers does, for the outgoing and incoming POST parameters? This would make even the most JavaScript-obfuscated web pages easily scrapable. Thanks. A: I'm not sure I understand the question but... In PHP, incoming POST parameters are stored in the $_POST array, you can display them with print_r($_POST);.
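For the Python side of the tags, a throwaway sketch equivalent to the PHP answer: a tiny local server that prints whatever is POSTed at it (only useful when you can point the request at your own endpoint; for third-party sites a proxy or the browser's network tools remains the practical route):

from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler

class EchoPost(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.getheader('content-length', 0))
        # These are the POST parameters you wanted to see.
        print 'POST %s -> %r' % (self.path, self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

# Point the page's form or XHR at this address to see exactly what it sends.
HTTPServer(('localhost', 8000), EchoPost).serve_forever()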
emulating LiveHTTPheader in server side script or javascript?
I ran into this problem when scraping sites with heavy usage of JavaScript to obfuscate its data. For example, "a href="javascript:void(0)" onClick="grabData(23)"> VIEW DETAILS This href attribute reveals no information about the actual URL. You'd have to manually look at and examine the grabData() JavaScript function to get a clue. OR The old school way is manually opening up the Live HTTP headers add-on for Firefox, and monitoring the POST parameters, which reveals the actual URL being POSTed. So I'm wondering, is there a way to capture the POST parameters in a server side script or JavaScript, as Live HTTP headers does, for the outgoing and incoming POST parameters? This would make even the most JavaScript-obfuscated web pages easily scrapable. Thanks.
[ "I'm not sure I understand the question but...\nIn PHP, incoming POST parameters are stored in the $_POST array, you can display them with print_r($_POST);.\n" ]
[ 1 ]
[]
[]
[ "jquery", "php", "python" ]
stackoverflow_0001181233_jquery_php_python.txt
Q: Rearrange equations for solver I am looking for a generic python way to manipulate text into solvable equations. For example: there may be some constants to initialize e1,e2=0.58,0.62 ma1,ma2=0.85,1.15 mw=0.8 Cpa,Cpw=1.023,4.193 dba,dbr=0.0,25.0 and a set of equations (written here for readability rather than the solver) Q=e1*ma1*Cpa*(tw1-dba) Q=ma1*Cpa*(dbs-dba) Q=mw*Cpw*(tw1-tw2) Q=e2*ma2*Cpa*(dbr-tw2) Q=ma2*Cpa*(dbr-dbo) This leaves 5 unknowns, so presumably the system can be solved. Q, dbo, dbr, tw1, tw2 Actual systems are non-linear and much more complicated. I have already solved this easy example with scipy, Delphi, Sage... so I'm not looking for the solve part. The equations are typed directly into a text editor and I want a Python program to give me an array of unknowns and an array of error functions. y = mysolver.fsolve(f, x) So, for the above example x=[Q,dbo,dbr,tw1,tw2] f=[Q-e1*ma1*Cpa*(tw1-dba), Q-ma1*Cpa*(dbs-dba), Q-mw*Cpw*(tw1-tw2), Q-e2*ma2*Cpa*(dbr-tw2), Q-ma2*Cpa*(dbr-dbo)] I just don't know how to extract the unknowns and create the error functions. I tried the compile.parse() function and it seems to give a structured breakdown. Can anyone give some ideas on the best approach. A: Actually, I've implemented exactly the same thing in python. I'm also familiar with the Eureka and the other programs you mentioned. You can see my implementation at xyzsolve.appspot.com (Sorry for the shameless plug). The implementation is in all python. I'll list the iterations the code went through: Iteration #0: Do a simple search a replace for each variable in the equation and replace the variable with its value. For example x * y would become 1.1 * 2.2 if the values of x and y are 1.1 and 2.2. After you get the transformed string, you can just use eval and put its value into the residual (or f vector, in your case). Scipy's fsolve/fmin function lets you pass additional arguments into your residual function, so make use of that. I.e. pass a dictionary that contains the index of each named variable. Your dict should contain something like {'x': 0, 'y':1} and then you can just do a search and replace for each equation. This works, but very slowly since you have to do a search-replace everytime the residual function is called. Iteration #1: Do the same as iteration #0, except replace variables with the x array element directly, so 'y' would become 'x[1]'. In fact you can do all this to generate a function string; something that looks like "def f(x): return x[0]+x[1], x[0] - x[1]". Then you can use the exec function in python to create the function to pass to fsolve/fmin. No speed hit and you can stop at this point if your equations are in the form of valid python syntax. You can't do much more with this approach if you want to support more extensive equation input format. Iteration #2: Implement a custom lexer and parser. This isn't as hard to do as it sounds. I used http://www.evanfosmark.com/2009/02/sexy-lexing-with-python/ for the lexer. I created a recursive descent parser (this isn't hard at all, 100 or so lines of code) to parse each equation. This gives you complete flexibilty with the equation format. I just keep track of the variables, constants that occur on each side of the equation in separate lists. As the parser parses the equation, it builds a equation string that looks like 'var_000 + var_001 * var_002' and so on. Finally I just replace the 'var_000' with the appropriate index from the x vector. So 'var_000' becomes 'x[0]' and so on. 
If you want you can build an AST and do many more sophisticated transformations but I stopped here. Finally, you might also want to consider the type of input equations. There are quite a few innocuous non-linear equations that will not solve with fsolve (it uses MINPACK hybrdj). You probably also need a way to input initial guesses. I'd be interested to hear if there are any other alternative ways of doing this. A: If you don't want to write a parser for your own expression language, you can indeed try to use the Python syntax. Don't use the compiler module; instead, use some kind of abstract syntax. Since 2.5, you can use the _ast module: py> import _ast py> tree = compile("e1,e2=0.58,0.62", "<string>", "exec", _ast.PyCF_ONLY_AST) py> tree <_ast.Module object at 0xb7cd5fac> py> tree.body[0] <_ast.Assign object at 0xb7cd5fcc> py> tree.body[0].targets[0] <_ast.Tuple object at 0xb7cd5fec> py> tree.body[0].targets[0].elts [<_ast.Name object at 0xb7cd5e4c>, <_ast.Name object at 0xb7cd5f6c>] py> tree.body[0].targets[0].elts[0].id 'e1' py> tree.body[0].targets[0].elts[1].id 'e2' In earlier versions, you would have to use parser.suite, which gives you a concrete-syntax tree that is more difficult to process.
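To make the unknown-extraction step concrete, a sketch using the ast module (Python 2.6+; it wraps the _ast interface shown above): walk each equation's parse tree, collect every bare name, and subtract the known constants. Equations are rewritten in residual form as in the question's f list; the constants dict is abridged from the question, with dbs assumed to be the 25.0 value:

import ast

constants = {'e1': 0.58, 'e2': 0.62, 'ma1': 0.85, 'ma2': 1.15,
             'mw': 0.8, 'Cpa': 1.023, 'Cpw': 4.193,
             'dba': 0.0, 'dbs': 25.0}  # dbs assumed equal to the question's 25.0

equations = ['Q - e1*ma1*Cpa*(tw1-dba)',
             'Q - ma1*Cpa*(dbs-dba)',
             'Q - mw*Cpw*(tw1-tw2)']

def names_in(expr):
    # Every bare identifier occurring anywhere in the expression.
    tree = ast.parse(expr, mode='eval')
    return set(n.id for n in ast.walk(tree) if isinstance(n, ast.Name))

allnames = set()
for eq in equations:
    allnames |= names_in(eq)
unknowns = sorted(allnames - set(constants))
print(unknowns)  # ['Q', 'tw1', 'tw2']

def residuals(x):
    # Error vector for fsolve: constants plus trial values for the unknowns.
    env = dict(constants)
    env.update(zip(unknowns, x))
    return [eval(eq, {'__builtins__': {}}, env) for eq in equations]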
Rearrange equations for solver
I am looking for a generic python way to manipulate text into solvable equations. For example: there may be some constants to initialize e1,e2=0.58,0.62 ma1,ma2=0.85,1.15 mw=0.8 Cpa,Cpw=1.023,4.193 dba,dbr=0.0,25.0 and a set of equations (written here for readability rather than the solver) Q=e1*ma1*Cpa*(tw1-dba) Q=ma1*Cpa*(dbs-dba) Q=mw*Cpw*(tw1-tw2) Q=e2*ma2*Cpa*(dbr-tw2) Q=ma2*Cpa*(dbr-dbo) This leaves 5 unknowns, so presumably the system can be solved. Q, dbo, dbr, tw1, tw2 Actual systems are non-linear and much more complicated. I have already solved this easy example with scipy, Delphi, Sage... so I'm not looking for the solve part. The equations are typed directly into a text editor and I want a Python program to give me an array of unknowns and an array of error functions. y = mysolver.fsolve(f, x) So, for the above example x=[Q,dbo,dbr,tw1,tw2] f=[Q-e1*ma1*Cpa*(tw1-dba), Q-ma1*Cpa*(dbs-dba), Q-mw*Cpw*(tw1-tw2), Q-e2*ma2*Cpa*(dbr-tw2), Q-ma2*Cpa*(dbr-dbo)] I just don't know how to extract the unknowns and create the error functions. I tried the compile.parse() function and it seems to give a structured breakdown. Can anyone give some ideas on the best approach.
[ "Actually, I've implemented exactly the same thing in python. I'm also familiar with the Eureka and the other programs you mentioned. You can see my implementation at xyzsolve.appspot.com (Sorry for the shameless plug). The implementation is in all python. I'll list the iterations the code went through: \nIteration #0: Do a simple search a replace for each variable in the equation and replace the variable with its value. For example x * y would become 1.1 * 2.2 if the values of x and y are 1.1 and 2.2. After you get the transformed string, you can just use eval and put its value into the residual (or f vector, in your case). Scipy's fsolve/fmin function lets you pass additional arguments into your residual function, so make use of that. I.e. pass a dictionary that contains the index of each named variable. Your dict should contain something like {'x': 0, 'y':1} and then you can just do a search and replace for each equation. This works, but very slowly since you have to do a search-replace everytime the residual function is called.\nIteration #1: Do the same as iteration #0, except replace variables with the x array element directly, so 'y' would become 'x[1]'. In fact you can do all this to generate a function string; something that looks like \"def f(x): return x[0]+x[1], x[0] - x[1]\". Then you can use the exec function in python to create the function to pass to fsolve/fmin. No speed hit and you can stop at this point if your equations are in the form of valid python syntax. You can't do much more with this approach if you want to support more extensive equation input format.\nIteration #2: Implement a custom lexer and parser. This isn't as hard to do as it sounds. I used http://www.evanfosmark.com/2009/02/sexy-lexing-with-python/ for the lexer. I created a recursive descent parser (this isn't hard at all, 100 or so lines of code) to parse each equation. This gives you complete flexibilty with the equation format. I just keep track of the variables, constants that occur on each side of the equation in separate lists. As the parser parses the equation, it builds a equation string that looks like 'var_000 + var_001 * var_002' and so on. Finally I just replace the 'var_000' with the appropriate index from the x vector. So 'var_000' becomes 'x[0]' and so on. If you want you can build an AST and do many more sophisticated transformations but I stopped here.\nFinally, you might also want to consider the type of input equations. There are quite a few innocuous non-linear equations that will not solve with fsolve (it uses MINPACK hybrdj). You probably also need a way to input initial guesses. \nI'd be interested to hear if there are any other alternative ways of doing this.\n", "If you don't want to write a parser for your own expression language, you can indeed try to use the Python syntax. Don't use the compiler module; instead, use some kind of abstract syntax. 
Since 2.5, you can use the _ast module:\npy> import _ast \npy> tree = compile(\"e1,e2=0.58,0.62\", \"<string>\", \"exec\", _ast.PyCF_ONLY_AST)\npy> tree\n<_ast.Module object at 0xb7cd5fac> \npy> tree.body[0]\n<_ast.Assign object at 0xb7cd5fcc>\npy> tree.body[0].targets[0]\n<_ast.Tuple object at 0xb7cd5fec>\npy> tree.body[0].targets[0].elts\n[<_ast.Name object at 0xb7cd5e4c>, <_ast.Name object at 0xb7cd5f6c>]\npy> tree.body[0].targets[0].elts[0].id\n'e1'\npy> tree.body[0].targets[0].elts[1].id\n'e2'\n\nIn earlier versions, you would have to use parser.suite, which gives you a concrete-syntax tree that is more difficult to process.\n" ]
[ 2, 1 ]
[]
[]
[ "equation", "python", "solver" ]
stackoverflow_0001169593_equation_python_solver.txt
Q: How do I configure Eclipse to launch a browser when Run or Debug is selected using Pydev plugin I'm learning Python and Django using the Eclipse Pydev plugin. I want the internal or external browser to launch or refresh with the URL http://127.0.0.1 when I press Run or Debug. I've seen it done with the PHP plugins but not Pydev. A: Here are the steps to set up an external launch configuration to launch IE: Select Run->External Tools->External Tools Configurations... In the left hand pane, select Program then the new icon (left-most icon above the pane). In the right hand pane, select the Main tab. Enter launch_ie in the Name: field. Enter ${system_path:explorer.exe} in the Location: field. Enter http://127.0.0.1 in the Arguments field. To run the external configuration, select Run. If you want to share the configuration you can use these optional steps: Select the Common tab Select the Shared file: option in the Save As section. Select a location to save the configuration (saving it to an otherwise empty project might be a good idea, as you can import that to another workspace) To rerun the configuration you have a few choices: Select the External Tools icon from the menu bar then click launch_ie Select Run->External Tools->launch_ie Hit Alt+R, E, 1 (assuming launch_ie is the first item in the list, otherwise pick the appropriate number) A: project properties (right click project in left pane) Go to "run/debug settings", add a new profile. Set up the path, environment, etc. you want to launch. The new configuration will show up in your build menu. You could also configure it as an "external tool"
How do I configure Eclipse to launch a browser when Run or Debug is selected using Pydev plugin
I'm learning Python and Django using the Eclipse Pydev plugin. I want the internal or external browser to launch or refresh with the URL http://127.0.0.1 when I press Run or Debug. I've seen it done with the PHP plugins but not Pydev.
[ "Here are the steps to set up an external launch configuration to launch IE:\n\nSelect Run->External Tools->External Tools Configurations...\nIn the left hand pane, select Program then the new icon (left-most icon above the pane).\nIn the right hand pane, select the Main tab.\nEnter launch_ie in the Name: field.\nEnter ${system_path:explorer.exe} in the Location: field.\nEnter http:/127.0.0.1 in the Arguments field.\nTo run the external configuration, select Run.\n\nIf you want to share the configuration you can use these optional steps:\n\nSelect the Common tab\nSelect the Shared file: option in the Save As section.\nSelect a location to save the configuration (saving it to an otherwise empty project might be a good idea, as you can import that to another workspace)\n\nTo rerun the configuration you have a few choices:\n\nSelect the External Tools icon from the menu bar \n then click launch_ie\nSelect Run->External Tools->launch ie\nHit Alt+R, E, 1 (assuming launch_ie is the first item in the list, otherwise pick the appropriate number)\n\n", "project properties (right click project in left pane)\nGo to \"run/debug settings\", add a new profile. Setup the path and environment etc... you want to launch. The new configuration will show up in your build menu. You could also configure it as an \"external tool\"\n" ]
[ 7, 1 ]
[]
[]
[ "eclipse", "eclipse_plugin", "pydev", "python" ]
stackoverflow_0000697142_eclipse_eclipse_plugin_pydev_python.txt
Q: Does anyone have example code of using scipy.stats.distributions? I am struggling to figure out how to use the scipy.distributions package and wondered if anyone could post some example code for me. It appears to do everything I need, I just can't figure out how to use it. I need to generate two distributions, one log-normal and one poisson. I know the variance and lambda for each. Links to resources would work just as well. A: I assume you mean the distributions in scipy.stats. To create a distribution, generate random variates and calculate the pdf: Python 2.5.1 (r251:54863, Feb 4 2008, 21:48:13) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from scipy.stats import poisson, lognorm >>> myShape = 5;myMu=10 >>> ln = lognorm(myShape) >>> p = poisson(myMu) >>> ln.rvs((10,)) #generate 10 RVs from ln array([ 2.09164812e+00, 3.29062874e-01, 1.22453941e-03, 3.80101527e+02, 7.67464002e-02, 2.53530952e+01, 1.41850880e+03, 8.36347923e+03, 8.69209870e+03, 1.64317413e-01]) >>> p.rvs((10,)) #generate 10 RVs from p array([ 8, 9, 7, 12, 6, 13, 11, 11, 10, 8]) >>> ln.pdf(3) #lognorm PDF at x=3 array(0.02596183475208955) Other methods (and the rest of the scipy.stats documentation) can be found at the new SciPy documentation site. A: Here's some sample code: Probability distributions in SciPy
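One parameterization detail worth spelling out, since the question mentions a known variance: scipy parameterizes lognorm by the shape s (the sigma of the underlying normal) and scale = exp(mu). A short sketch (mu and sigma below are placeholder values, not taken from the question):

import numpy as np
from scipy.stats import lognorm, poisson

mu, sigma = 1.0, 0.5                   # underlying normal (placeholders)
ln = lognorm(sigma, scale=np.exp(mu))  # frozen log-normal distribution
p = poisson(10.0)                      # frozen Poisson with lambda = 10

print(ln.mean(), ln.var())  # compare against the moments you were given
print(ln.rvs(size=5))       # five log-normal draws
print(p.rvs(size=5))        # five Poisson draws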
Does anyone have example code of using scipy.stats.distributions?
I am struggling to figure out how to use the scipy.distributions package and wondered if anyone could post some example code for me. It appears to do everything I need, I just can't figure out how to use it. I need to generate two distributions, one log-normal and one poisson. I know the variance and lambda for each. Links to resources would work just as well.
[ "I assume you mean the distributions in scipy.stats. To create a distribution, generate random variates and calculate the pdf:\nPython 2.5.1 (r251:54863, Feb 4 2008, 21:48:13) \n[GCC 4.0.1 (Apple Inc. build 5465)] on darwin\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from scipy.stats import poisson, lognorm\n>>> myShape = 5;myMu=10\n>>> ln = lognorm(myShape)\n>>> p = poisson(myMu)\n>>> ln.rvs((10,)) #generate 10 RVs from ln\narray([ 2.09164812e+00, 3.29062874e-01, 1.22453941e-03,\n 3.80101527e+02, 7.67464002e-02, 2.53530952e+01,\n 1.41850880e+03, 8.36347923e+03, 8.69209870e+03,\n 1.64317413e-01])\n>>> p.rvs((10,)) #generate 10 RVs from p\narray([ 8, 9, 7, 12, 6, 13, 11, 11, 10, 8])\n>>> ln.pdf(3) #lognorm PDF at x=3\narray(0.02596183475208955)\n\nOther methods (and the rest of the scipy.stats documentation) can be found at the new SciPy documentation site.\n", "Here's some sample code: Probability distributions in SciPy\n" ]
[ 8, 3 ]
[]
[]
[ "python", "scipy" ]
stackoverflow_0000485076_python_scipy.txt
Q: Practical point of view: Why would I want to use Python with C++? I've been seeing some examples of Python being used with c++, and I'm trying to understand why would someone want to do it. What are the benefits of calling C++ code from an external language such as Python? I'd appreciate a simple example - Boost::Python will do A: It depends on your point of view: Calling C++ code from a python application You generally want to do this when performance is an issue. Highly dynamic languages like python are typically somewhat slower then native code such as C++. "Features" of C++ such as manual memory management allows for the development of very fast libraries, which can then be called from python in order to gain performance. Another reason is due to the fact that most libraries on both windows and *nix are written in C or C++, and it's a huge advantage to have this existing code base available. Calling python code from a C++ application Complex applications sometimes require the ability to define additional abilities. Adding behaviors in a compiled application is messy, requires the original source code and is time consuming. Therefore it's often strategic to embed a scripting language such as python in order to make the application more flexible and customizable. As for an example: I think you need to clarify a bit what you're interested in if you want the sample to be of any help. The boost manual provides a simple hello world sample, if that's what you're looking for. A: Generally, you'd call C++ from python in order to use an existing library or other functionality. Often someone else has written a set of functions that make your life easier, and calling compiled C code is easier than re-writing the library in python. The other reason is for performance purposes. Often, specific functions of an otherwise completely scripted program are written in a pre-compiled language like C because they take a long time to run and can be more efficiently done in a lower-level language. A third reason is for interfacing with devices. Python doesn't natively include a lot of code for dealing with sound cards, serial ports, and so on. If your device needs a device driver, python will talk to it via pre-compiled code you include in your app. A: Here's two possibilities: Perhaps the C++ code is already written & available for use. It's likely the C++ code is faster/smaller than equivalent Python A: Because C++ provides a direct way of calling OS services, and (if used in a careful way) can produce code that is more efficient in memory and time, whereas Python is a high-level language, and is less painful to use in those situations where utter efficiency isn't a concern and where you already have libraries giving you access to the services you need. If you're a C++ user, you may wonder why this is necessary, but the expressiveness and safety of a high-level language has such a massive relative effect on your productivity, it has to be experienced to be understood or believed. I can't speak for Python specifically, but I've heard people talk in terms of "tripling" their productivity by doing most of their development in it and using C++ only where shown to be necessary by profiling, or to create extra libraries. If you're a Python user, you may not have encountered a situation where you need anything beyond the libraries already available, and you may not have a problem with the performance you get from pure Python (this is quite likely). In which case - lucky you! You can forget about all this. 
A: Here's a real-life example: I've written a DLL in C to interface with some custom hardware for work. Then for the very first stage of testing, I was writing short programs in C to verify that the different commands were working properly. The process of write, compile, run took probably 3-5 times as long as when I finally wrote a Python interface to the DLL using ctypes. Now, I can write testing scripts much more rapidly with much less regards to proper variable initialization and memory management that I would have to worry about in C. In fact, I've even been able to use unit testing libraries in Python to create much more robust tests than before. Would that have been possible in C? Absolutely, but it would have taken me much longer, and it would have been many more lines of code. Fewer lines of code in Python means (in general) that there are fewer things with my main logic that can go wrong. Moreover, since the hardware communication is almost completely IO bound, there's no need to write any supporting code in C. I may as well program in whatever is fastest to develop. So there you go, real-life example. A: Performance : From my limited experience, Python is about 10 times slower than using C. Using Psyco will dramatically improve it, but still about 5 times slower than C. BUT, calling c module from python is only a little faster than Psyco. When you have some libraries in C. For example, I am working heavily on SIP. It's a very complicated protocol stacks and there is no complete Python implementation. So my only choice is calling SIP libraries written in C. There are also this kind of cases, like video/audio decoding. A: One nice thing about using a scripting language is that you can reload new code into the application without quitting the app, then making changes, recompile, and then relaunching the app. When people talk about quicker development times, some of that refers to this capability. A downside of using a scripting languages is that their debuggers are usually not as fully featured as what you would have in C++. I haven't done any Python programming so I don't know what the features are of its debugger, if it has one at all. This answer doesn't exactly answer what you asked but I thought it was relevant. The answer is more the pro/cons of using a scripting language. Please don't flame me. :)
Practical point of view: Why would I want to use Python with C++?
I've been seeing some examples of Python being used with c++, and I'm trying to understand why would someone want to do it. What are the benefits of calling C++ code from an external language such as Python? I'd appreciate a simple example - Boost::Python will do
[ "It depends on your point of view:\nCalling C++ code from a python application\nYou generally want to do this when performance is an issue. Highly dynamic languages like python are typically somewhat slower then native code such as C++. \"Features\" of C++ such as manual memory management allows for the development of very fast libraries, which can then be called from python in order to gain performance.\nAnother reason is due to the fact that most libraries on both windows and *nix are written in C or C++, and it's a huge advantage to have this existing code base available.\nCalling python code from a C++ application\nComplex applications sometimes require the ability to define additional abilities. Adding behaviors in a compiled application is messy, requires the original source code and is time consuming. Therefore it's often strategic to embed a scripting language such as python in order to make the application more flexible and customizable.\nAs for an example: I think you need to clarify a bit what you're interested in if you want the sample to be of any help. The boost manual provides a simple hello world sample, if that's what you're looking for.\n", "Generally, you'd call C++ from python in order to use an existing library or other functionality. Often someone else has written a set of functions that make your life easier, and calling compiled C code is easier than re-writing the library in python.\nThe other reason is for performance purposes. Often, specific functions of an otherwise completely scripted program are written in a pre-compiled language like C because they take a long time to run and can be more efficiently done in a lower-level language.\nA third reason is for interfacing with devices. Python doesn't natively include a lot of code for dealing with sound cards, serial ports, and so on. If your device needs a device driver, python will talk to it via pre-compiled code you include in your app.\n", "Here's two possibilities:\n\nPerhaps the C++ code is already written & available for use. \nIt's likely the C++ code is faster/smaller than equivalent Python\n\n", "Because C++ provides a direct way of calling OS services, and (if used in a careful way) can produce code that is more efficient in memory and time, whereas Python is a high-level language, and is less painful to use in those situations where utter efficiency isn't a concern and where you already have libraries giving you access to the services you need.\nIf you're a C++ user, you may wonder why this is necessary, but the expressiveness and safety of a high-level language has such a massive relative effect on your productivity, it has to be experienced to be understood or believed.\nI can't speak for Python specifically, but I've heard people talk in terms of \"tripling\" their productivity by doing most of their development in it and using C++ only where shown to be necessary by profiling, or to create extra libraries.\nIf you're a Python user, you may not have encountered a situation where you need anything beyond the libraries already available, and you may not have a problem with the performance you get from pure Python (this is quite likely). In which case - lucky you! You can forget about all this.\n", "Here's a real-life example: I've written a DLL in C to interface with some custom hardware for work. Then for the very first stage of testing, I was writing short programs in C to verify that the different commands were working properly. 
The process of write, compile, run took probably 3-5 times as long as when I finally wrote a Python interface to the DLL using ctypes.\nNow, I can write testing scripts much more rapidly with much less regards to proper variable initialization and memory management that I would have to worry about in C. In fact, I've even been able to use unit testing libraries in Python to create much more robust tests than before. Would that have been possible in C? Absolutely, but it would have taken me much longer, and it would have been many more lines of code. \nFewer lines of code in Python means (in general) that there are fewer things with my main logic that can go wrong.\nMoreover, since the hardware communication is almost completely IO bound, there's no need to write any supporting code in C. I may as well program in whatever is fastest to develop.\nSo there you go, real-life example.\n", "\nPerformance :\n\nFrom my limited experience, Python is about 10 times slower than using C.\nUsing Psyco will dramatically improve it, but still about 5 times slower than C.\nBUT, calling c module from python is only a little faster than Psyco.\n\nWhen you have some libraries in C.\nFor example, I am working heavily on SIP. It's a very complicated protocol stacks and there is no complete Python implementation. So my only choice is calling SIP libraries written in C.\n\nThere are also this kind of cases, like video/audio decoding.\n", "One nice thing about using a scripting language is that you can reload new code into the application without quitting the app, then making changes, recompile, and then relaunching the app. When people talk about quicker development times, some of that refers to this capability.\nA downside of using a scripting languages is that their debuggers are usually not as fully featured as what you would have in C++. I haven't done any Python programming so I don't know what the features are of its debugger, if it has one at all.\nThis answer doesn't exactly answer what you asked but I thought it was relevant. The answer is more the pro/cons of using a scripting language. Please don't flame me. :)\n" ]
[ 21, 5, 3, 3, 2, 0, 0 ]
[]
[]
[ "c++", "python" ]
stackoverflow_0001181462_c++_python.txt
Q: Python: Multicore processing? I've been reading about Python's multiprocessing module. I still don't think I have a very good understanding of what it can do. Let's say I have a quadcore processor and I have a list with 1,000,000 integers and I want the sum of all the integers. I could simply do: list_sum = sum(my_list) But this only sends it to one core. Is it possible, using the multiprocessing module, to divide the array up and have each core get the sum of it's part and return the value so the total sum may be computed? Something like: core1_sum = sum(my_list[0:500000]) #goes to core 1 core2_sum = sum(my_list[500001:1000000]) #goes to core 2 all_core_sum = core1_sum + core2_sum #core 3 does final computation Any help would be appreciated. A: Yes, it's possible to do this summation over several processes, very much like doing it with multiple threads: from multiprocessing import Process, Queue def do_sum(q,l): q.put(sum(l)) def main(): my_list = range(1000000) q = Queue() p1 = Process(target=do_sum, args=(q,my_list[:500000])) p2 = Process(target=do_sum, args=(q,my_list[500000:])) p1.start() p2.start() r1 = q.get() r2 = q.get() print r1+r2 if __name__=='__main__': main() However, it is likely that doing it with multiple processes is likely slower than doing it in a single process, as copying the data forth and back is more expensive than summing them right away. A: Welcome the world of concurrent programming. What Python can (and can't) do depends on two things. What the OS can (and can't) do. Most OS's allocate processes to cores. To use 4 cores, you need to break your problem into four processes. This is easier than it sounds. Sometimes. What the underlying C libraries can (and can't) do. If the C libraries expose features of the OS AND the OS exposes features of the hardware, you're solid. To break a problem into multiple processes -- especially in GNU/Linux -- is easy. Break it into a multi-step pipeline. In the case of summing a million numbers, think of the following shell script. Assuming some hypothetical sum.py program that sums either a range of numbers or a list of numbers on stdin. ( sum.py 0 500000 & sum.py 50000 1000000 ) | sum.py This would have 3 concurrent processes. Two are doing sums of a lot of numbers, the third is summing two numbers. Since the GNU/Linux shells and the OS already handle some parts of concurrency for you, you can design simple (very, very simple) programs that read from stdin, write to stdout, and are designed to do small parts of a large job. You can try to reduce the overheads by using subprocess to build the pipeline instead of allocating the job to the shell. You may find, however, that the shell builds pipelines very, very quickly. (It was written directly in C and makes direct OS API calls for you.) 
A: Sure, for example: from multiprocessing import Process, Queue thelist = range(1000*1000) def f(q, sublist): q.put(sum(sublist)) def main(): start = 0 chunk = 500*1000 queue = Queue() NP = 0 subprocesses = [] while start < len(thelist): p = Process(target=f, args=(queue, thelist[start:start+chunk])) NP += 1 print 'delegated %s:%s to subprocess %s' % (start, start+chunk, NP) p.start() start += chunk subprocesses.append(p) total = 0 for i in range(NP): total += queue.get() print "total is", total, '=', sum(thelist) while subprocesses: subprocesses.pop().join() if __name__ == '__main__': main() results in: $ python2.6 mup.py delegated 0:500000 to subprocess 1 delegated 500000:1000000 to subprocess 2 total is 499999500000 = 499999500000 note that this granularity is too fine to be worth spawning processes for -- the overall summing task is small (which is why I can recompute the sum in main as a check;-) and too many data is being moved back and forth (in fact the subprocesses wouldn't need to get copies of the sublists they work on -- indices would suffice). So, it's a "toy example" where multiprocessing isn't really warranted. With different architectures (use a pool of subprocesses that receive multiple tasks to perform from a queue, minimize data movement back and forth, etc, etc) and on less granular tasks you could actually get benefits in terms of performance, however.
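Following the "pool of subprocesses" remark at the end of that answer, the same sum via multiprocessing.Pool, which does the chunk dispatching and joining for you (a sketch; the chunk size is arbitrary):

from multiprocessing import Pool

def main():
    my_list = range(1000000)
    step = 250000
    chunks = [my_list[i:i+step] for i in range(0, len(my_list), step)]
    pool = Pool(4)  # one worker per core
    # Each worker sums one chunk; the final reduction adds just 4 numbers.
    print sum(pool.map(sum, chunks))
    pool.close()
    pool.join()

if __name__ == '__main__':
    main()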
Python: Multicore processing?
I've been reading about Python's multiprocessing module. I still don't think I have a very good understanding of what it can do. Let's say I have a quadcore processor and I have a list with 1,000,000 integers and I want the sum of all the integers. I could simply do: list_sum = sum(my_list) But this only sends it to one core. Is it possible, using the multiprocessing module, to divide the array up and have each core get the sum of its part and return the value so the total sum may be computed? Something like: core1_sum = sum(my_list[0:500000]) #goes to core 1 core2_sum = sum(my_list[500000:1000000]) #goes to core 2 all_core_sum = core1_sum + core2_sum #core 3 does final computation Any help would be appreciated.
[ "Yes, it's possible to do this summation over several processes, very much like doing it with multiple threads:\nfrom multiprocessing import Process, Queue\n\ndef do_sum(q,l):\n q.put(sum(l))\n\ndef main():\n my_list = range(1000000)\n\n q = Queue()\n\n p1 = Process(target=do_sum, args=(q,my_list[:500000]))\n p2 = Process(target=do_sum, args=(q,my_list[500000:]))\n p1.start()\n p2.start()\n r1 = q.get()\n r2 = q.get()\n print r1+r2\n\nif __name__=='__main__':\n main()\n\nHowever, it is likely that doing it with multiple processes is likely slower than doing it in a single process, as copying the data forth and back is more expensive than summing them right away.\n", "Welcome the world of concurrent programming.\nWhat Python can (and can't) do depends on two things.\n\nWhat the OS can (and can't) do. Most OS's allocate processes to cores. To use 4 cores, you need to break your problem into four processes. This is easier than it sounds. Sometimes.\nWhat the underlying C libraries can (and can't) do. If the C libraries expose features of the OS AND the OS exposes features of the hardware, you're solid. \n\nTo break a problem into multiple processes -- especially in GNU/Linux -- is easy. Break it into a multi-step pipeline.\nIn the case of summing a million numbers, think of the following shell script. Assuming some hypothetical sum.py program that sums either a range of numbers or a list of numbers on stdin.\n( sum.py 0 500000 & sum.py 50000 1000000 ) | sum.py\nThis would have 3 concurrent processes. Two are doing sums of a lot of numbers, the third is summing two numbers. \nSince the GNU/Linux shells and the OS already handle some parts of concurrency for you, you can design simple (very, very simple) programs that read from stdin, write to stdout, and are designed to do small parts of a large job.\nYou can try to reduce the overheads by using subprocess to build the pipeline instead of allocating the job to the shell. You may find, however, that the shell builds pipelines very, very quickly. (It was written directly in C and makes direct OS API calls for you.)\n", "Sure, for example:\nfrom multiprocessing import Process, Queue\n\nthelist = range(1000*1000)\n\ndef f(q, sublist):\n q.put(sum(sublist))\n\ndef main():\n start = 0\n chunk = 500*1000\n queue = Queue()\n NP = 0\n subprocesses = []\n while start < len(thelist):\n p = Process(target=f, args=(queue, thelist[start:start+chunk]))\n NP += 1\n print 'delegated %s:%s to subprocess %s' % (start, start+chunk, NP)\n p.start()\n start += chunk\n subprocesses.append(p)\n total = 0\n for i in range(NP):\n total += queue.get()\n print \"total is\", total, '=', sum(thelist)\n while subprocesses:\n subprocesses.pop().join()\n\nif __name__ == '__main__':\n main()\n\nresults in:\n$ python2.6 mup.py \ndelegated 0:500000 to subprocess 1\ndelegated 500000:1000000 to subprocess 2\ntotal is 499999500000 = 499999500000\n\nnote that this granularity is too fine to be worth spawning processes for -- the overall summing task is small (which is why I can recompute the sum in main as a check;-) and too many data is being moved back and forth (in fact the subprocesses wouldn't need to get copies of the sublists they work on -- indices would suffice). So, it's a \"toy example\" where multiprocessing isn't really warranted. 
With different architectures (use a pool of subprocesses that receive multiple tasks to perform from a queue, minimize data movement back and forth, etc, etc) and on less granular tasks you could actually get benefits in terms of performance, however.\n" ]
[ 37, 22, 8 ]
[]
[]
[ "multicore", "multiprocessing", "python" ]
stackoverflow_0001182315_multicore_multiprocessing_python.txt
Q: cx_Oracle MemoryError when reading lob When trying to read data from a lob field using cx_Oracle I'm receiving "exceptions.MemoryError". This code has been working; this one lob field seems to be too big. Example: xml_cursor = ora_connection.cursor() xml_cursor.arraysize = 2000 try: xml_cursor.execute("select xml_data from xmlTable where id = 1") for row_data in xml_cursor.fetchall(): str_xml = str(row_data[0]) #this throws "exceptions.MemoryError" A: Yep, if Python is giving MemoryError it means that just that one field of just that one row takes more memory than you have (quite possible with a LOB of course). You'll have to slice it up and get it in chunks (with select dbms_lob.substr(xml_data, ... repeatedly) and feed it to an incremental XML parser (or write it out to a file, or whatever it is that you're trying to do with that multi-GB LOB). DBMS_LOB is a well-documented Oracle-supplied package, and you can find its docs in many places, e.g. here.
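A sketch of the chunked dbms_lob.substr approach the answer recommends (names reuse the question's cursor and table; 4000 is assumed as the chunk size because that is the classic varchar2 limit when calling dbms_lob.substr from plain SQL):

chunk = 4000   # max bytes per piece when substr is called from SQL
offset = 1     # dbms_lob offsets are 1-based
pieces = []
xml_cursor = ora_connection.cursor()
while True:
    xml_cursor.execute(
        "select dbms_lob.substr(xml_data, :amount, :offset) "
        "from xmlTable where id = 1",
        amount=chunk, offset=offset)
    piece = xml_cursor.fetchone()[0]
    if not piece:
        break
    pieces.append(piece)
    offset += chunk
str_xml = ''.join(pieces)  # or feed each piece to an incremental parser instead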
cx_Oracle MemoryError when reading lob
When trying to read data from a lob field using cx_Oracle I'm receiving "exceptions.MemoryError". This code has been working; this one lob field seems to be too big. Example: xml_cursor = ora_connection.cursor() xml_cursor.arraysize = 2000 try: xml_cursor.execute("select xml_data from xmlTable where id = 1") for row_data in xml_cursor.fetchall(): str_xml = str(row_data[0]) #this throws "exceptions.MemoryError"
[ "Yep, if Python is giving MemoryError it means that just that one field of that just one row takes more memory than you have (quite possible with a LOB of course). You'll have to slice it up and get it in chunks (with select dbms_lob.substr(xml_data, ... repeatedly) and feed it to an incremental XML parser (or write it out to a file, or whatever is it that you're trying to do with that multi-GB LOB). DBMS_LOB is a well-documented Oracle-supplied package, and you can find its docs in many places, e.g. here.\n" ]
[ 5 ]
[]
[]
[ "cx_oracle", "python" ]
stackoverflow_0001182146_cx_oracle_python.txt
Q: Static vs instance methods of str in Python So, I have learnt that strings have a center method. >>> 'a'.center(3) ' a ' Then I have noticed that I can do the same thing using the 'str' object which is a type, since >>> type(str) <type 'type'> Using this 'type' object I could access the string methods like they were static functions. >>> str.center('a',5) ' a ' Alas! This violates the zen of python. There should be one-- and preferably only one --obvious way to do it. Even the types of these two methods are different. >>> type(str.center) <type 'method_descriptor'> >>> type('Ni!'.center) <type 'builtin_function_or_method'> Now, Is this an example of how classes in python should be designed? Why are the types different? What is a method_descriptor and why should I bother? Thanks for the answers! A: That's simply how classes in Python work: class C: def method(self, arg): print "In C.method, with", arg o = C() o.method(1) C.method(o, 1) # Prints: # In C.method, with 1 # In C.method, with 1 When you say o.method(1) you can think of it as a shorthand for C.method(o, 1). A method_descriptor is part of the machinery that makes that work. A: There should be one-- and preferably only one --obvious way to do it. Philosophically speaking, there is only one obvious way to do it: 'a'.center(3). The fact that there is an unobvious way of calling any method (i.e. the well-explained-by-previous-commentors o.method(x) and Type.method(o, x)) which is useful in many contexts is perfectly in line with the zen of python. Your homework assignment is to read Guido's Why the Explicit Self Has to Stay. A: To expand on RichieHindle's answer: In Python, all methods on a class idiomatically take a "self" parameter. For example: def method(self, arg): pass That "self" argument tells Python what instance of the class the method is being called on. When you call a method on a class instance, this is typically passed implicitly for you: o.method(1) However, you also have the option of using the class Object, and explicitly passing in the class instance: C.method(o, 1) To to use your string example, str.center is a method on the str object: "hi".center(5) is equivalent to: str.center("hi", 5) You are passing in the str instance "hi" to the object, explicitly doing what's normally implicit. A: 'a'.center(3) == str.center('a',3) There is only one way to do it. A: Method descriptor is a normal class with __get__, __set__ and __del__ methods. When, e.g., __get__ is called, it is passed 2 or 3 arguments: self, which is the descriptor class itself, inst, which is the caller objet to which the "described" method should be bound, cls, which can be None. To illustrate method_descriptor machinery, let me give this example: class Descriptor(object): def __init__(self, m): self._meth=m def __get__(self, inst, cls=None): if cls is None: cls=type(inst) delattr(cls,self._meth.func_name) def _inst_meth(*a): return self._meth(inst,*a) return _inst_meth def instanceonlymethod(f): return Descriptor(f) class Test(object): def meth_1(self,*a): return '-'.join(str(i) for i in a) @instanceonlymethod def meth_2(self,*a): return '-'.join(str(i) for i in a) t=Test() print t.meth_1(2,3,4,5) #returns: 2-3-4-5 print Test.meth_1(t,1,2,3,4) #returns: 1-2-3-4 print t.meth_2(2,3,4,5) #returns: 2-3-4-5 try: print Test.meth_2(t,1,2,3,4) except Exception, why: #for 2.6, see changes print why #returns: type object 'Test' has no attribute 'meth_2' Now, when you call Test.meth_2(t, 1,2,3,4), it won't work.
Static vs instance methods of str in Python
So, I have learnt that strings have a center method. >>> 'a'.center(3) ' a ' Then I have noticed that I can do the same thing using the 'str' object which is a type, since >>> type(str) <type 'type'> Using this 'type' object I could access the string methods like they were static functions. >>> str.center('a',5) ' a ' Alas! This violates the zen of python. There should be one-- and preferably only one --obvious way to do it. Even the types of these two methods are different. >>> type(str.center) <type 'method_descriptor'> >>> type('Ni!'.center) <type 'builtin_function_or_method'> Now, Is this an example of how classes in python should be designed? Why are the types different? What is a method_descriptor and why should I bother? Thanks for the answers!
[ "That's simply how classes in Python work:\nclass C:\n def method(self, arg):\n print \"In C.method, with\", arg\n\no = C()\no.method(1)\nC.method(o, 1)\n# Prints:\n# In C.method, with 1\n# In C.method, with 1\n\nWhen you say o.method(1) you can think of it as a shorthand for C.method(o, 1). A method_descriptor is part of the machinery that makes that work.\n", "\nThere should be one-- and preferably only one --obvious way to do it.\n\nPhilosophically speaking, there is only one obvious way to do it: 'a'.center(3). The fact that there is an unobvious way of calling any method (i.e. the well-explained-by-previous-commentors o.method(x) and Type.method(o, x)) which is useful in many contexts is perfectly in line with the zen of python.\nYour homework assignment is to read Guido's Why the Explicit Self Has to Stay.\n", "To expand on RichieHindle's answer:\nIn Python, all methods on a class idiomatically take a \"self\" parameter. For example:\ndef method(self, arg): pass\n\nThat \"self\" argument tells Python what instance of the class the method is being called on. When you call a method on a class instance, this is typically passed implicitly for you:\no.method(1)\n\nHowever, you also have the option of using the class Object, and explicitly passing in the class instance:\nC.method(o, 1)\n\nTo to use your string example, str.center is a method on the str object:\n\"hi\".center(5)\n\nis equivalent to:\nstr.center(\"hi\", 5)\n\nYou are passing in the str instance \"hi\" to the object, explicitly doing what's normally implicit.\n", "'a'.center(3) == str.center('a',3)\n\nThere is only one way to do it.\n", "Method descriptor is a normal class with __get__, __set__ and __del__ methods.\nWhen, e.g., __get__ is called, it is passed 2 or 3 arguments:\n\nself, which is the descriptor class itself,\ninst, which is the caller objet to which the \"described\" method should be bound,\ncls, which can be None.\n\nTo illustrate method_descriptor machinery, let me give this example:\nclass Descriptor(object):\n def __init__(self, m):\n self._meth=m\n\n def __get__(self, inst, cls=None):\n if cls is None: cls=type(inst)\n delattr(cls,self._meth.func_name)\n def _inst_meth(*a):\n return self._meth(inst,*a)\n return _inst_meth\n\ndef instanceonlymethod(f):\n return Descriptor(f)\n\nclass Test(object):\n def meth_1(self,*a):\n return '-'.join(str(i) for i in a)\n\n @instanceonlymethod\n def meth_2(self,*a):\n return '-'.join(str(i) for i in a)\n\nt=Test()\nprint t.meth_1(2,3,4,5) #returns: 2-3-4-5\nprint Test.meth_1(t,1,2,3,4) #returns: 1-2-3-4\nprint t.meth_2(2,3,4,5) #returns: 2-3-4-5\ntry:\n print Test.meth_2(t,1,2,3,4)\nexcept Exception, why: #for 2.6, see changes\n print why #returns: type object 'Test' has no attribute 'meth_2'\n\nNow, when you call Test.meth_2(t, 1,2,3,4), it won't work.\n" ]
[ 19, 9, 5, 2, 1 ]
[]
[]
[ "python", "string" ]
stackoverflow_0001180303_python_string.txt
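A minimal sketch tying the answers above together (Python 2, matching the thread): the implicit binding in an instance call, the explicit class-level call, and the descriptor machinery that performs the binding.

class Greeter(object):
    def greet(self, name):
        # 'self' is the instance the method was bound to
        return 'hello, ' + name

g = Greeter()
print g.greet('world')             # implicit binding: prints 'hello, world'
print Greeter.greet(g, 'world')    # same call, instance passed explicitly

# The binding itself happens through the descriptor protocol: the plain
# function stored in the class dict has a __get__ method that returns a
# bound method when looked up through an instance.
bound = Greeter.__dict__['greet'].__get__(g, Greeter)
print bound('world')               # prints 'hello, world' again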
Q: Python dictionary/list help This is a Python application that's supposed to get all the followers from one table and get their latest updates from another table - all happening in the dashboard. dashboard.html: http://bizteen.pastebin.com/m65c4ae2d the dashboard function in views.py: http://bizteen.pastebin.com/m39798bd5 result: http://bizteen.pastebin.com/mc12d958 NOTE: When you run it, the first div is OK because that's the div from the latest user status, so ignore the 1st div in the result. But as you can see, all the rest are blank, and I basically get 0 errors. Can you please help me out here? I'd really appreciate it! Thanks! A: There's far too much code there to try and work out what's going on, and your explanation is not particularly clear. However, one obvious problem is that you've got a lot of blank except clauses, which is almost always a bad idea as it masks any problems that might be happening outside of what you already expected. Always, always use except with one or more actual exception classes - except Object.DoesNotExist for example. Secondly, you need to try and debug this by working out what the values are at each point. The simplest way is to put print statements after every assignment. The values should show up in the console. This will help you track down exactly where your logic is going wrong.
Python dictionary/list help
This is a Python application that's supposed to get all the followers from one table and get their latest updates from another table - all happening in the dashboard. dashboard.html: http://bizteen.pastebin.com/m65c4ae2d the dashboard function in views.py: http://bizteen.pastebin.com/m39798bd5 result: http://bizteen.pastebin.com/mc12d958 NOTE: When you run it, the first div is OK because that's the div from the latest user status, so ignore the 1st div in the result. But as you can see, all the rest are blank, and I basically get 0 errors. Can you please help me out here? I'd really appreciate it! Thanks!
[ "There's far too much code there to try and work out what's going on, and your explanation is not particularly clear.\nHowever, one obvious problem is that you've got a lot of blank except clauses, which is almost always a bad idea as it masks any problems that might be happening outside of what you already expected. Always, always use except with one or more actual exception classes - except Object.DoesNotExist for example.\nSecondly, you need to try and debug this by working out what the values are at each point. The simplest way is to put print statements after every assignment. The values should show up in the console. This will help you track down exactly where your logic is going wrong.\n" ]
[ 1 ]
[]
[]
[ "django", "list", "python" ]
stackoverflow_0001183031_django_list_python.txt
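To make the bare-except warning in the answer concrete, here is a hedged before/after sketch; Status and its fields are hypothetical names, since the actual models live only in the pastebin links above.

# Bad: swallows every error, including typos and real bugs
try:
    latest = Status.objects.filter(user=follower).latest('date_time')
except:
    latest = None

# Better: catch only the failure you actually expect
try:
    latest = Status.objects.filter(user=follower).latest('date_time')
except Status.DoesNotExist:
    latest = None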
Q: Why isn't Django returning a datetime field from the database? For my first Django app, I'm trying to write a simple quote collection site (think bash.org), with really simple functionality, just to get my feet wet. I'm using sqlite as my database, since it's the easiest to setup. Here's my only model right now: class Quote(models.Model): text = models.TextField(); upvotes = models.IntegerField(default=0) downvotes = models.IntegerField(default=0) active = models.BooleanField(default=False) date_published = models.DateTimeField(auto_now_add=True) And a really simple detail template, just to dump the information: Quote: {{ quote.text }}<br> Upvotes: {{ quote.upvotes }}<br> Downvotes: {{ quote.downvotes }}<br> Published: {{ qoute.date_published|date:"F j, Y, g:i a" }} When I go to the detail page, everything for the given object is outputted properly, except for the datetime (blank). However, I've checked the database and verified that there is a datetime stored in that object's column, and it shows up fine admin area. Also, when I run the shell from manage.py, here's what I get: >>> q = Quote.objects.all()[0] >>> q.date_published datetime.datetime(2009, 7, 24, 23, 1, 7, 858688) Also, I'm using the generic view django.views.generic.list_detail.object_detail to handle the request, but I also tried to use the view below and got the same result. def detail(Request, id): q = get_object_or_404(Quote, pk=id) return render_to_response('quotable/quote_detail.html', {'quote': q}) Am I'm doing something wrong in my attempt to display the date, or is something else going on here? Thanks. A: As Adam Bernier mentioned, you're misspelling quote A: I'm not sure what you're doing with that date: filter -- what happens if you replace it with something simple, such as date:"D d M Y? A: I believe that django.views.generic.list_detail.object_detail uses a variable named object_id, not id. urlpatterns = patterns('', (r'^$', 'django.views.generic.list_detail.object_list', info_dict), (r'^(?P<object_id>\d+)/$', 'django.views.generic.list_detail.object_detail', info_dict), url(r'^(?P<object_id>\d+)/results/$', 'django.views.generic.list_detail.object_detail', dict(info_dict, template_name='polls/results.html'), 'poll_results'), (r'^(?P<poll_id>\d+)/vote/$', 'mysite.polls.views.vote'), ) When you change to using the detail template, your url pattern will be wrong.
Why isn't Django returning a datetime field from the database?
For my first Django app, I'm trying to write a simple quote collection site (think bash.org), with really simple functionality, just to get my feet wet. I'm using sqlite as my database, since it's the easiest to setup. Here's my only model right now: class Quote(models.Model): text = models.TextField(); upvotes = models.IntegerField(default=0) downvotes = models.IntegerField(default=0) active = models.BooleanField(default=False) date_published = models.DateTimeField(auto_now_add=True) And a really simple detail template, just to dump the information: Quote: {{ quote.text }}<br> Upvotes: {{ quote.upvotes }}<br> Downvotes: {{ quote.downvotes }}<br> Published: {{ qoute.date_published|date:"F j, Y, g:i a" }} When I go to the detail page, everything for the given object is outputted properly, except for the datetime (blank). However, I've checked the database and verified that there is a datetime stored in that object's column, and it shows up fine admin area. Also, when I run the shell from manage.py, here's what I get: >>> q = Quote.objects.all()[0] >>> q.date_published datetime.datetime(2009, 7, 24, 23, 1, 7, 858688) Also, I'm using the generic view django.views.generic.list_detail.object_detail to handle the request, but I also tried to use the view below and got the same result. def detail(Request, id): q = get_object_or_404(Quote, pk=id) return render_to_response('quotable/quote_detail.html', {'quote': q}) Am I'm doing something wrong in my attempt to display the date, or is something else going on here? Thanks.
[ "As Adam Bernier mentioned, you're misspelling quote\n", "I'm not sure what you're doing with that date: filter -- what happens if you replace it with something simple, such as date:\"D d M Y?\n", "I believe that django.views.generic.list_detail.object_detail uses a variable named object_id, not id.\n urlpatterns = patterns('',\n (r'^$', 'django.views.generic.list_detail.object_list', info_dict),\n (r'^(?P<object_id>\\d+)/$', 'django.views.generic.list_detail.object_detail', info_dict),\n url(r'^(?P<object_id>\\d+)/results/$', 'django.views.generic.list_detail.object_detail', dict(info_dict, template_name='polls/results.html'), 'poll_results'),\n (r'^(?P<poll_id>\\d+)/vote/$', 'mysite.polls.views.vote'),\n\n)\n\nWhen you change to using the detail template, your url pattern will be wrong.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "datetime", "django", "python", "sqlite" ]
stackoverflow_0001181145_datetime_django_python_sqlite.txt
Q: In Python 2.4, how can I strip out characters after ';'? Let's say I'm parsing a file, which uses ; as the comment character. I don't want to parse comments. So if I a line looks like this: example.com. 600 IN MX 8 s1b9.example.net ; hello! Is there an easier/more-elegant way to strip chars out other than this: rtr = '' for line in file: trig = False for char in line: if not trig and char != ';': rtr += char else: trig = True if rtr[max(rtr)] != '\n': rtr += '\n' A: I'd recommend saying line.split(";")[0] which will give you a string of all characters up to but not including the first ";" character. If no ";" character is present, then it will give you the entire line. A: just do a split on the line by comment then get the first element eg line.split(";")[0] A: For Python 2.5 or greater, I would use the partition method: rtr = line.partition(';')[0].rstrip() + '\n' A: So you'll want to split the line on the first semicolon, take everything before it, strip off any lingering whitespace, and append a newline character. rtr = line.split(";", 1)[0].rstrip() + '\n' Links to Documentation: split rstrip A: file = open(r'c:\temp\test.txt', 'r') for line in file: print line.split(";")[0].strip() A: Reading, splitting, stripping, and joining lines with newline all in one line of python: rtr = '\n'.join(line.split(';')[0].strip() for line in open(r'c:\temp\test.txt', 'r')) A: Here is another way : In [6]: line = "foo;bar" In [7]: line[:line.find(";")] + "\n" Out[7]: 'foo\n'
In Python 2.4, how can I strip out characters after ';'?
Let's say I'm parsing a file, which uses ; as the comment character. I don't want to parse comments. So if a line looks like this: example.com. 600 IN MX 8 s1b9.example.net ; hello! Is there an easier/more-elegant way to strip chars out other than this: rtr = '' for line in file: trig = False for char in line: if not trig and char != ';': rtr += char else: trig = True if rtr[max(rtr)] != '\n': rtr += '\n'
[ "I'd recommend saying\nline.split(\";\")[0]\n\nwhich will give you a string of all characters up to but not including the first \";\" character. If no \";\" character is present, then it will give you the entire line.\n", "just do a split on the line by comment then get the first element\neg\nline.split(\";\")[0]\n\n", "For Python 2.5 or greater, I would use the partition method:\nrtr = line.partition(';')[0].rstrip() + '\\n'\n\n", "So you'll want to split the line on the first semicolon, take everything before it, strip off any lingering whitespace, and append a newline character.\nrtr = line.split(\";\", 1)[0].rstrip() + '\\n'\n\nLinks to Documentation:\n\nsplit\nrstrip\n\n", "file = open(r'c:\\temp\\test.txt', 'r')\nfor line in file: print\n line.split(\";\")[0].strip()\n\n", "Reading, splitting, stripping, and joining lines with newline all in one line of python:\nrtr = '\\n'.join(line.split(';')[0].strip() for line in open(r'c:\\temp\\test.txt', 'r'))\n\n", "Here is another way :\n\nIn [6]: line = \"foo;bar\"\nIn [7]: line[:line.find(\";\")] + \"\\n\"\nOut[7]: 'foo\\n'\n\n" ]
[ 134, 19, 4, 4, 3, 1, 1 ]
[ "I have not tested this with python but I use similar code else where.\nimport re\ncontent = open(r'c:\\temp\\test.txt', 'r').read()\ncontent = re.sub(\";.+\", \"\\n\")\n\n" ]
[ -3 ]
[ "python", "python_2.4", "string" ]
stackoverflow_0001178335_python_python_2.4_string.txt
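Folding the top answer back into the original loop gives a version that runs on Python 2.4 (str.partition only arrives in 2.5); the filename is a hypothetical stand-in.

rtr = ''
for line in open('zonefile.txt'):
    # keep everything before the first ';', drop trailing whitespace
    rtr += line.split(';', 1)[0].rstrip() + '\n'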
Q: Interactive python Possible Duplicate: How to save a Python interactive session? Can i save everything I type into a python session when "brain storming"? For instance, not just default variables but of course even overriding the shell. I of course mean by invoking the actual python executable. I seriously hope this is not a stupid question. I need rep of course too, so this probes me a bit. A: iPython (as suggested in another answer) is indeed a good suggestion, but if you prefer the good old Python interactive interpreter it's not too hard to do it there either. Set your environment variable PYTHONSTARTUP to point to a file that contains, for example: import atexit import readline try: readline.read_history_file('.PythonHistory') except OSError: pass atexit.register(lambda: readline.write_history_file('.PythonHistory')) this can be tweaked as you wish (e.g. to load and save the same file no matter what directory you're starting from) but I kind of like this simple version as it makes it very easy to have different "sessions" remembered in different working directories. A: Not sure if you can do this with the Python shell. But it's possible with IPython which gives you a lot more: http://ipython.scipy.org/moin/Cookbook/SavingCurrentSession A: Others (ars, Alex Martelli) have given direct answers to the question. For myself, I've found a more effective strategy is to write all of the commands into a text editors and either execute saved scripts and/or copy-and-paste into python or ipython. I find that I can keep myself more organized that way. A: There is as well the Bpython interpreter : http://www.bpython-interpreter.org/ This is the list of features which include the save code and even send the code to a pastebin service. In-line syntax highlighting. Readline-like autocomplete with suggestions displayed as you type. Expected parameter list for any Python function. "Rewind" function to pop the last line of code from memory and re-evaluate. Send the code you've entered off to a pastebin. Save the code you've entered to a file. Auto-indentation.
Interactive python
Possible Duplicate: How to save a Python interactive session? Can I save everything I type into a Python session when "brainstorming"? For instance, not just default variables but of course even overriding the shell. I of course mean by invoking the actual Python executable. I seriously hope this is not a stupid question. I need rep of course too, so this probes me a bit.
[ "iPython (as suggested in another answer) is indeed a good suggestion, but if you prefer the good old Python interactive interpreter it's not too hard to do it there either. Set your environment variable PYTHONSTARTUP to point to a file that contains, for example:\nimport atexit\nimport readline\ntry:\n readline.read_history_file('.PythonHistory')\nexcept OSError:\n pass\natexit.register(lambda: readline.write_history_file('.PythonHistory'))\n\nthis can be tweaked as you wish (e.g. to load and save the same file no matter what directory you're starting from) but I kind of like this simple version as it makes it very easy to have different \"sessions\" remembered in different working directories.\n", "Not sure if you can do this with the Python shell. But it's possible with IPython which gives you a lot more:\n\nhttp://ipython.scipy.org/moin/Cookbook/SavingCurrentSession\n\n", "Others (ars, Alex Martelli) have given direct answers to the question. For myself, I've found a more effective strategy is to write all of the commands into a text editors and either execute saved scripts and/or copy-and-paste into python or ipython. I find that I can keep myself more organized that way.\n", "There is as well the Bpython interpreter :\nhttp://www.bpython-interpreter.org/\nThis is the list of features which include the save code and even send the code\nto a pastebin service.\n\nIn-line syntax highlighting.\nReadline-like autocomplete with suggestions displayed as you type.\nExpected parameter list for any Python function.\n\"Rewind\" function to pop the last line of code from memory and re-evaluate.\nSend the code you've entered off to a pastebin.\nSave the code you've entered to a file.\nAuto-indentation.\n\n" ]
[ 9, 3, 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001180802_python.txt
Q: To SHA512-hash a password in MySQL database by Python This question is based on the answer. I would like to know how you can hash your password by SHA1 and then remove the clear-text password in a MySQL database by Python. How can you hash your password in a MySQL database by Python? A: As the documentation says you should use hashlib library not the sha since python 2.5. It is pretty easy to do make a hash. hexhash = hashlib.sha512("some text").hexdigest() This hex number will be easy to store in a database. A: If you're storing passwords in a database, a recommended article to read is Jeff's You're Probably Storing Passwords Incorrectly. This article describes the use of salt and some of the things about storing passwords that are deceptively easy to get wrong. A: http://docs.python.org/library/sha.html The python documentation explains this a lot better than I can. A: You don't remove the clear-text password when you hash the password. What you do is accept an input from the user, hash the input, and compare the hash of the input to the hash stored in the database. You should never store or send the plain-text password that the user has. That said, you can use the sha library as scrager said (pre-Python 2.5) and the hashlib library as David Raznick said in newer versions of Python.
To SHA512-hash a password in MySQL database by Python
This question is based on the answer. I would like to know how you can hash your password by SHA1 and then remove the clear-text password in a MySQL database by Python. How can you hash your password in a MySQL database by Python?
[ "As the documentation says you should use hashlib library not the sha since python 2.5.\nIt is pretty easy to do make a hash.\nhexhash = hashlib.sha512(\"some text\").hexdigest()\n\nThis hex number will be easy to store in a database.\n", "If you're storing passwords in a database, a recommended article to read is Jeff's You're Probably Storing Passwords Incorrectly. This article describes the use of salt and some of the things about storing passwords that are deceptively easy to get wrong.\n", "http://docs.python.org/library/sha.html\nThe python documentation explains this a lot better than I can.\n", "You don't remove the clear-text password when you hash the password. What you do is accept an input from the user, hash the input, and compare the hash of the input to the hash stored in the database. You should never store or send the plain-text password that the user has.\nThat said, you can use the sha library as scrager said (pre-Python 2.5) and the hashlib library as David Raznick said in newer versions of Python.\n" ]
[ 12, 7, 4, 1 ]
[]
[]
[ "database", "hash", "mysql", "python" ]
stackoverflow_0001183161_database_hash_mysql_python.txt
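A small sketch combining the hashlib and salting advice above (Python 2; the two-column storage scheme and the column names are assumptions, not something from the question).

import hashlib
import os

def hash_password(password, salt=None):
    # a random per-user salt defeats precomputed rainbow-table attacks
    if salt is None:
        salt = os.urandom(16).encode('hex')
    return salt, hashlib.sha512(salt + password).hexdigest()

def check_password(password, salt, stored_hash):
    return hashlib.sha512(salt + password).hexdigest() == stored_hash

# store both values, e.g. in 'salt' and 'pw_hash' columns of the users table
salt, pw_hash = hash_password('s3cret')
assert check_password('s3cret', salt, pw_hash)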
Q: Comparing dissimilar types in python First the code: class myClass(object): def __cmp__(self, other): return cmp(type(self), type(other)) or cmp(self.__something, other.__something) Does this produce the same ordering as for other types in python? Is there a correct idiom for this? Related question: A bit of looking around on google I found some pertinent information in the python docs. Quoting: Implementation note: Objects of different types except numbers are ordered by their type names; objects of the same types that don’t support proper comparison are ordered by their address. This suggests that If I want to follow that behavior, I should use class myClass(object): def __cmp__(self, other): return (cmp(self.__class__.__name__, other.__class__.__name) or cmp(self.__something, other.__something)) Especially unfortunate is that I may have an extraordinarily difficult time mantaining transitivity with dicts, which is a special case I had hoped to implement. Do I even need to check the types of my arguments? does python even let me see this? A: Python 2 unfortunately did support such "alien" comparisons (fortunately abrogated in Python 3). It's NOT easy to emulate the built-ins behavior because it has so many special cases, for example float and int compare directly (no type-comparison override as you have it coded) but complex makes any comparison (except == and !=) always raise an exception. Do you really need to emulate all of these quirks and wiggles? Such a need would be very unusual. If your type is "numeric" (e.g. look at decimal.Decimal) it needs to play nice with other numerical types, I guess, but types that aren't "comparable numbers" have fewer real-word sensible constraints...! A: What are you actually trying to do? Why would you want to sort or compare instances of different classes by class/type? It's hard to propose a solution when you haven't actually posted the problem. You need to define your own comparison methods for custom classes, and it's largely up to you to make sure any comparisons you perform make sense. Please explain what you are trying to achieve. This might not be the Pythonic way.
Comparing dissimilar types in python
First the code: class myClass(object): def __cmp__(self, other): return cmp(type(self), type(other)) or cmp(self.__something, other.__something) Does this produce the same ordering as for other types in Python? Is there a correct idiom for this? Related question: A bit of looking around on Google I found some pertinent information in the Python docs. Quoting: Implementation note: Objects of different types except numbers are ordered by their type names; objects of the same types that don’t support proper comparison are ordered by their address. This suggests that if I want to follow that behavior, I should use class myClass(object): def __cmp__(self, other): return (cmp(self.__class__.__name__, other.__class__.__name__) or cmp(self.__something, other.__something)) Especially unfortunate is that I may have an extraordinarily difficult time maintaining transitivity with dicts, which is a special case I had hoped to implement. Do I even need to check the types of my arguments? Does Python even let me see this?
[ "Python 2 unfortunately did support such \"alien\" comparisons (fortunately abrogated in Python 3). It's NOT easy to emulate the built-ins behavior because it has so many special cases, for example float and int compare directly (no type-comparison override as you have it coded) but complex makes any comparison (except == and !=) always raise an exception. Do you really need to emulate all of these quirks and wiggles? Such a need would be very unusual. If your type is \"numeric\" (e.g. look at decimal.Decimal) it needs to play nice with other numerical types, I guess, but types that aren't \"comparable numbers\" have fewer real-word sensible constraints...!\n", "What are you actually trying to do? Why would you want to sort or compare instances of different classes by class/type?\nIt's hard to propose a solution when you haven't actually posted the problem.\nYou need to define your own comparison methods for custom classes, and it's largely up to you to make sure any comparisons you perform make sense.\nPlease explain what you are trying to achieve. This might not be the Pythonic way.\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001175529_python.txt
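A sketch of the name-based fallback the quoted docs describe; as the first answer stresses, this cannot reproduce every built-in quirk (mixed int/float comparisons, complex raising on ordering, and so on), so treat it as an approximation only (Python 2).

class MyClass(object):
    def __init__(self, something):
        self.__something = something

    def __cmp__(self, other):
        if isinstance(other, MyClass):
            return cmp(self.__something, other.__something)
        # mimic the CPython 2 rule: unlike types order by type name
        return cmp(type(self).__name__, type(other).__name__)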
Q: Permalinks with Russian/Cyrillic news articles I basically am working with an oldschool php cms based site in Russian, one of the many new functionalities requested is permalinks. As of now, currently the website just uses the standard non-mvc 'article.php?id=50'. I was browsing the Russian wiki and this was really the only Russian site I've seen that made use of native Russian permalinks. I'm wondering: Are there any kind of limitations in regards to character usage? Does this require any type of special setup on the server-side or anything? What kind of characters should I look out for in general for permalinks? Any gotchas I need? Any tips on how I should store the permalinks in my database? As of now, the table structure is relatively simple.. just an articles table with: id article_title article_snippet article_whole date_time I was thinking of adding a new column in this table named 'permalink' which will basically store a modified version of the article_title ( so far the only character I can think of with special treatment is the space which I'll convert to an underscore ). How should I have my new clean urls formatted? I was thinking something like: /articles/2009/Заглавная_страница for example. By the way, I'll be using Pylons ( a python framework ) and MySQL 5 though I'm open to PostgreSQL if there are any weird UTF8 restrictions ( I converted the whole database which was previously Latin1 to UTF8 by the way with iconv ). A: The current convention is to encode URLs in UTF-8, and then URL-escape (i.e. %-escape) them: py> urllib.quote(u"articles/2009/Заглавная_страница".encode("utf-8")) 'articles/2009/%D0%97%D0%B0%D0%B3%D0%BB%D0%B0%D0%B2%D0%BD%D0%B0%D1%8F_%D1%81%D1%82%D1%80%D0%B0%D0%BD%D0%B8%D1%86%D0%B0' After this, there won't be any restrictions - i.e. browsers will either recognize it as UTF-8 or not, but they will certainly be able to follow the link.
Permalinks with Russian/Cyrillic news articles
I basically am working with an oldschool php cms based site in Russian, one of the many new functionalities requested is permalinks. As of now, currently the website just uses the standard non-mvc 'article.php?id=50'. I was browsing the Russian wiki and this was really the only Russian site I've seen that made use of native Russian permalinks. I'm wondering: Are there any kind of limitations in regards to character usage? Does this require any type of special setup on the server-side or anything? What kind of characters should I look out for in general for permalinks? Any gotchas I need? Any tips on how I should store the permalinks in my database? As of now, the table structure is relatively simple.. just an articles table with: id article_title article_snippet article_whole date_time I was thinking of adding a new column in this table named 'permalink' which will basically store a modified version of the article_title ( so far the only character I can think of with special treatment is the space which I'll convert to an underscore ). How should I have my new clean urls formatted? I was thinking something like: /articles/2009/Заглавная_страница for example. By the way, I'll be using Pylons ( a python framework ) and MySQL 5 though I'm open to PostgreSQL if there are any weird UTF8 restrictions ( I converted the whole database which was previously Latin1 to UTF8 by the way with iconv ).
[ "The current convention is to encode URLs in UTF-8, and then URL-escape (i.e. %-escape) them:\npy> urllib.quote(u\"articles/2009/Заглавная_страница\".encode(\"utf-8\"))\n'articles/2009/%D0%97%D0%B0%D0%B3%D0%BB%D0%B0%D0%B2%D0%BD%D0%B0%D1%8F_%D1%81%D1%82%D1%80%D0%B0%D0%BD%D0%B8%D1%86%D0%B0'\n\nAfter this, there won't be any restrictions - i.e. browsers will either recognize it as UTF-8 or not, but they will certainly be able to follow the link.\n" ]
[ 2 ]
[]
[]
[ "model_view_controller", "mysql", "python" ]
stackoverflow_0001183956_model_view_controller_mysql_python.txt
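A sketch of the store-raw, escape-on-output approach discussed above (Python 2): keep the readable UTF-8 permalink in the new column, and percent-escape only when emitting the href. The function names are illustrative assumptions.

# -*- coding: utf-8 -*-
import urllib

def make_permalink(title, year):
    # title is a unicode object; spaces become underscores as planned
    slug = title.strip().replace(u' ', u'_')
    return u'/articles/%d/%s' % (year, slug)

def href_for(permalink):
    # escape for the href; '/' is kept as the path separator
    return urllib.quote(permalink.encode('utf-8'), safe='/')

print href_for(make_permalink(u'Заглавная страница', 2009))
# prints the %-escaped form of /articles/2009/Заглавная_страница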
Q: why __builtins__ is both module and dict I am using the built-in module to insert a few instances, so they can be accessed globally for debugging purposes. The problem with the __builtins__ module is that it is a module in a main script and is a dict in modules, but as my script depending on cases can be a main script or a module, I have to do this: if isinstance(__builtins__, dict): __builtins__['g_frame'] = 'xxx' else: setattr(__builtins__, 'g_frame', 'xxx') Is there a workaround, shorter than this? More importantly, why does __builtins__ behave this way? Here is a script to see this. Create a module a.py: #module-a import b print 'a-builtin:',type(__builtins__) Create a module b.py: #module-b print 'b-builtin:',type(__builtins__) Now run python a.py: $ python a.py b-builtin: <type 'dict'> a-builtin: <type 'module'> A: I think you want the __builtin__ module (note the singular). See the docs: 27.3. __builtin__ — Built-in objects CPython implementation detail: Most modules have the name __builtins__ (note the 's') made available as part of their globals. The value of __builtins__ is normally either this module or the value of this modules’s [sic] __dict__ attribute. Since this is an implementation detail, it may not be used by alternate implementations of Python.
why __builtins__ is both module and dict
I am using the built-in module to insert a few instances, so they can be accessed globally for debugging purposes. The problem with the __builtins__ module is that it is a module in a main script and a dict in imported modules, but since my script can, depending on the case, be a main script or a module, I have to do this: if isinstance(__builtins__, dict): __builtins__['g_frame'] = 'xxx' else: setattr(__builtins__, 'g_frame', 'xxx') Is there a workaround, shorter than this? More importantly, why does __builtins__ behave this way? Here is a script to see this. Create a module a.py: #module-a import b print 'a-builtin:',type(__builtins__) Create a module b.py: #module-b print 'b-builtin:',type(__builtins__) Now run python a.py: $ python a.py b-builtin: <type 'dict'> a-builtin: <type 'module'>
[ "I think you want the __builtin__ module (note the singular).\nSee the docs:\n\n27.3. __builtin__ — Built-in objects\nCPython implementation detail: Most modules have the name __builtins__ (note the 's') made available as part of their globals. The value of __builtins__ is normally either this module or the value of this modules’s [sic] __dict__ attribute. Since this is an implementation detail, it may not be used by alternate implementations of Python.\n\n" ]
[ 17 ]
[]
[]
[ "built_in", "python", "python_module" ]
stackoverflow_0001184016_built_in_python_python_module.txt
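With the accepted answer's suggestion, the whole isinstance dance in the question collapses to two lines that behave identically whether the file runs as a script or is imported (Python 2):

import __builtin__   # note: no trailing 's'; this is always a real module

__builtin__.g_frame = 'xxx'   # now visible as a global in every module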
Q: Storing files for testbin/pastebin in Python I'm basically trying to setup my own private pastebin where I can save html files on my private server to test and fool around - have some sort of textarea for the initial input, save the file, and after saving I'd like to be able to view all the files I saved. I'm trying to write this in python, just wondering what the most practical way would be of storing the file(s) or the code? SQLite? Straight up flat files? One other thing I'm worried about is the uniqueness of the files, obviously I don't want conflicting filenames ( maybe save using 'title' and timestamp? ) - how should I structure it? A: I wrote something similar a while back in Django to test jQuery snippets. See: http://jquery.nodnod.net/ I have the code available on GitHub at http://github.com/dz/jquerytester/tree/master if you're curious. If you're using straight Python, there are a couple ways to approach naming: If storing as files, ask for a name, salt with current time, and generate a hash for the filename. If using mysqlite or some other database, just use a numerical unique ID. Personally, I'd go for #2. It's easy, ensures uniqueness, and allows you to easily fetch various sets of 'files'. A: Have you considered trying lodgeit. Its a free pastbin which you can host yourself. I do not know how hard it is to set up. Looking at their code they have gone with a database for storage (sqllite will do). They have structured there paste table like, (this is sqlalchemy table declaration style). The code is just a text field. pastes = Table('pastes', metadata, Column('paste_id', Integer, primary_key=True), Column('code', Text), Column('parent_id', Integer, ForeignKey('pastes.paste_id'), nullable=True), Column('pub_date', DateTime), Column('language', String(30)), Column('user_hash', String(40), nullable=True), Column('handled', Boolean, nullable=False), Column('private_id', String(40), unique=True, nullable=True) ) They have also made a hierarchy (see the self join) which is used for versioning. A: Plain files are definitely more effective. Save your database for more complex queries. If you need some formatting to be done on files, such as highlighting the code properly, it is better to do it before you save the file with that code. That way you don't need to apply formatting every time the file is shown. You definitely would need somehow ensure all file names are unique, but this task is trivial, since you can just check, if the file already exists on the disk and if it does, add some number to its name and check again and so on. Don't store them all in one directory either, since filesystem can perform much worse if there are A LOT (~ 1 million) files in the single directory, so you can structure your storage like this: FILE_DIR/YEAR/MONTH/FileID.html and store the "YEAR/MONTH/FileID" Part in the database as a unique ID for the file. Of course, if you don't worry about performance (not many users, for example) you can just go with storing everything in the database, which is much easier to manage.
Storing files for testbin/pastebin in Python
I'm basically trying to set up my own private pastebin where I can save HTML files on my private server to test and fool around - have some sort of textarea for the initial input, save the file, and after saving I'd like to be able to view all the files I saved. I'm trying to write this in Python, just wondering what the most practical way would be to store the file(s) or the code? SQLite? Straight up flat files? One other thing I'm worried about is the uniqueness of the files; obviously I don't want conflicting filenames ( maybe save using 'title' and timestamp? ) - how should I structure it?
[ "I wrote something similar a while back in Django to test jQuery snippets. See:\nhttp://jquery.nodnod.net/\nI have the code available on GitHub at http://github.com/dz/jquerytester/tree/master if you're curious.\nIf you're using straight Python, there are a couple ways to approach naming:\n\nIf storing as files, ask for a name, salt with current time, and generate a hash for the filename.\nIf using mysqlite or some other database, just use a numerical unique ID.\n\nPersonally, I'd go for #2. It's easy, ensures uniqueness, and allows you to easily fetch various sets of 'files'.\n", "Have you considered trying lodgeit. Its a free pastbin which you can host yourself. I do not know how hard it is to set up.\nLooking at their code they have gone with a database for storage (sqllite will do). They have structured there paste table like, (this is sqlalchemy table declaration style). The code is just a text field.\npastes = Table('pastes', metadata,\n Column('paste_id', Integer, primary_key=True),\n Column('code', Text),\n Column('parent_id', Integer, ForeignKey('pastes.paste_id'),\n nullable=True),\n Column('pub_date', DateTime),\n Column('language', String(30)),\n Column('user_hash', String(40), nullable=True),\n Column('handled', Boolean, nullable=False),\n Column('private_id', String(40), unique=True, nullable=True)\n )\n\nThey have also made a hierarchy (see the self join) which is used for versioning. \n", "Plain files are definitely more effective. Save your database for more complex queries.\nIf you need some formatting to be done on files, such as highlighting the code properly, it is better to do it before you save the file with that code. That way you don't need to apply formatting every time the file is shown.\nYou definitely would need somehow ensure all file names are unique, but this task is trivial, since you can just check, if the file already exists on the disk and if it does, add some number to its name and check again and so on.\nDon't store them all in one directory either, since filesystem can perform much worse if there are A LOT (~ 1 million) files in the single directory, so you can structure your storage like this:\nFILE_DIR/YEAR/MONTH/FileID.html and store the \"YEAR/MONTH/FileID\" Part in the database as a unique ID for the file.\nOf course, if you don't worry about performance (not many users, for example) you can just go with storing everything in the database, which is much easier to manage.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python", "web_applications" ]
stackoverflow_0001184116_python_web_applications.txt
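A sketch of the flat-file layout suggested in the last answer, with the collision handling the question worries about; the root directory and naming scheme are assumptions (Python 2).

import os
import time

FILE_DIR = 'pastes'

def save_paste(title, text):
    now = time.localtime()
    subdir = os.path.join(FILE_DIR, str(now.tm_year), '%02d' % now.tm_mon)
    if not os.path.isdir(subdir):
        os.makedirs(subdir)
    base = title.strip().replace(' ', '_') or 'untitled'
    path = os.path.join(subdir, base + '.html')
    n = 1
    while os.path.exists(path):
        # bump a counter until the filename is free
        n += 1
        path = os.path.join(subdir, '%s_%d.html' % (base, n))
    open(path, 'w').write(text)
    return path   # store this (or YEAR/MONTH/name) in the database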
Q: How does one encode and decode a string with Python for use in a URL? I have a string like this: String A: [ 12234_1_Hello'World_34433_22acb_4554344_accCC44 ] I would like to encrypt String A to be used in a clean URL. Something like this: String B: [ cYdfkeYss4543423sdfHsaaZ ] Is there an encode API in Python that, given String A, returns String B? Is there a decode API in Python that, given String B, returns String A? A: note that theres a huge difference between encoding and encryption. if you want to send sensitive data, then dont use the encoding mentioned above ;) A: One way of doing the encode/decode is to use the package base64, for an example: import base64 import sys encoded = base64.b64encode(sys.stdin.read()) print encoded decoded = base64.b64decode(encoded) print decoded Is it what you were looking for? With your particular case you get: input: 12234_1_Hello'World_34433_22acb_4554344_accCC44 encoded: MTIyMzRfMV9IZWxsbydXb3JsZF8zNDQzM18yMmFjYl80NTU0MzQ0X2FjY0NDNDQ= decoded: 12234_1_Hello'World_34433_22acb_4554344_accCC44 A: Are you looking to encrypt the string or encode it to remove illegal characters for urls? If the latter, you can use urllib.quote: >>> quoted = urllib.quote("12234_1_Hello'World_34433_22acb_4554344_accCC44") >>> quoted '12234_1_Hello%27World_34433_22acb_4554344_accCC44' >>> urllib.unquote(quoted) "12234_1_Hello'World_34433_22acb_4554344_accCC44" A: Are you after encryption, compression, or just urlencoding? The string can be passed after urlencoding, but that will not make it smaller as in your example. Compression might shrink it, but you would still need to urlencode the result. Do you actually need to hide the string data from the viewer (e.g. sensitive data, should not be viewable by someone reading the URL over your shoulder)? A: To make it really short -> just insert a row into the database. Store something like a list of (id auto_increment, url) tuples. Then you can base64 encode the id to get a "proxy url". Decode it by decoding the id and looking up the proper url in the database. Or if you don't mind the identifiers looking sequential, just use the numbers. A: The base64 module provides encoding and decoding for a string to and from different bases, since python 2.4. In you example, you would do the following: import base64 string_b = base64.b64encode(string_a) string_a = base64.b64decode(string_b) For full API: http://docs.python.org/library/base64.html A: It's hard to reduce the size of a string and preserve arbitrary content. You have to restrict the data to something you can usefully compress. Your alternative is to do the following. Save "all the arguments in the URL" in a database row. Assign a GUID key to this collection of arguments. Then provide that shortened GUID key. A: Another method that would also shorten the string would be to calculate the md5/sha1 hash of the string (concatenated with a seed if you wished): import hashlib >>> hashlib.sha1("12234_1_Hello'World_34433_22acb_4554344_accCC44").hexdigest() 'e1153227558aadc00a2e90b5013fdd6b0804fdfb' In theory you should get a set of strings with very few collisions and with a fixed length. The hashlib library has an array of different hash functions you can use in this manner, with different output sizes. Edit: You also said that you needed a reversible string, so this wouldn't work for that. Afaik, however, many web platforms that use clean URLs like you seem to want to implement use hash functions to calculate a shortened URL and then store that URL along with the page's other data to provide the reverse lookup capability.
How does one encode and decode a string with Python for use in a URL?
I have a string like this: String A: [ 12234_1_Hello'World_34433_22acb_4554344_accCC44 ] I would like to encrypt String A to be used in a clean URL. Something like this: String B: [ cYdfkeYss4543423sdfHsaaZ ] Is there an encode API in Python that, given String A, returns String B? Is there a decode API in Python that, given String B, returns String A?
[ "note that theres a huge difference between encoding and encryption.\nif you want to send sensitive data, then dont use the encoding mentioned above ;)\n", "One way of doing the encode/decode is to use the package base64, for an example:\nimport base64\nimport sys\n\nencoded = base64.b64encode(sys.stdin.read())\nprint encoded\n\ndecoded = base64.b64decode(encoded)\nprint decoded\n\nIs it what you were looking for? With your particular case you get:\ninput: 12234_1_Hello'World_34433_22acb_4554344_accCC44\nencoded: MTIyMzRfMV9IZWxsbydXb3JsZF8zNDQzM18yMmFjYl80NTU0MzQ0X2FjY0NDNDQ=\ndecoded: 12234_1_Hello'World_34433_22acb_4554344_accCC44\n", "Are you looking to encrypt the string or encode it to remove illegal characters for urls?\nIf the latter, you can use urllib.quote:\n>>> quoted = urllib.quote(\"12234_1_Hello'World_34433_22acb_4554344_accCC44\")\n>>> quoted\n'12234_1_Hello%27World_34433_22acb_4554344_accCC44'\n\n>>> urllib.unquote(quoted)\n\"12234_1_Hello'World_34433_22acb_4554344_accCC44\"\n\n", "Are you after encryption, compression, or just urlencoding? The string can be passed after urlencoding, but that will not make it smaller as in your example. Compression might shrink it, but you would still need to urlencode the result.\nDo you actually need to hide the string data from the viewer (e.g. sensitive data, should not be viewable by someone reading the URL over your shoulder)?\n", "To make it really short -> just insert a row into the database. Store something like a list of (id auto_increment, url) tuples. Then you can base64 encode the id to get a \"proxy url\". Decode it by decoding the id and looking up the proper url in the database. Or if you don't mind the identifiers looking sequential, just use the numbers.\n", "The base64 module provides encoding and decoding for a string to and from different bases, since python 2.4.\nIn you example, you would do the following:\nimport base64\nstring_b = base64.b64encode(string_a)\nstring_a = base64.b64decode(string_b)\n\nFor full API:\nhttp://docs.python.org/library/base64.html\n", "It's hard to reduce the size of a string and preserve arbitrary content.\nYou have to restrict the data to something you can usefully compress.\nYour alternative is to do the following.\n\nSave \"all the arguments in the URL\" in a database row.\nAssign a GUID key to this collection of arguments.\nThen provide that shortened GUID key.\n\n", "Another method that would also shorten the string would be to calculate the md5/sha1 hash of the string (concatenated with a seed if you wished):\nimport hashlib\n>>> hashlib.sha1(\"12234_1_Hello'World_34433_22acb_4554344_accCC44\").hexdigest()\n'e1153227558aadc00a2e90b5013fdd6b0804fdfb'\n\nIn theory you should get a set of strings with very few collisions and with a fixed length. The hashlib library has an array of different hash functions you can use in this manner, with different output sizes.\nEdit: You also said that you needed a reversible string, so this wouldn't work for that. Afaik, however, many web platforms that use clean URLs like you seem to want to implement use hash functions to calculate a shortened URL and then store that URL along with the page's other data to provide the reverse lookup capability.\n" ]
[ 13, 9, 5, 5, 5, 2, 2, 1 ]
[]
[]
[ "clean_urls", "hash", "python", "string", "urlencode" ]
stackoverflow_0000875771_clean_urls_hash_python_string_urlencode.txt
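One detail worth adding to the base64 answers: plain b64encode can emit '+' and '/', which would still need escaping in a URL. The urlsafe variants (available since Python 2.4) substitute '-' and '_' instead and remain reversible:

import base64

s = "12234_1_Hello'World_34433_22acb_4554344_accCC44"
token = base64.urlsafe_b64encode(s)
assert base64.urlsafe_b64decode(token) == s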
Q: python string search replace SSViewer::set_theme('bullsorbit'); this my string. I want search in string "SSViewer::set_theme('bullsorbit'); " and replace 'bullsorbit' with another string. 'bullsorbit' string is dynamically changing. A: Not in a situation to be able to test this so you may need to fiddle with the Regular Expression (they may be errors in it.) import re re.sub("SSViewer::set_theme\('[a-z]+'\)", "SSViewer::set_theme('whatever')", my_string) Is this what you want? Just tested it, this is some sample output: my_string = """Some file with some other junk SSViewer::set_theme('bullsorbit'); SSViewer::set_theme('another'); Something else""" import re replaced = re.sub("SSViewer::set_theme\('[a-z]+'\)", "SSViewer::set_theme('whatever')", my_string) print replaced produces: Some file with some other junk SSViewer::set_theme('whatever'); SSViewer::set_theme('whatever'); Something else if you want to do it to a file: my_string = open('myfile', 'r').read() A: >> my_string = "SSViewer::set_theme('bullsorbit');" >>> import re >>> change = re.findall(r"SSViewer::set_theme\('(\w*)'\);",my_string) >>> my_string.replace(change[0],"blah") "SSViewer::set_theme('blah');" its not elegant but it works. the findall will return a dictionary of items that are inside the ('') and then replaces them. If you can get sub to work then that may look nicer but this will definitely work A: st = "SSViewer::set_theme('" for line in open("file.txt"): line=line.strip() if st in line: a = line[ :line.index(st)+len(st)] b = line [line.index(st)+len(st): ] i = b.index("')") b = b[i:] print a + "newword" + b A: while your explanation is not entirely clear, I think you might make some use of the following: open(fname).read().replace('bullsorbit', 'new_string')
python string search replace
SSViewer::set_theme('bullsorbit'); is my string. I want to search the string for "SSViewer::set_theme('bullsorbit');" and replace 'bullsorbit' with another string. The 'bullsorbit' string changes dynamically.
[ "Not in a situation to be able to test this so you may need to fiddle with the Regular Expression (they may be errors in it.)\nimport re\nre.sub(\"SSViewer::set_theme\\('[a-z]+'\\)\", \"SSViewer::set_theme('whatever')\", my_string)\n\nIs this what you want?\nJust tested it, this is some sample output:\nmy_string = \"\"\"Some file with some other junk\nSSViewer::set_theme('bullsorbit');\nSSViewer::set_theme('another');\nSomething else\"\"\"\n\nimport re\nreplaced = re.sub(\"SSViewer::set_theme\\('[a-z]+'\\)\", \"SSViewer::set_theme('whatever')\", my_string)\nprint replaced\n\nproduces:\nSome file with some other junk\nSSViewer::set_theme('whatever');\nSSViewer::set_theme('whatever');\nSomething else\n\nif you want to do it to a file:\nmy_string = open('myfile', 'r').read()\n\n", ">> my_string = \"SSViewer::set_theme('bullsorbit');\"\n>>> import re\n>>> change = re.findall(r\"SSViewer::set_theme\\('(\\w*)'\\);\",my_string)\n>>> my_string.replace(change[0],\"blah\")\n\"SSViewer::set_theme('blah');\"\n\nits not elegant but it works. the findall will return a dictionary of items that are inside the ('') and then replaces them. If you can get sub to work then that may look nicer but this will definitely work\n", "st = \"SSViewer::set_theme('\"\nfor line in open(\"file.txt\"):\n line=line.strip()\n if st in line:\n a = line[ :line.index(st)+len(st)]\n b = line [line.index(st)+len(st): ]\n i = b.index(\"')\")\n b = b[i:]\n print a + \"newword\" + b\n\n", "while your explanation is not entirely clear, I think you might make some use of the following:\nopen(fname).read().replace('bullsorbit', 'new_string')\n\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "python", "replace", "search", "string" ]
stackoverflow_0001184119_python_replace_search_string.txt
Q: Graduation Project I require to do a project as a part of my final year of engineering graduation studies.Can you suggest some projects pertaining to distributed systems and artificial intelligence together and which require python,c or c++ for programming? Note:-Please suggest a project that is attainable for a group of 2 students. A: Perhaps improve computer opponents for Go? http://en.wikipedia.org/wiki/Go_(game) A: How about a decision process that uses mapreduce, and gets more efficient at choosing the answer each time? A: And what about participating in NetFlix competition? A: Orange is an comprehensive data mining and machine learing suite featuring Python scripting and visual programming. Maybe you too distributed it:) A: I need some kind of tool which observes the behaviour of a automation system (for instance a process control system), and is able to figure out on which inputs which actions follow, and then derives some kind of model from it which would then be usable as a simulation of the real system. It's not exactly distributed, but its engineering :-) On the other hand, our code is written in java (although you could use jython instead). If you are interested, drop me a mail (juergen DOT rose AT inavare DOT net). A: If GO seems to complicated, you could also try a five in a row computer opponent. (Wikipedia does this with GO-pieces, but I'm more used to the tic-tac-toe noughts and crosses.) A: How about hacking a P2P protocol and implementing something useful? I worked on a proxy cache implementation for P2P traffic. Basically, design and implement a proxy cache for P2P traffic. It will be different from web documents/objects in that: 1- P2P objects are immutable. You might request a web-page more than once, but you really download a P2P object (e.g., movie) once and read it from your desk multiple times. 2- P2P objects are huge compared to web objects (up to few Gigabytes) so you'll need to cache some objects partially, and implement some kind of smart admission/eviction policy. 3- P2P objects have different popularity. Just because something is in the cache does not mean it should stay in the cache forever, because its popularity will degrade (i.e., once a movie is released it is very popular, downloaded a lot, then it drops and everybody forgets about it), so you can't rely on recency or frequency alone as the only replacement policy.
Graduation Project
I am required to do a project as part of my final year of engineering studies. Can you suggest some projects that combine distributed systems and artificial intelligence, and that require Python, C, or C++ for programming? Note: Please suggest a project that is attainable for a group of 2 students.
[ "Perhaps improve computer opponents for Go?\nhttp://en.wikipedia.org/wiki/Go_(game)\n", "How about a decision process that uses mapreduce, and gets more efficient at choosing the answer each time?\n", "And what about participating in NetFlix competition?\n", "Orange is an comprehensive data mining and machine learing suite featuring Python scripting and visual programming. Maybe you too distributed it:)\n", "I need some kind of tool which observes the behaviour of a automation system (for instance a process control system), and is able to figure out on which inputs which actions follow, and then derives some kind of model from it which would then be usable as a simulation of the real system. It's not exactly distributed, but its engineering :-)\nOn the other hand, our code is written in java (although you could use jython instead).\nIf you are interested, drop me a mail (juergen DOT rose AT inavare DOT net).\n", "If GO seems to complicated, you could also try a five in a row computer opponent. (Wikipedia does this with GO-pieces, but I'm more used to the tic-tac-toe noughts and crosses.)\n", "How about hacking a P2P protocol and implementing something useful? I worked on a proxy cache implementation for P2P traffic. Basically, design and implement a proxy cache for P2P traffic. It will be different from web documents/objects in that:\n1- P2P objects are immutable. You might request a web-page more than once, but you really download a P2P object (e.g., movie) once and read it from your desk multiple times.\n2- P2P objects are huge compared to web objects (up to few Gigabytes) so you'll need to cache some objects partially, and implement some kind of smart admission/eviction policy.\n3- P2P objects have different popularity. Just because something is in the cache does not mean it should stay in the cache forever, because its popularity will degrade (i.e., once a movie is released it is very popular, downloaded a lot, then it drops and everybody forgets about it), so you can't rely on recency or frequency alone as the only replacement policy.\n" ]
[ 4, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "artificial_intelligence", "c++", "distributed", "python", "system" ]
stackoverflow_0001184018_artificial_intelligence_c++_distributed_python_system.txt
Q: Python: extending int and MRO for __init__ In Python, I'm trying to extend the builtin 'int' type. In doing so I want to pass in some keywoard arguments to the constructor, so I do this: class C(int): def __init__(self, val, **kwargs): super(C, self).__init__(val) # Do something with kwargs here... However while calling C(3) works fine, C(3, a=4) gives: 'a' is an invalid keyword argument for this function` and C.__mro__ returns the expected: (<class '__main__.C'>, <type 'int'>, <type 'object'>) But it seems that Python is trying to call int.__init__ first... Anyone know why? Is this a bug in the interpreter? A: The docs for the Python data model advise using __new__: object.new(cls[, ...]) new() is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation. Something like this should do it for the example you gave: class C(int): def __new__(cls, val, **kwargs): inst = super(C, cls).__new__(cls, val) inst.a = kwargs.get('a', 0) return inst A: You should be overriding "__new__", not "__init__" as ints are immutable. A: What everyone else (so far) said. Int are immutable, so you have to use new. Also see (the accepted answers to): increment int object Why can't I subclass datetime.date?
Python: extending int and MRO for __init__
In Python, I'm trying to extend the built-in 'int' type. In doing so I want to pass in some keyword arguments to the constructor, so I do this: class C(int): def __init__(self, val, **kwargs): super(C, self).__init__(val) # Do something with kwargs here... However, while calling C(3) works fine, C(3, a=4) gives: 'a' is an invalid keyword argument for this function, and C.__mro__ returns the expected: (<class '__main__.C'>, <type 'int'>, <type 'object'>) But it seems that Python is trying to call int.__init__ first... Anyone know why? Is this a bug in the interpreter?
[ "The docs for the Python data model advise using __new__:\nobject.new(cls[, ...])\n\nnew() is intended mainly to allow subclasses of immutable types (like int, str, or tuple) to customize instance creation. It is also commonly overridden in custom metaclasses in order to customize class creation.\n\nSomething like this should do it for the example you gave:\nclass C(int):\n\n def __new__(cls, val, **kwargs):\n inst = super(C, cls).__new__(cls, val)\n inst.a = kwargs.get('a', 0)\n return inst\n\n", "You should be overriding \n\"__new__\", not \"__init__\" as ints are immutable.\n", "What everyone else (so far) said. Int are immutable, so you have to use new.\nAlso see (the accepted answers to):\n\nincrement int object\nWhy can't I subclass datetime.date?\n\n" ]
[ 7, 3, 3 ]
[]
[]
[ "class_design", "overriding", "python" ]
stackoverflow_0001184337_class_design_overriding_python.txt
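A quick usage sketch of the __new__ approach from the accepted answer above; the class C and the keyword a come from the question, while the expected outputs in the comments are my own (Python 2):

    class C(int):
        def __new__(cls, val, **kwargs):
            # int is immutable, so per-instance state is set up in __new__;
            # by the time __init__ runs, the integer value is already fixed
            inst = super(C, cls).__new__(cls, val)
            inst.a = kwargs.get('a', 0)
            return inst

    c = C(3, a=4)
    print c + 1    # 4 -- still behaves like the int it subclasses
    print c.a      # 4 -- the extra keyword argument survived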
Q: python+encryption: Encrypting session key using public key I want to encrypt the session key using the public key. How does the PGP software do this? Can somebody explain the procedure, or point me to a function, for doing this encryption in Python? A: There's also the PyCrypto module that looks exactly like what you are looking for: http://www.dlitz.net/software/pycrypto/ the API docs are here: http://www.dlitz.net/software/pycrypto/apidoc/ and some nice docs with basic examples of encrypting/decrypting here: http://www.dlitz.net/software/pycrypto/doc/. I'll confess I haven't used this module, but it seems like you would establish a session with a public key, then use that to encrypt/decrypt the channel with a Crypto.PublicKey object. Then do the usual activity of generating a session key, communicating that over whatever channel you have. Finally, switch the channel to a Crypto.Cipher object using the session key. Also, be sure to be very, very careful about how you obtain the value for your session key if security is a real concern, particularly on a multiuser or only partially trusted machine. A: See this post for background information about the basic technology. That post is about encryption in general - for information about using gpg from Python, see this, for example. A: See What is the best/easiest to use encryption library in python, which mentions a PGP-compatible solution, gpgme. For reasons unknown to me, nobody in How to do PGP in Python (generate keys, encrypt/decrypt) mentioned gpgme...
python+encryption: Encrypting session key using public key
I want to encrypt the session key using the public key. How does the PGP software do this? Can somebody explain the procedure, or point me to a function, for doing this encryption in Python?
[ "There's also the PyCrypto module that looks exactly like what you are looking for: http://www.dlitz.net/software/pycrypto/ the API docs are here: http://www.dlitz.net/software/pycrypto/apidoc/ and some nice docs with basic examples of encrypting/decrypting here: http://www.dlitz.net/software/pycrypto/doc/.\nI'll confess I haven't used this module, but it seems like you would establish a session with a public key, then use that to encrypt/decrypt the channel with a Crypto.PublicKey object. Then do the usual activity of generating a session key, communicating that over whatever channel you have. Finally, switch the channel to a Crypto.Cipher object using the session key.\nAlso, be sure to be very, very careful about how you obtain the value for your session key if security is a real concern, particularly on multiuser or only partially trusted machine.\n", "See this post for background information about the basic technology. That post is about encryption in general - for information about using gpg from Python, see this, for example.\n", "See What is the best/easiest to use encryption library in python, which mentions a PGP-compatible \nsolution, gpgme.\nFor reasons I ignore, nobody in How to do PGP in Python (generate keys, encrypt/decrypt) mentioned gpgme...\n" ]
[ 3, 1, 0 ]
[]
[]
[ "encryption", "public_key", "python" ]
stackoverflow_0001057768_encryption_public_key_python.txt
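The PyCrypto answer above stops short of code, so here is a minimal, hedged sketch of wrapping a session key with a recipient's public key. It assumes PyCrypto 2.5+ (newer than the library version current when this was asked); the key size and names are illustrative:

    from Crypto import Random
    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_OAEP

    rng = Random.new().read
    key = RSA.generate(2048, rng)    # recipient's RSA key pair
    session_key = rng(16)            # random 128-bit symmetric session key

    cipher = PKCS1_OAEP.new(key.publickey())
    wrapped = cipher.encrypt(session_key)    # safe to send alongside the data

    # only the holder of the private key can unwrap it
    assert PKCS1_OAEP.new(key).decrypt(wrapped) == session_key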
Q: Is there a library which handles the parsing of BIND zone files in Python? This is related to a similar question about BIND, but in this case I'm trying to see if there's any easy way to parse various zone files into a dictionary, list, or some other manageable data structure, with the final goal being committing the data to a database. I'm using BIND 8.4.7 and Python 2.4. I may be able to convince management to use a later Python version if needed, but the BIND version is non-negotiable at the moment. A: ISTM, easyzone might meet your needs. It sits on top of dnspython, which would be an alternative API.
Is there a library which handles the parsing of BIND zone files in Python?
This is related to a similar question about BIND, but in this case I'm trying to see if there's any easy way to parse various zone files into a dictionary, list, or some other manageable data structure, with the final goal being committing the data to a database. I'm using BIND 8.4.7 and Python 2.4. I may be able to convince management to use a later Python version if needed, but the BIND version is non-negotiable at the moment.
[ "ISTM, easyzone might meet your needs. It sits on top of dnspython, which would be an alternative API.\n" ]
[ 1 ]
[]
[]
[ "bind", "database", "parsing", "python", "python_2.4" ]
stackoverflow_0001184803_bind_database_parsing_python_python_2.4.txt
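Since the answer above names dnspython, here is a hedged sketch of parsing a zone file with it into plain Python tuples, ready to commit to a database (the file name and origin are placeholders):

    import dns.zone
    import dns.rdatatype

    zone = dns.zone.from_file('example.com.zone', origin='example.com')

    records = []
    for name, ttl, rdata in zone.iterate_rdatas(dns.rdatatype.A):
        records.append((name.to_text(), ttl, rdata.address))
    # 'records' is now an ordinary list of (name, ttl, address) tuples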
Q: To make a plan for my first MySQL project I need to complete the plan of an ask-a-question site for my uni. in a few days. I need to have the first version of the code ready for next Tuesday, while the end of the project is in about three weeks. Questions about the project which do not fit here to make efficient tables to improve a relation figure to improve an ERD diagram to SHA1-hash your password in a MySQL database by Python to have a revision history for the questions to get the right way in designing databases to get primary and foreign keys right in ERD to understand the login variable in cookies/URL to get info about my Uni's servers to improve SQL queries to write SQL queries in DDL correctly to prevent the use of duplicate tags in a question to improve SQL queries in DDL to have no duplicate tags in a table to separate answers in a database My uni. offers little support for tools which I selected: Tools in building the backend Python in building the database schema??? (I am not sure which components I can build by Python) MySQL to store data I am not sure which tool to use in building the login and logout system. They do not allow me to use Google's system. This forces me to use some simple open-source code, since it would take more than a week to build a decent login/logout system. Tools in building the frontend Django (if we can use MySQL in Django) Tools for Planning Google Docs' Spreadsheet for illustrating the use cases TopCoder UML Tool to show primary keys and other relations in the database Tools for coding Vim, Screen, Zsh, OS X's Visor: my dot-files EasyEclipse for Python (only if I get a difficult error message) My focus in the project: I aim to build a database system only for users and moderators such that I only provide the following features to allow users to add to a database such that I neutralize the input (I know that there is some tool for that, but I am not sure about its name.) to arrange questions by time to arrange questions by name to arrange questions by their subject to allow users to remove their questions to send an email to the user that the question was successfully asked Things about which I am uncertain how to integrate the login system with the database such that the user sees only his data, that is, his username, when he logs in successfully, similarly as in Joomla Which components should I not build by Python when I use MySQL for databases? My uni. does not give me hardware support for the project. This suggests that I will be better off using a host which is specialized in my project. I used Djangohosting.ch the last month, and by their tools, I got started. Which host would you use such that I can show the final product to my Uni.? This is my first official database project so my plan apparently has shortcomings, since there must be tools which I do not know. Please, pinpoint any one of them. A: First, this is all a lot to work with in a week. But here it goes. Tools for the backend: SQLAlchemy - This is an ORM toolkit that is plenty powerful for most smaller tasks when using a MySQL database built with Python. To my knowledge, it is the best for this job. http://www.sqlalchemy.org/ Django - "...is a high-level Python Web framework..." This might be better for rapid development of a site with login/logout methods included and a minimal learning curve for someone with web/Python understanding. Tools in building the frontend: If you already plan on using Django for the backend, I'd recommend using it for the frontend as well. Things about which you are uncertain: The users can be specified in MySQL and their permissions can be set accordingly. From some of the requirements you listed, most of these sound like they can be contained within the capabilities of Django. A: Use the django Model to create the Object-Relational Mapping (ORM) that injects and retrieves your data. For login/logout, django has an AuthenticationMiddleware feature you can probably use, although I am not sure if you can solve your problem with it. In any case, your project, with the given deadlines, is totally unrealistic. Be prepared to miss the deadline, and hear the whooshing sound they make as they fly by. A: I think this can all be accomplished in django. See the official tutorial. A: I'm guessing that you wouldn't be allowed to use an ORM, since you call this a "MySQL project". If this is an incorrect assumption, I'd agree with N Arnold's recommendation of using Django. Rather than using SQLAlchemy, I think you'd find that Django's ORM is good enough (especially if you use v1.1rc or trunk). Like some of the comments to your initial question, this does seem like a large amount of work if you have to learn a framework as well as produce a project in it. On the other hand, someone who knew Django could crack out the base of such a project in a day or two. A: You can build your database with MySQL, go read the official docs. It really doesn't matter what language you use to program a front-end, whether it be a website, command line interface, or gui interface; most languages handle this pretty well, but it seems you're set on building a web application and this can be achieved very easily with Django, which is a Python web framework. Doing what I've told you, if you keep at it you'll be done in under 16 hours. Good luck. Btw. your project seems to focus on a lot of irrelevant things. You're creating a database application; unless you already know CSS and JQuery, why don't you just create it in simple unstyled XHTML? That way you have less work to do! A: Django can use many different database backends, one of which is MySQL; to help support this it provides an ORM (Object Relational Mapping) layer which abstracts away the SQL code and storage medium and allows you to write Models containing storage fields and logic as necessary without worrying about how they are stored in the persistence layer. Django also contains basic authentication (login/logout) functionality and has the concept of users and admin-users built into it. As an example, using the built-in user models and the ORM would allow you to get all questions asked by a user with something like the following code: Question.objects.filter(asker=request.user) Where Question is the model you have defined to hold your questions (with a field called 'asker' which is a foreign key to the user) and request.user is the user logged into the website. I suggest you read up on the Django ORM. As far as hosting it, you could use Ubuntu on a desktop computer, or if you need an external host then I would recommend Webfaction or Djangohosting.ch as two of the most 'Django Friendly' Hosts. A: Ehh, what is the purpose of this development? To build a real production-ready system, or to pass an exam by building a mini-project that will never see real use? If your purpose is to pass an exam, then build what your teachers like to see. Take your cues from the material they have been using in classes, and also ask them outright what they think is good. If you want to build a production system, then Django would be a great choice. Respectfully however, given the limited understanding of Django you demonstrate, you will most likely not complete the project in time. Django has pre-existing functionality for: Building the database schema. The DB tables will be built when you define your model classes in Django and run manage.py syncdb. A login system. Django has a login system with cookies etc built into it; and several 3rd party Django addons extend this system. Encrypting passwords in the database, Django uses SHA-1 with a salt if memory serves. Thus your teachers could legitimately say that you haven't demonstrated your own skills at modeling a DB schema, you have just used Django's pre-existing functionality. Will they be OK with this, or will they fail you on this exam? If you need to demonstrate understanding of the core concepts, perhaps you would be better off by staying with a system you know well already, instead of mixing in Django as another complexity, another thing you'll need to learn in very little time... A: About Moderators and Users What is the difference between moderators and users? Moderators can modify text, but how? Are they expecting you to practise permissions, fs or some open-source system to differentiate users from moderators? Very odd that they do not allow Google products, but they allow open-source. Interestingly, USFCA encourages Google products in similar projects. Perhaps there are some users who could help you with userspaces. Good luck! A: They give us only this example of the web app. The code is not useful for me, since the code is in Finnish and it is written in PHP. It is also not open-source, so I cannot paste the code here. I would like to see similar examples in Python. Please put links in the comments.
To make a plan for my first MySQL project
I need to complete the plan of an ask-a-question site for my uni. in a few days. I need to have the first version of the code ready for next Tuesday, while the end of the project is in about three weeks. Questions about the project which do not fit here to make efficient tables to improve a relation figure to improve an ERD diagram to SHA1-hash your password in a MySQL database by Python to have a revision history for the questions to get the right way in designing databases to get primary and foreign keys right in ERD to understand the login variable in cookies/URL to get info about my Uni's servers to improve SQL queries to write SQL queries in DDL correctly to prevent the use of duplicate tags in a question to improve SQL queries in DDL to have no duplicate tags in a table to separate answers in a database My uni. offers little support for tools which I selected: Tools in building the backend Python in building the database schema??? (I am not sure which components I can build by Python) MySQL to store data I am not sure which tool to use in building the login and logout system. They do not allow me to use Google's system. This forces me to use some simple open-source code, since it would take more than a week to build a decent login/logout system. Tools in building the frontend Django (if we can use MySQL in Django) Tools for Planning Google Docs' Spreadsheet for illustrating the use cases TopCoder UML Tool to show primary keys and other relations in the database Tools for coding Vim, Screen, Zsh, OS X's Visor: my dot-files EasyEclipse for Python (only if I get a difficult error message) My focus in the project: I aim to build a database system only for users and moderators such that I only provide the following features to allow users to add to a database such that I neutralize the input (I know that there is some tool for that, but I am not sure about its name.) to arrange questions by time to arrange questions by name to arrange questions by their subject to allow users to remove their questions to send an email to the user that the question was successfully asked Things about which I am uncertain how to integrate the login system with the database such that the user sees only his data, that is, his username, when he logs in successfully, similarly as in Joomla Which components should I not build by Python when I use MySQL for databases? My uni. does not give me hardware support for the project. This suggests that I will be better off using a host which is specialized in my project. I used Djangohosting.ch the last month, and by their tools, I got started. Which host would you use such that I can show the final product to my Uni.? This is my first official database project so my plan apparently has shortcomings, since there must be tools which I do not know. Please, pinpoint any one of them.
[ "First, this is all a lot to work with in a week. But here it goes.\nTools for the backend:\n\nSQLAlchemy - This is an ORM toolkit that is plenty powerful for most smaller tasks when using a MySQL database built with Python. To my knowledge, it is the best for this job. http://www.sqlalchemy.org/\nDjango - \"...is a high-level Python Web framework...\" This might be better for rapid development of a site with login/logout methods included and a minimal learning curve for someone with web/Python understanding.\n\nTools in building the frontend:\nIf you already plan on using Django for the backend, I'd recommend using it for the frontend as well.\nThings about which you are uncertain:\n\nThe users can be specified in MySQL and their permissions can be set accordingly.\nFrom some of the requirements you listed, most of these sound like they can be contained within the capabilities of Django.\n\n", "Use the django Model to create the Object-Relational Mapping (ORM) that injects and retrieve your data. \nFor login/logout, django has an AuthenticationMiddleware feature you can probably use, although I am not sure if you can solve your problem with it.\nIn any case, your project, with the given deadlines is totally unrealistic. Be prepared to miss the deadline, and hear the whooshing sound they do as they fly by.\n", "I think this can all be accomplished in django. See the official tutorial.\n", "I'm guessing that you wouldn't be allowed to use an ORM, since you call this a \"MySQL project\".\nIf this is an incorrect assumption, I'd agree with N Arnold's recommendation of using Django. Rather than using SQLAlchemy, I think you'd find that Django's ORM is good enough (especially if you use v1.1rc or trunk).\nLike some of the comments to your initial question, this does seem like a large amount of work if you have to learn a framework as well as produce a project in it. On the other hand, someone who knew Django could crack out the base of such a project in a day or two.\n", "You can build your database with MySQL, go read the official docs. It really doesn't matter what language you use to program a front-end whether it be a website, command line interface, gui interface, most languages handle this pretty well but it seems you're set on building a web application and this can be achieved very easily with Django, which is a Python web framework.\nDoing what I've told you, if you keep at it you'll be done in under 16 hours. Good luck. \nBtw. your project seems to focus on a lot of irrelevant things. 
You're creating a database application, unless you already know CSS and JQuery, why don't you just create it in simple unstyled XHTML; that way you have less work to do!\n", "Django can use many different database backends one of which is MySQL, to help support this it provides an ORM (Object Relational Mapping) layer which abstracts away the SQL code and storage medium and allows you to write Models containing storage fields and logic as necessary without worrying about how they are stored in the persistence layer.\nDjango also contains basic authentication (login/logout) functionality and has the concept of users and admin-users built into it.\nAs an example using the built in user models and the ORM would allow you to get all questions asked by a user with something like the following code:\nQuestion.objects.all.filter(asker=request.user)\n\nWhere Questions is the model you have defined to hold your questions (with a field called 'asker' which is a foreign key to the user) and the request.user is the user logged into the website.\nI suggest you read up on the Django ORM.\nAs far as hosting it, you could use Ubuntu on a desktop computer, or if you need an external host then would recommend Webfaction or Djangohosting.ch as two of the most 'Django Friendly' Hosts.\n", "Ehh, what is the purpose of this development? To build a real production-ready system, or to pass an exam by building a mini-project that will never see real use?\nIf your purpose is to pass an exam, then build what your teachers like to see. Take you cues from the material they have been using in classes, and also ask them outright what they think is good.\nIf you want to build a production system, then Django would be a great choice. Respectfully however, given the limited understanding of Django you demonstrate, you will most likely not complete the project in time.\nDjango has pre-existing functionality for:\n\nBuilding the database schema. The DB tables will be buildt when you define your model classes in Django and run manage.py syncdb.\nA login system. Django has a login system with cookies etc build into it; and several 3rd party Django addons extend this system.\nEncrypting passwords in the database, Django uses SHA-1 with a salt if memory serves.\n\nThus your teachers could legitimately say that you haven't demonstrated your own skills at modeling a DB schema, you have just used Djangos pre-existing functionality. Will they be OK with this, or will they fail you on this exam?\nIf you need to demonstrate understanding of the core concepts, perhaps you would be better off by staying with a system you know well already, instead of mixing in Django as another complexity, another thing you'll need to learn in very little time...\n", "About Moderators and Users\nWhat is the difference between moderators and users? Moderators can modify text, but how? Are they planning you to practise permissions, fs or some open-source system to differentiate users from moderators? \nVery odd that they do not allow Google products, but they allow open-source. Interestingly, USFCA encourages Google products in similar projects. Perhaps, there are some users, who could help you with userspaces.\nGood luck!\n", "They give us only this example of the web app. The code is not useful for me, since the code is in Finnish and it is written in PHP. It is also not open-source such that I cannot paste the code here.\nI would like to see similar examples in Python. Please, put links to the comments. \n" ]
[ 4, 2, 2, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0001168701_mysql_python.txt
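A hedged sketch of the sort of Django model several answers above gesture at; the class and field names are illustrative, not from the original posts (Django 1.x era):

    from django.contrib.auth.models import User
    from django.db import models

    class Question(models.Model):
        asker = models.ForeignKey(User)      # ties each question to a login
        subject = models.CharField(max_length=100)
        body = models.TextField()
        created = models.DateTimeField(auto_now_add=True)

    # the logged-in user's questions, newest first, using the corrected
    # ORM call from the answer above:
    #   Question.objects.filter(asker=request.user).order_by('-created')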
Q: Creating child frames of main frame in wxPython I am trying to create a new frame in wxPython that is a child of the main frame so that when the main frame is closed, the child frame will also be closed. Here is a simplified example of the problem that I am having: #! /usr/bin/env python import wx class App(wx.App): def OnInit(self): frame = MainFrame() frame.Show() self.SetTopWindow(frame) return True class MainFrame(wx.Frame): title = "Main Frame" def __init__(self): wx.Frame.__init__(self, None, 1, self.title) #id = 5 menuFile = wx.Menu() menuAbout = wx.Menu() menuAbout.Append(2, "&About...", "About this program") menuBar = wx.MenuBar() menuBar.Append(menuAbout, "&Help") self.SetMenuBar(menuBar) self.CreateStatusBar() self.Bind(wx.EVT_MENU, self.OnAbout, id=2) def OnQuit(self, event): self.Close() def OnAbout(self, event): AboutFrame().Show() class AboutFrame(wx.Frame): title = "About this program" def __init__(self): wx.Frame.__init__(self, 1, -1, self.title) #trying to set parent=1 (id of MainFrame()) if __name__ == '__main__': app = App(False) app.MainLoop() If I set AboutFrame's parent frame to None (on line 48) then the About frame is successfully created and displayed but it stays open when the main frame is closed. Is this the approach that I should be taking to create child frames of the main frame or should I be doing it differently, e.g. using the onClose event of the main frame to close any child frames (this way sounds very 'hackish'). If I am taking the correct approach, why is it not working? A: class AboutFrame(wx.Frame): title = "About this program" def __init__(self): wx.Frame.__init__(self, wx.GetApp().TopWindow, title=self.title)
Creating child frames of main frame in wxPython
I am trying to create a new frame in wxPython that is a child of the main frame so that when the main frame is closed, the child frame will also be closed. Here is a simplified example of the problem that I am having: #! /usr/bin/env python import wx class App(wx.App): def OnInit(self): frame = MainFrame() frame.Show() self.SetTopWindow(frame) return True class MainFrame(wx.Frame): title = "Main Frame" def __init__(self): wx.Frame.__init__(self, None, 1, self.title) #id = 5 menuFile = wx.Menu() menuAbout = wx.Menu() menuAbout.Append(2, "&About...", "About this program") menuBar = wx.MenuBar() menuBar.Append(menuAbout, "&Help") self.SetMenuBar(menuBar) self.CreateStatusBar() self.Bind(wx.EVT_MENU, self.OnAbout, id=2) def OnQuit(self, event): self.Close() def OnAbout(self, event): AboutFrame().Show() class AboutFrame(wx.Frame): title = "About this program" def __init__(self): wx.Frame.__init__(self, 1, -1, self.title) #trying to set parent=1 (id of MainFrame()) if __name__ == '__main__': app = App(False) app.MainLoop() If I set AboutFrame's parent frame to None (on line 48) then the About frame is successfully created and displayed but it stays open when the main frame is closed. Is this the approach that I should be taking to create child frames of the main frame or should I be doing it differently, e.g. using the onClose event of the main frame to close any child frames (this way sounds very 'hackish'). If I am taking the correct approach, why is it not working?
[ "class AboutFrame(wx.Frame):\n\n title = \"About this program\"\n\n def __init__(self):\n wx.Frame.__init__(self, wx.GetApp().TopWindow, title=self.title)\n\n" ]
[ 10 ]
[]
[]
[ "python", "wxpython" ]
stackoverflow_0001185156_python_wxpython.txt
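A hedged alternative to the one-line answer above: pass the parent explicitly instead of looking it up through wx.GetApp(), which keeps AboutFrame reusable (names follow the question's code; same effect — the About window closes with MainFrame):

    class AboutFrame(wx.Frame):
        title = "About this program"
        def __init__(self, parent):
            wx.Frame.__init__(self, parent, title=self.title)

    # in MainFrame:
    def OnAbout(self, event):
        AboutFrame(self).Show()   # 'self' (MainFrame) becomes the parent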
Q: Which python tools for building a database-backed webapp I am completing my first database project which aims to build a simple discussion site. The answers which I got at Superuser suggest that Python is difficult to use in building a database webapp without any other tools. Which other tools would you use? A: Sorry, your question makes no sense. You say you can't use Django because you have to write your SQL queries yourself. Firstly, why do you have to? And secondly, Django certainly doesn't stop you. Even though you say you want to write your SQL queries yourself, you then ask what ORM is best. An ORM replaces the need to write SQL, that's the whole point. If you can't use Django for that reason, SQLAlchemy won't help. A: Many people would recommend SQLAlchemy. A: Your question is very strange. First, Django doesn't force you to use its SQL abstraction. Each part of Django can be used independently of the others. You can use Django together with any other SQL library. Second, if you need to build your own SQL queries, an ORM is the opposite of what you need. A: ORM is not the same as templating. Since you want to write your own sql queries and at the same time want to write clean python code, I would suggest you look at web.py. I have used web.py and I must say it's really simple and to the point. It has its own templating engine, but you can use a different one too. If you would like to use an ORM, you can use SQLAlchemy or SQLObject; they seem to be quite popular. A: There are many other options other than Django. However: You can make your own SQL queries in Django. Here is the documentation for extra(). extra() lets you make extra SQL calls on top of the ORM. If you want to make raw SQL queries, bypassing the ORM entirely, you can do so with django.db. See this article for examples. That said, other options aside from Django if you still want to use a framework: Turbogears Pylons Web2Py Zope3 Plone/Zope2 See this list for a listing of more frameworks. Now, if you don't want to use an ORM and just want to make SQL calls directly, Python also has the ability to interact with a database. See this page on the Python wiki.
Which python tools for building a database-backed webapp
I am completing my first database project which aims to build a simple discussion site. The answers which I got at Superuser suggest that Python is difficult to use in building a database webapp without any other tools. Which other tools would you use?
[ "Sorry, your question makes no sense.\n\nYou say you can't use Django because you have to write your SQL queries yourself. Firstly, why do you have to? And secondly, Django certainly doesn't stop you.\nEven though you say you want to write your SQL queries yourself, you then ask what ORM is best. An ORM replaces the need to write SQL, that's the whole point. If you can't use Django for that reason, SQLAlchemy won't help.\n\n", "Many people would recommend SQLAlchemy.\n", "Your question is very strange.\nFirst, Django doesn't force you to use its SQL abstraction. Each part of Django can be use idenpendently of the others. You can use Django together with any other SQL library.\nSecond, if you need to build your own SQL queries, an ORM is the opposite of what you need.\n", "ORM is not same as templating. Since you want to write your own sql queries and at the same time want to write clean python code, i would suggest you look at web.py. I have used web.py and i must say its really simple and to the point.It has its own templating engine, but you can use a different one too.\nIf you would like to use ORM you can use SQLAlchemy or SQLObject they seem to be quite popular.\n", "There are many other options other than Django. However:\nYou can make your own SQL queries in Django. Here is the documentation for extra(). extra() lets you make extra SQL calls on top of the ORM.\nIf you want to make raw SQL queries, bypassing the ORM entirely, you can do so with django.db. See this article for examples.\nThat said, other options aside from Django if you still want to use a framework:\n\nTurbogears\nPylons\nWeb2Py\nZope3\nPlone/Zop2\n\nSee this list for a listing of more frameworks.\nNow, if you don't want to use a ORM and just want o make SQL calls directly, Python also has the ability to interact with a database. See this page on the Python wiki.\n" ]
[ 5, 2, 2, 1, 1 ]
[]
[]
[ "cheetah", "orm", "python" ]
stackoverflow_0001185248_cheetah_orm_python.txt
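Since the last answer above mentions dropping to raw SQL via django.db, here is a minimal hedged sketch of that route; the table and column names are invented for illustration:

    from django.db import connection

    def question_counts_by_tag():
        cursor = connection.cursor()
        cursor.execute(
            "SELECT tag, COUNT(*) FROM question_tags GROUP BY tag")
        return cursor.fetchall()   # plain tuples, no ORM involved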
Q: memcache entities without ReferenceProperty I have a list of entities which I want to store in the memcache. The problem is that I have large Models referenced by their ReferenceProperty which are automatically also stored in the memcache. As a result I'm exceeding the size limit for objects stored in memcache. Is there any possibility to prevent the ReferenceProperties from loading the referenced Models while putting them in memcache? I tried something like def __getstate__(self): odict = self.__dict__.copy() odict['model'] = None return odict in the class I want to store in memcache, but that doesn't seem to do the trick. Any suggestions would be highly appreciated. Edit: I verified by adding a logging statement that the __getstate__ method is executed. A: For large entities, you might want to manually handle the loading of the related entities by storing the keys of the large entities as something other than a ReferenceProperty. That way you can choose when to load the large entity and when not to. Just use a long property to store ids or a string property to store keynames. A: odict = self.copy() del odict.model would probably be better than using __dict__ (unless __getstate__ needs to return a dict - I'm not familiar with it). Not sure if this solves your problem, though... You could implement __del__ in Model to test if it's freed. For me it looks like you still hold a reference somewhere. Also check out the pickle module - you would have to store everything under a single key, but it automatically protects you from multiple references to the same object (stores it only once). Sorry no link, mobile client ;) Good luck!
memcache entities without ReferenceProperty
I have a list of entities which I want to store in the memcache. The problem is that I have large Models referenced by their ReferenceProperty which are automatically also stored in the memcache. As a result I'm exceeding the size limit for objects stored in memcache. Is there any possibility to prevent the ReferenceProperties from loading the referenced Models while putting them in memcache? I tried something like def __getstate__(self): odict = self.__dict__.copy() odict['model'] = None return odict in the class I want to store in memcache, but that doesn't seem to do the trick. Any suggestions would be highly appreciated. Edit: I verified by adding a logging statement that the __getstate__ method is executed.
[ "For large entities, you might want to manually handle the loading of the related entities by storing the keys of the large entities as something other than a ReferenceProperty. That way you can choose when to load the large entity and when not to. Just use a long property store ids or a string property to store keynames.\n", "odict = self.copy()\ndel odict.model\n\nwould probably be better than using dict (unless getstate needs to return dict - i'm not familiar with it). Not sure if this solves Your problem, though... You could implement del in Model to test if it's freed. For me it looks like You still hold a reference somewhere.\nAlso check out the pickle module - you would have to store everything under a single key, but it automaticly protects You from multiple references to the same object (stores it only once). Sorry no link, mobile client ;)\nGood luck!\n" ]
[ 1, 0 ]
[]
[]
[ "google_app_engine", "memcached", "python" ]
stackoverflow_0001152690_google_app_engine_memcached_python.txt
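A hedged sketch of the key-instead-of-ReferenceProperty idea from the first answer above, using the old google.appengine.ext.db API; the model names are invented:

    from google.appengine.ext import db

    class BigModel(db.Model):
        payload = db.BlobProperty()

    class SmallEntity(db.Model):
        # an encoded key string instead of a ReferenceProperty, so
        # pickling SmallEntity for memcache never drags BigModel along
        big_key = db.StringProperty()

    def load_big(entity):
        return db.get(db.Key(entity.big_key))   # fetched only on demand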
Q: Error importing external library within Django template tag library So I'm attempting to write a Django reusable app that provides a method for displaying your Twitter feed on your page. I know well that it already exists 20 times. It's an academic exercise. :) Directory structure is pretty simple: myproject |__ __init__.py |__ manage.py |__ settings.py |__ myapp |__ __init__.py |__ admin.py |__ conf |__ __init__.py |__ appsettings.py |__ feedparser.py |__ models.py |__ templates |__ __init__.py |__ templatetags |__ __init__.py |__ twitterfeed.py |__ views.py |__ templates |__ base.html |__ urls.py When running the Django shell, the functions defined in twitterfeed.py work perfectly. I also believe that I have the template tags properly named and registered. As you can see, I use the excellent Universal Feed Parser. My problem is not within UFP itself, but in UFP's inability to be called while importing the template tag library. When I {% load twitterfeed %} in base.html, I get the following error: 'twitterfeed' is not a valid tag library: Could not load template library from django.templatetags.twitterfeed, No module named feedparser I import feedparser using the following statement: import re, datetime, time, myapp.feedparser The best I can tell, this error message is slightly deceiving. I think there's an ImportError going on when the template library is loaded, and this is Django's interpretation of it. Is there any way I can import feedparser.py within my reusable app without requiring users of the app to place feedparser somewhere in their PythonPath? Thanks! A: I solve this kind of problem (shipping libraries that are dependencies for my overall project) in the following way. First, I create an "ext" directory in the root of my project (in your case that would be myproject/ext). Then I place dependencies such as feedparser in that ext directory - myproject/ext/feedparser Finally, I change my manage.py script to insert the ext/ directory at the front of sys.path. This means both ./manage.py runserver and ./manage.py shell will pick up the correct path: # manage.py import os, sys sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'ext')) # ... rest of manage.py I find this works really well if you don't want to mess around with things like virtualenvs. When you deploy your project you have to make sure the path is correct as well - I usually solve this by adding the same sys.path.insert line to the start of my mod_wsgi app.wsgi file. A: This looks like one of those annoying relative path issues - solved in Python 2.6 and higher (where you can do from .. import feedparser etc) but often a bit tricky on older versions. One cheap and cheerful way to fix this could be just to move feedparser.py into your templatetags directory, as a sibling to twitterfeed.py
Error importing external library within Django template tag library
So I'm attempting to write a Django reusable app that provides a method for displaying your Twitter feed on your page. I know well that it already exists 20 times. It's an academic exercise. :) Directory structure is pretty simple: myproject |__ __init__.py |__ manage.py |__ settings.py |__ myapp |__ __init__.py |__ admin.py |__ conf |__ __init__.py |__ appsettings.py |__ feedparser.py |__ models.py |__ templates |__ __init__.py |__ templatetags |__ __init__.py |__ twitterfeed.py |__ views.py |__ templates |__ base.html |__ urls.py When running the Django shell, the functions defined in twitterfeed.py work perfectly. I also believe that I have the template tags properly named and registered. As you can see, I use the excellent Universal Feed Parser. My problem is not within UFP itself, but in UFP's inability to be called while importing the template tag library. When I {% load twitterfeed %} in base.html, I get the following error: 'twitterfeed' is not a valid tag library: Could not load template library from django.templatetags.twitterfeed, No module named feedparser I import feedparser using the following statement: import re, datetime, time, myapp.feedparser The best I can tell, this error message is slightly deceiving. I think there's an ImportError going on when the template library is loaded, and this is Django's interpretation of it. Is there any way I can import feedparser.py within my reusable app without requiring users of the app to place feedparser somewhere in their PythonPath? Thanks!
[ "I solve this kind of problem (shipping libraries that are dependencies for my overall project) in the following way. First, I create an \"ext\" directory in the root of my project (in your case that would be myproject/ext). Then I place dependencies such as feedparser in that ext directory - myproject/ext/feedparser\nFinally, I change my manage.py script to insert the ext/ directory at the front of sys.path. This means both ./manage.py runserver and ./manage.py shell will pick up the correct path:\n# manage.py\nimport os, sys\nsys.path.insert(0, os.path.join(os.path.dirname(__file__), 'ext'))\n# ... rest of manage.py\n\nI find this works really well if you don't want to mess around with things like virtualenvs. When you deploy your project you have to make sure the path is correct as well - I usually solve this by adding the same sys.path.insert line to the start of my mod_wsgi app.wsgi file.\n", "This looks like one of those annoying relative path issues - solved in Python 2.6 and higher (where you can do import ..feedparser etc) but often a bit tricky on older versions. One cheap and cheerful way to fix this could be just to move feedparser.py in to your templatetags directory, as a sibling to twitterfeed.py\n" ]
[ 5, 2 ]
[]
[]
[ "django", "feedparser", "python", "templatetags" ]
stackoverflow_0001185084_django_feedparser_python_templatetags.txt
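A hedged sketch of what the relative-import fix from the second answer above would look like inside myapp/templatetags/twitterfeed.py, assuming feedparser.py stays at myapp/feedparser.py (explicit relative imports need Python 2.5+):

    # myapp/templatetags/twitterfeed.py
    from .. import feedparser   # explicit relative import, one level up

    from django import template
    register = template.Library()

    def latest_entries(feed_url):
        # feedparser.parse() is UFP's documented entry point
        return feedparser.parse(feed_url).entries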
Q: merge sort implementation to sort by string length - python I've implemented what I believe to be a merge sort algorithm in python. I've never programmed in Python before, so I used several resources with commands that seemed foreign to me, to gain a better understanding. However, I've also never implemented merge sort in the first place, so I'm not sure if I've even implemented it correctly. Any guidance, tips, or corrections would be greatly appreciated. Here is my merge method: def merge(left, right): result = [] i, j = 0, 0 while(i < len(left) and j< len(right)): if(len(left[i]) <= len(right[j])): print(i) result.append(left[i]) i=i+1 else: result.append(right[j]) j=j+1 result += left[i:] result += right[j:] return result meanwhile, here is my mergesort method: def mergesort(list): if len(list) < 2: return list else: middle = len(list) / 2 left = mergesort(list[:middle]) right = mergesort(list[middle:]) return merge(left, right) Thanks for any possible help! :) A: Don't name variables "list". That's the name of Python's built-in list type, so using a variable by the same name is confusing. When you return from a conditional, you don't need to stick the rest of the function in an else block. def mergesort(list): if len(list) < 2: return list middle = len(list) / 2 left = mergesort(list[:middle]) right = mergesort(list[middle:]) return merge(left, right) Overall, it looks reasonable. Of course, for anything but an exercise, you should be using list.sort or sorted(). a = ["abc", "de", "f", "ghijkl"] print sorted(a, lambda a,b: cmp(len(a), len(b))) A: How about using the sorted() function? Like this: def len_cmp(x, y): return len(x) - len(y) my_strings = ["hello", "foo", "bar", "spam"] print sorted(my_strings, len_cmp)
merge sort implementation to sort by string length - python
I've implemented what I believe to be a merge sort algorithm in python. I've never programmed in Python before, so I used several resources with commands that seemed foreign to me, to gain a better understanding. However, I've also never implemented merge sort in the first place, so I'm not sure if I've even implemented it correctly. Any guidance, tips, or corrections would be greatly appreciated. Here is my merge method: def merge(left, right): result = [] i, j = 0, 0 while(i < len(left) and j< len(right)): if(len(left[i]) <= len(right[j])): print(i) result.append(left[i]) i=i+1 else: result.append(right[j]) j=j+1 result += left[i:] result += right[j:] return result meanwhile, here is my mergesort method: def mergesort(list): if len(list) < 2: return list else: middle = len(list) / 2 left = mergesort(list[:middle]) right = mergesort(list[middle:]) return merge(left, right) Thanks for any possible help! :)
[ "Don't name variables \"list\". That's the name of Python's array type, so using a variable by the same name is confusing.\nWhen you return from a conditional, you don't need to sitck the rest of the function in an else block.\ndef mergesort(list):\n if len(list) < 2:\n return list\n middle = len(list) / 2\n left = mergesort(list[:middle])\n right = mergesort(list[middle:])\n return merge(left, right)\n\nOverall, it looks reasonable.\nOf course, for anything but an exercise, you should be using list.sort or sorted().\na = [\"abc\", \"de\", \"f\", \"ghijkl\"]\nprint sorted(a, lambda a,b: cmp(len(a), len(b)))\n\n", "How about using the sorted() function? Like this:\ndef len_cmp(x, y):\n return len(x) - len(y)\n\nmy_strings = [\"hello\", \"foo\", \"bar\", \"spam\"]\nprint sorted(my_strings, len_cmp)\n\n" ]
[ 3, 2 ]
[]
[]
[ "algorithm", "mergesort", "python" ]
stackoverflow_0001185388_algorithm_mergesort_python.txt
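One addition the answers above stop short of: since Python 2.4 the same length-based sort can be expressed with a key function, which is both simpler and faster than a cmp callback (the output comment is mine):

    a = ["abc", "de", "f", "ghijkl"]
    print sorted(a, key=len)   # ['f', 'de', 'abc', 'ghijkl']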
Q: Using a caesarian cipher on a string of text in python? I'm trying to slowly knock out all of the intricacies of python. Basically, I'm looking for some way, in python, to take a string of characters and push them all over by 'x' characters. For example, inputting abcdefg will give me cdefghi (if x is 2). A: My first version: >>> key = 2 >>> msg = "abcdefg" >>> ''.join( map(lambda c: chr(ord('a') + (ord(c) - ord('a') + key)%26), msg) ) 'cdefghi' >>> msg = "uvwxyz" >>> ''.join( map(lambda c: chr(ord('a') + (ord(c) - ord('a') + key)%26), msg) ) 'wxyzab' (Of course it works as expected only if msg is lowercase...) edit: I definitely second David Raznick's answer: >>> import string >>> alphabet = "abcdefghijklmnopqrstuvwxyz" >>> key = 2 >>> tr = string.maketrans(alphabet, alphabet[key:] + alphabet[:key]) >>> "abcdefg".translate(tr) 'cdefghi' A: I think your best bet is to look at string.translate. You may have to use string.maketrans to make the mapping you like. A: I would do it this way (for conceptual simplicity): def encode(s): l = [ord(i) for i in s] return ''.join([chr(i + 2) for i in l]) Point being that you convert the letter to ASCII, add 2 to that code, convert it back, and "cast" it into a string (create a new string object). This also makes no conversions based on "case" (upper vs. lower). Potential optimizations/research areas: Use of StringIO module for large strings Apply this to Unicode (not sure how) A: This solution works for both lowercase and uppercase: from string import lowercase, uppercase def caesar(text, key): result = [] for c in text: if c in lowercase: idx = lowercase.index(c) idx = (idx + key) % 26 result.append(lowercase[idx]) elif c in uppercase: idx = uppercase.index(c) idx = (idx + key) % 26 result.append(uppercase[idx]) else: result.append(c) return "".join(result) Here is a test: >>> caesar("abcdefg", 2) 'cdefghi' >>> caesar("z", 1) 'a' A: Another version. Allows for definition of your own alphabet, and doesn't translate any other characters (such as punctuation). The ugly part here is the loop, which might cause performance problems. I'm not sure about python but appending strings like this is a big no-no in other languages like Java and C#. def rotate(data, n): alphabet = list("abcdefghijklmnopqrstuvwxyz") n = n % len(alphabet) target = alphabet[n:] + alphabet[:n] translation = dict(zip(alphabet, target)) result = "" for c in data: if translation.has_key(c): result += translation[c] else: result += c return result print rotate("foobar", 1) print rotate("foobar", 2) print rotate("foobar", -1) print rotate("foobar", -2) Result: gppcbs hqqdct ennazq dmmzyp The maketrans() solution suggested by others is the way to go here.
Using a caesarian cipher on a string of text in python?
I'm trying to slowly knock out all of the intricacies of python. Basically, I'm looking for some way, in python, to take a string of characters and push them all over by 'x' characters. For example, inputing abcdefg will give me cdefghi (if x is 2).
[ "My first version:\n>>> key = 2\n>>> msg = \"abcdefg\"\n>>> ''.join( map(lambda c: chr(ord('a') + (ord(c) - ord('a') + key)%26), msg) )\n'cdefghi'\n>>> msg = \"uvwxyz\"\n>>> ''.join( map(lambda c: chr(ord('a') + (ord(c) - ord('a') + key)%26), msg) )\n'wxyzab'\n\n(Of course it works as expected only if msg is lowercase...)\nedit: I definitely second David Raznick's answer:\n>>> import string\n>>> alphabet = \"abcdefghijklmnopqrstuvwxyz\"\n>>> key = 2\n>>> tr = string.maketrans(alphabet, alphabet[key:] + alphabet[:key])\n>>> \"abcdefg\".translate(tr)\n'cdefghi'\n\n", "I think your best bet is to look at string.translate. You may have to use make_trans to make the mapping you like.\n", "I would do it this way (for conceptual simplicity):\ndef encode(s):\n l = [ord(i) for i in s]\n return ''.join([chr(i + 2) for i in l])\n\nPoint being that you convert the letter to ASCII, add 2 to that code, convert it back, and \"cast\" it into a string (create a new string object). This also makes no conversions based on \"case\" (upper vs. lower).\nPotential optimizations/research areas:\n\nUse of StringIO module for large strings\nApply this to Unicode (not sure how)\n\n", "This solution works for both lowercase and uppercase:\nfrom string import lowercase, uppercase\n\ndef caesar(text, key):\n result = []\n for c in text:\n if c in lowercase:\n idx = lowercase.index(c)\n idx = (idx + key) % 26\n result.append(lowercase[idx])\n elif c in uppercase:\n idx = uppercase.index(c)\n idx = (idx + key) % 26\n result.append(uppercase[idx])\n else:\n result.append(c)\n return \"\".join(result)\n\nHere is a test:\n>>> caesar(\"abcdefg\", 2)\n'cdefghi'\n>>> caesar(\"z\", 1)\n'a'\n\n", "Another version. Allows for definition of your own alphabet, and doesn't translate any other characters (such as punctuation). The ugly part here is the loop, which might cause performance problems. I'm not sure about python but appending strings like this is a big no in other languages like Java and C#.\ndef rotate(data, n):\n alphabet = list(\"abcdefghijklmopqrstuvwxyz\")\n\n n = n % len(alphabet)\n target = alphabet[n:] + alphabet[:n]\n\n translation = dict(zip(alphabet, target))\n result = \"\"\n for c in data:\n if translation.has_key(c):\n result += translation[c]\n else:\n result += c\n\n return result\n\nprint rotate(\"foobar\", 1) \nprint rotate(\"foobar\", 2) \nprint rotate(\"foobar\", -1)\nprint rotate(\"foobar\", -2)\n\nResult:\ngppcbs\nhqqdct\nemmazq\ndllzyp\n\nThe make_trans() solution suggested by others is the way to go here.\n" ]
[ 9, 5, 3, 2, 1 ]
[]
[]
[ "encryption", "python" ]
stackoverflow_0001185775_encryption_python.txt
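A hedged extension of the maketrans approach above that shifts both cases with a single translation table (Python 2; assumes key is in range(26)):

    import string

    def caesar(text, key):
        lower, upper = string.ascii_lowercase, string.ascii_uppercase
        table = string.maketrans(
            lower + upper,
            lower[key:] + lower[:key] + upper[key:] + upper[:key])
        return text.translate(table)

    print caesar("Abc xyZ", 2)   # Cde zaB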
Q: How can I execute CGI files from PHP? I'm trying to make a web app that will manage my Mercurial repositories for me. I want it so that when I tell it to load repository X: Connect to a MySQL server and make sure X exists. Check if the user is allowed to access the repository. If above is true, get the location of X from a mysql server. Run a hgweb cgi script (python) containing the path of the repository. Here is the problem, I want to: take the hgweb script, modify it, and run it. But I do not want to: take the hgweb script, modify it, write it to a file and redirect there. I am using Apache to run the httpd process. A: You can run shell scripts from within PHP. There are various ways to do it, and complications with some hosts not providing the proper permissions, all of which are well-documented on php.net. That said, the simplest way is to simply enclose your command in backticks. So, to unzip a file, I could say: `unzip /path/to/file` So, if your Python script is such that it can be run from a command-line environment (or you could modify it to run that way), this would seem to be the preferred method. A: Ryan Ballantyne has the right answer posted (I upvoted it). The backtick operator is the way to execute a shell script. The simplest solution is probably to modify the hgweb script so that it doesn't "contain" the path to the repository, per se. Instead, pass it as a command-line argument. This means you don't have to worry about modifying and writing the hgweb script anywhere. All you'd have to do is: //do stuff to get location of repository from MySQL into variable $x //run shell script $res = `python hgweb.py $x`; A: As far as your question goes, no, you're not likely to get php to execute a modified script without writing it somewhere, whether that's a file on the disk, a virtual file mapped to ram, or something similar. It sounds like you might be trying to pound a railroad spike with a twig. If you're to the point where you're filtering access based on user permissions stored in MySQL, have you looked at existing HG solutions to make sure there isn't something more applicable than hgweb? It's really built for doing exactly one thing well, and this is a fair bit beyond its normal realm. I might suggest looking into apache's native authentication as a more convenient method for controlling access to repositories, then just serve the repo without modifying the script.
How can I execute CGI files from PHP?
I'm trying to make a web app that will manage my Mercurial repositories for me. I want it so that when I tell it to load repository X: Connect to a MySQL server and make sure X exists. Check if the user is allowed to access the repository. If above is true, get the location of X from a mysql server. Run a hgweb cgi script (python) containing the path of the repository. Here is the problem, I want to: take the hgweb script, modify it, and run it. But I do not want to: take the hgweb script, modify it, write it to a file and redirect there. I am using Apache to run the httpd process.
[ "You can run shell scripts from within PHP. There are various ways to do it, and complications with some hosts not providing the proper permissions, all of which are well-documented on php.net. That said, the simplest way is to simply enclose your command in backticks. So, to unzip a file, I could say:\n`unzip /path/to/file`\n\nSO, if your python script is such that it can be run from a command-line environment (or you could modify it so to run), this would seem to be the preferred method.\n", "Ryan Ballantyne has the right answer posted (I upvoted it). The backtick operator is the way to execute a shell script.\nThe simplest solution is probably to modify the hgweb script so that it doesn't \"contain\" the path to the repository, per se. Instead, pass it as a command-line argument. This means you don't have to worry about modifying and writing the hgweb script anywhere. All you'd have to do is:\n//do stuff to get location of repository from MySQL into variable $x\n//run shell script\n$res = `python hgweb.py $x`;\n\n", "As far as you question, no, you're not likely to get php to execute a modified script without writing it somewhere, whether that's a file on the disk, a virtual file mapped to ram, or something similar.\nIt sounds like you might be trying to pound a railroad spike with a twig. If you're to the point where you're filtering access based on user permissions stored in MySQL, have you looked at existing HG solutions to make sure there isn't something more applicable than hgweb? It's really built for doing exactly one thing well, and this is a fair bit beyond it's normal realm.\nI might suggest looking into apache's native authentication as a more convenient method for controlling access to repositories, then just serve the repo without modifying the script.\n" ]
[ 2, 2, 0 ]
[]
[]
[ "cgi", "mercurial", "php", "python" ]
stackoverflow_0001185867_cgi_mercurial_php_python.txt
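A hedged sketch of the Python side the second answer above implies — a small hgweb wrapper that takes the repository path as a command-line argument instead of hard-coding it. It is modelled on the stock hgweb.cgi shipped with Mercurial of that era and is untested:

    #!/usr/bin/env python
    import sys
    from mercurial.hgweb import hgweb, wsgicgi

    repo_path = sys.argv[1]          # passed in by the PHP wrapper
    application = hgweb(repo_path)
    wsgicgi.launch(application)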
Q: Can I use C++ features while extending Python? The Python manual says that you can create modules for Python in both C and C++. Can you take advantage of things like classes and templates when using C++? Wouldn't it create incompatibilities with the rest of the libraries and with the interpreter? A: It doesn't matter whether your implementation of the hook functions is implemented in C or in C++. In fact, I've already seen some Python extensions which make active use of C++ templates and even the Boost library. No problem. :-) A: The boost folks have a nice automated way to do the wrapping of C++ code for use by python. It is called: Boost.Python It deals with some of the constructs of C++ better than SWIG, particularly template metaprogramming. A: What you're interested in is a program called SWIG. It will generate Python wrappers and interfaces for C++ code. I use it with templates, inheritance, namespaces, etc. and it works well. A: You should be able to use all of the features of the C++ language. The Extending Python Documentation (2.6.2) says that you may use C++, but mentions the followings caveats: It is possible to write extension modules in C++. Some restrictions apply. If the main program (the Python interpreter) is compiled and linked by the C compiler, global or static objects with constructors cannot be used. This is not a problem if the main program is linked by the C++ compiler. Functions that will be called by the Python interpreter (in particular, module initialization functions) have to be declared using extern "C". It is unnecessary to enclose the Python header files in extern "C" {...} — they use this form already if the symbol __cplusplus is defined (all recent C++ compilers define this symbol). The first restriction, "global or static objects with constructors cannot be used", has to do with the way most C++ compiler initialize objects with this type of storage duration. For example, consider the following code: class Foo { Foo() { } }; static Foo f; int main(int argc, char** argv) {} The compiler has to emit special code so that the 'Foo' constructor gets invoked for 'f' before main gets executed. If you have objects with static storage duration in your Python extension and the Python interpreter is not compiled and linked for C++, then this special initialization code will not be created. The second restriction, "Functions that will be called by the Python interpreter (in particular, module initialization functions) have to be declared using extern "C"", has to do with C++ name mangling. Most C++ compilers mangle their names so that they can use the same linkers provided for C toolchains. For example say you had: void a_function_python_calls(void* foo); the C++ compiler may convert references to the name 'a_function_python_calls' to something like 'a_function_python_calls@1vga'. In which case you may get an unresolved external when trying to link with the Python library.
Can I use C++ features while extending Python?
The Python manual says that you can create modules for Python in both C and C++. Can you take advantage of things like classes and templates when using C++? Wouldn't it create incompatibilities with the rest of the libraries and with the interpreter?
[ "It doesn't matter whether your implementation of the hook functions is implemented in C or in C++. In fact, I've already seen some Python extensions which make active use of C++ templates and even the Boost library. No problem. :-)\n", "The boost folks have a nice automated way to do the wrapping of C++ code for use by python.\nIt is called: Boost.Python\nIt deals with some of the constructs of C++ better than SWIG, particularly template metaprogramming.\n", "What you're interested in is a program called SWIG. It will generate Python wrappers and interfaces for C++ code. I use it with templates, inheritance, namespaces, etc. and it works well.\n", "You should be able to use all of the features of the C++ language. The Extending Python Documentation (2.6.2) says that you may use C++, but mentions the followings caveats:\n\nIt is possible to write extension\n modules in C++. Some restrictions\n apply. If the main program (the Python\n interpreter) is compiled and linked by\n the C compiler, global or static\n objects with constructors cannot be\n used. This is not a problem if the\n main program is linked by the C++\n compiler. Functions that will be\n called by the Python interpreter (in\n particular, module initialization\n functions) have to be declared using\n extern \"C\". It is unnecessary to\n enclose the Python header files in\n extern \"C\" {...} — they use this form\n already if the symbol __cplusplus is\n defined (all recent C++ compilers\n define this symbol).\n\nThe first restriction, \"global or static objects with constructors cannot be used\", has to do with the way most C++ compiler initialize objects with this type of storage duration. For example, consider the following code:\nclass Foo { Foo() { } };\n\nstatic Foo f;\n\nint main(int argc, char** argv) {}\n\nThe compiler has to emit special code so that the 'Foo' constructor gets invoked for 'f' before main gets executed. If you have objects with static storage duration in your Python extension and the Python interpreter is not compiled and linked for C++, then this special initialization code will not be created.\nThe second restriction, \"Functions that will be called by the Python interpreter (in particular, module initialization functions) have to be declared using extern \"C\"\", has to do with C++ name mangling. Most C++ compilers mangle their names so that they can use the same linkers provided for C toolchains. For example say you had:\nvoid a_function_python_calls(void* foo);\n\nthe C++ compiler may convert references to the name 'a_function_python_calls' to something like 'a_function_python_calls@1vga'. In which case you may get an unresolved external when trying to link with the Python library.\n" ]
[ 9, 3, 2, 1 ]
[]
[]
[ "c", "c++", "python", "python_c_api", "python_c_extension" ]
stackoverflow_0001185878_c_c++_python_python_c_api_python_c_extension.txt
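To make the build step concrete, here is a hypothetical distutils script for compiling a C++ source into an extension module. The names "mymodule" and "mymodule.cpp" are invented for this example, and the C++ file itself must still declare its init function extern "C" as the last answer explains.

```python
# Hypothetical setup.py: distutils selects the C++ compiler automatically
# for .cpp sources; language="c++" just makes the intent explicit.
from distutils.core import setup, Extension

ext = Extension(
    "mymodule",                # import name of the built module
    sources=["mymodule.cpp"],  # C++ source with an extern "C" initmymodule()
    language="c++",
)

setup(name="mymodule", version="0.1", ext_modules=[ext])
```

Building then amounts to running `python setup.py build_ext --inplace`.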
Q: Spoofing the origination IP address of an HTTP request This only needs to work on a single subnet and is not for malicious use. I have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools provide this functionality, but I want to build it into my own. The tool uses Python's urllib2 for transport. Is it possible to send HTTP requests with spoofed IP addresses for the packets making up the request? A: This is a misunderstanding of HTTP. The HTTP protocol is based on top of TCP. The TCP protocol relies on a 3-way handshake to initialize requests. Needless to say, if you spoof your originating IP address, you will never get past the synchronization stage and no HTTP information will be sent (the server can't send it to a legal host). If you need to test an IP load balancer, this is not the way to do it. A: Quick note, as I just learned this yesterday: I think you've implied you know this already, but any responses to an HTTP request go to the IP address that shows up in the header. So if you are wanting to see those responses, you need to have control of the router and have it set up so that the spoofed IPs are all routed back to the IP you are using to view the responses. A: You want to set the source address used for the connection. Googling "urllib2 source address" gives http://bugs.python.org/file9988/urllib2_util.py. I haven't tried it. The system you're running on needs to be configured with the IPs you're testing from. A: You could just use IP aliasing on a Linux box and set up as many IP addresses as you want. The catch is that you can't predict what IP will get stamped in the IP header unless you set it to another network and set an explicit route for that network. i.e. - current client address on eth0 = 192.168.1.10/24 server-side: ifconfig eth0:1 172.16.1.1 netmask 255.255.255.0 client-side: ifconfig eth0:1 172.16.1.2 netmask 255.255.255.0 route add -net 172.16.1.0/24 gw 172.16.1.1 metric 0 Repeat for as many subnets as you want. Restart Apache to set listeners on all the new alias interfaces and you're off and running. A: I suggest seeing if you can configure your load balancer to make its decision based on the X-Forwarded-For header, rather than the source IP of the packet containing the HTTP request. I know that most of the significant commercial load balancers have this capability. If you can't do that, then I suggest that you probably need to configure a Linux box with a whole heap of secondary IP's - don't bother configuring static routes on the LB, just make your Linux box the default gateway of the LB device.
Spoofing the origination IP address of an HTTP request
This only needs to work on a single subnet and is not for malicious use. I have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools provide this functionality, but I want to build it into my own. The tool uses Python's urllib2 for transport. Is it possible to send HTTP requests with spoofed IP addresses for the packets making up the request?
[ "This is a misunderstanding of HTTP. The HTTP protocol is based on top of TCP. The TCP protocol relies on a 3 way handshake to initialize requests.\n\nNeedless to say, if you spoof your originating IP address, you will never get past the synchronization stage and no HTTP information will be sent (the server can't send it to a legal host).\nIf you need to test an IP load balancer, this is not the way to do it.\n", "Quick note, as I just learned this yesterday:\nI think you've implied you know this already, but any responses to an HTTP request go to the IP address that shows up in the header. So if you are wanting to see those responses, you need to have control of the router and have it set up so that the spoofed IPs are all routed back to the IP you are using to view the responses.\n", "You want to set the source address used for the connection. Googling \"urllib2 source address\" gives http://bugs.python.org/file9988/urllib2_util.py. I havn't tried it.\nThe system you're running on needs to be configured with the IPs you're testing from.\n", "You could just use IP aliasing on a Linux box and set up as many IP addresses as you want. The catch is that you can't predict what IP will get stamped in the the IP header unless you set it to another network and set an explicit route for that network. i.e. -\ncurrent client address on eth0 = 192.168.1.10/24\n\nserver-side:\nifconfig eth0:1 172.16.1.1 netmask 255.255.255.0\n\nclient-side:\nifconfig eth0:1 172.16.1.2 netmask 255.255.255.0 \nroute add -net 172.16.1.0/24 gw 172.16.1.1 metric 0\n\nRepeat for as many subnets as you want. Restart apache to set listeners on all the new alias interfaces and you're off and running.\n", "I suggest seeing if you can configure your load balancer to make it's decision based on the X-Forwarded-For header, rather than the source IP of the packet containing the HTTP request. I know that most of the significant commercial load balancers have this capability.\nIf you can't do that, then I suggest that you probably need to configure a linux box with a whole heap of secondary IP's - don't bother configuring static routes on the LB, just make your linux box the default gateway of the LB device.\n" ]
[ 50, 7, 5, 1, 1 ]
[]
[]
[ "http", "networking", "python", "sockets", "urllib2" ]
stackoverflow_0001180878_http_networking_python_sockets_urllib2.txt
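The idea behind the linked urllib2_util.py can be sketched as follows: bind the outgoing socket to one of the locally configured alias addresses before connecting. This is Python 2 code to match the question, and 192.168.1.100 is a placeholder for an address that must already exist on one of your interfaces (for instance via the ifconfig aliasing shown above).

```python
# Sketch (Python 2): force urllib2 requests out of a specific local IP by
# binding the socket in a custom HTTPConnection before connecting.
import httplib
import socket
import urllib2

SOURCE_IP = "192.168.1.100"  # placeholder: an aliased address on this host

class BoundHTTPConnection(httplib.HTTPConnection):
    def connect(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind((SOURCE_IP, 0))             # choose the source address
        self.sock.connect((self.host, self.port))  # normal TCP handshake

class BoundHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        return self.do_open(BoundHTTPConnection, req)

opener = urllib2.build_opener(BoundHTTPHandler)
print opener.open("http://example.com/").read()[:200]
```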
Q: Python: is os.read() / os.write() on an os.pipe() threadsafe? Consider: pipe_read, pipe_write = os.pipe() Now, I would like to know two things: (1) I have two threads. If I guarantee that only one is reading os.read(pipe_read,n) and the other is only writing os.write(pipe_write), will I have any problem, even if the two threads do it simultaneously? Will I get all data that was written in the correct order? What happens if they do it simultaneously? Is it possible that a single write is read in pieces, like?: Thread 1: os.write(pipe_write, '1234567') Thread 2: os.read(pipe_read,big_number) --> '123' Thread 2: os.read(pipe_read,big_number) --> '4567' Or -- again, consider simultaneity -- will a single os.write(some_string) always return entirely by a single os.read(pipe_read, very_big_number)? (2) Consider more than one thread writing to the pipe_write end of the pipe using logging.handlers.FileHandler() -- I've read that the logging module is threadsafe. Does this mean that I can do this without losing data? I think I won't be able to control the order of the data in the pipe; but this is not a requirement. Requirements: all data written by some threads on the write end must come out at the read end a string written by a single logger.info(), logger.error(), ... has to stay in one piece. Are these reqs fulfilled? Thank you in advance, Jan-Philip Gehrcke A: os.read and os.write on the two fds returned from os.pipe is threadsafe, but you appear to demand more than that. Sub (1), yes, there is no "atomicity" guarantee for single reads or writes -- the scenario you depict (a single short write ends up producing two reads) is entirely possible. (In general, os.whatever is a thin wrapper on operating system functionality, and it's up to the OS to ensure, or fail to ensure, the kind of functionality you require; in this case, the Posix standard doesn't require the OS to ensure this kind of "atomicity"). You're guaranteed to get all data that was written, and in the correct order, but that's it. A single write of a large piece of data might stall once it's filled the OS-supplied buffer and only proceed once some other thread has read some of the initial data (beware deadlocks, of course!), etc, etc. Sub (2), yes, the logging module is threadsafe AND "atomic" in that data produced by a single call to logging.info, logging.warn, logging.error, etc, "stays in one piece" in terms of calls to the underlying handler (however if that handler in turn uses non-atomic means such as os.write, it may still e.g. stall in the kernel until the underlying buffer gets unclogged, etc, etc, as above).
Python: is os.read() / os.write() on an os.pipe() threadsafe?
Consider: pipe_read, pipe_write = os.pipe() Now, I would like to know two things: (1) I have two threads. If I guarantee that only one is reading os.read(pipe_read,n) and the other is only writing os.write(pipe_write), will I have any problem, even if the two threads do it simultaneously? Will I get all data that was written in the correct order? What happens if they do it simultaneously? Is it possible that a single write is read in pieces, like?: Thread 1: os.write(pipe_write, '1234567') Thread 2: os.read(pipe_read,big_number) --> '123' Thread 2: os.read(pipe_read,big_number) --> '4567' Or -- again, consider simultaneity -- will a single os.write(some_string) always return entirely by a single os.read(pipe_read, very_big_number)? (2) Consider more than one thread writing to the pipe_write end of the pipe using logging.handlers.FileHandler() -- I've read that the logging module is threadsafe. Does this mean that I can do this without losing data? I think I won't be able to control the order of the data in the pipe; but this is not a requirement. Requirements: all data written by some threads on the write end must come out at the read end a string written by a single logger.info(), logger.error(), ... has to stay in one piece. Are these reqs fulfilled? Thank you in advance, Jan-Philip Gehrcke
[ "os.read and os.write on the two fds returned from os.pipe is threadsafe, but you appear to demand more than that. Sub (1), yes, there is no \"atomicity\" guarantee for sinle reads or writes -- the scenario you depict (a single short write ends up producing two reads) is entirely possible. (In general, os.whatever is a thin wrapper on operating system functionality, and it's up to the OS to ensure, or fail to ensure, the kind of functionality you require; in this case, the Posix standard doesn't require the OS to ensure this kind of \"atomicity\"). You're guaranteed to get all data that was written, and in the correct order, but that's it. A single write of a large piece of data might stall once it's filled the OS-supplied buffer and only proceed once some other thread has read some of the initial data (beware deadlocks, of course!), etc, etc.\nSub (2), yes, the logging module is threadsafe AND \"atomic\" in that data produced by a single call to logging.info, logging.warn, logging.error, etc, \"stays in one piece\" in terms of calls to the underlying handler (however if that handler in turn uses non-atomic means such as os.write, it may still e.g. stall in the kernel until the underlying buffer gets unclogged, etc, etc, as above).\n" ]
[ 8 ]
[]
[]
[ "multithreading", "pipe", "python", "thread_safety" ]
stackoverflow_0001185660_multithreading_pipe_python_thread_safety.txt
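A small experiment (Python 2; Linux pipe semantics assumed) that makes the first answer's point observable: one large os.write() is normally consumed by several os.read() calls once it exceeds the kernel pipe buffer, even though every read asks for the full size.

```python
# Demo: a single 1 MiB write usually arrives as multiple reads, because the
# kernel hands back whatever is buffered (often <= 64 KiB) per read call.
import os
import threading

pipe_read, pipe_write = os.pipe()
payload = 'x' * (1 << 20)  # 1 MiB, far beyond a typical pipe buffer

def writer():
    os.write(pipe_write, payload)  # one write call on the writer thread
    os.close(pipe_write)

t = threading.Thread(target=writer)
t.start()

reads = 0
total = 0
while True:
    chunk = os.read(pipe_read, len(payload))  # ask for everything at once
    if not chunk:
        break  # writer closed its end
    reads += 1
    total += len(chunk)

t.join()
print 'received %d bytes in %d reads' % (total, reads)  # reads is usually > 1
```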
Q: Customizing modelformset fields in Django I'd like to use the following form class in a modelformset. It takes a maps parameter and customizes the form fields accordingly. class MyModelForm(forms.ModelForm): def __init__(self, maps, *args, **kwargs): super(MyModelForm, self).__init__(*args, **kwargs) #customize fields here class Meta: model = MyModel My question is, how do I use this form in a modelformset? When I pass it using the form parameter like below, I get an exception. MyFormSet = modelformset_factory(MyModel, form=MyModelForm(maps)) I suspect it wants the form class only, if so how do I pass the maps parameter to the form? A: Keep in mind that Django uses class definition as a sort of DSL to define various things. As such, instantiating at places where it expects the class object will break things. One approach is to create your own form factory. Something like: def mymodelform_factory(maps): class MyModelForm(forms.ModelForm): def __init__(self, *args, **kwargs): super(MyModelForm, self).__init__(*args, **kwargs) #use maps to customize form declaration here class Meta: model = MyModel return MyModelForm Then you can do: MyFormSet = modelformset_factory(MyModel, form=mymodelform_factory(maps))
Customizing modelformset fields in Django
I'd like to use the following form class in a modelformset. It takes a maps parameter and customizes the form fields accordingly. class MyModelForm(forms.ModelForm): def __init__(self, maps, *args, **kwargs): super(MyModelForm, self).__init__(*args, **kwargs) #customize fields here class Meta: model = MyModel My question is, how do I use this form in a modelformset? When I pass it using the form parameter like below, I get an exception. MyFormSet = modelformset_factory(MyModel, form=MyModelForm(maps)) I suspect it wants the form class only, if so how do I pass the maps parameter to the form?
[ "Keep in mind that Django uses class definition as a sort of DSL to define various things. As such, instantiating at places where it expects the class object will break things.\nOne approach is to create your own form factory. Something like:\n def mymodelform_factory(maps):\n class MyModelForm(forms.ModelForm):\n def __init__(self, *args, **kwargs):\n super(MyModelForm, self).__init__(*args, **kwargs)\n #use maps to customize form delcaration here\n class Meta:\n model = myModel\n return MyModelForm\n\nThen you can do:\n MyFormSet = modelformset_factory(MyModel, form=mymodelform_factory(maps))\n\n" ]
[ 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001186753_django_python.txt
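A hypothetical view tying the factory together with the formset. MyModel, mymodelform_factory, the import paths and the template name are all stand-ins for whatever your project actually defines.

```python
# Sketch of using the factory from the answer inside a Django view
# (Django ~1.0-era idioms, matching the question).
from django.forms.models import modelformset_factory
from django.shortcuts import render_to_response

from myapp.models import MyModel              # placeholder import path
from myapp.forms import mymodelform_factory   # the factory from the answer

def edit_mymodels(request, maps):
    MyFormSet = modelformset_factory(MyModel, form=mymodelform_factory(maps))
    if request.method == 'POST':
        formset = MyFormSet(request.POST)
        if formset.is_valid():
            formset.save()
    else:
        formset = MyFormSet()
    return render_to_response('edit.html', {'formset': formset})
```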
Q: Real world guide on using and/or setting up REST web services? I've only used XML RPC and I haven't really delved into SOAP but I'm trying to find a good comprehensive guide, with real world examples or even a walkthrough of some minimal REST application. I'm most comfortable with Python/PHP. A: I like the examples in the Richardson & Ruby book, "RESTful Web Services" from O'Reilly. A: There is a good example with the Google App Engine Documentation. http://code.google.com/appengine/articles/rpc.html. It also talks you through some security aspects of doing REST A: Here are a few links: http://www.infoq.com/articles/webber-rest-workflow http://microformats.org/wiki/rest/urls http://blog.feedly.com/2009/05/06/best-practices-for-building-json-rest-web-services/ http://barelyenough.org/blog/2008/05/versioning-rest-web-services/ http://bitworking.org/news/restful_json/ (I should note, that the last one uses relative url's - a practise I don't like. But the rest of the article is very good, so I linked it anyway.)
Real world guide on using and/or setting up REST web services?
I've only used XML RPC and I haven't really delved into SOAP but I'm trying to find a good comprehensive guide, with real world examples or even a walkthrough of some minimal REST application. I'm most comfortable with Python/PHP.
[ "I like the examples in the Richardson & Ruby book, \"RESTful Web Services\" from O'Reilly.\n", "There is a good example with the Google App Engine Documentation. http://code.google.com/appengine/articles/rpc.html. It also talks you through some security aspects of doing REST\n", "Here are a few links:\n\nhttp://www.infoq.com/articles/webber-rest-workflow\nhttp://microformats.org/wiki/rest/urls\nhttp://blog.feedly.com/2009/05/06/best-practices-for-building-json-rest-web-services/\nhttp://barelyenough.org/blog/2008/05/versioning-rest-web-services/\nhttp://bitworking.org/news/restful_json/\n\n(I should note, that the last one uses relative url's - a practise I don't like. But the rest of the article is very good, so I linked it anyway.)\n" ]
[ 1, 1, 1 ]
[]
[]
[ "php", "python", "rest", "soap", "xml" ]
stackoverflow_0001186839_php_python_rest_soap_xml.txt
Q: Python SOAPpy Errors Below is my Python code: Service part class Test: def hello(): return "Hello World" Server Part import SOAPpy from first_SOAP import * host = "127.0.0.1" port = 5551 SOAPpy.Config.debug = 1 server = SOAPpy.SOAPServer((host, port)) server.registerKWFunction(Test.hello) print "Server Runing" server.serve_forever( Client part import SOAPpy SOAPpy.Config.debug = 1 server = SOAPpy.SOAPProxy("http://127.0.0.1:5551/") print server.Test.hello() This the error that I'm getting: *** Outgoing HTTP headers ********************************************** POST / HTTP/1.0 Host: 127.0.0.1:5551 User-agent: SOAPpy 0.12.0 (http://pywebsvcs.sf.net) Content-type: text/xml; charset="UTF-8" Content-length: 350 SOAPAction: "Test.hello" ************************************************************************ *** Outgoing SOAP ****************************************************** <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" > <SOAP-ENV:Body> <Test.hello SOAP-ENC:root="1"> </Test.hello> </SOAP-ENV:Body> </SOAP-ENV:Envelope> ************************************************************************ code= 500 msg= Internal Server Error headers= Server: <a href="http://pywebsvcs.sf.net">SOAPpy 0.12.0</a> (Python 2.5.2) Date: Mon, 27 Jul 2009 07:25:40 GMT Content-type: text/xml; charset="UTF-8" Content-length: 674 content-type= text/xml; charset="UTF-8" data= <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/1999/XMLSchema" > <SOAP-ENV:Body> <SOAP-ENV:Fault SOAP-ENC:root="1"> <faultcode>SOAP-ENV:Client</faultcode> <faultstring>Method Not Found</faultstring> <detail xsi:type="xsd:string">Test.hello : &lt;type 'exceptions.KeyError'&gt; None &lt;traceback object at 0x9fbcb44&gt;</detail> </SOAP-ENV:Fault> </SOAP-ENV:Body> </SOAP-ENV:Envelope> *** Incoming HTTP headers ********************************************** HTTP/1.? 
500 Internal Server Error Server: <a href="http://pywebsvcs.sf.net">SOAPpy 0.12.0</a> (Python 2.5.2) Date: Mon, 27 Jul 2009 07:25:40 GMT Content-type: text/xml; charset="UTF-8" Content-length: 674 ************************************************************************ *** Incoming SOAP ****************************************************** <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/1999/XMLSchema" > <SOAP-ENV:Body> <SOAP-ENV:Fault SOAP-ENC:root="1"> <faultcode>SOAP-ENV:Client</faultcode> <faultstring>Method Not Found</faultstring> <detail xsi:type="xsd:string">Test.hello : &lt;type 'exceptions.KeyError'&gt; None &lt;traceback object at 0x9fbcb44&gt;</detail> </SOAP-ENV:Fault> </SOAP-ENV:Body> </SOAP-ENV:Envelope> ************************************************************************ <Fault SOAP-ENV:Client: Method Not Found: Test.hello : <type 'exceptions.KeyError'> None <traceback object at 0x9fbcb44>> Traceback (most recent call last): File "/home/rajaneesh/workspace/raju/src/call.py", line 4, in <module> print server.Test.hello() File "/var/lib/python-support/python2.5/SOAPpy/Client.py", line 470, in __call__ return self.__r_call(*args, **kw) File "/var/lib/python-support/python2.5/SOAPpy/Client.py", line 492, in __r_call self.__hd, self.__ma) File "/var/lib/python-support/python2.5/SOAPpy/Client.py", line 406, in __call raise p SOAPpy.Types.faultType: <Fault SOAP-ENV:Client: Method Not Found: Test.hello : <type 'exceptions.KeyError'> None <traceback object at 0x9fbcb44>> A: Do you really want to make test() a class method? I suggest you change your code like this. class Test: def hello(self): return "Hello World" Then you must create an instance of the Test class and register: server.registerObject(Test()) Then the client can access the hello() method like this: print server.hello()
Python SOAPpy Errors
Below is my Python code: Service part class Test: def hello(): return "Hello World" Server Part import SOAPpy from first_SOAP import * host = "127.0.0.1" port = 5551 SOAPpy.Config.debug = 1 server = SOAPpy.SOAPServer((host, port)) server.registerKWFunction(Test.hello) print "Server Runing" server.serve_forever( Client part import SOAPpy SOAPpy.Config.debug = 1 server = SOAPpy.SOAPProxy("http://127.0.0.1:5551/") print server.Test.hello() This the error that I'm getting: *** Outgoing HTTP headers ********************************************** POST / HTTP/1.0 Host: 127.0.0.1:5551 User-agent: SOAPpy 0.12.0 (http://pywebsvcs.sf.net) Content-type: text/xml; charset="UTF-8" Content-length: 350 SOAPAction: "Test.hello" ************************************************************************ *** Outgoing SOAP ****************************************************** <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" > <SOAP-ENV:Body> <Test.hello SOAP-ENC:root="1"> </Test.hello> </SOAP-ENV:Body> </SOAP-ENV:Envelope> ************************************************************************ code= 500 msg= Internal Server Error headers= Server: <a href="http://pywebsvcs.sf.net">SOAPpy 0.12.0</a> (Python 2.5.2) Date: Mon, 27 Jul 2009 07:25:40 GMT Content-type: text/xml; charset="UTF-8" Content-length: 674 content-type= text/xml; charset="UTF-8" data= <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/1999/XMLSchema" > <SOAP-ENV:Body> <SOAP-ENV:Fault SOAP-ENC:root="1"> <faultcode>SOAP-ENV:Client</faultcode> <faultstring>Method Not Found</faultstring> <detail xsi:type="xsd:string">Test.hello : &lt;type 'exceptions.KeyError'&gt; None &lt;traceback object at 0x9fbcb44&gt;</detail> </SOAP-ENV:Fault> </SOAP-ENV:Body> </SOAP-ENV:Envelope> *** Incoming HTTP headers ********************************************** HTTP/1.? 
500 Internal Server Error Server: <a href="http://pywebsvcs.sf.net">SOAPpy 0.12.0</a> (Python 2.5.2) Date: Mon, 27 Jul 2009 07:25:40 GMT Content-type: text/xml; charset="UTF-8" Content-length: 674 ************************************************************************ *** Incoming SOAP ****************************************************** <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/1999/XMLSchema" > <SOAP-ENV:Body> <SOAP-ENV:Fault SOAP-ENC:root="1"> <faultcode>SOAP-ENV:Client</faultcode> <faultstring>Method Not Found</faultstring> <detail xsi:type="xsd:string">Test.hello : &lt;type 'exceptions.KeyError'&gt; None &lt;traceback object at 0x9fbcb44&gt;</detail> </SOAP-ENV:Fault> </SOAP-ENV:Body> </SOAP-ENV:Envelope> ************************************************************************ <Fault SOAP-ENV:Client: Method Not Found: Test.hello : <type 'exceptions.KeyError'> None <traceback object at 0x9fbcb44>> Traceback (most recent call last): File "/home/rajaneesh/workspace/raju/src/call.py", line 4, in <module> print server.Test.hello() File "/var/lib/python-support/python2.5/SOAPpy/Client.py", line 470, in __call__ return self.__r_call(*args, **kw) File "/var/lib/python-support/python2.5/SOAPpy/Client.py", line 492, in __r_call self.__hd, self.__ma) File "/var/lib/python-support/python2.5/SOAPpy/Client.py", line 406, in __call raise p SOAPpy.Types.faultType: <Fault SOAP-ENV:Client: Method Not Found: Test.hello : <type 'exceptions.KeyError'> None <traceback object at 0x9fbcb44>>
[ "Do you really want to make test() a class method? I suggest you change your code like this.\nclass Test:\n def hello(self):\n return \"Hello World\"\n\nThen you must create an instance of the Test class and register:\nserver.registerObject(Test())\n\nThen the client can access the hello() method like this:\nprint server.hello()\n\n" ]
[ 0 ]
[]
[]
[ "python", "soappy" ]
stackoverflow_0001186922_python_soappy.txt
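Putting the answer's fix into a complete pair of scripts might look like this (SOAPpy and Python 2, matching the question). The only changes from the original code are the self parameter, registerObject(), and calling hello() directly on the proxy.

```python
# server.py -- corrected per the answer
import SOAPpy

class Test:
    def hello(self):           # instance method: note the self argument
        return "Hello World"

server = SOAPpy.SOAPServer(("127.0.0.1", 5551))
server.registerObject(Test())  # register an instance, not the class
print "Server running"
server.serve_forever()
```

And the corresponding client:

```python
# client.py -- call the method directly on the proxy
import SOAPpy

proxy = SOAPpy.SOAPProxy("http://127.0.0.1:5551/")
print proxy.hello()  # should print: Hello World
```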
Q: How to install a module as an egg under IronPython? Maybe it is a stupid question, but I can't use Python eggs with IronPython. I would like to test one module that I've developed with IronPython 2.0.2. This module is pure Python. It works fine with Python 2.6 and is installed as a Python egg thanks to setuptools. I thought that the process for installing my module under IronPython was very similar, but unfortunately it doesn't work. I can't install setuptools-0.6c9 with IronPython (IronPython crashes). I've tried to manually copy my egg under IronPython's site-packages and it doesn't work either. I've also tried to include the Python 2.6 site-packages in the IronPython path, but it can't load my module. I've made some tests with other modules and it seems that IronPython cannot load eggs. Did I miss something? A: AFAIK it's still not possible - it's work in progress. See this post, for example. IronPython's main strength is in integration with the .NET ecosystem - it's not a drop-in replacement for CPython. See this post for some other limitations of IronPython.
How to install a module as an egg under IronPython?
Maybe it is a stupid question, but I can't use Python eggs with IronPython. I would like to test one module that I've developed with IronPython 2.0.2. This module is pure Python. It works fine with Python 2.6 and is installed as a Python egg thanks to setuptools. I thought that the process for installing my module under IronPython was very similar, but unfortunately it doesn't work. I can't install setuptools-0.6c9 with IronPython (IronPython crashes). I've tried to manually copy my egg under IronPython's site-packages and it doesn't work either. I've also tried to include the Python 2.6 site-packages in the IronPython path, but it can't load my module. I've made some tests with other modules and it seems that IronPython cannot load eggs. Did I miss something?
[ "AFAIK it's still not possible - it's work in progress. See this post, for example. IronPython's main strength is in integration with the .NET ecosystem - it's not a drop-in replacement for CPython. See this post for some other limitations of IronPython.\n" ]
[ 1 ]
[]
[]
[ "egg", "ironpython", "python" ]
stackoverflow_0001187110_egg_ironpython_python.txt
Q: In Python, how to tell if being called by exception handling code? I would like to write a function in Python (2.6) that can determine if it is being called from exception handling code somewhere up the stack. This is for a specialized logging use. In python's logging module, the caller has to explicitly specify that exception information should be logged (either by calling logger.exception() or by using the exc_info keyword). I would like my logger to do this automatically, based on whether it is being called from within exception handling code. I thought that checking sys.exc_info() might be the answer, but it also returns exception information from an already-handled exception. (From the docs: "This function returns a tuple of three values that give information about the exception that is currently being handled... If the current stack frame is not handling an exception, the information is taken from the calling stack frame, or its caller, and so on until a stack frame is found that is handling an exception. Here, 'handling an exception' is defined as 'executing or having executed an except clause.'") Also, since I want this to be transparent to the caller, I do not want to have to use exc_clear() or anything else in the except clause. What's the right way to do this? A: If you clear the exception using sys.exc_clear in your exception handlers, then sys.exc_info should work for you. For example: If you run the following script: import sys try: 1 / 0 except: print sys.exc_info() sys.exc_clear() print sys.exc_info() You should see this output: (<type 'exceptions.ZeroDivisionError'>, ZeroDivisionError('integer division or modulo by zero',), <traceback object at 0x...>) (None, None, None) Update: I don't believe there is a simple ("transparent") way of answering the question "Is an exception handler running?" without going to some trouble, and in my opinion it's not worth taking the trouble just for logging. It is of course easy to answer the question "Has an exception been raised (in this thread)?", even on a per-stack-frame basis (see the documentation for frame objects). A: Like everything in Python, an exception is an object. Therefore, you could keep a (weak!) reference to the last exception handled and then use sys.exc_info(). Note: in the case of multithreaded code, you may have issues with this approach. And there could be other corner cases as well. However, explicit is better than implicit; are you really sure that handling exception logging in the same way as a normal one is a good feature to add to your system? In my humble opinion, not.
In Python, how to tell if being called by exception handling code?
I would like to write a function in Python (2.6) that can determine if it is being called from exception handling code somewhere up the stack. This is for a specialized logging use. In python's logging module, the caller has to explicitly specify that exception information should be logged (either by calling logger.exception() or by using the exc_info keyword). I would like my logger to do this automatically, based on whether it is being called from within exception handling code. I thought that checking sys.exc_info() might be the answer, but it also returns exception information from an already-handled exception. (From the docs: "This function returns a tuple of three values that give information about the exception that is currently being handled... If the current stack frame is not handling an exception, the information is taken from the calling stack frame, or its caller, and so on until a stack frame is found that is handling an exception. Here, 'handling an exception' is defined as 'executing or having executed an except clause.'") Also, since I want this to be transparent to the caller, I do not want to have to use exc_clear() or anything else in the except clause. What's the right way to do this?
[ "If you clear the exception using sys.exc_clear in your exception handlers, then sys.exc_info should work for you. For example: If you run the following script:\nimport sys\n\ntry:\n 1 / 0\nexcept:\n print sys.exc_info()\n sys.exc_clear()\nprint sys.exc_info()\n\nYou should see this output:\n\n(, ZeroDivisionError('integer division or modulo by zero',), )\n(None, None, None)\n\nUpdate: I don't believe there is a simple (\"transparent\") way of answering the question \"Is an exception handler running?\" without going to some trouble, and in my opinion it's not worth taking the trouble just for logging. It is of course easy to answer the question \"Has an exception been raised (in this thread)?\", even on a per-stack-frame basis (see the documentation for frame objects). \n", "Like everything in Python, an exception is an object. Therefore, you could keep a (weak!) reference to the last exception handled and then use sys.exc_info().\nNote: in case of multithreading code, you may have issues with this approach. And there could be other corner cases as well.\nHowever, explicit is better than implicit; are you really sure that handling exception logging in the same way as normal one is a good feature to add to your system?\nIn my humble opinion, not.\n" ]
[ 2, 0 ]
[]
[]
[ "exception", "python" ]
stackoverflow_0001187102_exception_python.txt
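For what the question is ultimately after, one pragmatic sketch is a logging helper that attaches exc_info only when sys.exc_info() reports a live exception. As the answers point out, without sys.exc_clear() this heuristic can also fire for an exception that was already handled, so treat it as an approximation rather than a guarantee.

```python
# Python 2 sketch: log with a traceback automatically when called from
# within (or shortly after) exception handling code.
import logging
import sys

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger(__name__)

def smart_log(msg):
    if sys.exc_info()[0] is not None:
        log.error(msg, exc_info=True)  # traceback attached automatically
    else:
        log.info(msg)

try:
    1 / 0
except ZeroDivisionError:
    smart_log("division failed")  # logged with the traceback
    sys.exc_clear()               # per the first answer: reset the state
smart_log("continuing normally")  # logged without a traceback
```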
Q: How to make the program run again after unexpected exit in Python? I'm writing an IRC bot in Python; due to its alpha nature, it will likely get unexpected errors and exit. What are the techniques that I can use to make the program run again? A: You can use sys.exit() to tell that the program exited abnormally (generally, 1 is returned in case of error). Your Python script could look something like this: import sys def main(): # ... if __name__ == '__main__': try: main() except Exception as e: print >> sys.stderr, e sys.exit(1) else: sys.exit() You could call main() again in case of error, but the program might not be in a state where it can work correctly again. It may be safer to launch the program in a new process instead. So you could write a script which invokes the Python script, gets its return value when it finishes, and relaunches it if the return value is different from 0 (which is what sys.exit() uses as return value by default). This may look something like this: import subprocess command = 'thescript' args = ['arg1', 'arg2'] while True: ret_code = subprocess.call([command] + args) if ret_code == 0: break A: You can create a wrapper using subprocess (http://docs.python.org/library/subprocess.html) which will spawn your application as a child process and track its execution. A: The easiest way is to catch errors, and close the old and open a new instance of the program when you do catch them. Note that it will not always work (in cases where it stops working without throwing an error).
How to make the program run again after unexpected exit in Python?
I'm writing an IRC bot in Python; due to its alpha nature, it will likely get unexpected errors and exit. What are the techniques that I can use to make the program run again?
[ "You can use sys.exit() to tell that the program exited abnormally (generally, 1 is returned in case of error).\nYour Python script could look something like this:\nimport sys\n\ndef main():\n # ...\n\nif __name__ == '__main__':\n try:\n main()\n except Exception as e:\n print >> sys.stderr, e\n sys.exit(1)\n else:\n sys.exit()\n\nYou could call again main() in case of error, but the program might not be in a state where it can work correctly again.\nIt may be safer to launch the program in a new process instead.\nSo you could write a script which invokes the Python script, gets its return value when it finishes, and relaunches it if the return value is different from 0 (which is what sys.exit() uses as return value by default).\nThis may look something like this:\nimport subprocess\n\ncommand = 'thescript'\nargs = ['arg1', 'arg2']\n\nwhile True:\n ret_code = subprocess.call([command] + args)\n\n if ret_code == 0:\n break\n\n", "You can create wrapper using subprocess(http://docs.python.org/library/subprocess.html) which will spawn your application as a child process and track it's execution.\n", "The easiest way is to catch errors, and close the old and open a new instance of the program when you do catch em.\nNote that it will not always work (in cases it stops working without throwing an error).\n" ]
[ 5, 1, 0 ]
[]
[]
[ "irc", "python" ]
stackoverflow_0001187653_irc_python.txt
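A small variation on the first answer's wrapper, adding a pause between restarts so a bot that crashes immediately does not spin in a tight loop. The script name ircbot.py is a placeholder.

```python
# Python 2 supervisor sketch: restart the bot on any non-zero exit status.
import subprocess
import time

while True:
    status = subprocess.call(['python', 'ircbot.py'])
    if status == 0:
        break  # clean shutdown requested by the bot itself
    print 'bot exited with status %d; restarting in 5 seconds' % status
    time.sleep(5)
```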
Q: Python: update a list of tuples... fastest method This question is in relation to another question asked here: Sorting 1M records I have since figured out the problem I was having with sorting. I was sorting items from a dictionary into a list every time I updated the data. I have since realized that a lot of the power of Python's sort resides in the fact that it sorts data more quickly that is already partially sorted. So, here is the question. Suppose I have the following as a sample set: self.sorted_records = [(1, 1234567890), (20, 1245678903), (40, 1256789034), (70, 1278903456)] t[1] of each tuple in the list is a unique id. Now I want to update this list with the follwoing: updated_records = {1245678903:45, 1278903456:76} What is the fastest way for me to do so ending up with self.sorted_records = [(1, 1234567890), (45, 1245678903), (40, 1256789034), (76, 1278903456)] Currently I am doing something like this: updated_keys = updated_records.keys() for i, record in enumerate(self.sorted_data): if record[1] in updated_keys: updated_keys.remove(record[1]) self.sorted_data[i] = (updated_records[record[1]], record[1]) But I am sure there is a faster, more elegant solution out there. Any help? * edit It turns out I used bad exaples for the ids since they end up in sorted order when I do my update. I am actually interested in t[0] being in sorted order. After I do the update I was intending on resorting with the updated data, but it looks like bisect might be the ticket to insert in sorted order. end edit * A: You're scanning through all n records. You could instead do a binary search, which would be O(log(n)) instead of O(n). You can use the bisect module to do this. A: Since apparently you don't care about the ending value of self.sorted_records actually being sorted (you have values in order 1, 45, 20, 76 -- that's NOT sorted!-), AND you only appear to care about IDs in updated_records that are also in self.sorted_data, a listcomp (with side effects if you want to change the updated_record on the fly) would serve you well, i.e.: self.sorted_data = [(updated_records.pop(recid, value), recid) for (value, recid) in self.sorted_data] the .pop call removes from updated_records the keys (and corresponding values) that are ending up in the new self.sorted_data (and the "previous value for that recid", value, is supplied as the 2nd argument to pop to ensure no change where a recid is NOT in updated_record); this leaves in updated_record the "new" stuff so you can e.g append it to self.sorted_data before re-sorting, i.e I suspect you want to continue with something like self.sorted_data.extend(value, recid for recid, value in updated_records.iteritems()) self.sorted_data.sort() though this part DOES go beyond the question you're actually asking (and I'm giving it only because I've seen your previous questions;-). A: You'd probably be best served by some form of tree here (preserving sorted order while allowing O(log n) replacements.) There is no builtin balanaced tree type, but you can find many third party examples. Alternatively, you could either: Use a binary search to find the node. The bisect module will do this, but it compares based on the normal python comparison order, whereas you seem to be sorted based on the second element of each tuple. You could reverse this, or just write your own binary search (or simply take the code from bisect_left and modify it) Use both a dict and a list. The list contains the sorted keys only. 
You can wrap the dict class easily enough to ensure this is kept in sync. This allows you fast dict updating while maintaining sort order of the keys. This should prevent your problem of losing sort performance due to constant conversion between dict/list. Here's a quick implementation of such a thing: import bisect class SortedDict(dict): """Dictionary which is iterable in sorted order. O(n) sorted iteration O(1) lookup O(log n) replacement ( but O(n) insertion or new items) """ def __init__(self, *args, **kwargs): dict.__init__(self, *args, **kwargs) self._keys = sorted(dict.iterkeys(self)) def __setitem__(self, key, val): if key not in self: # New key - need to add to list of keys. pos = bisect.bisect_left(self._keys, key) self._keys.insert(pos, key) dict.__setitem__(self, key, val) def __delitem__(self, key): if key in self: pos = bisect.bisect_left(self._keys, key) del self._keys[pos] dict.__delitem__(self, key) def __iter__(self): for k in self._keys: yield k iterkeys = __iter__ def iteritems(self): for k in self._keys: yield (k, self[k]) def itervalues(self): for k in self._keys: yield self[k] def update(self, other): dict.update(self, other) self._keys = sorted(dict.iterkeys(self)) # Rebuild (faster if lots of changes made - may be slower if only minor changes to large dict) def keys(self): return list(self.iterkeys()) def values(self): return list(self.itervalues()) def items(self): return list(self.iteritems()) def __repr__(self): return "%s(%s)" % (self.__class__.__name__, ', '.join("%s=%r" % (k, self[k]) for k in self)) A: Since you want to replace by dictionary key, but have the array sorted by dictionary value, you definitely need a linear search for the key. In that sense, your algorithm is the best you can hope for. If you would preserve the old dictionary value, then you could use a binary search for the value, and then locate the key in the proximity of where the binary search lead you.
Python: update a list of tuples... fastest method
This question is in relation to another question asked here: Sorting 1M records I have since figured out the problem I was having with sorting. I was sorting items from a dictionary into a list every time I updated the data. I have since realized that a lot of the power of Python's sort resides in the fact that it sorts data that is already partially sorted more quickly. So, here is the question. Suppose I have the following as a sample set: self.sorted_records = [(1, 1234567890), (20, 1245678903), (40, 1256789034), (70, 1278903456)] t[1] of each tuple in the list is a unique id. Now I want to update this list with the following: updated_records = {1245678903:45, 1278903456:76} What is the fastest way for me to do so ending up with self.sorted_records = [(1, 1234567890), (45, 1245678903), (40, 1256789034), (76, 1278903456)] Currently I am doing something like this: updated_keys = updated_records.keys() for i, record in enumerate(self.sorted_data): if record[1] in updated_keys: updated_keys.remove(record[1]) self.sorted_data[i] = (updated_records[record[1]], record[1]) But I am sure there is a faster, more elegant solution out there. Any help? * edit It turns out I used bad examples for the ids since they end up in sorted order when I do my update. I am actually interested in t[0] being in sorted order. After I do the update I was intending on resorting with the updated data, but it looks like bisect might be the ticket to insert in sorted order. end edit *
[ "You're scanning through all n records. You could instead do a binary search, which would be O(log(n)) instead of O(n). You can use the bisect module to do this.\n", "Since apparently you don't care about the ending value of self.sorted_records actually being sorted (you have values in order 1, 45, 20, 76 -- that's NOT sorted!-), AND you only appear to care about IDs in updated_records that are also in self.sorted_data, a listcomp (with side effects if you want to change the updated_record on the fly) would serve you well, i.e.:\nself.sorted_data = [(updated_records.pop(recid, value), recid) \n for (value, recid) in self.sorted_data]\n\nthe .pop call removes from updated_records the keys (and corresponding values) that are ending up in the new self.sorted_data (and the \"previous value for that recid\", value, is supplied as the 2nd argument to pop to ensure no change where a recid is NOT in updated_record); this leaves in updated_record the \"new\" stuff so you can e.g append it to self.sorted_data before re-sorting, i.e I suspect you want to continue with something like\nself.sorted_data.extend(value, recid \n for recid, value in updated_records.iteritems())\nself.sorted_data.sort()\n\nthough this part DOES go beyond the question you're actually asking (and I'm giving it only because I've seen your previous questions;-).\n", "You'd probably be best served by some form of tree here (preserving sorted order while allowing O(log n) replacements.) There is no builtin balanaced tree type, but you can find many third party examples. Alternatively, you could either:\n\nUse a binary search to find the node. The bisect module will do this, but it compares based on the normal python comparison order, whereas you seem to be sorted based on the second element of each tuple. You could reverse this, or just write your own binary search (or simply take the code from bisect_left and modify it)\nUse both a dict and a list. The list contains the sorted keys only. You can wrap the dict class easily enough to ensure this is kept in sync. This allows you fast dict updating while maintaining sort order of the keys. This should prevent your problem of losing sort performance due to constant conversion between dict/list. 
\n\nHere's a quick implementation of such a thing:\nimport bisect\n\nclass SortedDict(dict):\n \"\"\"Dictionary which is iterable in sorted order.\n\n O(n) sorted iteration\n O(1) lookup\n O(log n) replacement ( but O(n) insertion or new items)\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n dict.__init__(self, *args, **kwargs)\n self._keys = sorted(dict.iterkeys(self))\n\n def __setitem__(self, key, val):\n if key not in self:\n # New key - need to add to list of keys.\n pos = bisect.bisect_left(self._keys, key)\n self._keys.insert(pos, key)\n dict.__setitem__(self, key, val)\n\n def __delitem__(self, key):\n if key in self:\n pos = bisect.bisect_left(self._keys, key)\n del self._keys[pos]\n dict.__delitem__(self, key)\n\n def __iter__(self):\n for k in self._keys: yield k\n iterkeys = __iter__\n\n def iteritems(self):\n for k in self._keys: yield (k, self[k])\n\n def itervalues(self):\n for k in self._keys: yield self[k]\n\n def update(self, other):\n dict.update(self, other)\n self._keys = sorted(dict.iterkeys(self)) # Rebuild (faster if lots of changes made - may be slower if only minor changes to large dict)\n\n def keys(self): return list(self.iterkeys())\n def values(self): return list(self.itervalues())\n def items(self): return list(self.iteritems())\n\n def __repr__(self):\n return \"%s(%s)\" % (self.__class__.__name__, ', '.join(\"%s=%r\" % (k, self[k]) for k in self))\n\n", "Since you want to replace by dictionary key, but have the array sorted by dictionary value, you definitely need a linear search for the key. In that sense, your algorithm is the best you can hope for.\nIf you would preserve the old dictionary value, then you could use a binary search for the value, and then locate the key in the proximity of where the binary search lead you.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001186501_python.txt
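To make the bisect suggestion concrete, here is a small sketch that applies the updates and re-inserts each changed record so the list stays ordered by t[0]; the sample data is the question's own.

```python
# Python 2 sketch: O(n) to pull out the changed records, then O(log n)
# per bisect.insort to put each one back at its sorted (by t[0]) position.
import bisect

sorted_records = [(1, 1234567890), (20, 1245678903),
                  (40, 1256789034), (70, 1278903456)]
updated_records = {1245678903: 45, 1278903456: 76}

# Keep the records whose ids were not updated (linear scan, as in the question).
records = [r for r in sorted_records if r[1] not in updated_records]

# Re-insert each updated record at its sorted position.
for rec_id, value in updated_records.items():
    bisect.insort(records, (value, rec_id))

print records
# -> [(1, 1234567890), (40, 1256789034), (45, 1245678903), (76, 1278903456)]
```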
Q: How to specify which eth interface Django test server should listen on? As the title says, in a multiple ethernet interfaces with multiple IP environment, the default Django test server is not attached to the network that I can access from my PC. Is there any way to specify the interface which Django test server should use? -- Added -- The network configuration is here. I'm connecting to the machine via 143.248.x.y address from my PC. (My PC is also in 143.248.a.b network.) But I cannot find this address. Normal apache works very well as well as other custom daemons running on other ports. The one who configured this machine is not me, so I don't know much details of the network... eth0 Link encap:Ethernet HWaddr 00:15:17:88:97:78 inet addr:192.168.6.100 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::215:17ff:fe88:9778/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:441917680 errors:0 dropped:0 overruns:0 frame:0 TX packets:357190979 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:191664873035 (178.5 GB) TX bytes:324846526526 (302.5 GB) eth1 Link encap:Ethernet HWaddr 00:15:17:88:97:79 inet addr:172.10.1.100 Bcast:172.10.1.255 Mask:255.255.255.0 inet6 addr: fe80::215:17ff:fe88:9779/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1113794891 errors:0 dropped:97 overruns:0 frame:0 TX packets:699821135 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:843942929141 (785.9 GB) TX bytes:838436421169 (780.8 GB) Base address:0x2000 Memory:b8800000-b8820000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1085510396 errors:0 dropped:0 overruns:0 frame:0 TX packets:1085510396 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:422100792153 (393.1 GB) TX bytes:422100792153 (393.1 GB) peth0 Link encap:Ethernet HWaddr 00:15:17:88:97:78 inet6 addr: fe80::215:17ff:fe88:9778/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:441918386 errors:0 dropped:742 overruns:0 frame:0 TX packets:515286699 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:199626686230 (185.9 GB) TX bytes:337365591758 (314.1 GB) Base address:0x2020 Memory:b8820000-b8840000 veth0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth1 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth2 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth3 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.0 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 
dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.1 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.2 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.3 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) -- Added (2) -- Finally I used w3m (a text-mode web browser which runs on terminal) to connect from localhost. :P A: I think the OP is referring to having multiple interfaces configured on the test machine. You can specify the IP address that Django will bind to as follows: # python manage.py runserver 0.0.0.0:8000 This would bind Django to all interfaces on port 8000. You can pass any active IP address in place of 0.0.0.0, so simply use the IP address of the interface you want to bind to. Hope this helps. A: Yes, if the IP of your interface is for example 192.168.1.2 and you want to run on port 8080, start the development server like this: ./manage.py runserver 192.168.1.2:8080 A: No. It's not how it works. The interface has an IP address, you have a network with the test server and your PC. You should connect to that IP (possibly with an alternative port that you specified), and that's all. If you only have these two devices in the network, it is most likely that both of them should have static IP addresses. (or, if there is not mutual network, you cannot connect to each other).
How to specify which eth interface Django test server should listen on?
As the title says, in a multiple ethernet interfaces with multiple IP environment, the default Django test server is not attached to the network that I can access from my PC. Is there any way to specify the interface which Django test server should use? -- Added -- The network configuration is here. I'm connecting to the machine via 143.248.x.y address from my PC. (My PC is also in 143.248.a.b network.) But I cannot find this address. Normal apache works very well as well as other custom daemons running on other ports. The one who configured this machine is not me, so I don't know much details of the network... eth0 Link encap:Ethernet HWaddr 00:15:17:88:97:78 inet addr:192.168.6.100 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::215:17ff:fe88:9778/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:441917680 errors:0 dropped:0 overruns:0 frame:0 TX packets:357190979 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:191664873035 (178.5 GB) TX bytes:324846526526 (302.5 GB) eth1 Link encap:Ethernet HWaddr 00:15:17:88:97:79 inet addr:172.10.1.100 Bcast:172.10.1.255 Mask:255.255.255.0 inet6 addr: fe80::215:17ff:fe88:9779/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1113794891 errors:0 dropped:97 overruns:0 frame:0 TX packets:699821135 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:843942929141 (785.9 GB) TX bytes:838436421169 (780.8 GB) Base address:0x2000 Memory:b8800000-b8820000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1085510396 errors:0 dropped:0 overruns:0 frame:0 TX packets:1085510396 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:422100792153 (393.1 GB) TX bytes:422100792153 (393.1 GB) peth0 Link encap:Ethernet HWaddr 00:15:17:88:97:78 inet6 addr: fe80::215:17ff:fe88:9778/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:441918386 errors:0 dropped:742 overruns:0 frame:0 TX packets:515286699 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:199626686230 (185.9 GB) TX bytes:337365591758 (314.1 GB) Base address:0x2020 Memory:b8820000-b8840000 veth0 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth1 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth2 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) veth3 Link encap:Ethernet HWaddr 00:00:00:00:00:00 BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.0 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) 
TX bytes:0 (0.0 B) vif0.1 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.2 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) vif0.3 Link encap:Ethernet HWaddr fe:ff:ff:ff:ff:ff BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) -- Added (2) -- Finally I used w3m (a text-mode web browser which runs on terminal) to connect from localhost. :P
[ "I think the OP is referring to having multiple interfaces configured on the test machine.\nYou can specify the IP address that Django will bind to as follows:\n# python manage.py runserver 0.0.0.0:8000\n\nThis would bind Django to all interfaces on port 8000. You can pass any active IP address in place of 0.0.0.0, so simply use the IP address of the interface you want to bind to.\nHope this helps.\n", "Yes, if the IP of your interface is for example 192.168.1.2 and you want to run on port 8080, start the development server like this:\n./manage.py runserver 192.168.1.2:8080\n\n", "No. It's not how it works. The interface has an IP address, you have a network with the test server and your PC. You should connect to that IP (possibly with an alternative port that you specified), and that's all. If you only have these two devices in the network, it is most likely that both of them should have static IP addresses. (or, if there is not mutual network, you cannot connect to each other).\n" ]
[ 45, 2, 1 ]
[]
[]
[ "django", "ethernet", "networking", "python" ]
stackoverflow_0001188205_django_ethernet_networking_python.txt
Q: Why does Python omit the attribute in the SOAP message? I have a web service that returns the following type: <xsd:complexType name="TaggerResponse"> <xsd:sequence> <xsd:element name="msg" type="xsd:string"></xsd:element> </xsd:sequence> <xsd:attribute name="status" type="tns:Status"></xsd:attribute> </xsd:complexType> The type contains one element (msg) and one attribute (status). To communicate with the web service I use the SOAPpy library. Below is a sample result returned by the web service (SOAP message): <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Body> <SOAP-ENV:TagResponse> <parameters status="2"> <msg>text</msg> </parameters> </SOAP-ENV:TagResponse> </SOAP-ENV:Body> </SOAP-ENV:Envelope> Python parses this message as: <SOAPpy.Types.structType parameters at 157796908>: {'msg': 'text'} As you can see, the attribute is lost. What should I do to get the value of "status"? A: The response example you posted (the actual XML coming back from the WS request) does not have the value in it you are looking for! I would suggest this is why SOAPpy cannot return it to you. If it is a case of making your code have consistent behaviour in cases where the value is returned and when it isn't, then try using dict's get() method to get the value: attribute_value = result.get("attribute", None) Then you can test the result for None. You can also do so like this: if not "attribute" in result: ...handle case where there is no attribute value...
Why does Python omit the attribute in the SOAP message?
I have a web service that returns the following type: <xsd:complexType name="TaggerResponse"> <xsd:sequence> <xsd:element name="msg" type="xsd:string"></xsd:element> </xsd:sequence> <xsd:attribute name="status" type="tns:Status"></xsd:attribute> </xsd:complexType> The type contains one element (msg) and one attribute (status). To communicate with the web service I use the SOAPpy library. Below is a sample result returned by the web service (SOAP message): <?xml version="1.0" encoding="UTF-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Body> <SOAP-ENV:TagResponse> <parameters status="2"> <msg>text</msg> </parameters> </SOAP-ENV:TagResponse> </SOAP-ENV:Body> </SOAP-ENV:Envelope> Python parses this message as: <SOAPpy.Types.structType parameters at 157796908>: {'msg': 'text'} As you can see, the attribute is lost. What should I do to get the value of "status"?
[ "The response example you posted (the actual XML coming back from the WS request) does not have the value in it you are looking for! I would suggest this is why SOAPpy cannot return it to you.\nIf it is a case of making your code have consistent behaviour in cases where the value is returned and when it isn't then try using dict's get() method to get the value:\nattribute_value = result.get(\"attribute\", None)\n\nThen you can test the result for none. You can also do so like this:\nif not \"attribute\" in result:\n ...handle case where there is no attribute value...\n\n" ]
[ 1 ]
[]
[]
[ "python", "soap", "soappy", "web_services", "wsdl" ]
stackoverflow_0001188367_python_soap_soappy_web_services_wsdl.txt
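Since SOAPpy's object mapping drops XML attributes, a minimal workaround sketch is to parse the raw response with the standard library's ElementTree and read the attribute directly. The envelope layout below is assumed from the sample in the question, not taken from the actual service:

    import xml.etree.ElementTree as ET

    raw_response = """<?xml version="1.0" encoding="UTF-8"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
      <SOAP-ENV:Body>
        <SOAP-ENV:TagResponse>
          <parameters status="2"><msg>text</msg></parameters>
        </SOAP-ENV:TagResponse>
      </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>"""

    root = ET.fromstring(raw_response)
    parameters = root.find(".//parameters")  # the element carrying the attribute
    status = parameters.get("status")        # -> "2"
    msg = parameters.find("msg").text        # -> "text"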
Q: Writing at a file's current position in Python I want to read a line in a file and insert the newline ("\n") character at position n in a line, so that a 9-character line, for instance, gets converted into three 3-character lines, like this: "123456789" (before) "123\n456\n789" (after) I've tried this: f = open(file, "r+") f.write("123456789") f.seek(3, 0) f.write("\n") f.seek(0) f.read() -> '123\n56789' I want it not to substitute the character at position n, but only to insert another ("\n") char at that position. Any idea about how to do this? Thanks A: I don't think there is any way to do that in the way you are trying to: you would have to read to the end of the file from the position you want to insert, then write your new character at the position you wish it to be, then write the original data back after it. This is the same way things would work in C or any language with a seek() type API. Alternatively, read the file into a string, then use list methods to insert your data. source_file = open("myfile", "r") file_data = list(source_file.read()) source_file.close() file_data.insert(position, data) open("myfile", "wb").write("".join(file_data))
Writing at a file's current position in Python
I want to read a line in a file and insert the newline ("\n") character at position n in a line, so that a 9-character line, for instance, gets converted into three 3-character lines, like this: "123456789" (before) "123\n456\n789" (after) I've tried this: f = open(file, "r+") f.write("123456789") f.seek(3, 0) f.write("\n") f.seek(0) f.read() -> '123\n56789' I want it not to substitute the character at position n, but only to insert another ("\n") char at that position. Any idea about how to do this? Thanks
[ "I don't think there is any way to do that in the way you are trying to: you would have to read in to the end of the file from the position you want to insert, then write your new character at the position you wish it to be, then write the original data back after it. This is the same way things would work in C or any language with a seek() type API.\nAlternatively, read the file into a string, then use list methods to insert your data.\nsource_file = open(\"myfile\", \"r\")\nfile_data = list(source_file.read())\nsource_file.close()\nfile_data.insert(position, data)\nopen(\"myfile\", \"wb\").write(file_data)\n\n", "with open(file, 'r+') as f:\n data = f.read()\n f.seek(0)\n\n for i in range(len(data)): # could also use 'for i, chara in enumerate(data):' and then 'f.write(chara)' instead of 'f.write(data[i])'\n if (i + 1) % 3 == 0: # could also do 'if i % 3 == 2:', but that may be slightly confusing\n f.write('\\n')\n else:\n f.write(data[i])\n\nI don't think it's all that Pythonic (due to the range(len(data))), but it should work, unless your data file is really really large (in which case you'll have to process the data in the file part by part and store the results in another file to prevent overwriting data you haven't processed yet).\n(More on the with statement.)\n", "You can think a file is just an array of characters, and if you want to insert a new element in the middle of an array, then you have to shift all the elements that are after it.\nYou could do what you say if the file contained a \"linked list\" of chars or \"extends\", but then you would need a special editor to see it sequentially.\n" ]
[ 7, 1, 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0001188214_file_python.txt
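A minimal sketch of the read-modify-write approach described in the answers, assuming the file is small enough to fit in memory: read everything, rebuild the string with a newline after every n characters, then write it back.

    def insert_newlines(path, n=3):
        # Read the whole file into memory.
        f = open(path, "r")
        data = f.read()
        f.close()
        # Split into n-character chunks and rejoin with newlines.
        chunks = [data[i:i + n] for i in range(0, len(data), n)]
        out = open(path, "w")
        out.write("\n".join(chunks))
        out.close()

    insert_newlines("myfile")  # "123456789" becomes "123\n456\n789"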
Q: Sensible python source line wrapping for printout I am working on a latex document that will require typesetting significant amounts of python source code. I'm using pygments (the python module, not the online demo) to encapsulate this python in latex, which works well except in the case of long individual lines - which simply continue off the page. I could manually wrap these lines except that this just doesn't seem that elegant a solution to me, and I prefer spending time puzzling about crazy automated solutions than on repetitive tasks. What I would like is some way of processing the python source code to wrap the lines to a certain maximum character length, while preserving functionality. I've had a play around with some python and the closest I've come is inserting \\\n in the last whitespace before the maximum line length - but of course, if this ends up in strings and comments, things go wrong. Quite frankly, I'm not sure how to approach this problem. So, is anyone aware of a module or tool that can process source code so that no lines exceed a certain length - or at least a good way to start to go about coding something like that? A: You might want to extend your current approach a bit, but using the tokenize module from the standard library to determine where to put your line breaks. That way you can see the actual tokens (COMMENT, STRING, etc.) of your source code rather than just the whitespace-separated words. Here is a short example of what tokenize can do: >>> from cStringIO import StringIO >>> from tokenize import tokenize >>> >>> python_code = ''' ... def foo(): # This is a comment ... print 'foo' ... ''' >>> >>> fp = StringIO(python_code) >>> >>> tokenize(fp.readline) 1,0-1,1: NL '\n' 2,0-2,3: NAME 'def' 2,4-2,7: NAME 'foo' 2,7-2,8: OP '(' 2,8-2,9: OP ')' 2,9-2,10: OP ':' 2,11-2,30: COMMENT '# This is a comment' 2,30-2,31: NEWLINE '\n' 3,0-3,4: INDENT ' ' 3,4-3,9: NAME 'print' 3,10-3,15: STRING "'foo'" 3,15-3,16: NEWLINE '\n' 4,0-4,0: DEDENT '' 4,0-4,0: ENDMARKER '' A: I use the listings package in LaTeX to insert source code; it does syntax highlight, linebreaks et al. Put the following in your preamble: \usepackage{listings} %\lstloadlanguages{Python} # Load only these languages \newcommand{\MyHookSign}{\hbox{\ensuremath\hookleftarrow}} \lstset{ % Language language=Python, % Basic setup %basicstyle=\footnotesize, basicstyle=\scriptsize, keywordstyle=\bfseries, commentstyle=, % Looks frame=single, % Linebreaks breaklines, prebreak={\space\MyHookSign}, % Line numbering tabsize=4, stepnumber=5, numbers=left, firstnumber=1, %numberstyle=\scriptsize, numberstyle=\tiny, % Above and beyond ASCII! extendedchars=true } The package has hook for inline code, including entire files, showing it as figures, ... A: I'd check a reformat tool in an editor like NetBeans. When you reformat java it properly fixes the lengths of lines both inside and outside of comments, if the same algorithm were applied to Python, it would work. For Java it allows you to set any wrapping width and a bunch of other parameters. I'd be pretty surprised if that didn't exist either native or as a plugin. Can't tell for sure just from the description, but it's worth a try: http://www.netbeans.org/features/python/
Sensible python source line wrapping for printout
I am working on a latex document that will require typesetting significant amounts of python source code. I'm using pygments (the python module, not the online demo) to encapsulate this python in latex, which works well except in the case of long individual lines - which simply continue off the page. I could manually wrap these lines except that this just doesn't seem that elegant a solution to me, and I prefer spending time puzzling over crazy automated solutions to spending it on repetitive tasks. What I would like is some way of processing the python source code to wrap the lines to a certain maximum character length, while preserving functionality. I've had a play around with some python and the closest I've come is inserting \\\n at the last whitespace before the maximum line length - but of course, if this ends up in strings and comments, things go wrong. Quite frankly, I'm not sure how to approach this problem. So, is anyone aware of a module or tool that can process source code so that no lines exceed a certain length - or at least a good way to start coding something like that?
[ "You might want to extend your current approach a bit, but using the tokenize module from the standard library to determine where to put your line breaks. That way you can see the actual tokens (COMMENT, STRING, etc.) of your source code rather than just the whitespace-separated words.\nHere is a short example of what tokenize can do:\n>>> from cStringIO import StringIO\n>>> from tokenize import tokenize\n>>> \n>>> python_code = '''\n... def foo(): # This is a comment\n... print 'foo'\n... '''\n>>> \n>>> fp = StringIO(python_code)\n>>> \n>>> tokenize(fp.readline)\n1,0-1,1: NL '\\n'\n2,0-2,3: NAME 'def'\n2,4-2,7: NAME 'foo'\n2,7-2,8: OP '('\n2,8-2,9: OP ')'\n2,9-2,10: OP ':'\n2,11-2,30: COMMENT '# This is a comment'\n2,30-2,31: NEWLINE '\\n'\n3,0-3,4: INDENT ' '\n3,4-3,9: NAME 'print'\n3,10-3,15: STRING \"'foo'\"\n3,15-3,16: NEWLINE '\\n'\n4,0-4,0: DEDENT ''\n4,0-4,0: ENDMARKER ''\n\n", "I use the listings package in LaTeX to insert source code; it does syntax highlight, linebreaks et al.\nPut the following in your preamble:\n\\usepackage{listings}\n%\\lstloadlanguages{Python} # Load only these languages\n\\newcommand{\\MyHookSign}{\\hbox{\\ensuremath\\hookleftarrow}}\n\n\\lstset{\n % Language\n language=Python,\n % Basic setup\n %basicstyle=\\footnotesize,\n basicstyle=\\scriptsize,\n keywordstyle=\\bfseries,\n commentstyle=,\n % Looks\n frame=single,\n % Linebreaks\n breaklines,\n prebreak={\\space\\MyHookSign},\n % Line numbering\n tabsize=4,\n stepnumber=5,\n numbers=left,\n firstnumber=1,\n %numberstyle=\\scriptsize,\n numberstyle=\\tiny,\n % Above and beyond ASCII!\n extendedchars=true\n}\n\nThe package has hook for inline code, including entire files, showing it as figures, ...\n", "I'd check a reformat tool in an editor like NetBeans.\nWhen you reformat java it properly fixes the lengths of lines both inside and outside of comments, if the same algorithm were applied to Python, it would work.\nFor Java it allows you to set any wrapping width and a bunch of other parameters. I'd be pretty surprised if that didn't exist either native or as a plugin.\nCan't tell for sure just from the description, but it's worth a try:\nhttp://www.netbeans.org/features/python/\n" ]
[ 3, 2, 1 ]
[]
[]
[ "code_formatting", "latex", "pygments", "python", "syntax_highlighting" ]
stackoverflow_0001035721_code_formatting_latex_pygments_python_syntax_highlighting.txt
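Combining the two ideas above, here is a rough sketch of a wrapper that breaks a long line at the last space before the limit and appends a backslash continuation. The limit of 79 is an assumption, and in real use this should only be applied to spans that tokenize reports as code rather than STRING or COMMENT tokens, so that breaks never land inside strings or comments.

    def wrap_line(line, limit=79):
        # Break at the last space before the limit, adding a continuation.
        pieces = []
        while len(line) > limit:
            cut = line.rfind(" ", 0, limit)
            if cut == -1:
                break  # no safe break point; leave the remainder long
            pieces.append(line[:cut] + " \\")
            line = line[cut + 1:]
        pieces.append(line)
        return "\n".join(pieces)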
Q: Python POST ordered params I have a web service that accepts passed-in params via HTTP POST, but in a specific order, e.g. (name, password, data). I have tried to use httplib, but all the Python HTTP POST libraries seem to take a dictionary, which is an unordered data structure. Any thoughts on how to POST params in a specific order from Python? Thanks! A: Why would you need a specific order in the POST parameters in the first place? As far as I know there are no requirements that POST parameter order is preserved by web servers. Every language I have used has used a dictionary-type object to hold these parameters, as they are inherently key/value pairs.
Python POST ordered params
I have a web service that accepts passed-in params via HTTP POST, but in a specific order, e.g. (name, password, data). I have tried to use httplib, but all the Python HTTP POST libraries seem to take a dictionary, which is an unordered data structure. Any thoughts on how to POST params in a specific order from Python? Thanks!
[ "Why would you need a specific order in the POST parameters in the first place? As far as I know there are no requirements that POST parameter order is preserved by web servers.\nEvery language I have used, has used a dictionary type object to hold these parameters as they are inherently key/value pairs.\n" ]
[ 2 ]
[]
[]
[ "http", "python" ]
stackoverflow_0001188737_http_python.txt
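That said, if the service really does insist on an order, the body can be built by hand: urllib.urlencode accepts a list of (key, value) tuples and preserves their order, so the POST body comes out exactly as listed. A minimal sketch (host, path, and values are placeholders):

    import urllib
    import httplib

    # A list of tuples keeps the encoding order deterministic.
    params = urllib.urlencode([("name", "alice"),
                               ("password", "secret"),
                               ("data", "payload")])
    conn = httplib.HTTPConnection("example.com")
    conn.request("POST", "/my/webservice", params,
                 {"Content-Type": "application/x-www-form-urlencoded"})
    response = conn.getresponse()
    print response.status, response.reason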
Q: Reusing a Django app within a single project In trying to save as much time as possible in my development and make as many of my apps as reusable as possible, I have run into a bit of a roadblock. In one site I have a blog app and a news app, which are largely identical, and obviously it would be easier if I could make a single app and extend it where necessary, and then have it function as two separate apps with separate databases, etc. To clarify, consider the following: hypothetically speaking, I would like to have a single, generic news_content app, containing all the relevant models, views, url structure and templatetags, which I could then include and extend where necessary as many times as I like into a single project. It breaks down as follows: news_content/ templatetags/ __init__.py news_content.py __init__.py models.py (defines generic models - news_item, category, etc.) views.py (generic views for news, archiving, etc.) urls.py admin.py Is there a way to include this app multiple times in a project under various names? I feel like it should be obvious and I'm just not thinking clearly about it. Does anybody have any experience with this? I'd appreciate any advice people can give. Thank you. A: What's the actual difference between blogs and news? Perhaps that difference ought to be part of the blog/news app and you include it just once. If you have a blog page with blog entries and a news page with news entries and the only difference is a field in the database (kind_of_item = "blog" vs. kind_of_item = "news") then perhaps have you have this. urls.py (r'^/(?P<kind>blog)/$', 'view.stuff'), (r'^/(?P<kind>news)/$', 'view.stuff'), views.py def stuff( request, kind ): content= news_blog.objects.filter( kind=kind ) return render_to_response( kind+"_page", { 'content': content } ) Perhaps you don't need the same app twice, but need to extend the app to handle both use cases. A: In this case you could create the common piece of code as a Python module instead of a whole new application. Then for each instance you would like to use it, create an app and import the bits from that module. A: I'm not 100% sure I understand your question, so I'm going to list my understanding, and let me know if it is different from yours. You want to have a "news" and a "blog" section of your website with identical functionality. You want to have "news" and "blog" entries stored separately in the database so they don't end up intermingling. If this is the case, I'd suggest making an API to your views. Something like this: views.py: def view_article(request, article_slug, template_name='view_article.html', form_class=CommentForm, model_class=NewsArticle, success_url=None, ): urls.py: (r'^news/(?P<article_slug>[-\w]+)/$', 'view_article', {}, "view_news_article"), (r'^blog/(?P<article_slug>[-\w]+)/$', 'view_article', {'model_class': BlogArticle}, "view_blog_article"), This makes your app highly reusable by offering the ability to override the template, form, model, and success_url straight from urls.py.
Reusing a Django app within a single project
In trying to save as much time as possible in my development and make as many of my apps as reusable as possible, I have run into a bit of a roadblock. In one site I have a blog app and a news app, which are largely identical, and obviously it would be easier if I could make a single app and extend it where necessary, and then have it function as two separate apps with separate databases, etc. To clarify, consider the following: hypothetically speaking, I would like to have a single, generic news_content app, containing all the relevant models, views, url structure and templatetags, which I could then include and extend where necessary as many times as I like into a single project. It breaks down as follows: news_content/ templatetags/ __init__.py news_content.py __init__.py models.py (defines generic models - news_item, category, etc.) views.py (generic views for news, archiving, etc.) urls.py admin.py Is there a way to include this app multiple times in a project under various names? I feel like it should be obvious and I'm just not thinking clearly about it. Does anybody have any experience with this? I'd appreciate any advice people can give. Thank you.
[ "What's the actual difference between blogs and news? Perhaps that difference ought to be part of the blog/news app and you include it just once.\nIf you have a blog page with blog entries and a news page with news entries and the only difference is a field in the database (kind_of_item = \"blog\" vs. kind_of_item = \"news\") then perhaps have you have this.\nurls.py\n(r'^/(?P<kind>blog)/$', 'view.stuff'),\n(r'^/(?P<kind>news)/$', 'view.stuff'),\n\nviews.py\ndef stuff( request, kind ):\n content= news_blog.objects.filter( kind=kind )\n return render_to_response( kind+\"_page\", { 'content': content } )\n\nPerhaps you don't need the same app twice, but need to extend the app to handle both use cases.\n", "In this case you could create the common piece of code as a Python module instead of a whole new application.\nThen for each instance you would like to use it, create an app and import the bits from that module.\n", "I'm not 100% sure I understand your question, so I'm going to list my understanding, and let me know if it is different from yours.\n\nYou want to have a \"news\" and a \"blog\" section of your website with identical functionality.\nYou want to have \"news\" and \"blog\" entries stored separately in the database so they don't end up intermingling.\n\nIf this is the case, I'd suggest making an API to your views. Something like this:\nviews.py:\ndef view_article(request, article_slug,\n template_name='view_article.html',\n form_class=CommentForm,\n model_class=NewsArticle,\n success_url=None,\n ):\n\nurls.py:\n(r'^news/(?P<article_slug>[-\\w]+)/$', 'view_article', {}, \"view_news_article\"),\n(r'^blog/(?P<article_slug>[-\\w]+)/$', 'view_article', {'model_class': BlogArticle}, \"view_blog_article\"),\n\nThis makes your app highly reusable by offering the ability to override the template, form, model, and success_url straight from urls.py.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001188052_django_python.txt
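One way to wire up the single-app idea from the answers is to include the same URLconf twice under different prefixes and pass an extra keyword argument through to the shared views; Django forwards the extra options dict to every view in the included URLconf. A sketch with illustrative names (the model and template names are assumptions, not from the app above):

    # urls.py (project level)
    from django.conf.urls.defaults import patterns, include

    urlpatterns = patterns('',
        (r'^news/', include('news_content.urls'), {'kind': 'news'}),
        (r'^blog/', include('news_content.urls'), {'kind': 'blog'}),
    )

    # news_content/views.py (the shared view receives the extra kwarg)
    from django.shortcuts import render_to_response
    from news_content.models import NewsItem  # illustrative model name

    def item_list(request, kind):
        items = NewsItem.objects.filter(kind=kind)
        return render_to_response('%s_list.html' % kind, {'items': items})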
Q: Python: encryption as a means to prevent data tampering Many of my company's clients use our data acquisition software on a research basis. Due to the nature of research in general, some of the clients ask that data be encrypted to prevent tampering -- there could be serious ramifications if their data was shown to be falsified. Some of our binary software encrypts output files with a password stored in the source that looks like random characters. At the software level, we are able to open up encrypted files for read-only operations. If someone really wanted to find out the password so that they could alter data, it would be possible, but it would be a lot of work. I'm looking into using Python for rapid development of another piece of software. To duplicate the functionality of encryption to defeat/discourage data tampering, the best idea I've come up with so far is to just use ctypes with a DLL for file reading/writing operations, so that the method of encryption and decryption is "sufficiently" obfuscated. We are well aware that an "uncrackable" method is unattainable, but at the same time I'm obviously not comfortable with just having the encryption/decryption approaches sitting there in plain text in the Python source code. A "very strong discouragement of data tampering" would be good enough, I think. What would be the best approach to attain a happy medium of encryption or other proof of data integrity using Python? I saw another post talking about generating a "tamper proof signature", but if a signature was generated in pure Python then it would be trivial to generate a signature for any arbitrary data. We might be able to phone home to prove data integrity, but that seems like a major inconvenience for everyone involved. A: Here's another issue. Presumably, your data acquisition software is collecting data from some external source (like some sort of measuring device), then doing whatever processing is necessary on the raw data and storing the results.
Regardless of what method you use in your program, another possible attack vector would be to feed in bad data to the program, and the program itself has no way of knowing that you are feeding in made-up data rather than data that came from the measuring device. But this might not be fixable. Another possible attack vector (and probably the one you are concerned about) is tampering with the data on the computer after it has been stored. Here's an idea to mitigate that risk: set up a separate server (this could either be something your company would run, or more likely it would be something the client would set up) with a password-protected web service that allows a user to add (but not remove) data records. Then have your program, when it collects data, send it to the server (using the password/connection string which is stored in the program). Have your program only write the data to the local machine if it receives confirmation that the data has been successfully stored on the server. Now suppose an attacker tries to tamper with the data on the client. If he can reverse engineer the program then he can of course still send it to the server for storage, just as the program did. But the server will still have the original data, so the tampering will be detectable because the server will end up with both the original and modified data - the client won't be able to erase the original records. (The client program of course does not need to know how to erase records on the server.)
Python: encryption as a means to prevent data tampering
Many of my company's clients use our data acquisition software on a research basis. Due to the nature of research in general, some of the clients ask that data be encrypted to prevent tampering -- there could be serious ramifications if their data was shown to be falsified. Some of our binary software encrypts output files with a password stored in the source that looks like random characters. At the software level, we are able to open up encrypted files for read-only operations. If someone really wanted to find out the password so that they could alter data, it would be possible, but it would be a lot of work. I'm looking into using Python for rapid development of another piece of software. To duplicate the functionality of encryption to defeat/discourage data tampering, the best idea I've come up with so far is to just use ctypes with a DLL for file reading/writing operations, so that the method of encryption and decryption is "sufficiently" obfuscated. We are well aware that an "uncrackable" method is unattainable, but at the same time I'm obviously not comfortable with just having the encryption/decryption approaches sitting there in plain text in the Python source code. A "very strong discouragement of data tampering" would be good enough, I think. What would be the best approach to attain a happy medium of encryption or other proof of data integrity using Python? I saw another post talking about generating a "tamper proof signature", but if a signature was generated in pure Python then it would be trivial to generate a signature for any arbitrary data. We might be able to phone home to prove data integrity, but that seems like a major inconvenience for everyone involved.
[ "As a general principle, you don't want to use encryption to protect against tampering, instead you want to use a digital signature. Encryption gives you confidentiality, but you are after integrity.\nCompute a hash value over your data and either store the hash value in a place where you know it cannot be tampered with or digitally sign it.\nIn your case, it seems like you want to ensure that only your software can have generated the files? Like you say, there cannot exist a really secure way to do this when your users have access to the software since they can tear it apart and find any secret keys you include. Given that constraint, I think your idea of using a DLL is about as good as you can do it.\n", "If you are embedding passwords somewhere, you are already hosed. You can't guarantee anything.\nHowever, you could use public key/private key encryption to make sure the data hasn't been tampered with.\nThe way it works is this:\n\nYou generate a public key / private key pair.\nKeep the private key secure, distribute the public key.\nHash the data and then sign the hash with the private key.\nUse the public key to verify the hash.\n\nThis effectively renders the data read-only outside your company, and provides your program a simple way to verify that the data hasn't been modified without distributing passwords.\n", "Here's another issue. Presumably, your data acquisition software is collecting data from some external source (like some sort of measuring device), then doing whatever processsing is necessary on the raw data and storing the results. Regardless of what method you use in your program, another possible attack vector would be to feed in bad data to the program, and the program itself has no way of knowing that you are feeding in made up data rather than data that came from the measuring device. But this might not be fixable.\nAnother possible attack vector (and probably the one you are concerned about is tampering with the data on the computer after it has been stored. Here's an idea to mitigate that risk: set up a separate server (this could either be something your company would run, or more likely it would be something the client would set up) with a password protected web service that allows a user to add (but not remove) data records. Then have your program, when it collects data, send it to the server (using the password/connection string which is stored in the program). Have your program only write the data to the local machine if it receives confirmation that the data has been successfully stored on the server.\nNow suppose an attacker tries to tamper with the data on the client. If he can reverse engineer the program then he can of course still send it to the server for storage, just as the program did. But the server will still have the original data, so the tampering will be detectable because the server will end up with both the original and modified data - the client won't be able to erase the original records. (The client program of course does not need to know how to erase records on the server.)\n" ]
[ 13, 3, 0 ]
[]
[]
[ "data_integrity", "encryption", "python", "tampering" ]
stackoverflow_0001178789_data_integrity_encryption_python_tampering.txt
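A minimal sketch of the sign-then-verify idea from the first two answers, using an HMAC from the standard library. Note this is the shared-secret variant: a key embedded in shipped software can still be extracted, so it only raises the bar, and public/private key signing (via an external crypto library) remains the stronger option.

    import hmac
    import hashlib

    SECRET_KEY = "replace-with-your-key"  # placeholder; would live in the DLL/binary

    def sign(data):
        # Hash-based signature over the acquired data.
        return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

    def verify(data, signature):
        # Recompute and compare; a mismatch means the data was altered.
        return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest() == signature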
Q: Make SetupTools/easy_install aware of installed Debian Packages? I'm installing an egg with easy_install which requires ruledispatch. It isn't available on PyPI, and when I use PEAK's version it FTBFS (fails to build from source). There is, however, a python-dispatch package which provides the same functionality as ruledispatch. How can I get easy_install to stop trying to install ruledispatch, and to allow it to recognize that ruledispatch is already installed as python-ruledispatch? I'm running Debian etch with Python 2.4. A: The least fiddly path is likely: easy_install --no-deps Look at the egginfo of what you just installed Install all dependencies except ruledispatch by hand Optionally, prod the people responsible to list their stuff on pypi / not have dependencies that the package installer can't possibly satisfy / use dependency_links / use a custom package index / something. If the python-ruledispatch from the .deb is the same as what the egg depends on, or compatible, this should work.
Make SetupTools/easy_install aware of installed Debian Packages?
I'm installing an egg with easy_install which requires ruledispatch. It isn't available on PyPI, and when I use PEAK's version it FTBFS (fails to build from source). There is, however, a python-dispatch package which provides the same functionality as ruledispatch. How can I get easy_install to stop trying to install ruledispatch, and to allow it to recognize that ruledispatch is already installed as python-ruledispatch? I'm running Debian etch with Python 2.4.
[ "The path least fiddly is likely:\n\neasy_install --no-deps\nLook at the egginfo of what you just installed\nInstall all dependencies except ruledispatch by hand\nOptionally, prod the people responsible to list their stuff on pypi / not have dependencies that the package installer can't possibly satisfy / use dependency_links / use a custom package index / something.\n\nIf the python-ruledispatch from the .deb is the same as the egg depends on or compatible, this should work.\n" ]
[ 3 ]
[]
[]
[ "debian", "easy_install", "etch", "python", "setuptools" ]
stackoverflow_0001188812_debian_easy_install_etch_python_setuptools.txt
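A sketch of those steps as shell commands; the egg and dependency names are placeholders, and the site-packages path is an assumption that depends on how Python was installed:

    easy_install --no-deps TheEggYouWant
    # Inspect the egg's declared requirements:
    cat /usr/lib/python2.4/site-packages/TheEggYouWant-*.egg/EGG-INFO/requires.txt
    # Install everything listed there by hand, except ruledispatch,
    # which the Debian package already provides:
    easy_install SomeOtherDependency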
Q: SharePoint via SOAP using Python I have been following the solution noted here - as this is exactly what I need to achieve: how can i use sharepoint (via soap?) from python? However, when I run one of the last lines of this code I get the following error: >>> client = SoapClient(url, {'opener' : opener}) Traceback (most recent call last): File "<stdin>", line 1, in ? File "build\bdist.win32\egg\suds\client.py", line 456, in __init__ AttributeError: 'str' object has no attribute 'options' Any advice or suggestion as to how to solve this would be most welcome! A: According to https://fedorahosted.org/suds/browser/trunk/suds/client.py?rev=504 434 class SoapClient: ... 445 """ 446 447 def __init__(self, client, method): 448 """ 449 @param client: A suds client. 450 @type client: L{Client} 451 @param method: A target method. 452 @type method: L{Method} 453 """ 454 self.client = client 455 self.method = method 456 self.options = client.options 457 self.cookiejar = CookieJar() The first parameter of SoapClient is not a string but an object of the Client class. Your parameter is not an instance of the required class.
SharePoint via SOAP using Python
I have been following the solution noted here - as this is exactly what I need to achieve: how can i use sharepoint (via soap?) from python? However, when I run one of the last lines of this code I get the following error: >>> client = SoapClient(url, {'opener' : opener}) Traceback (most recent call last): File "<stdin>", line 1, in ? File "build\bdist.win32\egg\suds\client.py", line 456, in __init__ AttributeError: 'str' object has no attribute 'options' Any advice or suggestion as to how to solve this would be most welcome!
[ "According to https://fedorahosted.org/suds/browser/trunk/suds/client.py?rev=504\n434 class SoapClient:\n...\n445 \"\"\"\n446 \n447 def __init__(self, client, method):\n448 \"\"\"\n449 @param client: A suds client.\n450 @type client: L{Client}\n451 @param method: A target method.\n452 @type method: L{Method}\n453 \"\"\"\n454 self.client = client\n455 self.method = method\n456 self.options = client.options\n457 self.cookiejar = CookieJar()\n\nThe first parameter of SoapClient is not a string but an object of the Client class. Your parameter is not an instance of the required class.\n" ]
[ 1 ]
[]
[]
[ "python", "sharepoint", "soap", "suds" ]
stackoverflow_0001078593_python_sharepoint_soap_suds.txt
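For reference, the supported entry point in suds is the Client class, which takes the WSDL URL as a string and options (such as a custom transport for authentication, in place of the opener above) as keyword arguments. A minimal sketch, with an illustrative SharePoint WSDL URL and method:

    from suds.client import Client

    url = "http://server/sites/MySite/_vti_bin/Lists.asmx?WSDL"  # illustrative
    client = Client(url)
    result = client.service.GetListCollection()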
Q: Why is the trailing slash in the web service URL so important? I was testing a web service in PHP and Python. The address of the web service was, let's say, http://my.domain.com/my/webservice. When I tested the web service in PHP using that URL, everything worked fine. But when I used the same location in Python using SOAPpy, I got an error. Below is the code I used to communicate with the web service (Python): from SOAPpy import WSDL server = SOAPProxy('http://my.domain.com/my/webservice', namespace) server.myFunction() The response I got from the server: HTTPError: <HTTPError 301 Moved Permanently> I figured out that if I add a trailing slash to the web service location, it works! from SOAPpy import WSDL server = SOAPProxy('http://my.domain.com/my/webservice/', namespace) server.myFunction() Why does the lack of the trailing slash cause the error? A: They're different URLs. http://my.domain.com/my/webservice implies a file webservice in the my folder. http://my.domain.com/my/webservice/ implies the default document inside the my/webservice folder. Many webservers will automatically correct such URLs, but it is not required for them to do so.
Why is the trailing slash in the web service URL so important?
I was testing a web service in PHP and Python. The address of the web service was, let's say, http://my.domain.com/my/webservice. When I tested the web service in PHP using that URL, everything worked fine. But when I used the same location in Python using SOAPpy, I got an error. Below is the code I used to communicate with the web service (Python): from SOAPpy import WSDL server = SOAPProxy('http://my.domain.com/my/webservice', namespace) server.myFunction() The response I got from the server: HTTPError: <HTTPError 301 Moved Permanently> I figured out that if I add a trailing slash to the web service location, it works! from SOAPpy import WSDL server = SOAPProxy('http://my.domain.com/my/webservice/', namespace) server.myFunction() Why does the lack of the trailing slash cause the error?
[ "They're different URLs. http://my.domain.com/my/webservice implies a file webservice in the my folder. http://my.domain.com/my/webservice/ implies the default document inside the my/webservice folder.\nMany webservers will automatically correct such URLs, but it is not required for them to do so.\n", "Because the actual server URL is:\nhttp://my.domain.com/my/webservice/\nThe PHP library must be following redirects by default.\n", "The error is a 301 redirect meaning the you are being redirected to the URL with the slash on the end by the web server. \nIt seems that PHP will auto follow this redirect and thus not throw the error, whereas Python won't. You will need to do the following:\n\nTry to Connect to the initial URL\nCatch any 301 redirect and possibly 302 redirects as well\nIf there was a redirect then try to connect to that URL instead.\n\nThe new URL should be available in the response headers. \nHTH.\n", "[Disclaimer: This is a copy of my answer from here. I know some people don't like this kind of copying, but this explains why the slash is important.]\nImagine you serve a page\nhttp://mydomain.com/bla\n\nthat contains\n<a href=\"more.html\">Read more...</a>\n\nOn click, the user's browser would retrieve http://mydomain.com/more.html. Had you instead served\nhttp://mydomain.com/bla/\n\n(with the same content), the browser would retrieve http://mydomain.com/bla/more.html. To avoid this ambiguity, the redirection appends a slash if the URL points to a directory.\n", "What a SOAP-URL looks like is up to the server, if a slash is necessary depends on the server and the SOAP implementation.\nIn your case, I assume that the target server is an apache server and the SOAP URL is actually a directory that contains your SOAP handling script.\nWhen you access http://my.domain.com/my/webservice on the server, apache decides that the directory is properly addressed as http://my.domain.com/my/webservice/ and sends a 301 redirect.\nSOAP uses a http POST, its up to the client to decide if the redirect should be followed or not, I assume that it just doesn't expect one.\nOther implementations of SOAP, e.g. Apache Axis in Java have URLs that look like Servlets, e.g. http://domain.com/soap/webservice without slash, in this case the URL without slash is correct, there is no directory that exists anyway.\nAxis fails on redirects as well, I think.\n" ]
[ 20, 3, 3, 2, 0 ]
[]
[]
[ "php", "python", "soappy", "wsdl" ]
stackoverflow_0001188927_php_python_soappy_wsdl.txt
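If hard-coding the slash feels fragile, one sketch is to resolve the redirect up front with urllib2, which follows 301 responses automatically, and feed the final URL to SOAPProxy (namespace is the value from the question's snippet):

    import urllib2
    from SOAPpy import SOAPProxy

    response = urllib2.urlopen('http://my.domain.com/my/webservice')
    final_url = response.geturl()  # e.g. 'http://my.domain.com/my/webservice/'
    server = SOAPProxy(final_url, namespace)
    server.myFunction()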