Q: Python: Lazy String Decoding

I'm writing a parser, and there is LOTS of text to decode, but most of my users will only care about a few fields from all the data. So I only want to do the decoding when a user actually uses some of the data. Is this a good way to do it?

    class LazyString(str):
        def __init__(self, v):
            self.value = v
        def __str__(self):
            r = ""
            s = self.value
            for i in xrange(0, len(s), 2):
                r += chr(int(s[i:i+2], 16))
            return r

    def p_buffer(p):
        """buffer : HASH chars"""
        p[0] = LazyString(p[2])

Is that the only method I need to override?

A: I'm not sure how implementing a string subclass is of much benefit here. It seems to me that if you're processing a stream containing petabytes of data, whenever you've created an object that you don't need to, you've already lost the game. Your first priority should be to ignore as much input as you possibly can.

You could certainly build a string-like class that did this:

    class mystr(str):
        def __init__(self, value):
            self.value = value
            self._decoded = None
        @property
        def decoded(self):
            if self._decoded == None:
                self._decoded = self.value.decode("hex")
            return self._decoded
        def __repr__(self):
            return self.decoded
        def __len__(self):
            return len(self.decoded)
        def __getitem__(self, i):
            return self.decoded.__getitem__(i)
        def __getslice__(self, i, j):
            return self.decoded.__getslice__(i, j)

and so on. A weird thing about doing this is that if you subclass str, every method that you don't explicitly implement will be called on the value that's passed to the constructor:

    >>> s = mystr('a0a1a2')
    >>> s
     ¡¢
    >>> len(s)
    3
    >>> s.capitalize()
    'A0a1a2'

A: I don't see any kind of lazy evaluation in your code. The fact that you use xrange only means that the list of integers from 0 to len(s) will be generated on demand. The whole string r will be decoded during string conversion anyway.

The best way to implement a lazy sequence in Python is with generators. You could try something like this:

    def lazy(v):
        for i in xrange(0, len(v), 2):
            yield int(v[i:i+2], 16)

    list(lazy("0a0a0f"))
    Out: [10, 10, 15]

A: What you're doing is built in already:

    s = "i am a string!".encode('hex')
    # what you do
    r = ""
    for i in xrange(0, len(s), 2):
        r += chr(int(s[i:i+2], 16))
    # but decoding is built in
    print r == s.decode('hex')  # => True

As you can see, your whole decoding is s.decode('hex').

But "lazy" decoding sounds like premature optimization to me. You'd need gigabytes of data to even notice it. Try profiling; the .decode is 50 times faster than your old code already.

Maybe you want something like this:

    class DB(object):  # dunno what data it is ;)
        def __init__(self, data):
            self.data = data
            self.decoded = {}  # maybe cache if the field data is long
        def __getitem__(self, name):
            try:
                return self.decoded[name]
            except KeyError:
                # this copies the field's data
                self.decoded[name] = ret = self.data[ self._get_field_slice( name ) ].decode('hex')
                return ret
        def _get_field_slice(self, name):
            # find out what part to decode, return the index in the data
            return slice( ... )

    db = DB(encoded_data)
    print db["some_field"]  # find out where the field is, get its data and decode it

A: The methods you need to override really depend on how you are planning to use your new string type.

However, your str-based type looks a little suspicious to me. Have you looked into the implementation of str to check that it has the value attribute that you are setting in your __init__()? Performing a dir(str) does not indicate that there is any such attribute on str. That being the case, the normal str methods will not be operating on your data at all; I doubt that is the effect you want, otherwise what would be the advantage of subclassing?

Subclassing base data types is a little strange anyway unless you have very specific requirements. For the lazy evaluation you want, you are probably better off creating a class that contains a string rather than subclassing str, and writing your client code to work with that class. You will then be free to add the just-in-time evaluation you want in a number of ways; an example using the descriptor protocol can be found in the presentation "Python's Object Model" (search for "class Jit(object)" to get to the relevant section).

A: The question is incomplete, in that the answer will depend on details of the encoding you use.

Say, if you encode a list of strings as Pascal strings (i.e. each prefixed with its length encoded as a fixed-size integer), and you want to read the 100th string from the list, you may seek() forward past each of the first 99 strings and not read their contents at all. This will give some performance gain if the strings are large.

If, on the other hand, you encode a list of strings as concatenated 0-terminated strings, you would have to read all bytes until the 100th 0.

Also, you're speaking about some "fields", but your example looks completely different.
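As a concrete illustration of the containment approach suggested in the second-to-last answer (wrapping the raw text rather than subclassing str), here is a minimal sketch; it assumes the raw data is a hex string as in the question, and the class and attribute names are illustrative, not from the original code:

    class LazyField(object):
        """Holds raw hex text; decodes only on first access, then caches."""
        def __init__(self, raw):
            self.raw = raw
            self._decoded = None
        def get(self):
            if self._decoded is None:
                self._decoded = self.raw.decode('hex')  # decode once, cache forever
            return self._decoded

    f = LazyField('68656c6c6f')
    # no decoding has happened yet
    print f.get()   # 'hello' -- decoded on first use
    print f.get()   # served from the cache; no second decode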
Q: web2py - display a SQL query in a form

I have a SQL query:

    family_members = db(\
        db.member.parent_membership_id==parent_id.membership_id\
        ).select(\
        db.member.first_name, db.member.parent_membership_id)

I want to display "family_members" as a table in my form. How can I do this?

A: In the view:

    {{=family_members}}

A: You can follow the example I've shown you in a previous question.

Make sure to also check the documentation on the web2py website. Seeing the work you are doing with this framework, I would recommend buying the official web2py manual, which is really not expensive and will save you a lot of precious time. You can also read it online from the link I gave you, or download a few free chapters.

Basically you have two options:

- use SQLTABLE;
- if you want further control over the table, you can transform the result given by the above option (it's a class you can use to modify the contents), or create it entirely on your own with the HTML helpers (TABLE and so on, from the gluon library).

To illustrate that a little bit:

    family_members = db(...).select(...)  # your rows construct
    table = SQLTABLE(family_members, orderby=True, _class='sortable', _width="100%")

If you want to add a column, for example:

    table[0][0].append(TH("details"))
    for i, value in enumerate(table[1]):
        table[1][i].append(TD("line %d" % i, _align="center"))
Q: A case of outwardly equal lists of sets behaving differently under Python 2.5 (I think ...)

Four years ago I wrote a Sudoku puzzle solver, and now I'm trying to understand how it works so that I can reuse parts of it for a KenKen puzzle solver. I thought I'd better compactify loops into list comprehensions and pick more self-explanatory names for variables.

There's a class Puz which contains the input puzzle as a list of (81) digits: 1 through 9 where the cell's value is known, and 0 where it is not. Class Puz also contains a working version of the puzzle, except that here each of the (81) items in the list is a set; where the answer for a cell is known, the set contains one value from 1 to 9, e.g., set([4]), and where the answer is unknown, the set contains the remaining possibilities, e.g., set([3,5,7,9]).

When Puz.__init__(self, puz) is called, those "maybe" sets in the work list are set to set([1,2,3,4,5,6,7,8,9]), and the first step in getting to a solution is to strike out all the values which appear as answers in that cell's row, column, and 3x3 block.

Originally, the work list was filled in using a for loop: for 0 to 80, if it's an answer, put in the answer as a set, else put in set(range(1, 10)). I couldn't figure out how to get this kind of conditional into a list comprehension, so I broke it out into a separate "filling function", of which 3 versions are shown below. The fill_funcs differ in their "not-an-answer" branches:

    return set( range( 1,(self.dim+1)))
    return set( self.r_dim_1_based)
    return self.set_dim_1_based

As you see, increasing amounts of processing are moved outside the function, back to where the little variables are initialized.

The trouble is, the first two variations slip into the Sudoku solver and work exactly the way the original code did. BUT --- the third variation breaks, saying that the 6th set in the working list is (or becomes) empty. YET --- the lists of sets produced by all three variations evaluate as equal:

    p.W1 == p.W2 == p.W3 --> True

I'm stumped. Here's some code for making the lists of sets:

    #!/usr/bin/env python

    import copy
    from math import sqrt

    ''' Puzzle #15, from The Guardian: 050624: #41: rated difficult '''
    puz = [ 0,0,1, 9,0,0, 3,0,0,
            0,0,0, 0,0,0, 2,0,0,
            7,6,0, 0,2,0, 0,0,9,
            3,0,0, 0,6,0, 0,0,5,
            0,0,2, 1,0,3, 4,0,0,
            4,0,0, 0,9,0, 0,0,3,
            1,0,0, 0,3,0, 0,9,7,
            0,0,4, 0,0,0, 0,0,0,
            0,0,5, 0,0,8, 6,0,0 ]

    class GroupInfo:
        pass

    class Puz(GroupInfo):
        def __init__(self, puz):
            self.A = copy.deepcopy(puz)
            self.ncells = len( self.A)
            self.r_ncells = range( 0, self.ncells)
            self.dim = int( sqrt( self.ncells))
            assert (self.dim ** 2) == self.ncells, "puz is not square"
            self.r_dim_0_based = range( 0, self.dim)
            self.r_dim_1_based = range( 1, self.dim + 1)
            self.set_dim_1_based = set( self.r_dim_1_based)  ## <<---- causes it to fail!
                                                             ##### with 'empty set at W[5]' !?!?!?

            def W1_fill_func( val):
                if (val == 0):
                    return set( range( 1, (self.dim+1)))
                else:
                    return set( [val])
            self.W1 = [ W1_fill_func( self.A[cid]) for cid in self.r_ncells ]

            def W2_fill_func( val):
                if (val == 0):
                    return set( self.r_dim_1_based)
                else:
                    return set( [val])
            self.W2 = [ W2_fill_func( self.A[cid]) for cid in self.r_ncells ]

            def W3_fill_func( val):
                if (val == 0):
                    return self.set_dim_1_based
                else:
                    return set( [val])
            self.W3 = [ W3_fill_func( self.A[cid]) for cid in self.r_ncells ]

            return
        #def Puz.__init__()
    #class Puz

    p = Puz(puz)
    print p.W1 == p.W2 == p.W3

A: self.W3 as you've coded it contains many references to the same set object -- as soon as you call any mutating method on one of those references, you've changed all the others. You need to ensure W3_fill_func returns independent copies of the set of interest, just like the others do, e.g. by changing its return to return set(self.set_dim_1_based).
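The aliasing the answer describes is easy to reproduce in isolation; here is a minimal standalone demonstration (not part of the original code):

    shared = set([1, 2, 3])
    W = [shared, shared]       # two references to ONE set object
    W[0].discard(2)            # mutate through the first reference
    print W[1]                 # set([1, 3]) -- the "other" entry changed too

    independent = [set(shared) for _ in range(2)]   # copies, not aliases
    independent[0].discard(1)
    print independent[1]       # set([1, 3]) -- unaffected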
Q: Django unique_together and flagging objects as "deleted"

I'm implementing the first option discussed in "Marking for deletion in Django": when an object is no longer active, I set a boolean to mark it as inactive. The specific reason I'm using this method is that although the object is no longer in active use, it may still be referenced and displayed in various records and reporting outputs. I don't want Django's ripple delete to remove the old records.

How should I go about enforcing uniqueness of active objects? Initially, I thought I should use unique_together to enforce my constraints at a database level. This works fine until I delete an object, at which point adding a new active object with the same name violates the uniqueness requirement. I could simply reflag the object as active, but I actually want a new object. I'm looking for something that lets me say something like "unique together when active = True". I could enforce this in the model creation code, but it seems like enforcing it at the database level is a better idea.

Any advice about which of these is the best approach? Any better suggestions?

Note: django-reversion is cool, but totally does not work for my application since I DO need to access "deleted" objects from time to time.

A: You could have a unique constraint:

    class Meta:
        unique_together = ( ('name', 'active'),)

However, that means you can only have one active and one inactive object with the same name.

If you make active a NullBooleanField, then you can have NULL for active, and have (IIRC) a limitless number of objects that are inactive with the same name. PostgreSQL, at least, interprets a NULL value that is part of a constraint as not breaking the constraint.

A: I think I understand where you're coming from when you say that unique_together should remain on the database level. In the theoretical best of situations, the best design would be one which does not rely on understanding the underlying DB enforcement of unique_together and instead acts as if that constraint is a permanent rule for that table.

Working from that idea, what if the differentiation between active/inactive happened on the "table level" and not with some fancy model hacking? Consider the following:

    class BaseModel(models.Model):
        # all of your fields here
        active = models.BooleanField()


    class ActiveModel(BaseModel):
        class Meta:
            unique_together = ('whatever', 'fields')

        def make_inactive(self):
            # create a new instance of InactiveModel based on
            # this instance's values, then delete this instance


    class InactiveModel(BaseModel):
        def make_active(self):
            # create a new instance of ActiveModel based on
            # this instance's values, then delete this instance

In this way, whenever you create a new model, you do so using ActiveModel. The unique_together is then enforced for active models only. To mark a model inactive, you do model.make_inactive() instead of the former model.active = False. You can still continue to execute queries against BaseModel and access both active and inactive models.
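A minimal sketch of the NULL-based approach from the first answer, assuming the single-model design from the question (the model and field names are illustrative, and the NULL behavior is database-dependent -- PostgreSQL is the case the answer vouches for):

    from django.db import models

    class Widget(models.Model):
        name = models.CharField(max_length=100)
        # NULL means "inactive": NULLs never collide under the unique
        # constraint, so any number of inactive Widgets may share a name,
        # while at most one can have active=True.
        active = models.NullBooleanField(default=True)

        class Meta:
            unique_together = (('name', 'active'),)

    # to "delete": widget.active = None; widget.save()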
Q: How to run code on Pylons startup

I have a Python 2.6 web app built on Pylons 0.9.7. The code in my controller only runs the first time a client requests it, which is fair enough, but is there any way I can run some code as soon as the server starts and is ready to accept requests, without waiting until a request is actually received?

A: It's an environment setting, if that's what you're asking.

Specifically: in lib/app_globals, modify __init__().

See: http://pylonshq.com/docs/en/0.9.7/configuration/#environment

An alternative method is to have your helper script (the one that launches the server) run the code prior to running the site.
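A hedged sketch of what the answer suggests, roughly as it would look in a Pylons 0.9.7 project's lib/app_globals.py (the warm_cache method is a placeholder for whatever startup work you need; only the idea of putting code in Globals.__init__ comes from the answer):

    class Globals(object):
        """Container for objects available throughout the app's lifetime."""
        def __init__(self):
            # Runs once when the application is loaded,
            # before any request is served.
            self.startup_data = self.warm_cache()

        def warm_cache(self):
            # placeholder: load files, prime caches, open connections, ...
            return {}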
Q: sqlalchemy: AttributeError: 'tuple' object has no attribute 'insert'

I was playing around making a simple haiku site using sqlalchemy and pylons. It basically takes a haiku, writes it to a database, and displays the haiku. The problem appears when I get the data from the form and try to write it to the database; Pylons gives me this error:

    AttributeError: 'tuple' object has no attribute 'insert'

after I run this line of code:

    ins = self.haiku_table.insert(values=form_dict)

Main code:

    import logging

    from pylons import request, response, session, tmpl_context as c
    from pylons.controllers.util import abort, redirect_to
    from myku.lib.base import BaseController, render
    from sqlalchemy.sql import select
    import meta
    import myku.lib.helpers as h

    log = logging.getLogger(__name__)

    class IndexController(BaseController):
        def __init__(self):
            self.haiku_table = meta.haiku_table
            self.conn = meta.engine.connect()
            BaseController.__init__(self)

        def index(self, genre, title):
            ss = select([self.haiku_table],
                        self.haiku_table.c.genre==str(genre).lower(),
                        self.haiku_table.c.title==str(title).lower())
            result = self.conn.execute(ss)
            return result

        def new_haiku(self):
            return render('/newku.html')

        def submit(self):
            title = request.params.get('title')
            haiku = request.params.get('haiku')
            genre = request.params.get('genre')
            author = request.params.get('author')
            form_dict = {'title': title, 'haiku': haiku, 'genre': genre, 'author': author}
            ins = self.haiku_table.insert(values=form_dict)
            result = self.conn.execute(ins)
            return res

and the code for the meta file:

    from sqlalchemy.engine import create_engine
    from sqlalchemy import schema, types

    metadata = schema.MetaData()

    haiku_table = ('haiku', metadata,
        schema.Column('title', types.Text(), primary_key=True),
        schema.Column('haiku', types.Text()),
        schema.Column('genre', types.Text()),
        schema.Column('author', types.Text())
    )

    engine = create_engine('sqlite:///F:\\MyKu\\myku\\haiku')
    metadata.bind = engine
    metadata.create_all(checkfirst=True)

Any ideas? I have no clue.

A: Well, it looks like you're creating haiku_table and not doing anything else to it before trying to use the .insert function, which obviously is not part of a tuple.

When you create a table with SQLAlchemy, you want the format:

    haiku_table = Table('haiku', metadata,
        schema.Column('title', types.Text(), primary_key=True),
        .... etc
    )

You will need to import Table from the sqlalchemy module as well. This makes haiku_table a Table instance of SQLAlchemy and not simply a tuple. I think that's all you're missing.
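For concreteness, here is what the corrected table definition from the answer would look like in the meta module -- a sketch in which only the Table call changes from the question's code (using schema.Table to stay consistent with the existing schema.Column imports):

    from sqlalchemy import schema, types

    metadata = schema.MetaData()

    # schema.Table(...) instead of a bare (...) tuple:
    haiku_table = schema.Table('haiku', metadata,
        schema.Column('title', types.Text(), primary_key=True),
        schema.Column('haiku', types.Text()),
        schema.Column('genre', types.Text()),
        schema.Column('author', types.Text()),
    )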
Q: MacPython: programmatically finding all serial ports

I am looking for a solution to programmatically return all available serial ports with Python. At the moment I am entering ls /dev/tty.* or ls /dev/cu.* into the terminal to list ports and hardcoding them into the pyserial class.

A: You could do something like this:

    import glob
    def scan():
        return glob.glob('/dev/tty*') + glob.glob('/dev/cu*')

    for port in scan():
        # do something to check this port is open.

Then, take a look at pyserial for some good utility functions to check if a port is open and so forth.

A: What about just doing the os.listdir / glob equivalent of ls to perform the equivalent of that ls? Of course it's not going to be the case that some usable device is connected to each such special file (but, that holds for ls as well;-), but for "finding all serial ports", as you ask in your Q's title, I'm not sure how else you might proceed.
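Combining the two answers, a hedged sketch that filters the glob results down to device files pyserial can actually open (assumes pyserial is installed):

    import glob
    import serial

    def scan():
        """Return the device names that open successfully."""
        available = []
        for name in glob.glob('/dev/tty.*') + glob.glob('/dev/cu.*'):
            try:
                s = serial.Serial(name)   # try to open the port
                s.close()
                available.append(name)
            except serial.SerialException:
                pass                      # present on disk, but not usable
        return available

    print scan()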
Q: Creating a shared library in MATLAB

A researcher has created a small simulation in MATLAB and we want to make it accessible to others. My plan is to take the simulation, clean up a few things, and turn it into a set of functions. Then I plan to compile it into a C library and use SWIG to create a Python wrapper. At that point, I should be able to call the simulation from a small Django application. At least I hope so.

Do I have the right plan? Are there any serious pitfalls that I'm not aware of at the moment?

A: One thing to remember is that the MATLAB compiler does not actually compile the MATLAB code into native machine instructions. It simply wraps it into a stand-alone executable or a library with its own runtime engine that runs it. You would be able to run your code without MATLAB installed, and you would be able to interface it with other languages, but it will still be interpreted MATLAB code, so there would be no speedup.

A: I remember that I was able to wrap a MATLAB simulation into a DLL file and then call it from a Delphi application. It worked really well.

A: I'd also try ctypes first.

1. Use the MATLAB compiler to compile the code into C.
2. Compile the C code into a DLL.
3. Use ctypes to load and call code from this DLL.

The hardest step is probably 1, but if you already know MATLAB and have used the MATLAB compiler, you should not have serious problems with it.

A: Perhaps try ctypes instead of SWIG. If it has been included as a part of Python 2.5, then it must be good :-)
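A hedged sketch of step 3 from the ctypes answer -- loading the compiled simulation library and calling one exported function. The library name and the run_simulation signature are purely illustrative assumptions:

    from ctypes import cdll, c_double

    sim = cdll.LoadLibrary('./libsimulation.so')  # or a .dll on Windows

    # assume the C wrapper exports: double run_simulation(double param);
    sim.run_simulation.argtypes = [c_double]
    sim.run_simulation.restype = c_double

    print sim.run_simulation(1.5)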
Q: Inheriting directly from a built-in type versus its wrapper class in Python

I'm currently reading Dive Into Python by Mark Pilgrim, and have gotten to the section on inheritance. In section 5.5, Pilgrim mentions the differences between inheriting from the wrapper class UserDict vs inheriting from the built-in dict type. I'm having trouble understanding why anyone would even bother with the wrapper class... What are the benefits of inheriting from the UserDict wrapper class (or any of the other UserXxx classes)? Your input is much appreciated. Thanks!

A: You're right:

    The need for this class has been largely supplanted by the ability to
    subclass directly from dict (a feature that became available starting
    with Python version 2.2). Prior to the introduction of dict, the
    UserDict class was used to create dictionary-like sub-classes that
    obtained new behaviors by overriding existing methods or adding new ones.

Note the first sentence. This comes from the documentation of UserDict. Oh, and in Python 3 it's gone.

A: The wrapper classes have been removed from Python 3, as they haven't been all that useful for a while now. The mixin class, UserDict.DictMixin, is a completely different story -- its useful features are now found all over the "abstract base classes" in the collections module (Python 2.6 and 3.*).

A: I found, on the page you linked to, a hint as to the answer:

    In versions of Python prior to 2.2, you could not directly subclass
    built-in datatypes like strings, lists, and dictionaries. To compensate
    for this, Python comes with wrapper classes that mimic the behavior of
    these built-in datatypes: UserString, UserList, and UserDict. Using a
    combination of normal and special methods, the UserDict class does an
    excellent imitation of a dictionary. In Python 2.2 and later, you can
    inherit classes directly from built-in datatypes like dict.

In reality, today you probably want to sub-class dict, rather than UserDict.
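For contrast, a minimal sketch of the modern (Python 2.2+) approach the answers recommend -- subclassing dict directly (the class here is purely illustrative):

    class CountingDict(dict):
        """A dict that counts how many times keys are read."""
        def __init__(self, *args, **kwargs):
            dict.__init__(self, *args, **kwargs)
            self.reads = 0
        def __getitem__(self, key):
            self.reads += 1
            return dict.__getitem__(self, key)

    d = CountingDict(a=1)
    d['a']; d['a']
    print d.reads   # 2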
Q: **kwargs search mechanism in an object (Python)

I want to be able to provide a search interface for a collection of objects, to be used by passing a list of keyword arguments like so:

    playerID = players.search(nameFirst='ichiro', nameLast='suzuki')

where players.search is defined like so:

    def search(self, **args):
        ret = []
        for playerID, player in self.iteritems():
            for key, value in args.iteritems():
                if getattr(player, key) == value:
                    ret.append(player.playerID)
        return ret

Obviously the above code doesn't work. I want it, to borrow some SQL idioms, to work like "where player.key == value and player.keyN == valueN", and so on for N keyword arguments passed. Any ideas? Thanks!

A: You should be able to change it to a list comprehension with the all builtin, which returns True iff all the elements in its argument are true (or if the iterable is empty). Something like this should do the trick:

    for playerID, player in self.iteritems():
        if all(getattr(player, key) == value for key, value in args.iteritems()):
            ret.append(player.playerID)

A: "I want it, to borrow some SQL idioms, to work like where player.key == value and player.keyN == valueN, and so on for N keyword arguments passed."

So you're currently implementing an OR and want to implement an AND instead -- is that it?

If so, then the all suggested in @Mark's answer would work -- or alternatively, and equivalently albeit at a lower level of abstraction:

    def search(self, **args):
        ret = []
        for playerID, player in self.iteritems():
            for key, value in args.iteritems():
                if getattr(player, key) != value: break
            else:
                ret.append(player.playerID)

        return ret

I'm not quite sure why you're looping on iteritems and then ignoring the key you're getting (appending player.playerID rather than the playerID key directly).

Anyway, another high-abstraction approach, assuming you don't need the keys...:

    def search(self, **args):
        def vals(p):
            return dict((k, getattr(p, k, None)) for k in args)
        return [p.playerID for p in self.itervalues() if vals(p) == args]

This one doesn't "short-circuit" but is otherwise equivalent to Mark's. Fully equivalent, but quite concise:

    def search(self, **args):
        return [p.playerID for p in self.itervalues()
                if all(getattr(p, k, None)==args[k] for k in args)]

If these code snippets don't meet your needs, and you can clarify why exactly they don't (ideally with an example or three!-), I'm sure they can be tweaked to satisfy said needs.
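A small self-contained demo of the all()-based version, using a plain class in place of the original player collection (the names here are made up for illustration):

    class Player(object):
        def __init__(self, playerID, nameFirst, nameLast):
            self.playerID = playerID
            self.nameFirst = nameFirst
            self.nameLast = nameLast

    roster = [Player(1, 'ichiro', 'suzuki'), Player(2, 'hideki', 'matsui')]

    def search(players, **args):
        # keep only the players matching EVERY keyword criterion
        return [p.playerID for p in players
                if all(getattr(p, k, None) == v for k, v in args.iteritems())]

    print search(roster, nameFirst='ichiro', nameLast='suzuki')  # [1]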
Q: Python executing existing (and big) C++ code

I have a program in C++ that uses the cryptopp library to decrypt/encrypt messages. It offers two interface methods, encrypt and decrypt, that receive a string and operate on it through cryptopp methods. Is there some way to use both methods in Python without manually wrapping all the cryptopp files included? Example:

    import cppEncryptDecrypt
    foo = "testing"
    result = encrypt(foo)
    print "Encrypted string:", result

A: If you can make a DLL from that C++ code, exposing those two methods (ideally as "extern C", which makes all interfacing tasks so much simpler), ctypes can be the answer, not requiring any third-party tool or extension. Otherwise, it's your choice between cython, good old SWIG, SIP, Boost, ... -- many, many such 3rd-party tools will let your Python code call those two C++ entry points without any need for wrapping anything else but them.

A: As Alex suggested, you can make a DLL, export the functions you want to access from Python, and use the ctypes module (http://docs.python.org/library/ctypes.html) to access them, e.g.:

    >>> libc = cdll.LoadLibrary("libc.so.6")
    >>> printf = libc.printf
    >>> printf("Hello, %s\n", "World!")
    Hello, World

Or there is an alternate, simpler approach, which many people do not consider but which is equally useful in many cases: directly call the program from the command line. You said you already have a working program, so I assume it does both encrypt/decrypt from the command line? If yes, why don't you just call the program from os.system or the subprocess module, instead of delving into the code, changing it, and maintaining it?

I would say go the second way unless it can't fulfill your requirements.
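To make the DLL route concrete, here is a hedged sketch of the Python side, assuming the C++ code has been built into a shared library exposing an extern "C" function such as const char* encrypt_str(const char*) -- the library and function names are illustrative assumptions, not part of the original program:

    from ctypes import cdll, c_char_p

    lib = cdll.LoadLibrary('./libcppEncryptDecrypt.so')
    lib.encrypt_str.argtypes = [c_char_p]   # hypothetical exported wrapper
    lib.encrypt_str.restype = c_char_p

    print "Encrypted string:", lib.encrypt_str("testing")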
Q: Python base class method call: unexpected behavior

Why does str(A()) seemingly call A.__repr__() and not dict.__str__() in the example below?

    class A(dict):
        def __repr__(self):
            return 'repr(A)'
        def __str__(self):
            return dict.__str__(self)

    class B(dict):
        def __str__(self):
            return dict.__str__(self)

    print 'call: repr(A) expect: repr(A) get:', repr(A())  # works
    print 'call: str(A) expect: {} get:', str(A())         # does not work
    print 'call: str(B) expect: {} get:', str(B())         # works

Output:

    call: repr(A) expect: repr(A) get: repr(A)
    call: str(A) expect: {} get: repr(A)
    call: str(B) expect: {} get: {}

A: str(A()) does call __str__, in turn calling dict.__str__().

It is dict.__str__() that returns the value repr(A).

A: I have modified the code to clear things up:

    class A(dict):
        def __repr__(self):
            print "repr of A called",
            return 'repr(A)'
        def __str__(self):
            print "str of A called",
            return dict.__str__(self)

    class B(dict):
        def __str__(self):
            print "str of B called",
            return dict.__str__(self)

And the output is:

    >>> print 'call: repr(A) expect: repr(A) get:', repr(A())
    call: repr(A) expect: repr(A) get: repr of A called repr(A)
    >>> print 'call: str(A) expect: {} get:', str(A())
    call: str(A) expect: {} get: str of A called repr of A called repr(A)
    >>> print 'call: str(B) expect: {} get:', str(B())
    call: str(B) expect: {} get: str of B called {}

Meaning that the str function calls the repr function automatically. And since it was redefined in class A, it returns the 'unexpected' value.

A: I have posted a workaround solution to it. Check it out; you might find it useful:

http://blog.teltub.com/2009/10/workaround-solution-to-python-strrepr.html

P.S. Read the original post where I introduced the issue as well. The problem is the unexpected behavior that catches you by surprise.
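The underlying rule -- str() falls back to __repr__ when a type supplies no __str__ of its own -- can be seen in a tiny standalone example (illustrative, separate from the question's code):

    class OnlyRepr(object):
        def __repr__(self):
            return 'repr!'

    print str(OnlyRepr())   # prints: repr! -- no __str__, so repr is used

In the question's case, dict defines no __str__ of its own either, so dict.__str__(self) ends up at object.__str__, which calls the instance's (overridden) __repr__ -- hence the surprising output.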
Q: Why is Python a favourite among people working in the animation industry?

What is it that needs to be coded in Python instead of C/C++ etc.? I know its advantages. I want to know what exactly makes Python the language for people in this industry.

A: Perhaps that's because it's the scripting language for Blender?

A: I work in this industry, and here's what I've observed:

- It's a nice, tidy language that's not hard to pick up. You don't have to be a language guru to use it.
- It embeds nicely in C/C++ applications.
- It has data types, so numeric operations can be done without type shimmering.
- Speaking of numeric operations: NumPy!
- Network effect -- everybody else uses it, so we get a virtuous cycle of interoperable scripting.

A: A few other points I've not seen in the existing answers:

- it's free
- it's fast [enough]
- it runs on every platform I know of (AIX, HP-UX, Linux, Mac OS X, Windows, ...)
- it's quick to learn
- it has large, powerful libraries (numeric, graphical, etc.)
- its syntax is simple and consistent
- the existing user base is large
- because it's easy to learn, you don't have to be a "programmer" to use it

A: Because Python is what Basic should have been ;)

It's a language designed from the beginning to be used by non-programmers, but with the power to be truly used as a general-purpose programming language.

A: Aside from the fact that it's already in use, the main advantage is that it's quick to use. Java, C, and friends almost all require tedious coding that merely restates what you already know. Python is designed to be quick to write, quick to modify, and as general as possible.

As an example, functions in Java require you to declare the type of each of the input variables. In Python, as long as you pass input variables that work with the function, it's valid. This makes your code extremely flexible. You don't waste time declaring variables as one type or another; you just use them.

Some people will tell you that Java produces code that is "more correct", but in animation and graphics, producing code that works in as short a time as possible is usually the goal.

A: My guess is that it is the tool for the job because it is easy to prototype extra features.

A: I used Python for molecular animation using PyMOL. Many molecular visualization programs have their own scripting languages; PyMOL's scripting language is Python, a real programming language. So, if your task requires calculations of any kind, text parsing, or calling the net, you are welcome. I assume that in "conventional" animation the situation is similar.
Why is Python a favourite among people working in the animation industry?
What is it that needs to be coded in Python instead of C/C++ etc? I know its advantages etc. I want to know what exactly makes Python the language for people in this industry?
[ "Perhaps that's because it's a scripting language for Blender?\n", "I work in this industry, and here's what I've observed:\n\nIt's a nice, tidy language that's not hard to pick up. You don't have to be a language guru to use it.\nIt embeds nicely in C/C++ applications.\nIt has data types, so numeric operations can be done without type shimmering.\nSpeaking of numeric operations, NumPy!\nNetwork effect -- everybody else uses it, so we get a virtuous cycle of interoperable scripting.\n\n", "A few other points I've not seen in the existing answers:\n\nit's free\nit's fast [enough]\nit runs on every platform I know of (AIX, HPUX, Linux, Mac OS X, Windows..)\nquick to learn\nlarge, powerful libraries \n\n\nnumeric\ngraphical\netc\n\nsimple, consistent syntax\nthe existing user-base is large\nbecause it's easy-to-learn, you don't have to be a \"programmer\" to use it\n\n", "Because Python is what Basic should have been ;)\nIts a language designed from the beginning to be used by non-programmers, but with the power to be truly used as a general purpose programming language.\n", "Aside from the fact that it's already in use, the main advantage is that it's quick to use. Java, C, and friends almost all require tedious coding that merely restates what you already know. Python is designed to be quick to write, quick to modify, and as general as possible.\nAs an example, functions in java require you to declare the type of each of the input variables. In python, as long as you pass input variables that work with the function, it's valid. This makes your code extremely flexible. You don't waste time declaring variables as one type or another, you just use them.\nSome people will tell you that java produces code that is \"more correct\", but in animation and graphics, producing code that works in as short a time as possible is usually the goal.\n", "My guess is that it is the tool for the job because it is easy to prototype extra features. \n", "I used python for molecular animation using PyMol. Many molecular visualization programs have their own scripting languages. PyMol's scripting language is python, a real programming language. So, if your task requires calculations of any kind, text parsing or calling the net, you are welcome. Before I assume that in \"conventional\" animation the situation is similar.\n" ]
[ 6, 6, 4, 3, 2, 1, 1 ]
[]
[]
[ "animation", "oop", "python" ]
stackoverflow_0001659620_animation_oop_python.txt
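A tiny illustration (all names made up) of the flexibility claim in the answers above: one Python function with no type declarations works for any arguments that support the operations it performs.

def scale(values, factor):
    # no declared types: anything supporting iteration and * works
    return [v * factor for v in values]

print scale([1, 2, 3], 2.0)    # [2.0, 4.0, 6.0]
print scale(['ab', 'cd'], 2)   # ['abab', 'cdcd'] -- strings repeat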
Q: Python socket client to a Java socket server I have a Java socket server that is expecting exactly n bytes from some port. I want to write a Python client that just sends bytes on some port to the Java server. Since Python does not have primitives, I'm not sure how to send exactly n bytes. Any suggestions? More details: I have a Java DatagramSocket that takes in n bytes: DatagramPacket dp = new DatagramPacket(new byte[n], n); A: somesocket.send takes a byte-string argument s -- just ensure that len(s) == n, and you will be sending exactly n bytes. What do "primitives" have to do with it?! To turn arbitrary bunches of data into byte strings (and back), see the struct module in Python's standard library (for the specific but frequent case of homogeneous arrays of simple types such as floats, the array module is often even better). A: If you are using a datagram socket (e. g. the UDP protocol over IP), the Socket API guarantees that if your n is less than the maximum payload size, then your data will be sent in a single packet. So just calling socket.send would be sufficient. The easiest way to send data over a stream socket is to use the socket.sendall method, as send for streams doesn't guarantee that all the data is actually sent (and you should repeatedly call send in order to transmit all the data you have). Here is an example: >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) >>> s.connect(('localhost', 12345)) >>> data = 'your data of length n' >>> s.sendall(data) As @Alex has already mentioned, there is nothing related to some kind of "primitives" in Python. It is just an issue with the Socket API. A: Thanks to your answers, I figured out what I was looking for. What I wanted was struct.unpack and struct.pack to allow me to pack the python float 1.2345 to a string representation of a C double. That is: >>> struct.pack('d', 1.2345) '\x8d\x97n\x12\x83\xc0\xf3?' >>> struct.unpack('d', struct.pack('d', 1.2345))[0] 1.2344999999999999 Thanks!
Python socket client to a Java socket server
I have a Java socket server that is expecting exactly n bytes from some port. I want to write a Python client that just sends bytes on some port to the Java server. Since Python does not have primitives, I'm not sure how to send exactly n bytes. Any suggestions? More details: I have a Java DatagramSocket that takes in n bytes: DatagramPacket dp = new DatagramPacket(new byte[n], n);
[ "somesocket.send takes a byte-string argument s -- just ensure that len(s) == n, and you will be sending exacty n bytes. What do \"primitives\" have to do with it?!\nTo turn arbitrary bunches of data into byte strings (and back), see the struct module in Python's standard library (for the specific but frequent case of homogeneous arrays of simple types such as floats, the array module is often even better).\n", "If you are using a datagram socket (e. g. the UDP protocol over IP), the Socket API guarantees that if your n is less than the maximum payload size, then your data will be sent in a single packet. So just calling socket.send would be sufficient.\nThe easiest way to send data over a stream socket is to use the socket.sendall method, as send for streams doesn't guarantee that all the data is actually sent (and you should repeatedly call send in order to transmit all the data you have). Here is an example:\n>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n>>> s.connect(('localhost', 12345))\n>>> data = 'your data of length n'\n>>> s.sendall(data)\n\nAs @Alex has already mentioned, there is nothing related to some kind of \"primitives\" in Python. It is just and issue with the Socket API.\n", "Thanks to your answers, I figured out what I was looking for. What I wanted was struct.unpack and struct.pack to allow me to pack the python float 1.2345 to a string representation of a C double.\nThat is:\n>>> struct.pack('d', 1.2345)\n'\\x8d\\x97n\\x12\\x83\\xc0\\xf3?'\n>>> struct.unpack('d', struct.pack('d', 1.2345))[0]\n1.2344999999999999\n\nThanks!\n" ]
[ 1, 1, 0 ]
[]
[]
[ "client", "java", "python", "sockets" ]
stackoverflow_0001659584_client_java_python_sockets.txt
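A minimal sketch (Python 2; the host, port, and record format are placeholders) combining the advice above: struct.pack builds a byte string of a known fixed size, and one send on a UDP socket transmits exactly that many bytes. The '!' prefix gives network byte order, matching what a Java DataInputStream expects.

import socket
import struct

# three doubles -> exactly 24 bytes, a fixed-length record
payload = struct.pack('!ddd', 1.2345, 2.0, 3.5)
assert len(payload) == struct.calcsize('!ddd')

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto(payload, ('localhost', 12345))  # one datagram of len(payload) bytes
s.close()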
Q: What is the difference between BaseHTTPServer and SimpleHTTPServer? When and where to use them? What is the difference between BaseHTTPServer and SimpleHTTPServer? When and where should I use these? A: BaseHTTPServer is a HTTP server library. It understands the HTTP protocol and lets your code handle requests. It doesn't have any "logic" on its own. SimpleHTTPServer is built on top of BaseHTTPServer and handles requests in a similar way normal HTTP servers do, i.e. serve files from the file-system. In most cases you will want only BaseHTTPServer, as a base for implementing some development server for a web application. If you are interested in working on a web application, not writing a HTTP server, you are probably looking for the WSGI interface. It allows you to write web applications without depending on a specific server. There are also multiple frameworks that simplify the process.
What is the difference between BaseHTTPServer and SimpleHTTPServer? When and where to use them?
What is the difference between BaseHTTPServer and SimpleHTTPServer? When and where should I use these?
[ "BaseHTTPServer is a HTTP server library. It understands the HTTP protocol and let your code handle requests. It doesn't have any \"logic\" on it's own. SimpleHTTPServer is built on top of BaseHTTPServer and handles requests in a similar way normal HTTP servers do, i.e. serve files from the file-system. In most cases you will want only BaseHTTPServer, as a base for implementing some development server for a web application.\nIf you are interested in working on a web application, not writing a HTTP server, you are probably looking for the WSGI interface. It allows you to write web aplications without depending on a specific server. There are also multiple frameworks that simplify the process.\n" ]
[ 17 ]
[]
[]
[ "basehttpserver", "http", "python", "simplehttpserver" ]
stackoverflow_0001660045_basehttpserver_http_python_simplehttpserver.txt
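A minimal sketch (Python 2) of the distinction drawn above: BaseHTTPServer supplies the protocol handling, while the do_GET method below supplies the application "logic"; SimpleHTTPServer would instead serve files from the current directory.

import BaseHTTPServer

class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # our own "logic": every path gets the same plain-text reply
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write('hello from ' + self.path + '\n')

server = BaseHTTPServer.HTTPServer(('localhost', 8000), Handler)
server.serve_forever()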
Q: How to write a memory efficient Python program? It's said that Python automatically manages memory. I'm confused because I have a Python program that consistently uses more than 2GB of memory. It's a simple multi-thread binary data downloader and unpacker. def GetData(url): req = urllib2.Request(url) response = urllib2.urlopen(req) data = response.read() // data size is about 15MB response.close() count = struct.unpack("!I", data[:4]) for i in range(0, count): UNPACK FIXED LENGTH OF BINARY DATA HERE yield (field1, field2, field3) class MyThread(threading.Thread): def __init__(self, total, daterange, tickers): threading.Thread.__init__(self) def stop(self): self._Thread__stop() def run(self): GET URL FOR EACH REQUEST data = [] items = GetData(url) for item in items: data.append(';'.join(item)) f = open(filename, 'w') f.write(os.linesep.join(data)) f.close() There are 15 threads running. Each request gets 15MB of data, unpacks it, and saves it to a local text file. How could this program consume more than 2GB of memory? Do I need to do any memory recycling jobs in this case? How can I see how much memory each object or function uses? I would appreciate all your advice or tips on how to keep a python program running in a memory efficient mode. Edit: Here is the output of "cat /proc/meminfo" MemTotal: 7975216 kB MemFree: 732368 kB Buffers: 38032 kB Cached: 4365664 kB SwapCached: 14016 kB Active: 2182264 kB Inactive: 4836612 kB A: Like others have said, you need at least the following two changes: Do not create a huge list of integers with range # use xrange for i in xrange(0, count): # UNPACK FIXED LENGTH OF BINARY DATA HERE yield (field1, field2, field3) do not create a huge string as the full file body to be written at once # use writelines f = open(filename, 'w') f.writelines((datum + os.linesep) for datum in data) f.close() Even better, you could write the file as: items = GetData(url) f = open(filename, 'w') for item in items: f.write(';'.join(item) + os.linesep) f.close() A: The major culprit here is as mentioned above the range() call. It will create a list with 15 million members, and that will eat up 200 MB of your memory, and with 15 processes, that's 3GB. But also don't read in the whole 15MB file into data(), read bit by bit from the response. Sticking those 15MB into a variable will use up 15MB of memory more than reading bit by bit from the response. You might want to consider simply just extracting data until you run out of input data, and comparing the count of data you extracted with what the first bytes said it should be. Then you need neither range() nor xrange(). Seems more pythonic to me. :) A: Consider using xrange() instead of range(), I believe that xrange is a generator whereas range() expands the whole list. I'd say either don't read the whole file into memory, or don't keep the whole unpacked structure in memory. Currently you keep both in memory, at the same time, this is going to be quite big. So you've got at least two copies of your data in memory, plus some metadata. Also the final line f.write(os.linesep.join(data)) May actually mean you've temporarily got a third copy in memory (a big string with the entire output file). So I'd say you're doing it in quite an inefficient way, keeping the entire input file, entire output file and a fair amount of intermediate data in memory at once. Using the generator to parse it is quite a nice idea. 
Consider writing each record out after you've generated it (it can then be discarded and the memory reused), or if that causes too many write requests, batch them into, say, 100 rows at once. Likewise, reading the response could be done in chunks. As they're fixed records this should be reasonably easy. A: The last line should surely be f.close()? Those trailing parens are kinda important. A: You can make this program more memory efficient by not reading all 15MB from the TCP connection, but instead processing each line as it is read. This will make the remote servers wait for you, of course, but that's okay. Python is just not very memory efficient. It wasn't built for that. A: You could do more of your work in compiled C code if you convert this to a list comprehension: data = [] items = GetData(url) for item in items: data.append(';'.join(item)) to: data = [';'.join(items) for items in GetData(url)] This is actually slightly different from your original code. In your version, GetData returns a 3-tuple, which comes back in items. You then iterate over this triplet, and append ';'.join(item) for each item in it. This means that you get 3 entries added to data for every triplet read from GetData, each one ';'.join'ed. If the items are just strings, then ';'.join will give you back a string with every other character a ';' - that is ';'.join("ABC") will give back "A;B;C". I think what you actually wanted was to have each triplet saved back to the data list as the 3 values of the triplet, separated by semicolons. That is what my version generates. This may also help somewhat with your original memory problem, as you are no longer creating as many Python values. Remember that a variable in Python has much more overhead than one in a language like C. Since each value is itself an object, and adding the overhead of each name reference to that object, you can easily expand the theoretical storage requirement several-fold. In your case, reading 15Mb X 15 = 225Mb + the overhead of each item of each triple being stored as a string entry in your data list could quickly grow to your 2Gb observed size. At minimum, my version of your data list will have only 1/3 the entries in it, plus the separate item references are skipped, plus the iteration is done in compiled code. A: There are 2 obvious places where you keep large data objects in memory (data variable in GetData() and data in MyThread.run() - these two will take about 500Mb) and probably there are other places in the skipped code. Both are easy to make memory efficient. Use response.read(4) instead of reading whole response at once and do it the same way in code behind UNPACK FIXED LENGTH OF BINARY DATA HERE. Change data.append(...) in MyThread.run() to if not first: f.write(os.linesep) f.write(';'.join(item)) These changes will save you a lot of memory. A: Make sure you delete the threads after they are stopped. (using del)
How to write a memory efficient Python program?
It's said that Python automatically manages memory. I'm confused because I have a Python program that consistently uses more than 2GB of memory. It's a simple multi-thread binary data downloader and unpacker. def GetData(url): req = urllib2.Request(url) response = urllib2.urlopen(req) data = response.read() // data size is about 15MB response.close() count = struct.unpack("!I", data[:4]) for i in range(0, count): UNPACK FIXED LENGTH OF BINARY DATA HERE yield (field1, field2, field3) class MyThread(threading.Thread): def __init__(self, total, daterange, tickers): threading.Thread.__init__(self) def stop(self): self._Thread__stop() def run(self): GET URL FOR EACH REQUEST data = [] items = GetData(url) for item in items: data.append(';'.join(item)) f = open(filename, 'w') f.write(os.linesep.join(data)) f.close() There are 15 threads running. Each request gets 15MB of data, unpacks it, and saves it to a local text file. How could this program consume more than 2GB of memory? Do I need to do any memory recycling jobs in this case? How can I see how much memory each object or function uses? I would appreciate all your advice or tips on how to keep a python program running in a memory efficient mode. Edit: Here is the output of "cat /proc/meminfo" MemTotal: 7975216 kB MemFree: 732368 kB Buffers: 38032 kB Cached: 4365664 kB SwapCached: 14016 kB Active: 2182264 kB Inactive: 4836612 kB
[ "Like others have said, you need at least the following two changes:\n\nDo not create a huge list of integers with range\n# use xrange\nfor i in xrange(0, count):\n # UNPACK FIXED LENGTH OF BINARY DATA HERE\n yield (field1, field2, field3)\n\ndo not create a huge string as the full file body to be written at once\n# use writelines\nf = open(filename, 'w')\nf.writelines((datum + os.linesep) for datum in data)\nf.close()\n\n\nEven better, you could write the file as:\n items = GetData(url)\n f = open(filename, 'w')\n for item in items:\n f.write(';'.join(item) + os.linesep)\n f.close()\n\n", "The major culprit here is as mentioned above the range() call. It will create a list with 15 million members, and that will eat up 200 MB of your memory, and with 15 processes, that's 3GB.\nBut also don't read in the whole 15MB file into data(), read bit by bit from the response. Sticking those 15MB into a variable will use up 15MB of memory more than reading bit by bit from the response.\nYou might want to consider simply just extracting data until you run out if indata, and comparing the count of data you extracted with what the first bytes said it should be. Then you need neither range() nor xrange(). Seems more pythonic to me. :)\n", "Consider using xrange() instead of range(), I believe that xrange is a generator whereas range() expands the whole list.\nI'd say either don't read the whole file into memory, or don't keep the whole unpacked structure in memory.\nCurrently you keep both in memory, at the same time, this is going to be quite big. So you've got at least two copies of your data in memory, plus some metadata.\nAlso the final line \n f.write(os.linesep.join(data))\n\nMay actually mean you've temporarily got a third copy in memory (a big string with the entire output file).\nSo I'd say you're doing it in quite an inefficient way, keeping the entire input file, entire output file and a fair amount of intermediate data in memory at once.\nUsing the generator to parse it is quite a nice idea. Consider writing each record out after you've generated it (it can then be discarded and the memory reused), or if that causes too many write requests, batch them into, say, 100 rows at once.\nLikewise, reading the response could be done in chunks. As they're fixed records this should be reasonably easy.\n", "The last line should surely be f.close()? Those trailing parens are kinda important. \n", "You can make this program more memory efficient by not reading all 15MB from the TCP connection, but instead processing each line as it is read. This will make the remote servers wait for you, of course, but that's okay. \nPython is just not very memory efficient. It wasn't built for that.\n", "You could do more of your work in compiled C code if you convert this to a list comprehension:\ndata = []\nitems = GetData(url)\nfor item in items:\n data.append(';'.join(item))\n\nto:\ndata = [';'.join(items) for items in GetData(url)]\n\nThis is actually slightly different from your original code. In your version, GetData returns a 3-tuple, which comes back in items. You then iterate over this triplet, and append ';'.join(item) for each item in it. This means that you get 3 entries added to data for every triplet read from GetData, each one ';'.join'ed. If the items are just strings, then ';'.join will give you back a string with every other character a ';' - that is ';'.join(\"ABC\") will give back \"A;B;C\". 
I think what you actually wanted was to have each triplet saved back to the data list as the 3 values of the triplet, separated by semicolons. That is what my version generates.\nThis may also help somewhat with your original memory problem, as you are no longer creating as many Python values. Remember that a variable in Python has much more overhead than one in a language like C. Since each value is itself an object, and adding the overhead of each name reference to that object, you can easily expand the theoretical storage requirement several-fold. In your case, reading 15Mb X 15 = 225Mb + the overhead of each item of each triple being stored as a string entry in your data list could quickly grow to your 2Gb observed size. At minimum, my version of your data list will have only 1/3 the entries in it, plus the separate item references are skipped, plus the iteration is done in compiled code.\n", "There are 2 obvious places where you keep large data objects in memory (data variable in GetData() and data in MyThread.run() - these two will take about 500Mb) and probably there are other places in the skipped code. Both are easy to make memory efficient. Use response.read(4) instead of reading whole response at once and do it the same way in code behind UNPACK FIXED LENGTH OF BINARY DATA HERE. Change data.append(...) in MyThread.run() to \nif not first:\n f.write(os.linesep)\nf.write(';'.join(item))\n\nThese changes will save you a lot of memory.\n", "Make sure you delete the threads after they are stopped. (using del)\n" ]
[ 11, 9, 6, 5, 2, 2, 2, 1 ]
[]
[]
[ "memory", "memory_management", "python" ]
stackoverflow_0001659659_memory_memory_management_python.txt
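A sketch (Python 2; the URL and 12-byte record format are placeholders) pulling the answers above together: read the response in fixed-size chunks instead of all at once, use xrange, and write each record as it is produced rather than accumulating a list. Note the [0] on unpack, since struct.unpack always returns a tuple.

import struct
import urllib2

def get_records(url):
    response = urllib2.urlopen(url)
    count = struct.unpack('!I', response.read(4))[0]
    for _ in xrange(count):
        chunk = response.read(12)          # assumed fixed record length
        yield struct.unpack('!III', chunk)
    response.close()

f = open('out.txt', 'w')
for record in get_records('http://example.com/data'):
    # one record in memory at a time, written immediately
    f.write(';'.join(str(field) for field in record) + '\n')
f.close()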
Q: How can I pass a function's name to a function, and then call it? How can I pass a function's name to a function and then call it? Is it possible to do this without using getattribute? How can I pass a class name to a function and then instantiate the class? I know I could just pass the instance of the class directly to the function but it is important that the class gets instantiated after calling the function. A: def outer(f): # any name: function, class, any callable return f() # class will be instantiated within the scope of the function A: namespace = globals() result = namespace[func_name]() instance = namespace[class_name](*some_args) You can use your own dictionary (namespace) instead of globals. It is unclear why you need artificial constraints such as not passing function/class objects directly, not using getattr(). A: If you have a limited number of options you're planning to use, you could set up a dictionary with string keys and values of the functions/classes. A: Why not use the function and the class directly? class A(object): pass def f(): pass def g(func, cls): func() x = cls() g(f, A)
How can I pass a function's name to a function, and then call it?
How can I pass a function's name to a function and then call it? Is it possible to do this without using getattribute? How can I pass a class name to a function and then instantiate the class? I know I could just pass the instance of the class directly to the function but it is important that the class gets instantiated after calling the function.
[ "def outer(f): # any name: function, class, any callable\n return f() # class will be instantiated within the scope of the function\n\n", "namespace = globals()\nresult = namespace[func_name]()\ninstance = namespace[class_name](*some_args)\n\nYou can use your own dictionary (namespace) instead of globals.\nIt is unclear why do you need artificial constrains such as not passing directly function/class objects, not using getattr().\n", "If you have a limited number of options your planning to use, you could set up a dictionary with string keys and values of the functions/classes.\n", "Why not use the function and the class directly?\nclass A(object):\n pass\ndef f():\n pass\ndef g(func, cls):\n func()\n x = cls()\ng(f, A)\n\n" ]
[ 5, 2, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001660351_python.txt
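A sketch of the dictionary-dispatch suggestion above (all names illustrative): strings map to callables, and the class is only instantiated inside the receiving function, which is what the question asked for.

class Greeter(object):
    def __call__(self):
        return 'hello'

def farewell():
    return 'goodbye'

registry = {'greeter': Greeter, 'farewell': farewell}

def run_by_name(name):
    factory = registry[name]   # look the callable up by its string name
    return factory()           # instantiates Greeter, or calls farewell

print run_by_name('farewell')  # goodbye
obj = run_by_name('greeter')   # a Greeter instance, created only now
print obj()                    # hello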
Q: module to abstract limitations of GQL I am after a Python module for Google App Engine that abstracts away limitations of the GQL. Specifically I want to store big files (> 1MB) and retrieve all records for a model (> 1000). I have my own code that handles this at present but would prefer to build on existing work, if available. Thanks A: I'm not aware of any libraries that do that. You may want to reconsider what you're doing, at least in terms of retrieving more than 1000 results - those operations are not available because they're expensive, and needing to evade them is usually (though not always) a sign that you need to rearchitect your app to do less work at read time.
module to abstract limitations of GQL
I am after a Python module for Google App Engine that abstracts away limitations of the GQL. Specifically I want to store big files (> 1MB) and retrieve all records for a model (> 1000). I have my own code that handles this at present but would prefer to build on existing work, if available. Thanks
[ "I'm not aware of any libraries that do that. You may want to reconsider what you're doing, at least in terms of retrieving more than 1000 results - those operations are not available because they're expensive, and needing to evade them is usually (though not always) a sign that you need to rearchitect your app to do less work at read time.\n" ]
[ 1 ]
[]
[]
[ "google_app_engine", "gql", "python" ]
stackoverflow_0001658829_google_app_engine_gql_python.txt
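One common workaround for the 1MB entity limit mentioned above, shown as a hedged sketch: split a large blob across multiple datastore entities and reassemble it on read. The model and property names are made up, and this assumes the old google.appengine.ext.db API of that era; retrieving more than 1000 records would additionally need key-range or cursor tricks not shown here.

from google.appengine.ext import db

CHUNK = 900 * 1024  # stay safely under the 1MB entity limit

class FileChunk(db.Model):
    file_id = db.StringProperty()
    index = db.IntegerProperty()
    data = db.BlobProperty()

def save_file(file_id, payload):
    # write the payload as an ordered series of sub-1MB entities
    for i in xrange(0, len(payload), CHUNK):
        FileChunk(file_id=file_id, index=i // CHUNK,
                  data=db.Blob(payload[i:i + CHUNK])).put()

def load_file(file_id):
    chunks = FileChunk.all().filter('file_id =', file_id).order('index')
    return ''.join(c.data for c in chunks)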
Q: Designing the storage for a very large game world I'm starting up game programming again. 10 years ago I was making games in qbasic and I haven't done any game programming since, so I am quite rusty. I have been programming all the time though, I am web developer/DBA/admin now. I have several questions, but I'm going to limit it to one per post. The game I am working on is going to be a large, very large world. It is going to be somewhat like URW, but an even larger world and more like an 'RPG'. What I have been trying to decide, is what is the best way to lay out the map, save it, and access it. I thought up the idea of using sqlite to store the data. I could then even use the sqlite db as the save file for the game, nice and easy. Anyone have any tips about how I should go about this or ideas for other storage methods? Here are the requirements for my game: I need full random access to any spot in the game world (the NPC's, monsters, animals will all be active all the time). I'm using Stackless Python 3.1, options are quite limited unless I do a lot of work. Needs to be able to handle a very large world. Concurrency support would be a plus, but I don't think I will need it. A: Don't mess with relational databases unless you're forced to use them by external factors. Look at Python's pickle, shelve. Shelve is fast and scales well. It eliminates messy conversion between Python and non-Python representation. Edit. More important advice. Do not get bogged down in technology choices. Get the locations, items, characters, rules, etc. to work. In Python. As simply and correctly as possible. Do not burn a single brain calorie on anything but core model, correctness, and a basic feature set to prove things work. Once you have a model that actually works, and you can exercise with some sophisticated unit tests, then you can make technology choices. Once you have a model, you can meaningfully scale it up to millions of locations and see what kind of storage is required. The model can't change -- it's the essence of the application. Only the access layer and persistence layer can change to adjust performance. A: It sounds like what you're asking for is a type of spatial index. For a very large 2d game I'd recommend using a quadtree. Quadtree works well when you have a large area and activity tends to happen in localized regions of the area, which is the case for most RPG-type games. It will keep your storage requirements low and hopefully speed up collision detection as well. As for saving the game, things like player and monster stats can go into a database, if you're worried about those changing often. For the actual level layout I'd recommend using a binary file format specific to your game. There aren't many database-type queries you usually need to perform on the level layout and you can make great optimizations using your own format. I wouldn't know how to begin storing a quadtree-like format in a database (although I'm sure it's possible). A: I am using a non-relational database to store large amounts of data. If you can work on 64-bit hardware, MongoDB with its Python driver is really very good. I do not know if this is ok with Stackless, but it is a possibility.
Designing the storage for a very large game world
I'm starting up game programming again. 10 years ago I was making games in qbasic and I haven't done any game programming since, so I am quite rusty. I have been programming all the time though, I am web developer/DBA/admin now. I have several questions, but I'm going to limit it to one per post. The game I am working on is going to be a large, very large world. It is going to be somewhat like URW, but an even larger world and more like an 'RPG'. What I have been trying to decide, is what is the best way to lay out the map, save it, and access it. I thought up the idea of using sqlite to store the data. I could then even use the sqlite db as the save file for the game, nice and easy. Anyone have any tips about how I should go about this or ideas for other storage methods? Here are the requirements for my game: I need full random access to any spot in the game world (the NPC's, monsters, animals will all be active all the time). I'm using Stackless Python 3.1, options are quite limited unless I do a lot of work. Needs to be able to handle a very large world. Concurrency support would be a plus, but I don't think I will need it.
[ "Don't mess with relational databases unless you're forced to use them by external factors.\nLook at Python's pickle, shelve.\nShelve is fast and scales well. It eliminates messy conversion between Python and non-Python representation.\n\nEdit.\nMore important advice. Do not get bogged down in technology choices. Get the locations, items, characters, rules, etc. to work. In Python. As simply and correctly as possible.\nDo not burn a single brain calorie on anything but core model, correctness, and a basic feature set to prove things work.\nOnce you have a model that actually works, and you can exercise with some sophisticated unit tests, then you can make technology choices.\nOnce you have a model, you can meaningfully scale it up to millions of locations and see what kind of storage is required. The model can't change -- it's the essence of the application. Only the access layer and persistence layer can change to adjust performance. \n", "It sounds like what you're asking for is a type of spacial index. For a very large 2d game I'd recommend using a quadtree. Quadtree works well when you have a large area and activity tends to happen in localized regions of the area, which is the case for most RPG-type games. It will keep your storage requirements low and hopefully speed up collision detection as well.\nAs for saving the game, things like player and monster stats can go into a database, if you're worried about those changing often. For the actual level layout I'd recommend using a binary file format specific to your game. There's not many database-type queries you usually need to perform on the level layout and you can make great optimizations using your own format. I wouldn't know how to begin storing a quadtree-like format in a database (although I'm sure its possible).\n", "I am using non-relationnal database to store big amounts of datas. If you can work on a 64 bits hardware, MongoDB with its Python driver is really very good. I do not know if this is ok with Stackless, but it is a possiblity.\n" ]
[ 11, 3, 1 ]
[]
[]
[ "python", "python_3.x", "python_stackless", "sqlite" ]
stackoverflow_0001650627_python_python_3.x_python_stackless_sqlite.txt
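A sketch of the shelve suggestion above: keyed access to pickled Python objects gives random access to individual world locations without loading the whole world. The Location class, the coordinate keys, and the filename are all illustrative.

import shelve

class Location(object):
    def __init__(self, terrain, occupants):
        self.terrain = terrain
        self.occupants = occupants

world = shelve.open('world.db')
world['10,42'] = Location('forest', ['wolf'])  # key by coordinates
world.sync()

spot = world['10,42']   # random access to a single location
print spot.terrain       # forest
world.close()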
Q: TypeError: cannot concatenate 'str' and 'instance' objects (python urllib) Writing a python program, and I came up with this error while using the urllib.urlopen function. Traceback (most recent call last): File "ChurchScraper.py", line 58, in <module> html = GetAllChurchPages() File "ChurchScraper.py", line 48, in GetAllChurchPages CPs = CPs + urllib.urlopen(url) TypeError: cannot concatenate 'str' and 'instance' objects url = 'http://website.com/index.php?cID=' + str(cID) CPs = CPs + urllib.urlopen(url) A: urlopen(url) returns a file-like object. To obtain the string contents, try CPs = CPs + urllib.urlopen(url).read() A: urllib.urlopen doesn't return a string, it returns an object doc If all went well, a file-like object is returned. A: The problem is in this line: CPs = CPs + urllib.urlopen(url) I assume CPs is a string; however, urllib.urlopen(url) returns a file-like object. If you want to join the contents of the file at url with CPs then you need to do something like this: CPs = CPs + urllib.urlopen(url).read(). A: What is CPs? It looks like it is a string. urlopen will return an instance of a file-like object, not a string. See - http://docs.python.org/library/urllib.html. The error is not thrown from the urlopen, but because you are trying to concatenate a string with an object instance.
TypeError: cannot concatenate 'str' and 'instance' objects (python urllib)
Writing a python program, and I came up with this error while using the urllib.urlopen function. Traceback (most recent call last): File "ChurchScraper.py", line 58, in <module> html = GetAllChurchPages() File "ChurchScraper.py", line 48, in GetAllChurchPages CPs = CPs + urllib.urlopen(url) TypeError: cannot concatenate 'str' and 'instance' objects url = 'http://website.com/index.php?cID=' + str(cID) CPs = CPs + urllib.urlopen(url)
[ "urlopen(url) returns a file-like object. To obtain the string contents, try\nCPs = CPs + urllib.urlopen(url).read()\n\n", "urllib.urllopen doesn't return a string, it returns an object\ndoc\nIf all went well, a file-like object is returned.\n\n", "The problem is in this line: CPs = CPs + urllib.urlopen(url) I assume CPs is a string however urllib.urlopen(url) returns a file like object.\nIf you want to join the contents of the file at url with CPs then you need to do something like this: CPs = CPs + urllib.urlopen(url).read().\n", "What is CPs? It looks like it is a string. urlopen will return an instance of a file-like object, not a string. See - http://docs.python.org/library/urllib.html.\nThe error is not thrown from the urlopen, but because you are trying to concatenate a string with an object instance.\n" ]
[ 6, 2, 1, 0 ]
[]
[]
[ "python", "urllib" ]
stackoverflow_0001660954_python_urllib.txt
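The corrected loop in context, per the answers above (Python 2; the surrounding names and URL are kept from the question and are hypothetical): calling .read() on the file-like object yields the str that can be concatenated.

import urllib

CPs = ''
for cID in range(1, 4):
    url = 'http://website.com/index.php?cID=' + str(cID)
    CPs = CPs + urllib.urlopen(url).read()  # .read() returns a str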
Q: least square solution to camera matrix [numpy] I would like to use numpy's least square algorithm to solve for a camera matrix from 6 known 3D -> 2D point correspondences. I have been using this website as a reference: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWENS/LECT9/node4.html Currently my camera matrix seems to have very small values: [[ -1.01534118e-11 3.87508914e-11 -2.75515236e-11 5.57599976e+02] [ -1.84008233e-11 2.78083388e-11 -9.67788509e-11 9.77599976e+02] [ -2.59237076e-14 -8.57647287e-15 -9.09272657e-14 1.00000000e+00]] I would like to be able to constrain the numpy solver to prevent it from solving for the trivial solution where the Camera matrix is nearly zero. Does anyone know how to constrain numpy.linalg.lstsq()? A: I need to get scipy installed properly Just a note for installing scipy, ubuntu distributions since 8.04 have had a broken scipy build. That has been taken care of in the latest 9.10 beta build. You could build scipy from scratch, but it isn't in general an easy thing to do. Just a heads up because it took some effort for us here to get that figured out. Maybe it'll save you some frustration =) A: I suspect you may need to use the fmin_* routines in scipy.optimize. The optimization tutorial covers basic use and scipy.optimize.fmin_slsqp can include constraints. A: Would least squares staying near a point x0 be of any use, i.e. is there a camera matrix x0 you want to be near to ? "Keep away from some x0" is non-convex, nasty; keep near x0 or x1 ..., i.e. minimize |Ax-b|^2 + w^2 (|x-x0|^2 + |x-x1|^2 + ...) is easy.
least square solution to camera matrix [numpy]
I would like to use numpy's least square algorithm to solve for a camera matrix from 6 known 3D -> 2D point correspondences. I have been using this website as a reference: http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/OWENS/LECT9/node4.html Currently my camera matrix seems to have very small values: [[ -1.01534118e-11 3.87508914e-11 -2.75515236e-11 5.57599976e+02] [ -1.84008233e-11 2.78083388e-11 -9.67788509e-11 9.77599976e+02] [ -2.59237076e-14 -8.57647287e-15 -9.09272657e-14 1.00000000e+00]] I would like to be able to constrain the numpy solver to prevent it from solving for the trivial solution where the Camera matrix is nearly zero. Does anyone know how to constrain numpy.linalg.lstsq()?
[ "\nI need to get scipy installed properly\n\nJust a note for installing scipy, ubuntu distributions since 8.04 have had a broken scipy build. That has been taken care of in the latest 9.10 beta build. You could build scipy from scratch, but it isn't in general an easy thing to do. Just a heads up because it took some effort for us here to get that figured out. Maybe it'll save you some frustration =)\n", "I suspect you may need to use the fmin_* routines in scipy.optimize. The optimization tutorial covers basic use and scipy.optimize.fmin_slsqp can include constraints. \n", "Would least squares staying near a point x0\nbe of any use, i.e. is there a camera matrix x0 you want to be near to ?\n\"Keep away from some x0\" is non-convex, nasty; keep near x0 or x1 ..., i.e. minimize\n|Ax-b|^2 + w^2 (|x-x0|^2 + |x-x1|^2 + ...) is easy.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "computer_vision", "numpy", "python" ]
stackoverflow_0001634555_computer_vision_numpy_python.txt
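A common closed-form alternative to the constrained solvers suggested above, sketched here with a placeholder matrix: for a homogeneous system like the DLT camera equations, minimize |Ax| subject to |x| = 1 via the SVD, which rules out the near-zero trivial solution the question describes.

import numpy as np

def solve_homogeneous(A):
    # min |Ax| with |x| = 1: the right singular vector belonging to
    # the smallest singular value (excludes the trivial x = 0)
    _, _, vt = np.linalg.svd(A)
    return vt[-1]

# A would be the 12 x 12 DLT matrix built from the 6 correspondences
# (2 equations per point); random values stand in for illustration.
A = np.random.rand(12, 12)
p = solve_homogeneous(A)
P = p.reshape(3, 4)   # the 3x4 camera matrix, defined up to scale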
Q: Prototyping with Python code before compiling I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually. IIRC, one of Python's original remits was as a prototyping language, however Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran. What should I know about designing functions/classes which I envisage will have to interface into the compiled language? And how much of these potential problems are dealt with by libraries such as cTypes, bgen, SWIG, Boost.Python, Cython or Python SIP? For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Gaussian, Lorentzian etc.) as Python functions which can then be passed to and interpreted by the compiled code fitting library. Passing and returning arrays is also essential. A: Finally a question that I can really put a value answer to :). I have investigated f2py, boost.python, swig, cython and pyrex for my work (PhD in optical measurement techniques). I used swig extensively, boost.python some and pyrex and cython a lot. I also used ctypes. This is my breakdown: Disclaimer: This is my personal experience. I am not involved with any of these projects. swig: does not play well with c++. It should, but name mangling problems in the linking step were a major headache for me on linux & Mac OS X. If you have C code and want it interfaced to python, it is a good solution. I wrapped the GTS for my needs and needed to write basically a C shared library which I could connect to. I would not recommend it. Ctypes: I wrote a libdc1394 (IEEE Camera library) wrapper using ctypes and it was a very straightforward experience. You can find the code on https://launchpad.net/pydc1394. It is a lot of work to convert headers to python code, but then everything works reliably. This is a good way if you want to interface an external library. Ctypes is also in the stdlib of python, so everyone can use your code right away. This is also a good way to play around with a new lib in python quickly. I can recommend it to interface to external libs. Boost.Python: Very enjoyable. If you already have C++ code of your own that you want to use in python, go for this. It is very easy to translate c++ class structures into python class structures this way. I recommend it if you have c++ code that you need in python. Pyrex/Cython: Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with cython that i used to do with SWIG or Ctypes. It is also the best way if you have python code that runs too slow. The process is absolutely fantastic: you convert your python modules into cython modules, build them and keep profiling and optimizing like it still was python (no change of tools needed). You can then apply as much (or as little) C code mixed with your python code. This is by far faster than having to rewrite whole parts of your application in C; you only rewrite the inner loop. Timings: ctypes has the highest call overhead (~700ns), followed by boost.python (322ns), then directly by swig (290ns). Cython has the lowest call overhead (124ns) and the best feedback on where it spends time (cProfile support!). 
The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead is. It is therefore easiest and most productive to get python code fast by profiling and using cython. Summary: For your problem, use Cython ;). I hope this rundown will be useful for some people. I'll gladly answer any remaining questions. Edit: I forgot to mention: for numerical purposes (that is, connection to NumPy) use Cython; they have support for it (because they basically develop cython for this purpose). So this should be another +1 for your decision. A: I haven't used SWIG or SIP, but I find writing Python wrappers with boost.python to be very powerful and relatively easy to use. I'm not clear on what your requirements are for passing types between C/C++ and python, but you can do that easily by either exposing a C++ type to python, or by using a generic boost::python::object argument to your C++ API. You can also register converters to automatically convert python types to C++ types and vice versa. If you plan to use boost.python, the tutorial is a good place to start. I have implemented something somewhat similar to what you need. I have a C++ function that accepts a python function and an image as arguments, and applies the python function to each pixel in the image. Image* unary(boost::python::object op, Image& im) { Image* out = new Image(im.width(), im.height(), im.channels()); for(unsigned int i=0; i<im.size(); i++) { (*out)[i] = extract<float>(op(im[i])); } return out; } In this case, Image is a C++ object exposed to python (an image with float pixels), and op is a python defined function (or really any python object with a __call__ attribute). You can then use this function as follows (assuming unary is located in the module called image that also contains Image and a load function): import image im = image.load('somefile.tiff') double_im = image.unary(lambda x: 2.0*x, im) As for using arrays with boost, I personally haven't done this, but I know the functionality to expose arrays to python using boost is available - this might be helpful. A: The best way to plan for an eventual transition to compiled code is to write the performance sensitive portions as a module of simple functions in a functional style (stateless and without side effects), which accept and return basic data types. This will provide a one-to-one mapping from your Python prototype code to the eventual compiled code, and will let you use ctypes easily and avoid a whole bunch of headaches. For peak fitting, you'll almost certainly need to use arrays, which will complicate things a little, but is still very doable with ctypes. If you really want to use more complicated data structures, or modify the passed arguments, SWIG or Python's standard C-extension interface will let you do what you want, but with some amount of hassle. For what you're doing, you may also want to check out NumPy, which might do some of the work you would want to push to C, as well as offering some additional help in moving data back and forth between Python and C. A: f2py (part of numpy) is a simpler alternative to SWIG and boost.python for wrapping C/Fortran number-crunching code. A: In my experience, there are two easy ways to call into C code from Python code. There are other approaches, all of which are more annoying and/or verbose. 
The first and easiest is to compile a bunch of C code as a separate shared library and then call functions in that library using ctypes. Unfortunately, passing anything other than basic data types is non-trivial. The second easiest way is to write a Python module in C and then call functions in that module. You can pass anything you want to these C functions without having to jump through any hoops. And it's easy to call Python functions or methods from these C functions, as described here: https://docs.python.org/extending/extending.html#calling-python-functions-from-c I don't have enough experience with SWIG to offer intelligent commentary. And while it is possible to do things like pass custom Python objects to C functions through ctypes, or to define new Python classes in C, these things are annoying and verbose and I recommend taking one of the two approaches described above. A: Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran. In C you cannot pass a function as an argument to a function but you can pass a function pointer which is just as good as a function. I don't know how much that would help when you are trying to integrate C and Python code but I just wanted to clear up one misconception. A: In addition to the tools above, I can recommend using Pyrex (for creating Python extension modules) or Psyco (as JIT compiler for Python).
Prototyping with Python code before compiling
I have been mulling over writing a peak-fitting library for a while. I know Python fairly well and plan on implementing everything in Python to begin with but envisage that I may have to re-implement some core routines in a compiled language eventually. IIRC, one of Python's original remits was as a prototyping language, however Python is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran. What should I know about designing functions/classes which I envisage will have to interface into the compiled language? And how much of these potential problems are dealt with by libraries such as cTypes, bgen, SWIG, Boost.Python, Cython or Python SIP? For this particular use case (a fitting library), I imagine allowing users to define mathematical functions (Gaussian, Lorentzian etc.) as Python functions which can then be passed to and interpreted by the compiled code fitting library. Passing and returning arrays is also essential.
[ "Finally a question that I can really put a value answer to :). \nI have investigated f2py, boost.python, swig, cython and pyrex for my work (PhD in optical measurement techniques). I used swig extensively, boost.python some and pyrex and cython a lot. I also used ctypes. This is my breakdown:\nDisclaimer: This is my personal experience. I am not involved with any of these projects. \nswig:\ndoes not play well with c++. It should, but name mangling problems in the linking step was a major headache for me on linux & Mac OS X. If you have C code and want it interfaced to python, it is a good solution. I wrapped the GTS for my needs and needed to write basically a C shared library which I could connect to. I would not recommend it.\nCtypes:\nI wrote a libdc1394 (IEEE Camera library) wrapper using ctypes and it was a very straigtforward experience. You can find the code on https://launchpad.net/pydc1394. It is a lot of work to convert headers to python code, but then everything works reliably. This is a good way if you want to interface an external library. Ctypes is also in the stdlib of python, so everyone can use your code right away. This is also a good way to play around with a new lib in python quickly. I can recommend it to interface to external libs. \nBoost.Python: Very enjoyable. If you already have C++ code of your own that you want to use in python, go for this. It is very easy to translate c++ class structures into python class structures this way. I recommend it if you have c++ code that you need in python. \nPyrex/Cython: Use Cython, not Pyrex. Period. Cython is more advanced and more enjoyable to use. Nowadays, I do everything with cython that i used to do with SWIG or Ctypes. It is also the best way if you have python code that runs too slow. The process is absolutely fantastic: you convert your python modules into cython modules, build them and keep profiling and optimizing like it still was python (no change of tools needed). You can then apply as much (or as little) C code mixed with your python code. This is by far faster then having to rewrite whole parts of your application in C; you only rewrite the inner loop. \nTimings: ctypes has the highest call overhead (~700ns), followed by boost.python (322ns), then directly by swig (290ns). Cython has the lowest call overhead (124ns) and the best feedback where it spends time on (cProfile support!). The numbers are from my box calling a trivial function that returns an integer from an interactive shell; module import overhead is therefore not timed, only function call overhead is. It is therefore easiest and most productive to get python code fast by profiling and using cython.\nSummary: For your problem, use Cython ;). I hope this rundown will be useful for some people. I'll gladly answer any remaining question.\n\nEdit: I forget to mention: for numerical purposes (that is, connection to NumPy) use Cython; they have support for it (because they basically develop cython for this purpose). So this should be another +1 for your decision. \n", "I haven't used SWIG or SIP, but I find writing Python wrappers with boost.python to be very powerful and relatively easy to use.\nI'm not clear on what your requirements are for passing types between C/C++ and python, but you can do that easily by either exposing a C++ type to python, or by using a generic boost::python::object argument to your C++ API. 
You can also register converters to automatically convert python types to C++ types and vice versa.\nIf you plan to use boost.python, the tutorial is a good place to start.\nI have implemented something somewhat similar to what you need. I have a C++ function that \naccepts a python function and an image as arguments, and applies the python function to each pixel in the image.\nImage* unary(boost::python::object op, Image& im)\n{\n Image* out = new Image(im.width(), im.height(), im.channels());\n for(unsigned int i=0; i<im.size(); i++)\n {\n (*out)[i] = extract<float>(op(im[i]));\n }\n return out;\n}\n\nIn this case, Image is a C++ object exposed to python (an image with float pixels), and op is a python defined function (or really any python object with a __call__ attribute). You can then use this function as follows (assuming unary is located in the module called image that also contains Image and a load function):\nimport image\nim = image.load('somefile.tiff')\ndouble_im = image.unary(lambda x: 2.0*x, im)\n\nAs for using arrays with boost, I personally haven't done this, but I know the functionality to expose arrays to python using boost is available - this might be helpful.\n", "The best way to plan for an eventual transition to compiled code is to write the performance sensitive portions as a module of simple functions in a functional style (stateless and without side effects), which accept and return basic data types.\nThis will provide a one-to-one mapping from your Python prototype code to the eventual compiled code, and will let you use ctypes easily and avoid a whole bunch of headaches.\nFor peak fitting, you'll almost certainly need to use arrays, which will complicate things a little, but is still very doable with ctypes.\nIf you really want to use more complicated data structures, or modify the passed arguments, SWIG or Python's standard C-extension interface will let you do what you want, but with some amount of hassle.\nFor what you're doing, you may also want to check out NumPy, which might do some of the work you would want to push to C, as well as offering some additional help in moving data back and forth between Python and C.\n", "f2py (part of numpy) is a simpler alternative to SWIG and boost.python for wrapping C/Fortran number-crunching code.\n", "In my experience, there are two easy ways to call into C code from Python code. There are other approaches, all of which are more annoying and/or verbose.\nThe first and easiest is to compile a bunch of C code as a separate shared library and then call functions in that library using ctypes. Unfortunately, passing anything other than basic data types is non-trivial.\nThe second easiest way is to write a Python module in C and then call functions in that module. You can pass anything you want to these C functions without having to jump through any hoops. And it's easy to call Python functions or methods from these C functions, as described here: https://docs.python.org/extending/extending.html#calling-python-functions-from-c\nI don't have enough experience with SWIG to offer intelligent commentary. 
And while it is possible to do things like pass custom Python objects to C functions through ctypes, or to define new Python classes in C, these things are annoying and verbose and I recommend taking one of the two approaches described above.\n", "\nPython is pretty liberal in allowing functions, functors, objects to be passed to functions and methods, whereas I suspect the same is not true of say C or Fortran.\n\nIn C you cannot pass a function as an argument to a function but you can pass a function pointer which is just as good as a function.\nI don't know how much that would help when you are trying to integrate C and Python code but I just wanted to clear up one misconception.\n", "In addition to the tools above, I can recommend using Pyrex \n(for creating Python extension modules) or Psyco (as JIT compiler for Python).\n" ]
[ 36, 10, 6, 4, 1, 0, 0 ]
[]
[]
[ "ctypes", "prototyping", "python", "python_sip", "swig" ]
stackoverflow_0000016067_ctypes_prototyping_python_python_sip_swig.txt
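A minimal sketch of the ctypes route described above: load a shared library, declare the argument and return types, and call it like a Python function. It uses the standard C math library so no compilation step is needed; find_library is assumed to resolve it, which is typical on Unix systems.

import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library('m'))
libm.cos.argtypes = [ctypes.c_double]   # declare the C signature
libm.cos.restype = ctypes.c_double      # otherwise int is assumed

print libm.cos(0.0)   # 1.0 -- a C function called from Python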
Q: Why does my Python daemon hog all my CPU while sleeping? I'm using this recipe: http://code.activestate.com/recipes/278731/ on an Ubuntu server. I make a daemon instance like this: class MyDaemon(Daemon): def run(self): while True: try: do_my_data_processing() except MySQLdb.OperationalError: # Sleep an extra 30 seconds if database is away. time.sleep(30) time.sleep(30) The problem is that even while sleeping the daemon takes up almost all available CPU power. What am I doing wrong? A: The posted code looks correct. Your error must be somewhere else. Put a print statement into the loop to make sure that it does sleep. A: Turns out the daemon wasn't sleeping. It was looping without sleeping 30 seconds between every turn. Thanks Aaron. I fixed it by changing my code to this: class MyDaemon(Daemon): def run(self): while True: try: do_my_data_processing() time.sleep(30) except MySQLdb.OperationalError: # Sleep an extra 30 seconds if database is away. time.sleep(30)
Why does my Python daemon hog all my CPU while sleeping?
I'm using this recipe: http://code.activestate.com/recipes/278731/ on an Ubuntu server. I make a daemon instance like this: class MyDaemon(Daemon): def run(self): while True: try: do_my_data_processing() except MySQLdb.OperationalError: # Sleep an extra 30 seconds if database is away. time.sleep(30) time.sleep(30) The problem is that even while sleeping the daemon takes up almost all available CPU power. What am I doing wrong?
[ "The posted code looks correct. Your error must be somewhere else. Put a print statement into the loop to make sure that it does sleep.\n", "Turns out the daemon wasn't sleeping. It was looping without sleeping 30 seconds between every turn. Thanks Aaron.\nI fixed it by changing my code to this:\nclass MyDaemon(Daemon):\n def run(self):\n while True:\n try: \n do_my_data_processing()\n time.sleep(30)\n except MySQLdb.OperationalError:\n # Sleep an extra 30 seconds if database is away.\n time.sleep(30)\n\n" ]
[ 3, 0 ]
[]
[]
[ "cpu", "daemon", "python" ]
stackoverflow_0001661210_cpu_daemon_python.txt
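A small, untested variation on the accepted fix: putting the unconditional sleep in a finally clause guarantees every iteration waits, while the except clause keeps the extra back-off when the database is away (Daemon, do_my_data_processing and MySQLdb are the names from the question):

class MyDaemon(Daemon):
    def run(self):
        while True:
            try:
                do_my_data_processing()
            except MySQLdb.OperationalError:
                time.sleep(30)  # extra wait while the database is away
            finally:
                time.sleep(30)  # always runs, so the loop can never spin hot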
Q: Data Structure for storing a sorting field to efficiently allow modifications I'm using Django and PostgreSQL, but I'm not absolutely tied to the Django ORM if there's a better way to do this with raw SQL or database specific operations. I've got a model that needs sequential ordering. Lookup operations will generally retrieve the entire list in order. The most common operation on this data is to move a row to the bottom of a list, with a subset of the intervening items bubbling up to replace the previous item like this: (operation on A, with subset B, C, E) A -> B B -> C C -> E D -> D E -> A Notice how D does not move. In general, the subset of items will not be more than about 50 items, but the base list may grow to tens of thousands of entries. The most obvious way of implementing this is with a simple integer order field. This seems suboptimal. It requires the compromise of making the position ordering column non-unique, where non-uniqueness is only required for the duration of a modification operation. To see this, imagine the minimal operation using A with subset B: oldpos = B.pos B.pos = A.pos A.pos = oldpos Even though you've stored the position, at the second line you've violated the uniqueness constraint. Additionally, this method makes atomicity problematic - your read operation has to happen before the write, during which time your records could change. Django's default transaction handling documentation doesn't address this, though I know it should be possible in the SQL using the "REPEATABLE READ" level of transaction locking. I'm looking for alternate data structures that suit this use pattern more closely. I've looked at this question for ideas. One proposal there is the Dewey decimal style solution, which makes insert operations occur numerically between existing values, so inserting A between B and C results in: A=1 -> B=2 B=2 -> A=2.5 C=3 -> C=3 This solves the column uniqueness problem, but introduces the issue that the column must be a float of a specified number of decimals. Either I over-estimate, and store way more data than I need to, or the system becomes limited by whatever arbitrary decimal length I impose. Furthermore, I don't expect use to be even over the database - some keys are going to be moved far more often than others, making this solution hit the limit sooner. I could solve this problem by periodically re-numbering the database, but it seems that a good data structure should avoid needing this. Another structure I've considered is the linked list (and variants). This has the advantage of making modification straightforward, but I'm not certain of its properties with respect to SQL - ordering such a list in the SQL query seems like it would be painful, and extracting a non-sequential subset of the list has terrible retrieval properties. Beyond this, there are B-Trees, various Binary Trees, and so on. What do you recommend for this data structure? Is there a standard data structure for this solution in SQL? Is the initial idea of going with sequential integers really going to have scaling issues, or am I seeing problems where there are none? A: Preferred solutions: A linked list would be the usual way to achieve this. A query to return the items in order is trivial in Oracle, but I'm not sure how you would do it in PostgreSQL. Another option would be to implement this using the ltree module for postgresql. Less graceful (and write-heavy) solution: Start transaction. "select for update" within scope for row level locks. 
Move the target record to position 0, update the target's future succeeding records by +1 where their position is higher than the target's original position (or vice versa) and then update the target to the new position - a single additional write over that needed without a unique constraint. Commit :D Simple (yet still write-heavy) solution if you can wait for Postgresql 8.5 (Alpha is available) :) Wrap it in a transaction, select for update in scope, and use a deferred constraint (postgresql 8.5 has support for deferred unique constraints like Oracle). A: A temp table and a transaction should maintain atomicity and the unique constraint on sort order. Restating the problem, you want to go from: A 10 to B 10 B 25 C 25 C 26 E 26 E 34 A 34 Where there can be any number of items in between each row. So, first you read in the records and create a list [['A',10],['B',25],['C',26],['E',34]]. Through some pythonic magic you shift the identifiers around and insert them into a temp table: create temporary table reorder ( id varchar(20), -- whatever sort_order number, primary key (id)); Now for the update: update table XYZ set sort_order = (select sort_order from reorder where xyz.id = reorder.id) where id in (select id from reorder) I'm only assuming pgsql can handle that query. If it can, it will be atomic. Optionally, create table REORDER as a permanent table and the transaction will ensure that attempts to reorder the same record twice will be serialized. EDIT: There are some transaction issues. You might need to implement both of my ideas. If two processes both want to update item B (for example) there can be issues. So, assume all order values are even: Begin Transaction Increment all the orders being used by 1. This puts row level write locks on all the rows you are going to update. Select the data you just updated, if any sort_order fields are even some other process has added a record that matches your criteria. You can either abort the transaction and restart or you can just drop the record and finish the operation using only the records that were updated in step 2. The "right" thing to do depends on what you need this code to accomplish. Fill your temporary reorder table as above using the proper even sort_orders. Update the main table as above. Drop the temporary table. Commit the transaction Step 2 ensures that if two lists overlap, only the first one will have access to the row in question until the transaction completes: update XYZ set sort_order = sort_order + 1 where -- whatever your select criteria are select * from XYZ where -- same select criteria order by sort_order Alternatively, you can add a control field to the table to get the same effect and then you don't need to play with the sort_order field. The benefit of using the sort_order field is indexing by a BIT field or a LOCK_BY_USERID field when the field is usually null tends to have poor performance since the index 99% of the time is meaningless. SQL engines don't like indexes that spend most of their time empty. A: It seems to me that your real problem is the need to lock a table for the duration of a transaction. I don't immediately see a good way to solve this problem in a single operation, hence the need for locking. So the question is whether you can do this in a "Django way" as opposed to using straight SQL. Searching "django lock table" turned up some interesting links, including this snippet, there are many others that implement similar behavior. 
A straight SQL linked-list style solution can be found in this stack overflow post, it appeared logical and succinct to me, but again it's two operations. I'm very curious to hear how this turns out and what your final solution is, be sure to keep us updated! A: You can solve the renumbering issue by doing the order column as an integer that is always an even number. When you are moving the data, you change the order field to the new sort value + 1 and then do a quick update to convert all the odd order fields to even: update table set sort_order = bitand(sort_order, '0xFFFFFFFE') where sort_order <> bitand(sort_order, '0xFFFFFFFE') Thus you can keep the uniqueness of sort_order as a constraint. EDIT: Okay, looking at the question again, I've started a new answer. A: Why not do a simple character field of some length like a max of 16 (or 255) initially. Start initially with labeling things aaa through zzz (that should be 17576 entries). (You could also add in 0-9, and the uppercase letters and symbols for an optimization.) As items are added, they can go to the end up to the maximum you allow for the additional 'end times' (zzza, zzzaa, zzzaaa, zzzaab, zzzaac, zzzaad, etc.) This should be reasonably simple to program, and it's very similar to the Dewey Decimal system. Yes, you will need to rebalance it occasionally, but that should be a simple operation. The simplest approach is two passes, pass 1 would be to set the new ordering tag to '0' (or any character earlier than the first character) followed by the new tag of the appropriate length, and step 2 would be to remove the '0' from the front. Obviously, you could do the same thing with floats, and rebalancing it regularly, this is just a variation on that. The one advantage is that most databases will allow you to set a ridiculously large maximum size for the character field, large enough to make it very, very, very unlikely that you would run out of digits to do the ordering, and also make it unlikely that you would ever have to modify the schema, while not wasting a lot of space.
Data Structure for storing a sorting field to efficiently allow modifications
I'm using Django and PostgreSQL, but I'm not absolutely tied to the Django ORM if there's a better way to do this with raw SQL or database specific operations. I've got a model that needs sequential ordering. Lookup operations will generally retrieve the entire list in order. The most common operation on this data is to move a row to the bottom of a list, with a subset of the intervening items bubbling up to replace the previous item like this: (operation on A, with subset B, C, E) A -> B B -> C C -> E D -> D E -> A Notice how D does not move. In general, the subset of items will not be more than about 50 items, but the base list may grow to tens of thousands of entries. The most obvious way of implementing this is with a simple integer order field. This seems suboptimal. It requires the compromise of making the position ordering column non-unique, where non-uniqueness is only required for the duration of a modification operation. To see this, imagine the minimal operation using A with subset B: oldpos = B.pos B.pos = A.pos A.pos = oldpos Even though you've stored the position, at the second line you've violated the uniqueness constraint. Additionally, this method makes atomicity problematic - your read operation has to happen before the write, during which time your records could change. Django's default transaction handling documentation doesn't address this, though I know it should be possible in the SQL using the "REPEATABLE READ" level of transaction locking. I'm looking for alternate data structures that suit this use pattern more closely. I've looked at this question for ideas. One proposal there is the Dewey decimal style solution, which makes insert operations occur numerically between existing values, so inserting A between B and C results in: A=1 -> B=2 B=2 -> A=2.5 C=3 -> C=3 This solves the column uniqueness problem, but introduces the issue that the column must be a float of a specified number of decimals. Either I over-estimate, and store way more data than I need to, or the system becomes limited by whatever arbitrary decimal length I impose. Furthermore, I don't expect use to be even over the database - some keys are going to be moved far more often than others, making this solution hit the limit sooner. I could solve this problem by periodically re-numbering the database, but it seems that a good data structure should avoid needing this. Another structure I've considered is the linked list (and variants). This has the advantage of making modification straightforward, but I'm not certain of its properties with respect to SQL - ordering such a list in the SQL query seems like it would be painful, and extracting a non-sequential subset of the list has terrible retrieval properties. Beyond this, there are B-Trees, various Binary Trees, and so on. What do you recommend for this data structure? Is there a standard data structure for this solution in SQL? Is the initial idea of going with sequential integers really going to have scaling issues, or am I seeing problems where there are none?
[ "Preferred solutions:\nA linked list would be the usual way to achieve this. A query to return the items in order is trivial in Oracle, but I'm not sure how you would do it in PostgreSQL.\nAnother option would be to implement this using the ltree module for postgresql.\nLess graceful (and write-heavy) solution:\nStart transaction. \"select for update\" within scope for row level locks. Move the target record to position 0, update the target's future succeeding records by +1 where their position is higher than the target's original position (or vice versa) and then update the target to the new position - a single additional write over that needed without a unique constraint. Commit :D\nSimple (yet still write-heavy) solution if you can wait for Postgresql 8.5 (Alpha is available) :)\nWrap it in a transaction, select for update in scope, and use a deferred constraint (postgresql 8.5 has support for deferred unique constraints like Oracle).\n", "A temp table and a transaction should maintain atomicity and the unique constraint on sort order. Restating the problem, you want to go from:\nA 10 to B 10\nB 25 C 25\nC 26 E 26\nE 34 A 34\n\nWhere there can be any number of items in between each row. So, first you read in the records and create a list [['A',10],['B',25],['C',26],['E',34]]. Through some pythonic magic you shift the identifiers around and insert them into a temp table:\ncreate temporary table reorder (\n id varchar(20), -- whatever\n sort_order number,\n primary key (id));\n\nNow for the update:\nupdate table XYZ\nset sort_order = (select sort_order from reorder where xyz.id = reorder.id)\nwhere id in (select id from reorder)\n\nI'm only assuming pgsql can handle that query. If it can, it will be atomic.\nOptionally, create table REORDER as a permanent table and the transaction will ensure that attempts to reorder the same record twice will be serialized.\n\nEDIT: There are some transaction issues. You might need to implement both of my ideas. If two processes both want to update item B (for example) there can be issues. So, assume all order values are even:\n\nBegin Transaction\nIncrement all the orders being used by 1. This puts row level write locks on all the rows you are going to update.\nSelect the data you just updated, if any sort_order fields are even some other process has added a record that matches your criteria. You can either abort the transaction and restart or you can just drop the record and finish the operation using only the records that were updated in step 2. The \"right\" thing to do depends on what you need this code to accomplish.\nFill your temporary reorder table as above using the proper even sort_orders.\nUpdate the main table as above.\nDrop the temporary table.\nCommit the transaction\n\nStep 2 ensures that if two lists overlap, only the first one will have access to the row \nin question until the transaction completes:\nupdate XYZ set sort_order = sort_order + 1\nwhere -- whatever your select criteria are\n\nselect * from XYZ\nwhere -- same select criteria\norder by sort_order\n\nAlternatively, you can add a control field to the table to get the same effect and then you don't need to play with the sort_order field. The benefit of using the sort_order field is indexing by a BIT field or a LOCK_BY_USERID field when the field is usually null tends to have poor performance since the index 99% of the time is meaningless. 
SQL engines don't like indexes that spend most of their time empty.\n", "It seems to me that your real problem is the need to lock a table for the duration of a transaction. I don't immediately see a good way to solve this problem in a single operation, hence the need for locking.\nSo the question is whether you can do this in a \"Django way\" as opposed to using straight SQL. Searching \"django lock table\" turned up some interesting links, including this snippet, there are many others that implement similar behavior.\nA straight SQL linked-list style solution can be found in this stack overflow post, it appeared logical and succinct to me, but again it's two operations.\nI'm very curious to hear how this turns out and what your final solution is, be sure to keep us updated!\n", "You can solve the renumbering issue by doing the order column as an integer that is always an even number. When you are moving the data, you change the order field to the new sort value + 1 and then do a quick update to convert all the odd order fields to even:\nupdate table set sort_order = bitand(sort_order, '0xFFFFFFFE')\nwhere sort_order <> bitand(sort_order, '0xFFFFFFFE')\n\nThus you can keep the uniqueness of sort_order as a constraint.\nEDIT: Okay, looking at the question again, I've started a new answer.\n", "Why not do a simple character field of some length like a max of 16 (or 255) initially.\nStart initially with labeling things aaa through zzz (that should be 17576 entries). (You could also add in 0-9, and the uppercase letters and symbols for an optimization.)\nAs items are added, they can go to the end up to the maximum you allow for the additional 'end times' (zzza, zzzaa, zzzaaa, zzzaab, zzzaac, zzzaad, etc.)\nThis should be reasonably simple to program, and it's very similar to the Dewey Decimal system.\nYes, you will need to rebalance it occasionally, but that should be a simple operation. The simplest approach is two passes, pass 1 would be to set the new ordering tag to '0' (or any character earlier than the first character) followed by the new tag of the appropriate length, and step 2 would be to remove the '0' from the front.\nObviously, you could do the same thing with floats, and rebalancing it regularly, this is just a variation on that. The one advantage is that most databases will allow you to set a ridiculously large maximum size for the character field, large enough to make it very, very, very unlikely that you would run out of digits to do the ordering, and also make it unlikely that you would ever have to modify the schema, while not wasting a lot of space.\n" ]
[ 6, 4, 1, 1, 1 ]
[]
[]
[ "data_structures", "database", "django", "python", "sorting" ]
stackoverflow_0001640664_data_structures_database_django_python_sorting.txt
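For what it's worth, the "pythonic magic" the temp-table answer above glosses over can be a simple rotation that keeps the sort_order slots fixed while the ids shift; a rough sketch:

def rotate(rows):
    # rows as read from the database: [['A', 10], ['B', 25], ['C', 26], ['E', 34]]
    ids = [r[0] for r in rows]
    orders = [r[1] for r in rows]
    new_ids = ids[1:] + ids[:1]  # B, C, E bubble up; A moves to the last slot
    return zip(new_ids, orders)  # [('B', 10), ('C', 25), ('E', 26), ('A', 34)]

Each (id, sort_order) pair would then be inserted into the temporary reorder table before the single UPDATE shown in that answer.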
Q: App Engine: What is the fastest way to check if my datastore query returns any result? I'd like to check if there is any result for my datastore query in the Google App Engine Datastore. This is my query: users = User.all() users.filter("hash =", current_user_hash) What is the fastest and most elegant way to check if my query returns any result? PS: I know a way to do so, but I'm very unsure if it is very efficient... A: If you also need to fetch the results, the most efficient way is to fetch the results with .fetch(), and then check if the list is nonempty. If you don't actually need the results, call .count(1). What you shouldn't do is call .count(1) if you also need the results - this'll require executing the query twice. A: users.count(1) is probably the fastest way to do this; users.fetch(1) would be the other way to accomplish this, but requires fetching an entire entity; depending on the size of that entity this could be fairly slow.
App Engine: What is the fastest way to check if my datastore query returns any result?
I'd like to check if there is any result for my datastore query in the Google App Engine Datastore. This is my query: users = User.all() users.filter("hash =", current_user_hash) What is the fastest and most elegant way to check if my query returns any result? PS: I know a way to do so, but I'm very unsure if it is very efficient...
[ "If you also need to fetch the results, the most efficient way is to fetch the results with .fetch(), and then check if the list is nonempty. If you don't actually need the results, call .count(1).\nWhat you shouldn't do is call .count(1) if you also need the results - this'll require executing the query twice.\n", "users.count(1) is probably the fastest way to do this; users.fetch(1) would be the other way to accomplish this, but requires fetching an entire entity; depending on the size of that entity this could be fairly slow.\n" ]
[ 4, 2 ]
[]
[]
[ "google_app_engine", "google_cloud_datastore", "python" ]
stackoverflow_0001661421_google_app_engine_google_cloud_datastore_python.txt
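A short sketch contrasting the two patterns from the answers above, using the names from the question:

q = User.all().filter("hash =", current_user_hash)

if q.count(1):       # existence check only; no entity data is fetched
    pass             # ...there is at least one match...

users = q.fetch(1)   # use this form when you also need the entity
if users:
    user = users[0]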
Q: matplotlib for R user? I regularly make figures (the exploratory data analysis type) in R. I also program in Python and was wondering if there are features or concepts in matplotlib that would be worth learning. For instance, I am quite happy with R - but its image() function will produce large files with pixelated output, whereas Matlab's equivalent figure (I also program regularly in Matlab) seems to be manageable in file size and also 'smoothed' - does matplotlib also provide such reductions...? But more generally, I wonder what other advantages matplotlib might confer. I don't mean this to be a trolling question. Thanks. A: This is a tough one to answer. I recently switched some of my graphing workload from R to matplotlib. In my humble opinion, I find matplotlib's graphs to be prettier (better default colors, they look crisper and more modern). I also think matplotlib renders PNGs a whole lot better. The real motivation for me though, was that I wanted to work with my underlying data in Python (and numpy) and not R. I think this is the big question to ask, in which language do you want to load, parse and manipulate your data? On the other hand, a bonus for R is that the plotting defaults just work (there's a function for everything). I find myself frequently digging through the matplotlib docs (they are thick) looking for some obscure way to adjust a border or increase a line thickness. R's plotting routines have some maturity behind them. A: I think that the largest advantage is that matplotlib is based on Python, which you say you already know. So, this is one language less to learn. Just spend the time mastering Python, and you'll benefit both directly for the plotting task at hand and indirectly for your other Python needs. Besides, IMHO Python is an overall richer language than R, with far more libraries that can help for various tasks. You have to access data for plotting, and data comes in many forms. In whatever form it comes I'm sure Python has an efficient library for it. And how about embedding those plots in more complete programs, say simple GUIs? matplotlib binds easily with Python's GUI libs (like PyQT) and you can make stuff that only your imagination limits.
matplotlib for R user?
I regularly make figures (the exploratory data analysis type) in R. I also program in Python and was wondering if there are features or concepts in matplotlib that would be worth learning. For instance, I am quite happy with R - but its image() function will produce large files with pixelated output, whereas Matlab's equivalent figure (I also program regularly in Matlab) seems to be manageable in file size and also 'smoothed' - does matplotlib also provide such reductions...? But more generally, I wonder what other advantages matplotlib might confer. I don't mean this to be a trolling question. Thanks.
[ "This is a tough one to answer. \nI recently switched some of my graphing workload from R to matplotlib. In my humble opinion, I find matplotlib's graphs to be prettier (better default colors, they look crisper and more modern). I also think matplotlib renders PNGs a whole lot better.\nThe real motivation for me though, was that I wanted to work with my underlying data in Python (and numpy) and not R. I think this is the big question to ask, in which language do you want to load, parse and manipulate your data?\nOn the other hand, a bonus for R is that the plotting defaults just work (there's a function for everything). I find myself frequently digging through the matplotlib docs (they are thick) looking for some obscure way to adjust a border or increase a line thickness. R's plotting routines have some maturity behind them.\n", "I think that the largest advantage is that matplotlib is based on Python, which you say you already know. So, this is one language less to learn. Just spend the time mastering Python, and you'll benefit both directly for the plotting task at hand and indirectly for your other Python needs. \nBesides, IMHO Python is an overall richer language than R, with far more libraries that can help for various tasks. You have to access data for plotting, and data comes in many forms. In whatever form it comes I'm sure Python has an efficient library for it.\nAnd how about embedding those plots in more complete programs, say simple GUIs? matplotlib binds easily with Python's GUI libs (like PyQT) and you can make stuff that only your imagination limits.\n" ]
[ 13, 4 ]
[]
[]
[ "data_visualization", "matplotlib", "python", "r", "scipy" ]
stackoverflow_0001661479_data_visualization_matplotlib_python_r_scipy.txt
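On the smoothing point the question raises but the answers leave open: matplotlib's imshow takes an interpolation argument, so something along these lines should give the smoothed look (a quick, untested sketch):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(50, 50)
plt.imshow(data, interpolation='bilinear')  # 'nearest' reproduces R's blocky image()
plt.savefig('heatmap.png', dpi=100)         # or .pdf/.svg for vector output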
Q: Creating an Infographic In Python I want to create a simple infographic in python. Matplotlib seems to have a lot of features but nothing that covers off my simple heatmap grid example. The infographic is a simple 5 x 5 grid with numbers inside ranging from 0 to 1. The grid squares would then be coloured in 0=white 1=blue 0.5 being a pale blue. Matplotlib could probably be used but I couldn't find or combine any examples that offered insight into generating this. Any insight, example code or library direction would really help Regards Matt A: It depends what you need to do with the graph once you have it, Matplotlib allows you to interactively show the graph on the screen, save it in either vector, pdf or bitmap format, and more. If you opt for this framework, imshow will do what you need, here is an example: # Just some data to test: from random import gauss a = [[gauss(0, 10) for i in xrange(0, 5)] for j in xrange(0,5)] from pylab import * # or just launch "IPython -pylab" from the command line # We create a custom colormap: myblue = cm.colors.LinearSegmentedColormap("myblue", { 'red': [(0, 1, 1), (1, 0, 0)], 'green': [(0, 1, 1), (1, 0, 0)], 'blue': [(0, 1, 1), (1, 1, 1)]}) # Plotting the graph: imshow(a, cmap=myblue) For further details on the colormap check this link, and here is the link for imshow - or simply use help(colors.LinearSegmentedColormap) and help(imshow). alt text http://img522.imageshack.us/img522/6230/bluep.png (note that this is the result with the standard options, you can add a grid, change the filtering and so on). Edit however I'm looking to display the numbers in the grid To keep it simple: for i in xrange(0,5): for j in xrange(0,5): text(i, j, "{0:5.2f}".format(a[i][j]), horizontalalignment="center", verticalalignment="center") A: PyCairo is your friend. Simple example: from __future__ import with_statement import cairo img = cairo.ImageSurface(cairo.FORMAT_ARGB32,100,100) g = cairo.Context(img) for x in range(0,100,10): for y in range(0,100,10): g.set_source_rgb(.1 + x/100.0, 0, .1 + y/100.0) g.rectangle(x,y,10,10) g.fill() with open('test.png','wb') as f: img.write_to_png(f) You might find this tutorial helpful. A: One possibility would be to generate SVG from python. You can view SVG in Firefox or Inkscape. Here's a quick-and-dirty example: import random def square(x, y, value): r, g, b = value * 255, value * 255, 255 s = '<rect x="%d" y="%d" width="1" height="1" style="fill:rgb(%d,%d,%d);"/>' % (x, y, r, g, b) t = '<text x="%d" y="%d" font-size=".2" fill="yellow">%f</text>' % (x, y + 1, value) return s + '\n' + t print(''' <?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd"> <svg width="100%" height="100%" version="1.1" viewBox="0 0 5 5" xmlns="http://www.w3.org/2000/svg"> ''') for x in range(0, 5): for y in range(0, 5): print(square(x, y, random.random())) print('</svg>') alt text http://www.imagechicken.com/uploads/1257184721026098800.png
Creating an Infographic In Python
I want to create a simple infographic in python. Matplotlib seems to have a lot of features but nothing that covers off my simple heatmap grid example. The infographic is a simple 5 x 5 grid with numbers inside ranging from 0 to 1. The grid squares would then be coloured in 0=white 1=blue 0.5 being a pale blue. Matplotlib could probably be used but I couldn't find or combine any examples that offered insight into generating this. Any insight, example code or library direction would really help Regards Matt
[ "It depends what you need to do with the graph once you have it, Matplotlib allows you to interactively show the graph on the screen, save it in either vector, pdf or bitmap format, and more.\nIf you opt for this framework, imshow will do what you need, here is an example:\n# Just some data to test:\nfrom random import gauss\na = [[gauss(0, 10) for i in xrange(0, 5)] for j in xrange(0,5)]\n\nfrom pylab import * # or just launch \"IPython -pylab\" from the command line\n\n# We create a custom colormap:\nmyblue = cm.colors.LinearSegmentedColormap(\"myblue\", {\n 'red': [(0, 1, 1), (1, 0, 0)], \n 'green': [(0, 1, 1), (1, 0, 0)],\n 'blue': [(0, 1, 1), (1, 1, 1)]})\n\n# Plotting the graph:\nimshow(a, cmap=myblue)\n\nFor further details on the colormap check this link, and here is the link for imshow - or simply use help(colors.LinearSegmentedColormap) and help(imshow).\nalt text http://img522.imageshack.us/img522/6230/bluep.png\n(note that this is the result with the standard options, you can add a grid, change the filtering and so on).\n\nEdit\n\nhowever I'm looking to display the\n numbers in the grid\n\nTo keep it simple:\nfor i in xrange(0,5):\n for j in xrange(0,5):\n text(i, j,\n \"{0:5.2f}\".format(a[i][j]),\n horizontalalignment=\"center\",\n verticalalignment=\"center\")\n\n", "PyCairo is your friend. Simple example:\nfrom __future__ import with_statement\nimport cairo\nimg = cairo.ImageSurface(cairo.FORMAT_ARGB32,100,100)\ng = cairo.Context(img)\nfor x in range(0,100,10):\n for y in range(0,100,10):\n g.set_source_rgb(.1 + x/100.0, 0, .1 + y/100.0)\n g.rectangle(x,y,10,10)\n g.fill()\nwith open('test.png','wb') as f:\n img.write_to_png(f)\n\n\nYou might find this tutorial helpful.\n", "One possibility would be to generate SVG from python. You can view SVG in Firefox or Inkscape.\nHere's a quick-and-dirty example:\nimport random\n\ndef square(x, y, value):\n r, g, b = value * 255, value * 255, 255\n s = '<rect x=\"%d\" y=\"%d\" width=\"1\" height=\"1\" style=\"fill:rgb(%d,%d,%d);\"/>' % (x, y, r, g, b)\n t = '<text x=\"%d\" y=\"%d\" font-size=\".2\" fill=\"yellow\">%f</text>' % (x, y + 1, value)\n return s + '\\n' + t\n\nprint('''\n<?xml version=\"1.0\" standalone=\"no\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n\"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n\n<svg width=\"100%\" height=\"100%\" version=\"1.1\" viewBox=\"0 0 5 5\"\nxmlns=\"http://www.w3.org/2000/svg\">\n''')\nfor x in range(0, 5):\n for y in range(0, 5):\n print(square(x, y, random.random()))\n\nprint('</svg>')\n\nalt text http://www.imagechicken.com/uploads/1257184721026098800.png\n" ]
[ 4, 2, 2 ]
[]
[]
[ "charts", "grid", "heatmap", "matplotlib", "python" ]
stackoverflow_0001661565_charts_grid_heatmap_matplotlib_python.txt
Q: How to generate a choicelist from all ImageSpecs I want to generate a choicelist for all specs that inherit from imagekit.specs.ImageSpec. The idea is to allow users of the admin interface to select an ImageSpec to add to a picture. i.e: class Display(ImageSpec): pre_cache = True increment_count = True processors = [ResizeDisplay,] class SingleDisplay(ImageSpec): pre_cache = True increment_count = True processors = [SingleDisplayResize] class Reflection(ImageSpec): increment_count = True processors = [ResizeDisplay, ReflectionProcessor] class SingleDisplayReflection(ImageSpec): increment_count = True processors = [SingleDisplayResize, ReflectionProcessor] results in a drop-down list "Display, Singledisplay, Reflection, Singledisplayreflection" A: Well, something like the following will get you a list of all the ImageSpec subclasses defined in the file: def subclassfilter(x, baseclass): return x is not baseclass and isinstance(x, type) and issubclass(x, baseclass) subclasses = [c for c in locals().values() if subclassfilter(c, ImageSpec)] You could then generate the choices list from the __name__ attribute of each class in the subclasses list.
How to generate a choicelist from all ImageSpecs
I want to generate a choicelist for all specs that inherit from imagekit.specs.ImageSpec. The idea is to allow users of the admin interface to select an ImageSpec to add to a picture. i.e: class Display(ImageSpec): pre_cache = True increment_count = True processors = [ResizeDisplay,] class SingleDisplay(ImageSpec): pre_cache = True increment_count = True processors = [SingleDisplayResize] class Reflection(ImageSpec): increment_count = True processors = [ResizeDisplay, ReflectionProcessor] class SingleDisplayReflection(ImageSpec): increment_count = True processors = [SingleDisplayResize, ReflectionProcessor] results in a drop-down list "Display, Singledisplay, Reflection, Singledisplayreflection"
[ "Well, something like the following will get you a list of all the ImageSpec subclasses defined in the file:\ndef subclassfilter(x, baseclass):\n return x is not baseclass and isinstance(x, type) and issubclass(x, baseclass)\n\nsubclasses = [c for c in locals().values() if subclassfilter(c, ImageSpec)]\n\nYou could then generate the choices list from the __name__ attribute of each class in the subclasses list.\n" ]
[ 2 ]
[]
[]
[ "django", "django_imagekit", "imagekit", "python" ]
stackoverflow_0001662520_django_django_imagekit_imagekit_python.txt
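Completing the last step the answer above mentions, a sketch of building the choices from __name__; the form class is a hypothetical example of where the list would be used:

SPEC_CHOICES = [(c.__name__, c.__name__) for c in subclasses]
# e.g. [('Display', 'Display'), ('SingleDisplay', 'SingleDisplay'), ...]

from django import forms

class PictureSpecForm(forms.Form):
    spec = forms.ChoiceField(choices=SPEC_CHOICES)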
Q: read string backwards and terminate at first '/' I want to extract just the file name portion of a path. My code below works, but I'd like to know what the better (pythonic) way of doing this is. filename = '' tmppath = '/dir1/dir2/dir3/file.exe' for i in reversed(tmppath): if i != '/': filename += str(i) else: break a = filename[::-1] print a A: Try: #!/usr/bin/python import os.path path = '/dir1/dir2/dir3/file.exe' name = os.path.basename(path) print name A: you'd be better off using standard library for this: >>> tmppath = '/dir1/dir2/dir3/file.exe' >>> import os.path >>> os.path.basename(tmppath) 'file.exe' A: Use os.path.basename(..) function. A: >>> import os >>> path = '/dir1/dir2/dir3/file.exe' >>> path.split(os.sep) ['', 'dir1', 'dir2', 'dir3', 'file.exe'] >>> path.split(os.sep)[-1] 'file.exe' >>> A: The existing answers are correct for your "real underlying question" (path manipulation). For the question in your title (generalizable to other characters of course), what helps there is the rsplit method of strings: >>> s='some/stuff/with/many/slashes' >>> s.rsplit('/', 1) ['some/stuff/with/many', 'slashes'] >>> s.rsplit('/', 1)[1] 'slashes' >>>
read string backwards and terminate at first '/'
I want to extract just the file name portion of a path. My code below works, but I'd like to know what the better (pythonic) way of doing this is. filename = '' tmppath = '/dir1/dir2/dir3/file.exe' for i in reversed(tmppath): if i != '/': filename += str(i) else: break a = filename[::-1] print a
[ "Try:\n#!/usr/bin/python\nimport os.path\npath = '/dir1/dir2/dir3/file.exe'\nname = os.path.basename(path)\nprint name\n\n", "you'd be better off using standard library for this:\n>>> tmppath = '/dir1/dir2/dir3/file.exe'\n>>> import os.path\n>>> os.path.basename(tmppath)\n'file.exe'\n\n", "Use os.path.basename(..) function.\n", ">>> import os\n>>> path = '/dir1/dir2/dir3/file.exe'\n>>> path.split(os.sep)\n['', 'dir1', 'dir2', 'dir3', 'file.exe']\n>>> path.split(os.sep)[-1]\n'file.exe'\n>>>\n\n", "The existing answers are correct for your \"real underlying question\" (path manipulation). For the question in your title (generalizable to other characters of course), what helps there is the rsplit method of strings:\n>>> s='some/stuff/with/many/slashes'\n>>> s.rsplit('/', 1)\n['some/stuff/with/many', 'slashes']\n>>> s.rsplit('/', 1)[1]\n'slashes'\n>>> \n\n" ]
[ 12, 4, 2, 1, 0 ]
[]
[]
[ "path", "python" ]
stackoverflow_0001660059_path_python.txt
Q: Is there a "do ... until" in Python? Is there a do until x: ... in Python, or a nice way to implement such a looping construct? A: There is no do-while loop in Python. This is a similar construct, taken from the link above. while True: do_something() if condition(): break A: I prefer to use a looping variable, as it tends to read a bit nicer than just "while 1:", and no ugly-looking break statement: finished = False while not finished: ... do something... finished = evaluate_end_condition() A: There's no prepackaged "do-while", but the general Python way to implement peculiar looping constructs is through generators and other iterators, e.g.: import itertools def dowhile(predicate): it = itertools.repeat(None) for _ in it: yield if not predicate(): break so, for example: i=7; j=3 for _ in dowhile(lambda: i<j): print i, j i+=1; j-=1 executes one leg, as desired, even though the predicate's already false at the start. It's normally better to encapsulate more of the looping logic into your generator (or other iterator) -- for example, if you often have cases where one variable increases, one decreases, and you need a do/while loop comparing them, you could code: def incandec(i, j, delta=1): while True: yield i, j if j <= i: break i+=delta; j-=delta which you can use like: for i, j in incandec(i=7, j=3): print i, j It's up to you how much loop-related logic you want to put inside your generator (or other iterator) and how much you want to have outside of it (just like for any other use of a function, class, or other mechanism you can use to refactor code out of your main stream of execution), but, generally speaking, I like to see the generator used in a for loop that has little (ideally none) "loop control logic" (code related to updating state variables for the next loop leg and/or making tests about whether you should be looping again or not). A: No there isn't. Instead use a while loop such as: while 1: ...statements... if cond: break
Is there a "do ... until" in Python?
Is there a do until x: ... in Python, or a nice way to implement such a looping construct?
[ "There is no do-while loop in Python.\nThis is a similar construct, taken from the link above.\n while True:\n do_something()\n if condition():\n break\n\n", "I prefer to use a looping variable, as it tends to read a bit nicer than just \"while 1:\", and no ugly-looking break statement:\nfinished = False\nwhile not finished:\n ... do something...\n finished = evaluate_end_condition()\n\n", "There's no prepackaged \"do-while\", but the general Python way to implement peculiar looping constructs is through generators and other iterators, e.g.:\nimport itertools\n\ndef dowhile(predicate):\n it = itertools.repeat(None)\n for _ in it:\n yield\n if not predicate(): break\n\nso, for example:\ni=7; j=3\nfor _ in dowhile(lambda: i<j):\n print i, j\n i+=1; j-=1\n\nexecutes one leg, as desired, even though the predicate's already false at the start.\nIt's normally better to encapsulate more of the looping logic into your generator (or other iterator) -- for example, if you often have cases where one variable increases, one decreases, and you need a do/while loop comparing them, you could code:\ndef incandec(i, j, delta=1):\n while True:\n yield i, j\n if j <= i: break\n i+=delta; j-=delta\n\nwhich you can use like:\nfor i, j in incandec(i=7, j=3):\n print i, j\n\nIt's up to you how much loop-related logic you want to put inside your generator (or other iterator) and how much you want to have outside of it (just like for any other use of a function, class, or other mechanism you can use to refactor code out of your main stream of execution), but, generally speaking, I like to see the generator used in a for loop that has little (ideally none) \"loop control logic\" (code related to updating state variables for the next loop leg and/or making tests about whether you should be looping again or not).\n", "No there isn't. Instead use a while loop such as:\nwhile 1:\n ...statements...\n if cond:\n break\n\n" ]
[ 314, 45, 28, 11 ]
[]
[]
[ "loops", "python" ]
stackoverflow_0001662161_loops_python.txt
Q: Unable to get results when passing a string via parameter substitution in gql query I am able to properly pass a string variable to the gqlquery through parameter substitution, here's the code I've tried to use; user_name = self.request.get('username') #retrieved from UI p = models.UserDetails.all().filter('user_name = ', user_name).fetch(1) I don't get any results and the query fails silently. But when I hard code the query like this, p = models.UserDetails.all().filter('user_name = ', "peter rice").fetch(1) I get my expected resultset. I think I am passing the variable user_name in a wrong way. Please help me in getting my piece of code right. A: Have you tried filter('user_name = ', str(user_name)) ? I suppose you are sure user_name has the expected content. A: I think I've got it, I tried using this, p = models.UserDetails.gql('WHERE user_name = :uname', uname = user_name).fetch(1) and I got the expected resultset. I wonder why other formats have the problem in string substitution. A: Try logging repr(user_name) to verify that the string is exactly the same as what you're expecting (and that it's not unicode rather than raw). Also try logging the expression user_name == "peter rice". Other than that, I can't see any reason why it would not work - there's literally no way the API can affect this, since it doesn't know where the argument you pass in comes from.
Unable to get results when passing a string via parameter substitution in gql query
I am able to properly pass a string variable to the gqlquery through parameter substitution, here's the code I've tried to use; user_name = self.request.get('username') #retrieved from UI p = models.UserDetails.all().filter('user_name = ', user_name).fetch(1) I don't get any results and the query fails silently. But when I hard code the query like this, p = models.UserDetails.all().filter('user_name = ', "peter rice").fetch(1) I get my expected resultset. I think I am passing the variable user_name in a wrong way. Please help me in getting my piece of code right.
[ "Have you tried filter('user_name = ', str(user_name)) ?\nI suppose you are sure user_name has the expected content.\n", "I think I've got it, I tried using this,\np = models.UserDetails.gql('WHERE user_name = :uname', uname = user_name).fetch(1)\n\nand I got the expected resultset. I wonder why other formats have the problem in string substitution.\n", "Try logging repr(user_name) to verify that the string is exactly the same as what you're expecting (and that it's not unicode rather than raw). Also try logging the expression user_name == \"peter rice\". Other than that, I can't see any reason why it would not work - there's literally no way the API can affect this, since it doesn't know where the argument you pass in comes from.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "google_app_engine", "gql", "gqlquery", "python", "string_substitution" ]
stackoverflow_0001660640_google_app_engine_gql_gqlquery_python_string_substitution.txt
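A sketch of the logging step the last answer above suggests; the strip() call is only a guess at one common culprit (trailing whitespace or a newline in the submitted value):

import logging

user_name = self.request.get('username')
logging.info('username=%r', user_name)  # repr() exposes stray whitespace or unicode
user_name = user_name.strip()           # hypothetical fix, if the log shows whitespace
p = models.UserDetails.all().filter('user_name = ', user_name).fetch(1)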
Q: Python on windows7 intel 64bit I've been messing around with Python over the weekend and find myself pretty much back at where I started. I've specifically been having issues with easy_install and nltk giving me errors about not finding packages, etc. I've tried both Python 2.6 and Python 3.1. I think part of the problem may be that I'm running Windows 7 in 64bit mode on an Intel T5750 chipset. I'm thinking of downloading Python for Windows extensions http://sourceforge.net/projects/pywin32/files/, but not sure which version to get. Why do packages have a specific AMD64, but not intel? However, this may not even solve my problems. Any recommendations on getting Python to work in this environment? I've currently got Python 3.1 installed, and removed 2.6 A: The most popular 64-bit mode for "86-oid" processors is commonly known as AMD64 because AMD first came up with it (Intel at that time was pushing Itanium instead, and that didn't really catch fire -- it's still around but I don't even know if Win7 supports it); Intel later had to imitate that mode to get into the mass-64 bit market, but it's still commonly known as AMD64 after its originator. For Windows 7 in 64-bit mode, AMD64 seems likely to be what you want. The 64-bit-Windows downloads from activestate come with a few important pieces that aren't part of the standard python.org 64-bit Windows builds, and might perhaps make your life easier.
Python on windows7 intel 64bit
I've been messing around with Python over the weekend and find myself pretty much back at where I started. I've specifically been having issues with easy_install and nltk giving me errors about not finding packages, etc. I've tried both Python 2.6 and Python 3.1. I think part of the problem may be that I'm running Windows 7 in 64bit mode on an Intel T5750 chipset. I'm thinking of downloading Python for Windows extensions http://sourceforge.net/projects/pywin32/files/, but not sure which version to get. Why do packages have a specific AMD64, but not intel? However, this may not even solve my problems. Any recommendations on getting Python to work in this environment? I've currently got Python 3.1 installed, and removed 2.6
[ "The most popular 64-bit mode for \"86-oid\" processors is commonly known as AMD64 because AMD first came up with it (Intel at that time was pushing Itanium instead, and that didn't really catch fire -- it's still around but I don't even know if Win7 supports it); Intel later had to imitate that mode to get into the mass-64 bit market, but it's still commonly known as AMD64 after its originator. For Windows 7 in 64-bit mode, AMD64 seems likely to be what you want.\nThe 64-bit-Windows downloads from activestate come with a few important pieces that aren't part of the standard python.org 64-bit Windows builds, and might perhaps make your life easier.\n" ]
[ 13 ]
[]
[]
[ "installation", "python" ]
stackoverflow_0001662920_installation_python.txt
Q: Pexpect, running ssh-copy-id is hanging when trying to spawn a second process I'm doing a Python script where I need to spawn several ssh-copy-id processes, and they need me to type in a password, so I'm using PExpect. I have basically this: child = pexpect.spawn('command') child.expect('password:') child.sendline('the password') and then I want to spawn another process, I don't care about this one anymore, whether it ended or not. child = pexpect.spawn('command2') child.expect('password:') child.sendline('the password') And the code is hanging at the second "spawn" However, if I comment out the first call, the second one works, so I'm guessing that the fact that the first one is still running or something is keeping it from working. Now, the other thing I haven't been able to do is wait until the first one stops. I've tried: child.close() - it hangs (both with True and False as parameters) child.read(-1) - it hangs child.expect(pexpect.EOF) - it hangs. child.terminate() - it hangs (both with True and False as parameters) Any ideas on what could be happening? NOTE: I'm not a Python expert, and I have never used pexpect before, so ANY idea is more than welcome. Thanks! UPDATE: This is definitely related to ssh-copy-id, because with other processes, spawn works well even if they don't return. Also, apparently ssh-copy-id never returns an EOF. A: Fortunately or not, but OpenSSH client seems to be very picky about passwords and where they come from. You may try using Paramiko Python SSH2 library. Here's a simple example of how to use it with password authentication, then issue some shell commands (echo "..." >> $HOME/.ssh/authorized_keys being the simplest) to add your public key on remote host. A: I think the problem is that SSH tries to open a PTY and it does not work on anything other than a PTY for security reasons. This won't work well with pexpect. I have another ssh client: http://www.digmia.com/index.php?option=com_content&view=article&id=54:Digmia%20Enterprise%20SSH&Itemid=56 It's open-source, you can use it. What you are trying to do would be more commands, but you don't need expect at all. First install it according to the manual, then do something like this: Run dssh-agent, add the password you need like this: dssh-add -l < passwordfile or if it is a secure machine, i.e. no one else can log in there, this is very important, otherwise this would be a huge security hole: echo "name-of-server;22;root;password;" | dssh-add -l password file would be something like: name-of-server;22;root;password; And then do something like (replace CONTENTS OF ... with actual content of that file): dssh root@name-of-server -- echo "CONTENTS OF ~/.ssh/identity.pub" > .ssh/authorized_keys \; chmod og-w .ssh .ssh/authorized_keys You can (optionally) do dssh-add -f passwords (make sure no one else is doing all this stuff, otherwise you would have a race condition). Also, pexpect should probably work with dssh itself (so you don't need to use dssh-agent). But using dssh-agent is simpler and safer. Installation manual for DSSH is contained in the tarball. I don't know any simpler way of doing this, OpenSSH ssh-copy-id is very picky about where the password comes from... A: Reading pexpect documentation for spawn, I think it is waiting for the command to terminate. I would suggest a couple of different possibilities, depending on your needs: 1) Kill the spawned process. However, this may lead to corruption in your operation, so I do not know if it is what you want. 
child = pexpect.spawn('command') child.expect('password:') child.sendline('the password') child.close(True) 2) Wait for completion of the initial task before moving to the next one child = pexpect.spawn('command') child.expect('password:') child.sendline('the password') child.wait() child = pexpect.spawn('command2') ... 3) Use a different instance for all children, then wait on all of them at the end - and this would be most probably the best solution def exec_command(cmd): child = pexpect.spawn(cmd) child.expect('password:') child.sendline('the password') return child commands = ['command1', 'command2'] childrens = [exec_command(cmd) for cmd in commands] for child in childrens: child.wait() Note: all of the code here is untested, and written under the assumption that your script is hanging because deleting a spawn object will hang until the command will terminate. A: Actually, I tried many of these alternatives, and none of them worked. Calling close() or terminate() hangs (both with True and False as parameters) Calling wait() or read(-1) or expect(pexpect.EOF) hangs calling spawn again without caring about the previous spawn command hangs I made some tests with other commands (like 'ftp'), and they work as I'd expect, for example, if you call .expect('something'), and something is not found before EOF, they don't wait forever, they throw an exception, so I believe this is related to the ssh-copy-id command specifically.
Pexpect, running ssh-copy-id is hanging when trying to spawn a second process
I'm doing a Python script where I need to spawn several ssh-copy-id processes, and they need me to type in a password, so I'm using PExpect. I have basically this: child = pexpect.spawn('command') child.expect('password:') child.sendline('the password') and then I want to spawn another process, I don't care about this one anymore, whether it ended or not. child = pexpect.spawn('command2') child.expect('password:') child.sendline('the password') And the code is hanging at the second "spawn" However, if I comment out the first call, the second one works, so I'm guessing that the fact that the first one is still running or something is keeping it from working. Now, the other thing I haven't been able to do is wait until the first one stops. I've tried: child.close() - it hangs (both with True and False as parameters) child.read(-1) - it hangs child.expect(pexpect.EOF) - it hangs. child.terminate() - it hangs (both with True and False as parameters) Any ideas on what could be happening? NOTE: I'm not a Python expert, and I have never used pexpect before, so ANY idea is more than welcome. Thanks! UPDATE: This is definitely related to ssh-copy-id, because with other processes, spawn works well even if they don't return. Also, apparently ssh-copy-id never returns an EOF.
[ "Fortunately or not, but OpenSSH client seems to be very picky about passwords and where they come from.\nYou may try using Paramiko Python SSH2 library. Here's a simple example of how to use it with password authentication, then issue some shell commands (echo \"...\" >> $HOME/.ssh/authorized_keys being the simplest) to add your public key on remote host.\n", "I think the problem is that SSH tries to open a PTY and it does not work\non anything other than a PTY for security reasons. This won't work well\nwith pexpect.\nI have another ssh client:\nhttp://www.digmia.com/index.php?option=com_content&view=article&id=54:Digmia%20Enterprise%20SSH&Itemid=56\nIt's open-source, you can use it. What you are trying to do would\nbe more commands, but you don't need expect at all.\n\nFirst install it according to the manual, then do something like this:\nRun dssh-agent, add the password you need like this:\ndssh-add -l < passwordfile\n\n\nor if it is a secure machine, i.e. no one else can log in there,\nthis is very important, otherwise this would be a huge security hole:\necho \"name-of-server;22;root;password;\" | dssh-add -l\n\npassword file would be something like:\nname-of-server;22;root;password;\n\n\nAnd then do something like (replace CONTENTS OF ... with actual content of that file):\ndssh root@name-of-server -- echo \"CONTENTS OF ~/.ssh/identity.pub\" > .ssh/authorized_keys \\; chmod og-w .ssh .ssh/authorized_keys\n\n\nYou can (optionally) do\ndssh-add -f passwords\n\n\n(make sure no one else is doing all this stuff, otherwise you would\nhave a race condition).\n\nAlso, pexpect should probably work with dssh itself (so you don't need\nto use dssh-agent). But using dssh-agent is simpler and safer.\nInstallation manual for DSSH is contained in the tarball.\nI don't know any simpler way of doing this, OpenSSH ssh-copy-id is\nvery picky about where the password comes from...\n", "Reading pexpect documentation for spawn, I think it is waiting for the command to terminate.\nI would suggest a couple of different possibilities, depending on your needs:\n1) Kill the spawned process. However, this may lead to corruption in your operation, so I do not know if it is what you want.\nchild = pexpect.spawn('command')\nchild.expect('password:')\nchild.sendline('the password')\nchild.close(True)\n\n2) Wait for completion of the initial task before moving to the next one\nchild = pexpect.spawn('command')\nchild.expect('password:')\nchild.sendline('the password')\nchild.wait()\nchild = pexpect.spawn('command2')\n...\n\n3) Use a different instance for all children, then wait on all of them at the end - and this would be most probably the best solution\ndef exec_command(cmd):\n child = pexpect.spawn(cmd)\n child.expect('password:')\n child.sendline('the password')\n return child\n\ncommands = ['command1', 'command2']\nchildrens = [exec_command(cmd) for cmd in commands]\nfor child in childrens:\n child.wait() \n\nNote: all of the code here is untested, and written under the assumption that your script is hanging because deleting a spawn object will hang until the command will terminate.\n", "Actually, I tried many of these alternatives, and none of them worked. 
\n\nCalling close() or terminate() hangs (both with True and False as parameters)\nCalling wait() or read(-1) or expect(pexpect.EOF) hangs\ncalling spawn again without caring about the previous spawn command hangs\n\nI made some tests with other commands (like 'ftp'), and they work as I'd expect, for example, if you call .expect('something'), and something is not found before EOF, they don't wait forever, they throw an exception, so I believe this is related to the ssh-copy-id command specifically.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "pexpect", "process", "python" ]
stackoverflow_0000356830_pexpect_process_python.txt
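A rough sketch of the Paramiko route the first answer above describes; the host, user, password and key path are placeholders, and error handling is omitted:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('name-of-server', username='root', password='the password')

pubkey = open('/home/me/.ssh/id_rsa.pub').read().strip()
stdin, stdout, stderr = ssh.exec_command(
    "mkdir -p ~/.ssh && chmod 700 ~/.ssh && "
    "echo '%s' >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys" % pubkey)
stdout.channel.recv_exit_status()  # block until the remote command finishes
ssh.close()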
Q: How do I loop through all levels of a data structure to extract all data when I don't know how many levels there will be? I need to extract data from a structure and put it into a list, but I don't know how many levels the structure has. For each level, I can call level.children(), if there are no levels below the current one, it returns [], if there are, it returns [object, object, ...], on each of which I can call children() on again. I need to drill down through the structure until I've extracted all levels of data into a list. When based off a structure like this: <name>John Smith</name> <team link="http://teamwebsite.com"> <name>Team Name</name> </team> <games> <location> <venue>A stadium</venue> </location> </games> The list should look something like this: [ [ {'name': 'name', 'attrs': {}, 'text': 'John Smith', 'parent': None}, ], [ {'name': 'team', 'attrs': {'link': 'http://teamwebsite.com'}, 'text': '', 'parent': None}, {'name': 'name', 'attrs': {}, 'text': 'Team Name', 'parent': 1}, # the reference to its parent's position in the list ], [ {'name': 'games', 'attrs': {}, 'text': '', 'parent': None}, {'name': 'location', 'attrs': {}, 'text': '', 'parent': 1}, {'name': 'venue', 'attrs': {}, 'text': 'A stadium', 'parent': 2}, ], ] I'm trying to figure out the Python I would use to get from the data structure to my list. I need a kind of self-perpetuating for loop, but I can't come up with a good solution. Anything to point me in the right direction? I'm sure there is some good theory for this kind of thing that I completely don't know about but would be happy to read. A: You're describing recursion, but I'm guessing there are better, ways, to, parse, XML. A: The concept you're looking to use here is called "Recursion".
How do I loop through all levels of a data structure to extract all data when I don't know how many levels there will be?
I need to extract data from a structure and put it into a list, but I don't know how many levels the structure has. For each level, I can call level.children(), if there are no levels below the current one, it returns [], if there are, it returns [object, object, ...], on each of which I can call children() on again. I need to drill down through the structure until I've extracted all levels of data into a list. When based off a structure like this: <name>John Smith</name> <team link="http://teamwebsite.com"> <name>Team Name</name> </team> <games> <location> <venue>A stadium</venue> </location> </games> The list should look something like this: [ [ {'name': 'name', 'attrs': {}, 'text': 'John Smith', 'parent': None}, ], [ {'name': 'team', 'attrs': {'link': 'http://teamwebsite.com'}, 'text': '', 'parent': None}, {'name': 'name', 'attrs': {}, 'text': 'Team Name', 'parent': 1}, # the reference to its parent's position in the list ], [ {'name': 'games', 'attrs': {}, 'text': '', 'parent': None}, {'name': 'location', 'attrs': {}, 'text': '', 'parent': 1}, {'name': 'venue', 'attrs': {}, 'text': 'A stadium', 'parent': 2}, ], ] I'm trying to figure out the Python I would use to get from the data structure to my list. I need a kind of self-perpetuating for loop, but I can't come up with a good solution. Anything to point me in the right direction? I'm sure there is some good theory for this kind of thing that I completely don't know about but would be happy to read.
[ "You're describing recursion, but I'm guessing there are better, ways, to, parse, XML.\n", "The concept you're looking to use here is called \"Recursion\".\n" ]
[ 10, 5 ]
[]
[]
[ "data_structures", "loops", "python", "xml" ]
stackoverflow_0001663077_data_structures_loops_python_xml.txt
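A minimal recursive-walk sketch for the question above; it assumes each node exposes children() as described, plus name, attrs and text attributes, and that root is whatever object owns the outermost children() call. Those names are assumptions for illustration, since the asker's node API is not fully specified.

def walk(node, out, parent=None):
    # record this node, remembering its parent's 1-based position in the list
    out.append({'name': node.name, 'attrs': node.attrs,
                'text': node.text, 'parent': parent})
    position = len(out)  # becomes the 'parent' reference for the children
    for child in node.children():
        walk(child, out, position)
    return out

# one flat list per top-level node, matching the desired output shape
result = [walk(top, []) for top in root.children()]

Each extra level of nesting simply becomes one more recursive call, so the depth never has to be known in advance.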
Q: xml.etree.ElementTree equivalent in Java I've been doing quite a bit of simple XML-processing in python and grown to like the ElementTree way of doing things. Is there something similar and as easy to use in Java? I find the DOM model a bit cumbersome and find myself writing much more code than I would like to do simple things. Or am I asking the wrong thing? Maybe my question is: Is there a better option than the "XMLUtils" classes I see people implementing in some places to simplify their code when dealing with DOM? Adding a litte bit here about why I like ElementTree since the question was asked. Simplicity (I guess anything seems simple after working with DOM though) Feels like a natural fit in python Requires very little code on my part. I'm trying to come up with a simple code example to illustrate, but it's sort of hard to give a good example. Here's an attempt though. This just adds a tag with a value and an attribute to an existing xml string. from xml.etree.ElementTree import * xml_string = '<top><sub a="x"></sub></top>' parsed = fromstring(xmlstring) se = SubElement(parsed, "tag") se.text = "value" se.attrib["a"] = "x" new_xml_string = tostring(parsed) After that, the new_xml_string is <top><sub a="x" /><tag a="x">value</tag></top> Not an example that really covers everything, but still. There's also the fairly simple looping over tags when you want to do stuff, easy testing for presence of tags and attributes and other things. A: To be honest, all XML APIs in Java suck, you just can vary the level of suckage you push yourself into which may turn horrible/slow to manageable/decent to even suprisingly OK at times. This all mostly stems from the fact that Java APIs try to be as W3C DOM compliant as possible, in fact Xerces (Java's current native XML solution) prides itself on being compliant to a whole bunch of XML related W3C specifications as you can see from their front page. The actual Xerces API is very unpleasant to work with, though, and because of that multiple other Java XML libraries have popped out over the years. Currently most popular ones are JDOM, simplifies DOM operations a lot and do I dare to say even pleasant at times, works like a charm when mixed with Jaxen - well, unless you hit this problem with namespaces. XOM which has a wonderful presentation about what's wrong with Java's XML right now and how they propose their way of doing things as a solution. In part it is actually better than JDOM, but it's not widespread enough yet so can't really say how it behaves in the real world out there. Definitely worth a check though. dom4j, well-rounded library, supports all kinds of important features and plays out as a down-to-earth solution for XML. dom4j is basically the "old, proven and reliable" option of the popular ones. Last but definitely not least I just have to mention StAX just because it's different, it's actually event-driven streaming API for XML. Definitely worth a look just out of curiosity. PS. I'm currently actually writing my own XML parser/navigator as an exercise but haven't decided on what kind of API it will have. I'm really aiming for ease of use which seems to be quite rare in Java XML APIs so far, but I'm not entirely sure what kind of API I am going to provide. Python's ElementTree seems interesting, but since I'm not entirely familiar with it, would you like to maybe give a short summary on what exactly in it you find enjoyable? 
A: You might look into the following alternatives: dom4j xom jdom Since I never used ElementTree I don't know which one is the closest. If you can use Groovy inside your project, it offers a set of classes that helps a lot when processing XML. A: We find XOM (http://www.xom.nu) to provide simple subclassable Element functionality. A: It is true the Java XML APIs are not the greatest in terms of usability. My preferred options would be XOM, JDOM, then the built-in JAXP, in that order. There were some rumblings about native XML in the language (Integrating XML into the Java Programming Language) as a new data-type but that seems to have stalled.
xml.etree.ElementTree equivalent in Java
I've been doing quite a bit of simple XML-processing in python and grown to like the ElementTree way of doing things. Is there something similar and as easy to use in Java? I find the DOM model a bit cumbersome and find myself writing much more code than I would like to do simple things. Or am I asking the wrong thing? Maybe my question is: Is there a better option than the "XMLUtils" classes I see people implementing in some places to simplify their code when dealing with DOM? Adding a little bit here about why I like ElementTree since the question was asked. Simplicity (I guess anything seems simple after working with DOM though) Feels like a natural fit in python Requires very little code on my part. I'm trying to come up with a simple code example to illustrate, but it's sort of hard to give a good example. Here's an attempt though. This just adds a tag with a value and an attribute to an existing xml string. from xml.etree.ElementTree import * xml_string = '<top><sub a="x"></sub></top>' parsed = fromstring(xml_string) se = SubElement(parsed, "tag") se.text = "value" se.attrib["a"] = "x" new_xml_string = tostring(parsed) After that, the new_xml_string is <top><sub a="x" /><tag a="x">value</tag></top> Not an example that really covers everything, but still. There's also the fairly simple looping over tags when you want to do stuff, easy testing for presence of tags and attributes and other things.
[ "To be honest, all XML APIs in Java suck, you just can vary the level of suckage you push yourself into which may turn horrible/slow to manageable/decent to even suprisingly OK at times.\nThis all mostly stems from the fact that Java APIs try to be as W3C DOM compliant as possible, in fact Xerces (Java's current native XML solution) prides itself on being compliant to a whole bunch of XML related W3C specifications as you can see from their front page.\nThe actual Xerces API is very unpleasant to work with, though, and because of that multiple other Java XML libraries have popped out over the years. Currently most popular ones are\n\nJDOM, simplifies DOM operations a lot and do I dare to say even pleasant at times, works like a charm when mixed with Jaxen - well, unless you hit this problem with namespaces.\nXOM which has a wonderful presentation about what's wrong with Java's XML right now and how they propose their way of doing things as a solution. In part it is actually better than JDOM, but it's not widespread enough yet so can't really say how it behaves in the real world out there. Definitely worth a check though.\ndom4j, well-rounded library, supports all kinds of important features and plays out as a down-to-earth solution for XML. dom4j is basically the \"old, proven and reliable\" option of the popular ones.\n\nLast but definitely not least I just have to mention StAX just because it's different, it's actually event-driven streaming API for XML. Definitely worth a look just out of curiosity.\nPS. I'm currently actually writing my own XML parser/navigator as an exercise but haven't decided on what kind of API it will have. I'm really aiming for ease of use which seems to be quite rare in Java XML APIs so far, but I'm not entirely sure what kind of API I am going to provide. Python's ElementTree seems interesting, but since I'm not entirely familiar with it, would you like to maybe give a short summary on what exactly in it you find enjoyable?\n", "You might look into the following alternatives:\ndom4j\nxom\njdom\nSince I never used ElementTree I don't know wich one is the closest.\nIf you can use Groovy inside your project, it offers a set of classes that helps a lot when processing XML.\n", "We find XOM (http://www.xom.nu) to provide simple subclassable Element functionality.\n", "It is true the Java XML APIs are not the greatest in terms of usability. My prefered options would be XOM, JDOM then the built in JAXP in that order. There were some rumbling about native XML in the language (Begin Product Tab Sub Links\nIntegrating XML into the Java Programming Language) as a new data-type but that seems to have stalled.\n" ]
[ 6, 1, 1, 0 ]
[]
[]
[ "java", "python", "xml" ]
stackoverflow_0001662375_java_python_xml.txt
Q: What may be the problem (Django views)...? I am writing a GUI application using Django 1.1.1. This is the views.py: from django.http import HttpResponse def mainpage(request): f=open('pages/index.html','r').readlines() out='''''' for line in file: out+=line print out return HttpResponse(out) I am trying to load the contents of index.html which is inside a folder pages inside the GUI application folder. My project's urls.py is from django.conf.urls.defaults import * from gui.views import * urlpatterns = patterns('', (r'^/$', mainpage) ) When I run the server I get a 404 error for the root site. How can I load the index.html file through views? A: If you require just simple output of html page, this can be achieved by simply putting following into urls.py: (r'^$', 'direct_to_template', {'template': 'index.html'}) A: For the root page don't use r'^/$', just r'^$', because this ^ means "start of the string after domain AND SLASH" (after 127.0.0.1/ if you run app on localhost). That's why localhost:8080// works for you. Edit: check yours paths too. Do you have 'pages' directory in the same directory that views.py is? Anyway: it seems that you are trying to do something bad and against Django architecture. Look here for tutorial on writing your first application in Django. A: Your actual code in the view is incorrect. Here is my fixed up version: from django.http import HttpResponse def mainpage(request): lines=open('loader/pages/index.html','r').readlines() out='''''' for line in lines: out+=line print out return HttpResponse(out) Note that in your code the line that reads from the file is: f=open('pages/index.html','r').readlines() You open the file and read the lines into f and then try to iterate over lines. The other change is just to get my path to the actual index file right. You might want to read this http://docs.djangoproject.com/en/dev/howto/static-files/ if you want to serve static pages. A: Got it! ;) It seems that the mainpage function actually runs on the urls.py (as it is imported from the views.py) file so the path I must provide is gui/pages/index.html. I still had a problem, 'type object not iterable', but the following worked: def mainpage(request): f=open('gui/pages/index.html','r').readlines() return HttpResponse(f) And url pattern was r'^$' so it worked on http://localhost:8080/ itself.
What may be the problem (Django views)...?
I am writing a GUI application using Django 1.1.1. This is the views.py: from django.http import HttpResponse def mainpage(request): f=open('pages/index.html','r').readlines() out='''''' for line in file: out+=line print out return HttpResponse(out) I am trying to load the contents of index.html which is inside a folder pages inside the GUI application folder. My project's urls.py is from django.conf.urls.defaults import * from gui.views import * urlpatterns = patterns('', (r'^/$', mainpage) ) When I run the server I get a 404 error for the root site. How can I load the index.html file through views?
[ "If you require just simple output of html page, this can be achieved by simply putting following into urls.py:\n(r'^$', 'direct_to_template', {'template': 'index.html'})\n", "For the root page don't use r'^/$', just r'^$', because this ^ means \"start of the string after domain AND SLASH\" (after 127.0.0.1/ if you run app on localhost). That's why localhost:8080// works for you.\nEdit: check yours paths too. Do you have 'pages' directory in the same directory that views.py is?\nAnyway: it seems that you are trying to do something bad and against Django architecture. Look here for tutorial on writing your first application in Django.\n", "Your actual code in the view is incorrect. Here is my fixed up version:\nfrom django.http import HttpResponse\n\ndef mainpage(request):\n lines=open('loader/pages/index.html','r').readlines()\n out=''''''\n for line in lines:\n out+=line\n\n print out\n return HttpResponse(out)\n\nNote that in your code the line that reads from the file is:\nf=open('pages/index.html','r').readlines()\n\nYou open the file and read the lines into f and then try to iterate over lines. The other change is just to get my path to the actual index file right. \nYou might want to read this http://docs.djangoproject.com/en/dev/howto/static-files/ if you want to serve static pages.\n", "Got it! ;)\nIt seems that the mainpage function actually runs on the urls.py (as it is imported from the views.py) file so the path I must provide is gui/pages/index.html. I still had a problem, 'type object not iterable', but the following worked:\ndef mainpage(request):\n f=open('gui/pages/index.html','r').readlines()\n return HttpResponse(f)\n\nAnd url pattern was r'^$' so it worked on http://localhost:8080/ itself.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001663082_django_python.txt
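A sketch of the more idiomatic Django 1.1 version of the two working answers above, assuming 'index.html' can be found through the project's TEMPLATE_DIRS setting instead of being opened by hand:

# urls.py -- the generic-view route, no custom view code needed
from django.conf.urls.defaults import *

urlpatterns = patterns('django.views.generic.simple',
    (r'^$', 'direct_to_template', {'template': 'index.html'}),
)

# or, keeping a view function (views.py):
from django.shortcuts import render_to_response

def mainpage(request):
    # Django resolves the template path, so no fragile relative open() calls
    return render_to_response('index.html')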
Q: mod_python req.subprocess_env not "seeing" PythonOptions I'm having trouble getting an environmental variable out of apache config. (don't ask why it's being done this way, I didn't originally code it) This is what I have in the apache config. <Location "/var/www"> SetHandler python-program PythonHandler mod_python.publisher PythonOption MYSQL_PWD ########### PythonDebug On </Location> This is the problem code... #this is the problem code in question. def index(req): req.add_common_vars() os.environ["MYSQL_PWD"] = req.subprocess_env["MYSQL_PWD"] req.content_type = "text/html" statText = getStatText() here is the traceback I'm getting from executing this. Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch default=default_handler, arg=req, silent=hlist.silent) File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target result = _execute_target(config, req, object, arg) File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target result = object(arg) File "/usr/lib/python2.5/site-packages/mod_python/publisher.py", line 213, in handler published = publish_object(req, object) File "/usr/lib/python2.5/site-packages/mod_python/publisher.py", line 425, in publish_object return publish_object(req,util.apply_fs_data(object, req.form, req=req)) File "/usr/lib/python2.5/site-packages/mod_python/util.py", line 554, in apply_fs_data return object(**args) File "/var/www/admin/Stat.py", line 299, in index os.environ["MYSQL_PWD"] = req.subprocess_env["MYSQL_PWD"] KeyError: 'MYSQL_PWD' A: os.environ["MYSQL_PWD"] = req.get_options()["MYSQL_PWD"] See docs on PythonOption for more details
mod_python req.subprocess_env not "seeing" PythonOptions
I'm having trouble getting an environmental variable out of apache config. (don't ask why it's being done this way, I didn't originally code it) This is what I have in the apache config. <Location "/var/www"> SetHandler python-program PythonHandler mod_python.publisher PythonOption MYSQL_PWD ########### PythonDebug On </Location> This is the problem code... #this is the problem code in question. def index(req): req.add_common_vars() os.environ["MYSQL_PWD"] = req.subprocess_env["MYSQL_PWD"] req.content_type = "text/html" statText = getStatText() here is the traceback I'm getting from executing this. Traceback (most recent call last): File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1537, in HandlerDispatch default=default_handler, arg=req, silent=hlist.silent) File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1229, in _process_target result = _execute_target(config, req, object, arg) File "/usr/lib/python2.5/site-packages/mod_python/importer.py", line 1128, in _execute_target result = object(arg) File "/usr/lib/python2.5/site-packages/mod_python/publisher.py", line 213, in handler published = publish_object(req, object) File "/usr/lib/python2.5/site-packages/mod_python/publisher.py", line 425, in publish_object return publish_object(req,util.apply_fs_data(object, req.form, req=req)) File "/usr/lib/python2.5/site-packages/mod_python/util.py", line 554, in apply_fs_data return object(**args) File "/var/www/admin/Stat.py", line 299, in index os.environ["MYSQL_PWD"] = req.subprocess_env["MYSQL_PWD"] KeyError: 'MYSQL_PWD'
[ "os.environ[\"MYSQL_PWD\"] = req.get_options()[\"MYSQL_PWD\"]\n\nSee docs on PythonOption for more details\n" ]
[ 0 ]
[]
[]
[ "apache2", "debian", "mod_python", "python" ]
stackoverflow_0001663291_apache2_debian_mod_python_python.txt
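The accepted one-liner shown in context, as a sketch; getStatText() is the asker's own helper and is assumed unchanged.

import os

def index(req):
    # PythonOption values arrive via req.get_options(), not subprocess_env
    os.environ["MYSQL_PWD"] = req.get_options()["MYSQL_PWD"]
    req.content_type = "text/html"
    statText = getStatText()
    return statText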
Q: imagefield won't validate I moved a site to a mediatemple server using python 2.3, now ImageField won't work in the admin. Upon saving, validation gives the "not valid image" error. checked: media_root and media_url are correct PIL contains jpg support upload folders set to 775 image is not corrupted Ideas? Thanks. A: If you read documentation here: http://docs.djangoproject.com/en/dev/intro/install/ You will find the answer. It works with any Python version from 2.4 to 2.6! And : Set up a database If you installed Python 2.5 or later, you can skip this step for now. If not, or if you'd like to work with a "large" database engine like PostgreSQL, MySQL, or Oracle, consult the database installation information. I think that the problem is python 2.3 do you consider upgrade of it to 2.4, 2.5 or 2.6?
imagefield won't validate
I moved a site to a mediatemple server using python 2.3, now ImageField won't work in the admin. Upon saving, validation gives the "not valid image" error. checked: media_root and media_url are correct PIL contains jpg support upload folders set to 775 image is not corrupted Ideas? Thanks.
[ "If you read documentation here: http://docs.djangoproject.com/en/dev/intro/install/\nYou will find the answer. \n\nIt works with any Python version from\n 2.4 to 2.6!\n\nAnd :\n\nSet up a database\nIf you installed Python 2.5 or later,\n you can skip this step for now.\nIf not, or if you'd like to work with\n a \"large\" database engine like\n PostgreSQL, MySQL, or Oracle, consult\n the database installation information.\n\nI think that the problem is python 2.3 do you consider upgrade of it to 2.4, 2.5 or 2.6?\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001191487_django_django_models_python.txt
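If the version mismatch suggested above is not the whole story, a quick sanity check is to confirm that the deployed interpreter's PIL can actually decode the file; the path below is hypothetical.

# run under the same Python the site uses
import Image  # classic PIL import style; newer installs use 'from PIL import Image'

img = Image.open('/path/to/uploaded.jpg')  # hypothetical upload path
img.verify()  # raises an exception if PIL cannot decode the image data
print img.format, img.size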
Q: Python, sending command to GPIB instrument I need to send a command to a GPIB instrument and I can do it like this: power.write("volt 0.01"). This command sets the output of my power source to 0.01V, however, I'm trying to take an I-V curve and want to set the source to different values and take a measurement at each value. I basically need some sort of loop to do this for me. I tried the following: k=0 while k<= 1: power.write("volt k") k=k+0.01 This doesn't work because k gets send as 'k', not as a number. How do I fix this? A: Instead of power.write("volt k"), use: power.write("volt " + str(k)) ^ observe space here! If you want to control the output precision, you can use the following: power.write("volt %0.2f" % k) That is, if k is 4.85866 then using %0.2f means volt 4.86 is sent to the device. If using %0.4f then volt 4.8587 is sent to the device. Note the rounding! A: Instead of power.write("volt k"), use: power.write("volt %0.2f" % k)
Python, sending command to GPIB instrument
I need to send a command to a GPIB instrument and I can do it like this: power.write("volt 0.01"). This command sets the output of my power source to 0.01V, however, I'm trying to take an I-V curve and want to set the source to different values and take a measurement at each value. I basically need some sort of loop to do this for me. I tried the following: k=0 while k<= 1: power.write("volt k") k=k+0.01 This doesn't work because k gets sent as 'k', not as a number. How do I fix this?
[ "Instead of power.write(\"volt k\"), use:\npower.write(\"volt \" + str(k))\n ^\n observe space here!\n\nIf you want to control the output precision, you can use the following:\npower.write(\"volt %0.2f\" % k)\n\nThat is, if k is 4.85866 then using %0.2f means volt 4.86 is sent to the device. If using %0.4f then volt 4.8587 is sent to the device. Note the rounding!\n", "Instead of power.write(\"volt k\"), use:\npower.write(\"volt %0.2f\" % k)\n\n" ]
[ 6, 3 ]
[]
[]
[ "gpib", "python", "string_formatting" ]
stackoverflow_0001663763_gpib_python_string_formatting.txt
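A sketch of the full sweep using the accepted formatting fix; stepping an integer counter keeps the 0.01 floating-point error from accumulating over 100 iterations, and meter.read() is a placeholder for whatever measurement call the asker's setup provides.

readings = []
for i in range(101):                    # 0.00 V up to and including 1.00 V
    v = i * 0.01                        # integer steps avoid cumulative float drift
    power.write("volt %0.2f" % v)
    readings.append((v, meter.read()))  # hypothetical measurement call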
Q: Using Python to check words I'm stuck on a simple problem. I've got a dictionary of words in the English language, and a sample text that is to be checked. I've got to check every word in the sample against the dictionary, and the code I'm using is wrong. for word in checkList: # iterates through every word in the sample if word not in refDict: # checks if word is not in the dictionary print word # just to see if it's recognizing misspelled words The only problem is, as it goes through the loop it prints out every word, not just the misspelled ones. Can someone explain this and offer a solution possibly? Thank you so much! A: The snippet you have is functional. See for example >>> refDict = {'alpha':1, 'bravo':2, 'charlie':3, 'delta':4} >>> s = 'he said bravo to charlie O\'Brian and jack Alpha' >>> for word in s.split(): ... if word not in refDict: ... print(repr(word)) # by temporarily using repr() we can see exactly ... # what the words are like ... 'he' 'said' 'to' "O'Brian" 'and' 'jack' 'Alpha' # note how Alpha was not found in refDict (u/l case difference) Therefore, the dictionary contents must differ from what you think, or the words out of checklist are not exactly as they appear (eg. with whitespace or capitalization; see the use of repr() (*) in print statement to help identify cases of the former). Debugging suggestion: FOCUS on the first word from checklist (or the first that you suspect is to be found in dictionary). Then for this word and this word only, print it in details, with its length, with bracket on either side etc., for both the word out of checklist and the corresponding key in the dictionary... (*) repr() was a suggestion from John Machin. Instead I often use brackets or other characters as in print('[' + word + ']'), but repr() is more exacting in its output. A: Consider stripping your words of any whitespace that might be there, and changing all the words of both sets to the same case. Like this: word.strip().lower() That way you can make sure you're comparing apples to apples. A: Clearly "word not in refDict" always evaluates to True. This is probably because the contents of refDict or checkList are not what you think they are. Are they both tuples or lists of strings? A: The code you have would work if the keys in refDict are the correctly spelt words. If the correctly spelt words are the values in your dict then you need something like this: for word in checkList: if word not in refDict.values(): print word Is there a reason you dictionary is stored as a mapping as opposed to a list or a set? A python dict contains name-value pairs for example I could use this mapping: {"dog":23, "cat":45, "pony":67} to store an index of a word and page number it is found in some book. In your case your dict is a mapping of what to what? A: Are the words in the refDict the keys or the values? Your code will only see keys: e.g.: refDict = { 'w':'x', 'y':'z' } for word in [ 'w','x','y','z' ]: if word not in refDict: print word prints: x z Othewise you want; if word not in refDict.values() Of course this rather assumes that your dictionary is an actual python dictionary which seems an odd way to store a list of words. A: Your refDict is probably wrong. The in keyword checks if the value is in the keys of the dictionary. I believe you've put your words in as values. I'd propose using a set instead of a dictionary. knownwords = set("dog", "cat") knownwords.add("apple") text = "The dog eats an apple." 
for word in text.split(" "): # to ignore case word is converted to lowercase if word.lower() not in knownwords: print word # The # eats # an # apple. <- doesn't work because of the dot
Using Python to check words
I'm stuck on a simple problem. I've got a dictionary of words in the English language, and a sample text that is to be checked. I've got to check every word in the sample against the dictionary, and the code I'm using is wrong. for word in checkList: # iterates through every word in the sample if word not in refDict: # checks if word is not in the dictionary print word # just to see if it's recognizing misspelled words The only problem is, as it goes through the loop it prints out every word, not just the misspelled ones. Can someone explain this and offer a solution possibly? Thank you so much!
[ "The snippet you have is functional. See for example\n>>> refDict = {'alpha':1, 'bravo':2, 'charlie':3, 'delta':4}\n>>> s = 'he said bravo to charlie O\\'Brian and jack Alpha'\n>>> for word in s.split():\n... if word not in refDict:\n... print(repr(word)) # by temporarily using repr() we can see exactly\n... # what the words are like\n...\n'he'\n'said'\n'to'\n\"O'Brian\"\n'and'\n'jack'\n'Alpha' # note how Alpha was not found in refDict (u/l case difference)\n\nTherefore, the dictionary contents must differ from what you think, or the words out of checklist are not exactly as they appear (eg. with whitespace or capitalization; see the use of repr() (*) in print statement to help identify cases of the former).\nDebugging suggestion: FOCUS on the first word from checklist (or the first that you suspect is to be found in dictionary). Then for this word and this word only, print it in details, with its length, with bracket on either side etc., for both the word out of checklist and the corresponding key in the dictionary...\n(*) repr() was a suggestion from John Machin. Instead I often use brackets or other characters as in print('[' + word + ']'), but repr() is more exacting in its output.\n", "Consider stripping your words of any whitespace that might be there, and changing all the words of both sets to the same case. Like this:\nword.strip().lower()\n\nThat way you can make sure you're comparing apples to apples.\n", "Clearly \"word not in refDict\" always evaluates to True. This is probably because the contents of refDict or checkList are not what you think they are. Are they both tuples or lists of strings?\n", "The code you have would work if the keys in refDict are the correctly spelt words. If the correctly spelt words are the values in your dict then you need something like this:\nfor word in checkList:\n if word not in refDict.values():\n print word\n\nIs there a reason you dictionary is stored as a mapping as opposed to a list or a set? A python dict contains name-value pairs for example I could use this mapping: {\"dog\":23, \"cat\":45, \"pony\":67} to store an index of a word and page number it is found in some book. In your case your dict is a mapping of what to what?\n", "Are the words in the refDict the keys or the values? \nYour code will only see keys: e.g.:\nrefDict = { 'w':'x', 'y':'z' }\nfor word in [ 'w','x','y','z' ]:\n if word not in refDict:\n print word\n\nprints:\nx\nz\n\nOthewise you want;\nif word not in refDict.values()\nOf course this rather assumes that your dictionary is an actual python dictionary which seems an odd way to store a list of words.\n", "Your refDict is probably wrong. The in keyword checks if the value is in the keys of the dictionary. I believe you've put your words in as values.\nI'd propose using a set instead of a dictionary.\nknownwords = set(\"dog\", \"cat\")\nknownwords.add(\"apple\")\n\ntext = \"The dog eats an apple.\"\nfor word in text.split(\" \"):\n # to ignore case word is converted to lowercase\n if word.lower() not in knownwords:\n print word\n# The\n# eats\n# an\n# apple. <- doesn't work because of the dot\n\n" ]
[ 6, 5, 2, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001663133_python.txt
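Combining the answers above into one sketch: normalize both sides once, use a set for fast membership tests, and strip surrounding punctuation so 'apple.' matches 'apple'; the dictionary file name is an assumption.

# build the reference word set once, normalized to lowercase
refwords = set(line.strip().lower() for line in open('words.txt'))

for word in checkList:
    cleaned = word.strip().strip('.,!?;:"\'').lower()  # drop surrounding punctuation
    if cleaned and cleaned not in refwords:
        print cleaned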
Q: How to keep help strings the same when applying decorators? How can I keep help strings in functions to be visible after applying a decorator? Right now the doc string is (partially) replaced with that of the inner function of the decorator. def deco(fn): def x(*args, **kwargs): return fn(*args, **kwargs) x.func_doc = fn.func_doc x.func_name = fn.func_name return x @deco def y(a, b): """This is Y""" pass def z(c, d): """This is Z""" pass help(y) # 1 help(z) # 2 In the Y function, required arguments aren't shown in the help. The user may assume it takes any arguments, while actually it doesn't. y(*args, **kwargs) <= y(a, b) is desired This is Y z(c, d) This is Z I use help() and dir() a lot, since it's faster than pdf manuals, and want to make reliable document strings for my library and tools, but this is an obstacle. A: give the decorator module a peek. i believe it does exactly what you want. In [1]: from decorator import decorator In [2]: @decorator ...: def say_hello(f, *args, **kwargs): ...: print "Hello!" ...: return f(*args, **kwargs) ...: In [3]: @say_hello ...: def double(x): ...: return 2*x ...: and info says "double(x)" in it. A: What you're requesting is very hard to do "properly", because help gets the function signature from inspect.getargspec which in turn gets it from introspection which cannot directly be fooled -- to do it "properly" would mean generating a new function object on the fly (instead of a simple wrapper function) with the right argument names and numbers (and default values). Extremely hard, advanced, black-magic bytecode hacking required, in other words. I think it may be easier to do it by monkeypatching (never a pleasant prospect, but sometimes the only way to perform customization tasks that are otherwise so difficult as to prove almost impossible, like the one you require) -- replace the real inspect.getargspec with your own lookalike function which uses a look-aside table (mapping the wrapper functions you generate to the wrapped functions' argspecs and otherwise delegating to the real thing). import functools import inspect realgas = inspect.getargspec lookaside = dict() def fakegas(f): if f in lookaside: return lookaside[f] return realgas(f) inspect.getargspec = fakegas def deco(fn): @functools.wraps(fn) def x(*args, **kwargs): return fn(*args, **kwargs) lookaside[x] = realgas(fn) return x @deco def x(a, b=23): """Some doc for x.""" return a + b help(x) This prints, as required: Help on function x in module __main__: x(a, b=23) Some doc for x. (END)
How to keep help strings the same when applying decorators?
How can I keep help strings in functions to be visible after applying a decorator? Right now the doc string is (partially) replaced with that of the inner function of the decorator. def deco(fn): def x(*args, **kwargs): return fn(*args, **kwargs) x.func_doc = fn.func_doc x.func_name = fn.func_name return x @deco def y(a, b): """This is Y""" pass def z(c, d): """This is Z""" pass help(y) # 1 help(z) # 2 In the Y function, required arguments aren't shown in the help. The user may assume it takes any arguments, while actually it doesn't. y(*args, **kwargs) <= y(a, b) is desired This is Y z(c, d) This is Z I use help() and dir() a lot, since it's faster than pdf manuals, and want to make reliable document strings for my library and tools, but this is an obstacle.
[ "give the decorator module a peek. i believe it does exactly what you want.\nIn [1]: from decorator import decorator\nIn [2]: @decorator\n ...: def say_hello(f, *args, **kwargs):\n ...: print \"Hello!\"\n ...: return f(*args, **kwargs)\n ...: \nIn [3]: @say_hello\n ...: def double(x):\n ...: return 2*x\n ...: \n\nand info says \"double(x)\" in it.\n", "What you're requesting is very hard to do \"properly\", because help gets the function signature from inspect.getargspec which in turn gets it from introspection which cannot directly be fooled -- to do it \"properly\" would mean generating a new function object on the fly (instead of a simple wrapper function) with the right argument names and numbers (and default values). Extremely hard, advanced, black-magic bytecode hacking required, in other words.\nI think it may be easier to do it by monkeypatching (never a pleasant prospect, but sometimes the only way to perform customization tasks that are otherwise so difficult as to prove almost impossible, like the one you require) -- replace the real inspect.getargspec with your own lookalike function which uses a look-aside table (mapping the wrapper functions you generate to the wrapped functions' argspecs and otherwise delegating to the real thing).\nimport functools\nimport inspect\n\nrealgas = inspect.getargspec\n\nlookaside = dict()\n\ndef fakegas(f):\n if f in lookaside:\n return lookaside[f]\n return realgas(f)\n\ninspect.getargspec = fakegas\n\ndef deco(fn):\n @functools.wraps(fn)\n def x(*args, **kwargs):\n return fn(*args, **kwargs)\n lookaside[x] = realgas(fn)\n return x\n\n@deco\ndef x(a, b=23):\n \"\"\"Some doc for x.\"\"\"\n return a + b\n\nhelp(x)\n\nThis prints, as required:\nHelp on function x in module __main__:\n\nx(a, b=23)\n Some doc for x.\n(END)\n\n" ]
[ 5, 1 ]
[]
[]
[ "decorator", "documentation", "python" ]
stackoverflow_0001663568_decorator_documentation_python.txt
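For completeness, functools.wraps replaces the manual func_doc/func_name copying from the question, as sketched below; note that it does not solve what the asker actually wants, since help() will still display (*args, **kwargs). Preserving the visible signature needs the decorator module or the inspect monkeypatch shown above.

import functools

def deco(fn):
    @functools.wraps(fn)  # copies __doc__, __name__, __module__ and __dict__
    def x(*args, **kwargs):
        return fn(*args, **kwargs)
    return x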
Q: tokenizer errors with nltk I'm very new to Python, and am trying to learn in conjunction with using nltk. I've been following some examples and testing things out, but it seems I am very limited in what I can do due to errors being returned by python. I know nltk is installed and importing fine, because this code works from nltk.sem import chat80 print chat80.items However, 'from nltk.tokenizer import *' returns 'File "stdin", line1. I get similar errors when using any sort of "TOKEN=" or I'm guessing tokenization of anything. I've installed python many times in the last few days, hoping a different version or better install might help. I'm getting this error on windows7 using activePython2.6, though I've gotten similar err ors with python 3.1 activePython3.1 and Python 2.6. as well as on Mac OSx 10.5 with Python 2.5. The mac is giving a bit more data with "Import Error: No module named tokenizer. I'm just trying some of the introductory demos to nltk online, not even trying to write my own code yet, and I'm getting more errors than successes. A: Looks like the nltp package doesn't have a tokenizer package. A quick look on the NLTK website suggests that from nltp.tokenize import * is what you're after. A: Adam's answer may well be correct for your immediate "tokenizer" problem. Here's some general advice: It helps when one is in unfamiliar territory to read the road signs e.g. this at the top of the Downloads page: """Although Python 3.0 is now available, many packages that NLTK requires do not have distributions for Python 3.0. For now you should use NLTK with Python 2.4., 2.5., or 2.6.* only.""" ... that would have saved you the effort trying Python 3.1. Moreover, trying to learn Python 2.x and 3.x at the same time is a bit too much for a novice. """I've installed python many times in the last few days, hoping a different version or better install might help""" ... repeated installations of the same version is unlikely to help. """However, from nltk.tokenizer import * returns File "stdin", line1 """ ... when asking for help, show your input and ALL of the output e.g. >>> from nosuchthing import * Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named nosuchthing >>> and don't type from memory; use copy/paste. When faced with a problem, plan your investigation of possible causes. Look at those with high plausibility and low cost of investigation (e.g. typo or other transcription error) first. I can't recall where I read this advice, but it's worth remembering: "Before you blame acts of God and acts of Gates, check for acts of self".
tokenizer errors with nltk
I'm very new to Python, and am trying to learn in conjunction with using nltk. I've been following some examples and testing things out, but it seems I am very limited in what I can do due to errors being returned by python. I know nltk is installed and importing fine, because this code works from nltk.sem import chat80 print chat80.items However, 'from nltk.tokenizer import *' returns 'File "stdin", line 1'. I get similar errors when using any sort of "TOKEN=" or I'm guessing tokenization of anything. I've installed python many times in the last few days, hoping a different version or better install might help. I'm getting this error on Windows 7 using ActivePython 2.6, though I've gotten similar errors with Python 3.1, ActivePython 3.1 and Python 2.6, as well as on Mac OS X 10.5 with Python 2.5. The mac is giving a bit more data with "Import Error: No module named tokenizer." I'm just trying some of the introductory demos to nltk online, not even trying to write my own code yet, and I'm getting more errors than successes.
[ "Looks like the nltp package doesn't have a tokenizer package.\nA quick look on the NLTK website suggests that from nltp.tokenize import * is what you're after.\n", "Adam's answer may well be correct for your immediate \"tokenizer\" problem. Here's some general advice:\nIt helps when one is in unfamiliar territory to read the road signs e.g. this at the top of the Downloads page: \"\"\"Although Python 3.0 is now available, many packages that NLTK requires do not have distributions for Python 3.0. For now you should use NLTK with Python 2.4., 2.5., or 2.6.* only.\"\"\" ... that would have saved you the effort trying Python 3.1. Moreover, trying to learn Python 2.x and 3.x at the same time is a bit too much for a novice.\n\"\"\"I've installed python many times in the last few days, hoping a different version or better install might help\"\"\" ... repeated installations of the same version is unlikely to help.\n\"\"\"However, from nltk.tokenizer import * returns File \"stdin\", line1 \"\"\" ... when asking for help, show your input and ALL of the output e.g.\n>>> from nosuchthing import *\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nImportError: No module named nosuchthing\n>>>\n\nand don't type from memory; use copy/paste.\nWhen faced with a problem, plan your investigation of possible causes. Look at those with high plausibility and low cost of investigation (e.g. typo or other transcription error) first. I can't recall where I read this advice, but it's worth remembering: \"Before you blame acts of God and acts of Gates, check for acts of self\".\n" ]
[ 3, 0 ]
[]
[]
[ "nltk", "python" ]
stackoverflow_0001663762_nltk_python.txt
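For the record, tokenization in the NLTK releases of this era looks like the sketch below; module layouts are version-dependent, so treat the exact import as an assumption to check against the installed release.

import nltk

text = "This is a sample sentence."
print nltk.word_tokenize(text)
# expected output, roughly: ['This', 'is', 'a', 'sample', 'sentence', '.']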
Q: What should I install in order to be able to use GTK in Python on Ubuntu? In my source code I have: import gtk But when I run the script with python3 script.py command I get the following error. What package should I install to get it working? Edit: my bad. here is the error: ImportError: No module named gtk Edit2: Thanks for the answer, kaizer.se. But I'm still getting an error message. Take a look at the following code: import pygtk, gtk pygtk.require('2.0') def main(): win = gtk.Window(gtk.WINDOW_TOPLEVEL) s = u"привет 한국" win.set_title(s) win.connect("destroy", gtk.main_quit) win.show() if __name__ == "__main__": main() gtk.main() When I run this script I get the following error: SyntaxError: Non-ASCII character '\xd0' in file basics.py on line 6, but no encoding declared; see python.org/peps/pep-0263.html for details Any idea how I may solve this problem? Thanks. A: PyGtk doesn't support Python 3 yet. You might want to use Python 2.x and then you will need to install the python-gtk2 package. A: There is no big difference between how Python 3 and Python 2.6 handle unicode and international text, technically. The biggest difference is what the classes are called and what the defaults are. So if you in Python 3 write: s = "Grüß Gott" you take this in Python 2.x: # coding: UTF-8 s = u"Grüß Gott" PyGTK always works with the UTF-8 encoding internally, and you can pass it unicode strings or UTF-8-encoded strings however you want. The best model is to always work with unicode strings internally, and always convert strings as soon as they enter your program (say, you read a file). Again, in Python 3 this is more explicitly enforced, but the process is really exactly the same. Addessing the updated question: I have already answered, Look closely at my Py 3 and Py 2.x examples! Hint: You must specify an encoding on the first line of every file, like this: # coding: UTF-8 You must also ask yourself, Baha, when you see an error message like this: SyntaxError: Non-ASCII character '\xd0' in file basics.py on line 6, but no encoding declared; see python.org/peps/pep-0263.html for details Did you read the message and see it says "no encoding declared"? Did you follow the link and read the information there? It would have been easy to solve this yourself.
What should I install in order to be able to use GTK in Python on Ubuntu?
In my source code I have: import gtk But when I run the script with python3 script.py command I get the following error. What package should I install to get it working? Edit: my bad. here is the error: ImportError: No module named gtk Edit2: Thanks for the answer, kaizer.se. But I'm still getting an error message. Take a look at the following code: import pygtk, gtk pygtk.require('2.0') def main(): win = gtk.Window(gtk.WINDOW_TOPLEVEL) s = u"привет 한국" win.set_title(s) win.connect("destroy", gtk.main_quit) win.show() if __name__ == "__main__": main() gtk.main() When I run this script I get the following error: SyntaxError: Non-ASCII character '\xd0' in file basics.py on line 6, but no encoding declared; see python.org/peps/pep-0263.html for details Any idea how I may solve this problem? Thanks.
[ "PyGtk doesn't support Python 3 yet. You might want to use Python 2.x and then you will need to install the python-gtk2 package.\n", "There is no big difference between how Python 3 and Python 2.6 handle unicode and international text, technically. The biggest difference is what the classes are called and what the defaults are.\nSo if you in Python 3 write:\ns = \"Grüß Gott\"\n\nyou take this in Python 2.x:\n# coding: UTF-8\ns = u\"Grüß Gott\"\n\nPyGTK always works with the UTF-8 encoding internally, and you can pass it unicode strings or UTF-8-encoded strings however you want.\nThe best model is to always work with unicode strings internally, and always convert strings as soon as they enter your program (say, you read a file). Again, in Python 3 this is more explicitly enforced, but the process is really exactly the same.\nAddessing the updated question: I have already answered, Look closely at my Py 3 and Py 2.x examples! Hint: You must specify an encoding on the first line of every file, like this: # coding: UTF-8\nYou must also ask yourself, Baha, when you see an error message like this:\n\nSyntaxError: Non-ASCII character\n '\\xd0' in file basics.py on line 6,\n but no encoding declared; see\n python.org/peps/pep-0263.html for\n details\n\nDid you read the message and see it says \"no encoding declared\"? Did you follow the link and read the information there? It would have been easy to solve this yourself.\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001663109_python.txt
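The asker's script with the missing encoding declaration added, as a sketch; the coding line is the whole fix for the SyntaxError, and pygtk.require() is moved before the gtk import, which is the conventional order.

# coding: UTF-8
import pygtk
pygtk.require('2.0')
import gtk

def main():
    win = gtk.Window(gtk.WINDOW_TOPLEVEL)
    win.set_title(u"привет 한국")  # fine now that the file declares its encoding
    win.connect("destroy", gtk.main_quit)
    win.show()

if __name__ == "__main__":
    main()
    gtk.main()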
Q: What does "evaluated only once" mean for chained comparisons in Python? A friend brought this to my attention, and after I pointed out an oddity, we're both confused. Python's docs, say, and have said since at least 2.5.1 (haven't checked further back: Comparisons can be chained arbitrarily, e.g., x < y <= z is equivalent to x < y and y <= z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false). Our confusion lies in the meaning of "y is evaluated only once". Given a simple but contrived class: class Magic(object): def __init__(self, name, val): self.name = name self.val = val def __lt__(self, other): print("Magic: Called lt on {0}".format(self.name)) if self.val < other.val: return True else: return False def __le__(self, other): print("Magic: Called le on {0}".format(self.name)) if self.val <= other.val: return True else: return False We can produce this result: >>> x = Magic("x", 0) >>> y = Magic("y", 5) >>> z = Magic("z", 10) >>> >>> if x < y <= z: ... print ("More magic.") ... Magic: Called lt on x Magic: Called le on y More magic. >>> This certainly looks like 'y' is, in a traditional sense "evaluated" twice -- once when x.__lt__(y) is called and performs a comparison on it, and once when y.__le__(z) is called. So with this in mind, what exactly do the Python docs mean when they say "y is evaluated only once"? A: The 'expression' y is evaluated once. I.e., in the following expression, the function is executed only one time. >>> def five(): ... print 'returning 5' ... return 5 ... >>> 1 < five() <= 5 returning 5 True As opposed to: >>> 1 < five() and five() <= 5 returning 5 returning 5 True A: In the context of y being evaluated, y is meant as an arbitrary expression that could have side-effects. For instance: class Foo(object): @property def complain(self): print("Evaluated!") return 2 f = Foo() print(1 < f.complain < 3) # Prints evaluated once print(1 < f.complain and f.complain < 3) # Prints evaluated twice
What does "evaluated only once" mean for chained comparisons in Python?
A friend brought this to my attention, and after I pointed out an oddity, we're both confused. Python's docs say, and have said since at least 2.5.1 (haven't checked further back): Comparisons can be chained arbitrarily, e.g., x < y <= z is equivalent to x < y and y <= z, except that y is evaluated only once (but in both cases z is not evaluated at all when x < y is found to be false). Our confusion lies in the meaning of "y is evaluated only once". Given a simple but contrived class: class Magic(object): def __init__(self, name, val): self.name = name self.val = val def __lt__(self, other): print("Magic: Called lt on {0}".format(self.name)) if self.val < other.val: return True else: return False def __le__(self, other): print("Magic: Called le on {0}".format(self.name)) if self.val <= other.val: return True else: return False We can produce this result: >>> x = Magic("x", 0) >>> y = Magic("y", 5) >>> z = Magic("z", 10) >>> >>> if x < y <= z: ... print ("More magic.") ... Magic: Called lt on x Magic: Called le on y More magic. >>> This certainly looks like 'y' is, in a traditional sense "evaluated" twice -- once when x.__lt__(y) is called and performs a comparison on it, and once when y.__le__(z) is called. So with this in mind, what exactly do the Python docs mean when they say "y is evaluated only once"?
[ "The 'expression' y is evaluated once. I.e., in the following expression, the function is executed only one time.\n>>> def five():\n... print 'returning 5'\n... return 5\n... \n>>> 1 < five() <= 5\nreturning 5\nTrue\n\nAs opposed to:\n>>> 1 < five() and five() <= 5\nreturning 5\nreturning 5\nTrue\n\n", "In the context of y being evaluated, y is meant as an arbitrary expression that could have side-effects. For instance:\nclass Foo(object):\n @property\n def complain(self):\n print(\"Evaluated!\")\n return 2\n\nf = Foo()\nprint(1 < f.complain < 3) # Prints evaluated once\nprint(1 < f.complain and f.complain < 3) # Prints evaluated twice\n\n" ]
[ 45, 8 ]
[]
[]
[ "python" ]
stackoverflow_0001664292_python.txt
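One more way to see the single evaluation, for the curious: disassembling the chained form shows y loaded once and then duplicated on the stack (a DUP_TOP instruction) rather than loaded twice. Exact instruction names vary by CPython version, so treat them as illustrative.

import dis
dis.dis(compile('x < y <= z', '<test>', 'eval'))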
Q: How do you set the text direction for a TextTable Cell in OpenOffice? I want to set the text direction for some cells in a TextTable so that they are vertical (i.e., the text is landscape instead of portrait). You can do this in Writer by selecting the cell(s), and going to: Table - Text Properties - Text Flow - Text Direction However, I cannot figure out how to do this through the API. I tried using CharRotation, but it does not behave the right way. CharRotation simply takes the text, and rotates it (without adjusting any formatting). The text I am dealing with is formatted by tab stops, and does not behave correctly when rotated this way. A: I finally figured this out after all these months! You have to set the "WritingMode" property for the cell. In C#: XCell cell = table.getCellByName(cellName); ((XPropertySet)cell).setPropertyValue("WritingMode", new Any((short) WritingMode.TB_RL)); I haven't tried it in python yet, but I suppose it would be something like this: cell = table.getCellByName(cellName) cell.WritingMode = 2 If you're using a statically typed language, make sure you cast it to a short. Doing typeof(WritingMode) won't work, for some odd reason. See this issue in the OOo bug tracker.
How do you set the text direction for a TextTable Cell in OpenOffice?
I want to set the text direction for some cells in a TextTable so that they are vertical (i.e., the text is landscape instead of portrait). You can do this in Writer by selecting the cell(s), and going to: Table - Text Properties - Text Flow - Text Direction However, I cannot figure out how to do this through the API. I tried using CharRotation, but it does not behave the right way. CharRotation simply takes the text, and rotates it (without adjusting any formatting). The text I am dealing with is formatted by tab stops, and does not behave correctly when rotated this way.
[ "I finally figured this out after all these months!\nYou have to set the \"WritingMode\" property for the cell. In C#:\nXCell cell = table.getCellByName(cellName);\n((XPropertySet)cell).setPropertyValue(\"WritingMode\", new Any((short) \nWritingMode.TB_RL));\n\nI haven't tried it in python yet, but I suppose it would be something like this:\ncell = table.getCellByName(cellName)\ncell.WritingMode = 2\n\nIf you're using a statically typed language, make sure you cast it to a short. Doing typeof(WritingMode) won't work, for some odd reason.\nSee this issue in the OOo bug tracker.\n" ]
[ 0 ]
[]
[]
[ "c#", "openoffice.org", "openoffice_writer", "python", "uno" ]
stackoverflow_0000898739_c#_openoffice.org_openoffice_writer_python_uno.txt
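An untested PyUNO sketch extending the answer's own Python guess; it assumes the com.sun.star.text.WritingMode2 constant group exposes TB_RL == 2, and 'A1' is a placeholder cell name.

from com.sun.star.text.WritingMode2 import TB_RL  # assumed: TB_RL == 2

cell = table.getCellByName("A1")  # placeholder cell name
cell.WritingMode = TB_RL          # top-to-bottom flow, i.e. vertical text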
Q: bencoding binary data in Java strings I'm playing with bencoding and I would like to keep bencoded strings as Java strings, but they contain binary data, so blindly converting them to string will corrupt the data. What I am trying to accomplish is to have a conversion function that will keep the ASCII bytes as ASCII and encode non-ASCII chars in a reversible way. I have found some examples of what I am trying to accomplish in Python but I don't know enough Python to dig through them. This decoder does exactly what I would like to do: ASCII parts of the torrent stay as ASCII, but sha1 hashes are printed as "\xd8r\xe7". Though my Python knowledge is very limited, he doesn't seem to be doing anything special to the string; is this handled by the Python interpreter? Can I accomplish the same in Java? I have played with some encodings such as Base64 or using Integer.toHexString, but I get unreadable ASCII strings in the end. I have also found a scheme example that prints everything but the sha1 hashes. A: Bencoded strings are byte strings. You can attempt to decode a byte string to unicode codepoints in Java with String(byte[] bytes, Charset charset). Decoding with certain encodings such as ISO-8859-1 will always succeed, since any byte maps directly to a codepoint. With many of these encodings (including ISO-8859-1) the process is also reversible. A: If Wikipedia is accurate on Bencode, the format seems straightforward enough. Parse the byte data directly: while (true) { in.mark(1); int n = in.read(); if (n < 0) { // end of input break; } in.reset(); // take advantage of some UTF-16 values == ASCII values if (n == 'd') { // parse dictionary } else if (n == 'i') { // parse int } else if (n >= '0' && n <= '9') { // parse binary string } else if (n == 'l') { // parse list } else { throw new IOException("Invalid input"); } Store the binary strings in a type that only converts them to ASCII when you do it explicitly, as in this toString call: public class ByteString { private final byte[] data; public ByteString(byte[] data) { this.data = data.clone(); } public byte[] getData() { return data.clone(); } @Override public String toString() { return new String(data, Charset.forName("US-ASCII")); } }
bencoding binary data in Java strings
I'm playing with bencoding and I would like to keep bencoded strings as Java strings, but they contain binary data, so blindly converting them to string will corrupt the data. What I am trying to accomplish is to have a conversion function that will keep the ASCII bytes as ASCII and encode non-ASCII chars in a reversible way. I have found some examples of what I am trying to accomplish in Python but I don't know enough Python to dig through them. This decoder does exactly what I would like to do: ASCII parts of the torrent stay as ASCII, but sha1 hashes are printed as "\xd8r\xe7". Though my Python knowledge is very limited, he doesn't seem to be doing anything special to the string; is this handled by the Python interpreter? Can I accomplish the same in Java? I have played with some encodings such as Base64 or using Integer.toHexString, but I get unreadable ASCII strings in the end. I have also found a scheme example that prints everything but the sha1 hashes.
[ "Bencoded strings are byte strings. You can attempt to decode a byte string to unicode codepoints in Java with String(byte[] bytes, Charset charset). Decoding with certain encodings such as ISO-8859-1 will always succeed, since any byte maps directly to a codepoint. With many of these encodings (including ISO-8859-1) the process is also reversible.\n", "If Wikipedia is accurate on Bencode, the format seems straightforward enough. Parse the byte data directly:\nwhile (true) {\n in.mark(1);\n int n = in.read();\n if (n < 0) {\n // end of input\n break;\n }\n in.reset();\n // take advantage of some UTF-16 values == ASCII values\n if (n == 'd') {\n // parse dictionary\n } else if (n == 'i') {\n // parse int\n } else if (n >= '0' && n <= '9') {\n // parse binary string\n } else if (n == 'l') {\n // parse list\n } else {\n throw new IOException(\"Invalid input\");\n }\n\nStore the binary strings in a type that only converts them to ASCII when you do it explicitly, as in this toString call:\npublic class ByteString {\n private final byte[] data;\n\n public ByteString(byte[] data) { this.data = data.clone(); }\n public byte[] getData() { return data.clone(); }\n\n @Override public String toString() {\n return new String(data, Charset.forName(\"US-ASCII\"));\n }\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ "encoding", "java", "python" ]
stackoverflow_0001664124_encoding_java_python.txt
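Since the asker mentions the Python examples were hard to follow, here is a compact Python 2 bencode decoder sketch; it keeps bencoded strings as raw byte strings (Python 2 str), which is exactly how the SHA1 piece hashes survive intact.

def bdecode(data, i=0):
    # returns (value, index_after_value); strings stay as raw byte strings
    c = data[i]
    if c == 'i':                          # integer: i<digits>e
        end = data.index('e', i)
        return int(data[i + 1:end]), end + 1
    if c == 'l':                          # list: l<items>e
        i += 1
        items = []
        while data[i] != 'e':
            value, i = bdecode(data, i)
            items.append(value)
        return items, i + 1
    if c == 'd':                          # dictionary: d<key><value>...e
        i += 1
        result = {}
        while data[i] != 'e':
            key, i = bdecode(data, i)
            value, i = bdecode(data, i)
            result[key] = value
        return result, i + 1
    colon = data.index(':', i)            # string: <length>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

torrent = bdecode(open('some.torrent', 'rb').read())[0]  # hypothetical file name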
Q: Parsing HTML generated from Legacy ASP Application to create ASP.NET 2.0 Pages One of my friends is looking for a good solution to generate aspx pages out of the html pages generated by a legacy asp application. The idea is to run the legacy app, capture the html output, clean the html using some tool (say HtmlTidy) and parse/transform it to aspx (using Xslt or a custom tool) so that existing html elements, divs, images, styles etc. get converted neatly to an aspx page (too much ;) ). Any existing tools/scripts/utilities to do the same?
A: Here's what you do.

Define what the legacy app is supposed to do. Write down the scenarios of getting pages, posting forms, navigating, etc.
Write unit test-like scripts for the various scenarios.
Use the Python HTTP client library to exercise the legacy app in your various scripts.
If your scripts work, you (a) actually understand the legacy app, (b) can make it do the various things it's supposed to do, and (c) you can reliably capture the HTML response pages.
Update your scripts to capture the HTML responses.

You have the pages. Now you can think about what you need for your ASPX pages.

Edit the HTML by hand to make it into ASPX.
Write something that uses Beautiful Soup to massage the HTML into a form suitable for ASPX. This might be some replacement of text or tags with <asp:... tags.
Create some other, more useful data structure out of the HTML -- one that reflects the structure and meaning of the pages, not just the HTML tags. Generate the ASPX pages from that more useful structure.

A: Just found the HTML Agility Pack to be useful enough, as they understand C# better than Python.
A: I know this is an old question, but in a similar situation (50k+ legacy ASP pages that need to display in a .NET framework), I did the following.

Created a rewrite engine (HttpModule) which catches all incoming requests and looks for anything that is from the old site.
(in a separate class - keep things organized!) use WebClient or HttpRequest, etc. to open a connection to the old server and download the rendered HTML.
Use the HTML Agility Pack (very slick) to extract the content that I'm interested in - in our case, this is always inside of a div with the class "bdy".
Throw this into a cache - a SQL table in this example.

Each hit checks the cache and either a) retrieves the page and builds the cache entry, or b) just gets the page from the cache.

An aspx page built specifically for displaying legacy content receives the rewrite request and displays the relevant content from the legacy page inside of an asp literal control.

The cache is there for performance - since the first request for a given page has a minimum of two hits - one from the browser to the new server, one from the new server to the old server - I store cacheable data on the new server so that subsequent requests don't have to go back to the old server. We also cache images, css, scripts, etc.
It gets messy when you have to handle forms, cookies, etc, but these can all be stored in your cache and passed through to the old server with each request if necessary. I also store content expiration dates and other headers that I get back from the legacy server and am sure to pass those back to the browser when rendering the cached page. Just remember to take as content-agnostic an approach as possible. You're effectively building an in-page web proxy that lets IIS render old ASP the way it wants, and manipulating the output.
Works very well - I have all of the old pages working seamlessly within our ASP.NET app. This saved us a solid year of development time that would have been required if we had to touch every legacy asp page.
Good luck!
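To make the capture-and-extract step above concrete, here is a minimal Python 2 sketch using urllib2 and Beautiful Soup; the URL, the "bdy" class name, and the output file are placeholder assumptions, not details confirmed by the answers:

import urllib2
from BeautifulSoup import BeautifulSoup

# Fetch the HTML that the legacy ASP app renders for one page.
html = urllib2.urlopen("http://legacy-server/somepage.asp").read()

# Parse it and keep only the content div; everything else is
# legacy chrome we want to discard before generating ASPX.
soup = BeautifulSoup(html)
body_div = soup.find("div", {"class": "bdy"})

# Save the extracted fragment so a later step can wrap it in an
# .aspx template or serve it from a cache.
open("captured_fragment.html", "w").write(str(body_div))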
Parsing HTML generated from Legacy ASP Application to create ASP.NET 2.0 Pages
One of my friends is looking for a good solution to generate aspx pages out of the html pages generated by a legacy asp application. The idea is to run the legacy app, capture the html output, clean the html using some tool (say HtmlTidy) and parse/transform it to aspx (using Xslt or a custom tool) so that existing html elements, divs, images, styles etc. get converted neatly to an aspx page (too much ;) ). Any existing tools/scripts/utilities to do the same?
[ "Here's what you do.\n\nDefine what the legacy app is supposed to do. Write down the scenarios of getting pages, posting forms, navigating, etc.\nWrite unit test-like scripts for the various scenarios.\nUse the Python HTTP client library to exercise the legacy app in your various scripts.\nIf your scripts work, you (a) actually understand the legacy app, (b) can make it do the various things it's supposed to do, and (c) you can reliably capture the HTML response pages.\nUpdate your scripts to capture the HTML responses.\n\nYou have the pages. Now you can think about what you need for your ASPX pages.\n\nEdit the HTML by hand to make it into ASPX.\nWrite something that uses Beautiful Soup to massage the HTML into a form suitable for ASPX. This might be some replacement of text or tags with <asp:... tags.\nCreate some other, more useful data structure out of the HTML -- one that reflects the structure and meaning of the pages, not just the HTML tags. Generate the ASPX pages from that more useful structure.\n\n", "Just found HTML agility pack to be useful enough, as they understand C# better than python. \n", "I know this is an old question, but in a similar situation (50k+ legacy ASP pages that need to display in a .NET framework), I did the following.\n\nCreated a rewrite engine (HttpModule) which catches all incoming requests and looks for anything that is from the old site.\n(in a separate class - keep things organized!) use WebClient or HttpRequest, etc to open a connection to the old server and download the rendered HTML.\nUse the HTML agility toolkit (very slick) to extract the content that I'm interested in - in our case, this is always inside if a div with the class \"bdy\".\nThrow this into a cache - a SQL table in this example.\n\nEach hit checks the cache and either a)retrieves the page and builds the cache entry, or b) just gets the page from the cache.\n\nAn aspx page built specifically for displaying legacy content receives the rewrite request and displays the relevant content from the legacy page inside of an asp literal control.\n\nThe cache is there for performance - since the first request for a given page has a minimum of two hits - one from the browser to the new server, one from the new server to the old server - I store cachable data on the new server so that subsequent requests don't have to go back to the old server. We also cache images, css, scripts, etc.\nIt gets messy when you have to handle forms, cookies, etc, but these can all be stored in your cache and passed through to the old server with each request if necessary. I also store content expiration dates and other headers that I get back from the legacy server and am sure to pass those back to the browser when rendering the cached page. Just remember to take as content-agnostic an approach as possible. You're effectively building an in-page web proxy that lets IIS render old ASP the way it wants, and manipulating the output. \nWorks very well - I have all of the old pages working seamlessly within our ASP.NET app. This saved us a solid year of development time that would have been required if we had to touch every legacy asp page.\nGood luck!\n" ]
[ 2, 0, 0 ]
[]
[]
[ ".net", "c#", "html", "python" ]
stackoverflow_0000565264_.net_c#_html_python.txt
Q: Significant whitespace in C# like Python or Haskell? I'm wondering if any other C# developers would find it an improvement to have a compiler directive for csc.exe to make whitespace significant a la Haskell or Python, where the kinds of whitespace create code blocks. While this would certainly be a massive departure from C-style languages, it seems to me that since C# is ultimately being compiled down to CIL (which would still have the curly braces and semicolons), it really is just a parsing trick the compiler can handle either way (that is, it can either deal with significant whitespace or not). Since curlies and semicolons are often a barrier to entry to C#, and they are really only parsing helpers (they don't in themselves impart meaning to your code), they could be removed a la Haskell/Python. F# handles this with the #light compiler directive, which you can read about in Lightweight syntax option in F# 1.1.12.3. I'd like to see the same thing in C#: a #SigSpace or some such directive that would direct csc.exe to treat the source like a Haskell file in terms of whitespace (just as an example).
Standard C#:

public void WhiteSpaceSig()
{
    List<string> names = new List<string>();
    List<string> colors = new List<string>();
    foreach (string name in names)
    {
        foreach (string color in colors)
        {
            // bla bla bla
        }
    }
}

Significant whitespace:

#SigSpace
public void WhiteSpaceSig()
    List<string> names = new List<string>()
    List<string> colors = new List<string>()
    foreach (string name in names)
        foreach (string color in colors)
            // bla bla bla

I'm not saying that I want this in C#, but I am interested in what the tradeoffs are. My guess is that most C# developers have gotten so used to the syntax that they won't be able to see how artificial it is (though it may in the end make the code easier to read).
A: If you want this syntax, why not just use IronPython or Boo instead of C#?
It seems better to implement a custom language for this, instead of trying to tweak C#. As you said, they all compile to the same IL, so there's no reason to change a good, clean working syntax to implement what would essentially be a new language grammar.
A: As a mainly Python developer, I would love to see more languages adopting significant whitespace for delimiting blocks.
If you search the newsgroups, you will find plenty of opinions from C, C++, C#, Java and similar developers. My feeling is that many of them really like the curly braces.
Having a mixture of styles would be a pain though.
I regularly use curly brace languages too, so I can see both sides.
A: You might be interested in Kirill Osenkov's thesis, Designing, implementing and integrating a structured C# code editor.
The underlying idea is that while the braces are part of the C# language as defined, your editor doesn't have to show them to you. Osenkov implemented an editor control for SharpDevelop that represents brace pairs as indentation, and makes it faster for the programmer to work with the structure of the code. Jump to page 113 in the linked document to see a good example.
A: No. Curlies remove any possibility of ambiguity on the part of the reader. Humans don't distinguish well between different kinds of whitespace (I mean, just think about that - "different kinds of whitespace"!). And by humans, I mean me. Which is why I like C# :)
Some languages have philosophies behind them that embrace some kinds of ambiguity. C# is not one of them.
A: I can't think of anything worse!
Especially having the choice between the two. Every time you were reading someone else's code you'd have to be familiar with both notations to make sense of it, and heaven forbid they should switch between the two - what a nightmare!
It would remove all consistency, and lead to many developers shouting many more WTFs.
Then there's the whole holy war on whitespace vs brackets - which I won't even comment on.
A: Having been a C#/Java developer my entire career, looking at C# code with significant whitespace would drive me nuts.
If you're familiar with brackets, it makes code MUCH more readable and really helps you figure out what the code is doing.
A: That would not be C#, it would be a different language like IronPython.
A: If this was an option, I would never use it.
Specifically, I like the way Visual Studio parses the curly braces, allowing for block collapse/expand; placing the caret next to a curly brace highlights the corresponding closing/opening curly brace.
Human readability is also an issue. It's easier to distinguish curly braces between words than it is to distinguish white space.
A: I work on a programming language developed by my company 30+ years ago. We are constantly haggling over questions like this. Any change, or even addition, introduces not just a chance for improvement, but also a chance for errors and misunderstandings.
Even in the best case scenario, this doesn't solve any problem. You're just trading one arbitrary set of code block identifiers for another, which nullifies any gains (if there were any with the new syntax, which isn't even established).
Much more likely, you are trading off a well-known, well-established set of rules for another, not so well-known one, introducing the chances of errors and misconceptions.
Even just adding this as an option introduces a greater chance of errors, since you could now have 2 syntaxes to write the same code and possibly even for the same team of developers to work with.
Significant whitespace in C# like Python or Haskell?
I'm wondering if any other C# developers would find it an improvement to have a compiler directive for csc.exe to make whitespace significant a la Haskell or Python where the kinds of whitespace create code blocks. While this would certainly be a massive departure from C-style languages, it seems to me that since C# is ultimately being compiled down to CIL (which would still have the curly braces and semicolons), it really is just a parsing trick the compiler can handle either way (that is, it can either deal with significant whitespaces or not). Since curlies and semicolons are often a barrier to entry to C# & they are really only parsing helpers (they don't in themselves impart meaning to your code), they could be removed a la Haskell/Python. F# handles this with the #light compiler directive which you can read about in Lightweight syntax option in F# 1.1.12.3. I'd like to see the same thing in C#: a #SigSpace or somesuch directive that would direct csc.exe to treat the source like a Haskell file in terms of whitespace (just as an example). Standard C#: public void WhiteSpaceSig() { List<string> names = new List<string>(); List<string> colors = new List<string>(); foreach (string name in names) { foreach (string color in colors) { // bla bla bla } } } Significant whitespace: #SigSpace public void WhiteSpaceSig() List<string> names = new List<string>() List<string> colors = new List<string>() foreach (string name in names) foreach (string color in colors) // bla bla bla I'm not saying that I want this in C#, but I am interested in what the tradeoffs are. My guess is that most C# developers have gotten so used to the syntax that they won't be able to see how artificial it is (though it may in the end make the code easier to read).
[ "If you want this syntax, why not just use IronPython or Boo instead of C#?\nIt seems better to implement a custom language for this, instead of trying to tweak C#. As you said, they all compile to the same IL, so there's no reason to change a good, clean working syntax to implement what would essentially be a new language grammar.\n", "As a mainly Python developer, I would love to see more languages adopting significant whitespace for delimiting blocks.\nIf you search the newsgroups, you will find plenty of opinions of C,C++,C#,Java and so on developers. My feeling is that many of them really like the curly braces.\nHaving a mixture of styles would be a pain though.\nI regularly use curly brace languages too, so I can see both sides\n", "You might be interested in Kirill Osenkov's thesis, Designing, implementing and integrating\na structured C# code editor. \nThe underlying idea is that while the braces are part of the C# language as defined, your editor doesn't have to show them to you. Osenkov implemented an editor control for SharpDevelop that represents brace pairs as indentation, and makes it faster for the programmer to work with the structure of the code. Jump to page 113 in the linked document to see a good example.\n", "No. Curlies remove any possibility of ambiguity on the part of the reader. Humans don't distinguish well between different kinds of whitespace (I mean, just think about that - \"different kinds of whitespace\"!). And by humans, I mean me. Which is why I like C# :)\nSome languages have philosophies behind them that embrace some kinds of ambiguity. C# is not one of them.\n", "I can't think of anything worse!\nEspecially having the option of the two. Every time you were reading someone else's code you'd have to be familiar with both notations to make sense of it, and heaven forbid they should switch between the two - what a nightmare!\nIt would remove all consistency, and lead to many developers shouting many more WTFS.\nThen there's the whole holy war on whitespace vs brackets - which I won't even comment on.\n", "Having been a C#/Java developer my entire career, looking at C# code with significant whitespace would drive me nuts.\nIf you're familiar with brackets, it makes code MUCH more read-able and really helps you figure out what the code is doing.\n", "That would not be C#, it would be a different language like Iron Python.\n", "If this was an option, I would never use it.\nSpecifically, I like the way Visual Studio parses the curly braces allowing for block collapse/expand, placing the caret next to a curly brace highlights the corresponding closing/opening curly brace.\nHuman readability is also an issue. Its easier to distinguish curly braces between words than it is to distinguish white space. \n", "I work on a programming language developed by my company 30+ years ago. We are constantly haggling with questions like this. Any change, or even addition, introduces not just a chance for improvement, but also a chance for errors and misunderstandings.\nEven in the best case scenario, this doesn't solve any problem. You're just trading one arbitrary set of code block identifiers with another, which nullifies any gains (if there were any with the new syntax, which isn't even established).\nMuch more likely, you are trading off a well-known well-established set of rules with another, not so well-known one, introducing the chances of errors and misconceptions. 
Even just adding this as an option introduces a greater chance of errors since you could now have 2 syntaxes to write the same code and possibly even for the same team of developers to work with.\n" ]
[ 11, 3, 3, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "c#", "haskell", "python" ]
stackoverflow_0001664394_c#_haskell_python.txt
Q: Python name grabber If I have a string in the format of

(static string) name (different static string ) message (last static string)
(static string) name (different static string ) message (last static string)
(static string) name (different static string ) message (last static string)
(static string) name (different static string ) message (last static string)

what would be the best way of searching through the messages for a word and generating an array of all of the names that had that word in their message?
A: >>> s="(static string) name (different static string ) message (last static string)"
>>> _,_,s=s.partition("(static string)")
>>> name,_,s=s.partition("(different static string )")
>>> message,_,s=s.partition("(last static string)")
>>> name
' name '
>>> message
' message '

A: Expecting this string:
Foo NameA Bar MessageA Baz
this regex will match:
Foo\s+(\w+)\s+Bar\s+(\w+)\s+Baz
Group 1 will be the name, group 2 will be the message. Foo, Bar and Baz are the static parts.
Here it is using the Python REPL:
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> s = "Foo NameA Bar MessageA Baz"
>>> m = re.match("Foo\s+(\w+)\s+Bar\s+(\w+)\s+Baz", s)
>>> m.group(0)
'Foo NameA Bar MessageA Baz'
>>> m.group(1)
'NameA'
>>> m.group(2)
'MessageA'
>>>

A: Here's a full answer showing how to do it using replace().

strings = ['(static string) name (different static string ) message (last static string)',
           '(static string) name (different static string ) message (last static string)',
           '(static string) name (different static string ) message (last static string)',
           '(static string) name (different static string ) message (last static string)',
           '(static string) name (different static string ) message (last static string)',
           '(static string) name (different static string ) message (last static string)']

results = []
target_word = 'message'
separators = ['(static string)', '(different static string )', '(last static string)']

for s in strings:
    for sep in separators:
        s = s.replace(sep, '')
    name, message = s.split()
    if target_word in message:
        results.append((name, message))

>>> results
[('name', 'message'), ('name', 'message'), ('name', 'message'), ('name', 'message'), ('name', 'message'), ('name', 'message')]

Note that this will match any message that contains the substring target_word. It will not look for word boundaries, e.g. compare a run of this with target_word = 'message' vs. target_word = 'sag' - both will produce the same results. You may need regular expressions if your word matching is more complicated.
A: for line in open("file"):
    line=line.split(")")
    for item in line:
        try:
            print item[:item.index("(")]
        except:pass

output
$ more file
(static string) name (different static string ) message (last static string)
(static string) name (different static string ) message (last static string)
(static string) name (different static string ) message (last static string)
(static string) name (different static string ) message (last static string)
$ python python.py

 name
 message

 name
 message

 name
 message

 name
 message
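Combining the ideas above into a direct answer to the question - collect the names whose message contains a given word - here is a hedged sketch; the sample names and messages are stand-ins, and only the literal "(static string)" markers from the question are assumed as delimiters:

import re

text = ('(static string) alice (different static string ) hello world (last static string)\n'
        '(static string) bob (different static string ) goodbye world (last static string)\n')

# One capture group for the name, one for the message.
pattern = re.compile(r'\(static string\)\s*(.*?)\s*'
                     r'\(different static string \)\s*(.*?)\s*'
                     r'\(last static string\)')

word = 'hello'
names = [name for name, message in pattern.findall(text)
         if word in message.split()]
print names   # ['alice']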
Python name grabber
If I have a string in the format of (static string) name (different static string ) message (last static string) (static string) name (different static string ) message (last static string) (static string) name (different static string ) message (last static string) (static string) name (different static string ) message (last static string) what would be the best way of searching through the messages for a word and generating an array of all of the names that had that word in their message?
[ ">>> s=\"(static string) name (different static string ) message (last static string)\"\n>>> _,_,s=s.partition(\"(static string)\")\n>>> name,_,s=s.partition(\"(different static string )\")\n>>> message,_,s=s.partition(\"(last static string)\")\n>>> name\n' name '\n>>> message\n' message '\n\n", "Expecting this string:\nFoo NameA Bar MessageA Baz\nthis regex will match:\nFoo\\s+(\\w+)\\s+Bar\\s+(\\w+)\\s+Baz\nGroup 1 will be the name, group 2 will be the message. FooBarBaz are the static parts.\nHere it is using the repl of Python:\nPython 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import re\n>>> s = \"Foo NameA Bar MessageA Baz\"\n>>> m = re.match(\"Foo\\s+(\\w+)\\s+Bar\\s+(\\w+)\\s+Baz\", s)\n>>> m.group(0)\n'Foo NameA Bar MessageA Baz'\n>>> m.group(1)\n'NameA'\n>>> m.group(2)\n'MessageA'\n>>> \n\n", "Here's a full answer showing how to do it using replace(). \nstrings = ['(static string) name (different static string ) message (last static string)',\n '(static string) name (different static string ) message (last static string)',\n '(static string) name (different static string ) message (last static string)',\n '(static string) name (different static string ) message (last static string)',\n '(static string) name (different static string ) message (last static string)',\n '(static string) name (different static string ) message (last static string)']\n\nresults = []\ntarget_word = 'message'\nseparators = ['(static string)', '(different static string )', '(last static string)']\n\nfor s in strings:\n for sep in separators:\n s = s.replace(sep, '')\n name, message = s.split()\n if target_word in message:\n results.append((name, message))\n\n>>> results\n[('name', 'message'), ('name', 'message'), ('name', 'message'), ('name', 'message'), ('name', 'message'), ('name', 'message')]\n\nNote that this will match any message that contains the substring target_word. It will not look for word boundaries, e.g. compare a run of this with target_word = 'message' vs. target_word = 'sag' - will produce the same results. You may need regular expressions if your word matching is more complicated.\n", "for line in open(\"file\"):\n line=line.split(\")\")\n for item in line:\n try:\n print item[:item.index(\"(\")]\n except:pass\n\noutput\n$ more file\n(static string) name (different static string ) message (last static string)\n(static string) name (different static string ) message (last static string)\n(static string) name (different static string ) message (last static string)\n(static string) name (different static string ) message (last static string)\n$ python python.py\n\n name\n message\n\n name\n message\n\n name\n message\n\n name\n message\n\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "parsing", "python", "regex" ]
stackoverflow_0001659759_parsing_python_regex.txt
Q: secure and efficient file uploader I am looking for a decent file uploader. I'm using Django, but while I don't NEED a Django file uploader, Python is preferable. PHP would also be fine. Can anyone point me in the direction of a good file uploader which is secure and efficient? EDIT: I need the server-side handler of file uploads.
A: I'm using uploadify, which is an upload plugin for jQuery with a flash progress bar. I use some basic PHP for the upload script. It can be made as secure as you need with a little investigation on the forum.
A: Agree with jeerose: Uploadify is a nice app.
I've made a simple Django wrapper for it as well if you're interested: django-uploadify. It basically gives you a template tag which renders uploadify and fires a Django signal whenever a file is received.
How you implement the 'handler' you mentioned above is simply a matter of subscribing to the signal and then doing whatever you wish with the incoming data. (the wiki includes an example of creating a new File Model if that's the kind of thing you're looking for)
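To illustrate the signal-subscription pattern the second answer describes, here is a minimal sketch. The signal name upload_received and its arguments are assumptions made for illustration, not django-uploadify's documented API - check its wiki for the real names:

import django.dispatch

# Hypothetical signal, standing in for whatever the wrapper app fires
# when a file arrives.
upload_received = django.dispatch.Signal(providing_args=["data"])

def handle_upload(sender, data, **kwargs):
    # 'data' is assumed to be an UploadedFile-like object.
    dest = open("/tmp/%s" % data.name, "wb")
    for chunk in data.chunks():
        dest.write(chunk)
    dest.close()

upload_received.connect(handle_upload)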
secure and efficient file uploader
I am looking for a decent file uploader. I'm using Django, but while I don't NEED a Django file uploader, Python is preferable. PHP would also be fine. Can anyone point me in the direction of a good file uploader which is secure and efficient? EDIT: I need the server-side handler of file uploads.
[ "I'm using uploadify which is an upload plugin for jQuery with a flash progress bar. I use some basic php for the upload script. It can be made as secure as you need with a little investigating the forum.\n", "Agree with jeerose: Uploadify is a nice app.\nI've made a simple Django wrapper for it as well if you're interested: django-uploadify. It basically gives you a template tag which renders uploadify and fires a Django signal whenever a file is received.\nHow you implement the 'handler' you mentioned above is simply a matter of subscribing to the signal and then doing whatever you wish with the incoming data. (the wiki includes an example of creating a new File Model if that's the kind of thing you're looking for)\n" ]
[ 2, 2 ]
[]
[]
[ "django", "php", "python", "upload" ]
stackoverflow_0001664597_django_php_python_upload.txt
Q: Python string interning and substrings Does python create a completely new string (copying the contents) when you do a substring operation like:

new_string = my_old_string[foo:bar]

Or does it use interning to point to the old data? As a clarification, I'm curious if the underlying character buffer is shared as it is in Java. I realize that strings are immutable and will always appear to be a completely new string, and it would have to be an entirely new string object.
A: Examining the source reveals:
When the slice indexes match the start and end of the original string, then the original string is returned.
Otherwise, you get the result of the function PyString_FromStringAndSize, which takes the existing string object. This function returns an interned string in the case of a 0 or 1-character-width string; otherwise it copies the substring into a new string object.
A: You may also be interested in islice which does provide a view of the original string

>>> from sys import getrefcount
>>> from itertools import islice
>>> h="foobarbaz"
>>> getrefcount(h)
2
>>> g=islice(h,3,6)
>>> getrefcount(h)
3
>>> "".join(g)
'bar'
>>>

A: It's a completely new string (so the old bigger one can be let go when feasible, rather than staying alive just because some tiny string's been sliced from it and it being kept around).
intern is a different thing, though.
A: Looks like I can answer my own question, opened up the source and guess what I found:

static PyObject *
string_slice(register PyStringObject *a, register Py_ssize_t i,
             register Py_ssize_t j)

... snip ...

return PyString_FromStringAndSize(a->ob_sval + i, j-i);

..and no reference to interning. FromStringAndSize() only explicitly interns on strings of size 1 and 0.
So it seems clear that you'll always get a totally new object and they won't share any buffers.
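A quick interactive check that confirms the accepted answer - slices are fresh objects, while a full-range slice returns the original (a sketch; the exact id() values will differ on your machine):

s = "hello world"
sub = s[0:5]

print sub == s[0:5]      # True  -- equal contents
print id(sub) == id(s)   # False -- distinct objects, no shared buffer
print s[:] is s          # True  -- slicing the whole string returns s itself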
Python string interning and substrings
Does python create a completely new string (copying the contents) when you do a substring operation like: new_string = my_old_string[foo:bar] Or does it use interning to point to the old data ? As a clarification, I'm curious if the underlying character buffer is shared as it is in Java. I realize that strings are immutable and will always appear to be a completely new string, and it would have to be an entirely new string object.
[ "Examining the source reveals:\nWhen the slice indexes match the start and end of the original string, then the original string is returned.\nOtherwise, you get the result of the function PyString_FromStringAndSize, which takes the existing string object. This function returns an interned string in the case of a 0 or 1-character-width string; otherwise it copies the substring into a new string object.\n", "You may also be interested in islice which does provide a view of the original string\n>>> from sys import getrefcount\n>>> from itertools import islice\n>>> h=\"foobarbaz\"\n>>> getrefcount(h)\n2\n>>> g=islice(h,3,6)\n>>> getrefcount(h)\n3\n>>> \"\".join(g)\n'bar'\n>>> \n\n", "It's a completely new string (so the old bigger one can be let go when feasible, rather than staying alive just because some tiny string's been sliced from it and it being kept around).\nintern is a different thing, though.\n", "Looks like I can answer my own question, opened up the source and guess what I found:\n static PyObject *\n string_slice(register PyStringObject *a, register Py_ssize_t i,\n register Py_ssize_t j)\n\n ... snip ...\n\n return PyString_FromStringAndSize(a->ob_sval + i, j-i);\n\n..and no reference to interning. FromStringAndSize() only explicitly interns on strings of size 1 and 0\nSo it seems clear that you'll always get a totally new object and they won't share any buffers. \n" ]
[ 8, 8, 2, 0 ]
[ "In Python, strings are immutable. That means that you will always get a copy on any slice, concatenate, or other operations.\nhttp://effbot.org/pyfaq/why-are-python-strings-immutable.htm is a nice explanation for some of the reasons behind immutable strings.\n" ]
[ -2 ]
[ "python" ]
stackoverflow_0001664840_python.txt
Q: Is there a way to inspect the (differing) internal structures of Python objects that test as equal (==)? Yesterday I asked ("A case of outwardly equal lists of sets behaving differently under Python 2.5 (I think …)") why list W constructed as follows:

r_dim_1_based = range( 1, dim + 1)
set_dim_1_based = set( r_dim_1_based)

def listW_fill_func( val):
    if (val == 0):
        return set_dim_1_based
    else:
        return set( [val])

W = [ listW_fill_func( A[cid]) for cid in r_ncells ]

didn't behave as I expected. In particular, it did not behave like other lists that showed equality with it (another_list == W --> True). Is there a utility, trick, builtin, whatever that would have shown these differing internal structures to me? Something that would have produced perhaps a C-like declaration of the objects so that I would have seen at once that I was dealing with pointers in one case (list W) and values in the others?
A: You're dealing with references in each case (more similar to pointers than to values). You can surely introspect your objects' references to your heart's content -- for example, if you have a list and want to check if any items are identical references,

if len(thelist) != len(set(id(x) for x in thelist)): ...

DO note that we're talking about references here -- so, two identical references to None, or two identical references to the int value 17, would also trigger the same alarm. Of course you can keep introspecting to remove that case, eliminating immutables from the list in a first pass, for example, if you think that multiple references to the same immutable are fine -- e.g.:

immutyps = int, long, float, tuple, frozenset, str, unicode
mutables = [x for x in thelist if not isinstance(x, immutyps)]
if len(mutables) != len(set(id(x) for x in mutables)):
    cryhavocandletloosethedogsofwar()

but I would question the return-on-investment of such a deep introspection strategy!
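A small sketch of the kind of introspection the answer describes - two lists that compare equal even though one shares a single underlying set object and the other does not:

shared = set([1, 2, 3])
W = [shared, shared]                   # both slots reference one set
V = [set([1, 2, 3]), set([1, 2, 3])]   # equal values, distinct objects

print W == V                           # True  -- == cannot tell them apart
print W[0] is W[1], V[0] is V[1]       # True False
print len(set(id(x) for x in W))       # 1 -- one shared object
print len(set(id(x) for x in V))       # 2 -- independent objects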
Is there a way to inspect the (differing) internal structures of Python objects that test as equal (==)?
Yesterday I asked ("A case of outwardly equal lists of sets behaving differently under Python 2.5 (I think …)") why list W constructed as follows: r_dim_1_based = range( 1, dim + 1) set_dim_1_based = set( r_dim_1_based) def listW_fill_func( val): if (val == 0): return set_dim_1_based else: return set( [val]) W = [ listW_fill_func( A[cid]) for cid in r_ncells ] didn't behave as I expected. In particular, it did not behave like other lists that showed equality with it (another_list == W --> True). Is there a utility, trick, builtin, whatever that would have shown these differing internal structures to me? Something that would have produced perhaps a C-like declaration of the objects so that I would have seen at once that I was dealing with pointers in one case (list W) and values in the others?
[ "You're dealing with references in each case (more similar to pointers than to values). You can surely introspect your objects' references to your heart's contents -- for example, if you have a list and want to check if any items are identical references,\nif len(thelist) != len(set(id(x) for x in thelist)): ...\n\nDO note that we're talking about references here -- so, two identical references to None, or two identical references to the int value 17, would also trigger the same alarm. Of course you can keep introspecting to remove that case, eliminating immutables from the list in a first pass, for example, if you think that multiple references to the same immutable are fine -- e.g.:\nimmutyps = int, long, float, tuple, frozenset, str, unicode\nmutables = [x for x in thelist if not isinstance(x, immutyps)]\nif len(mutables) != len(set(id(x) for x in mutables)):\n cryhavocandletloosethedogsofwar()\n\nbut I would question the return-on-investment of such a deep introspection strategy!\n" ]
[ 1 ]
[]
[]
[ "inspection", "list", "python", "set" ]
stackoverflow_0001665176_inspection_list_python_set.txt
Q: Verifying that an object in python adheres to a specific structure Is there some simple method that can check if an input object to some function adheres to a specific structure? For example, I want only a dictionary of string keys and values that are a list of integers. One method would be to write a recursive function that you pass in the object and you iterate over it, checking at each level it is what you expect. But I feel that there should be a more elegant way to do it than this in python.
A: Why would you expect Python to provide an "elegant way" to check types, since the whole idea of type-checking is so utterly alien to the Pythonic way of conceiving the world and interacting with it?! Normally in Python you'd use duck typing -- so "an integer" might equally well be an int, a long, a gmpy.mpz -- types with no relation to each other except they all implement the same core signature... just as "a dict" might be any implementation of mapping, and so forth.
The new-in-2.6-and-later concept of "abstract base classes" provides a more systematic way to implement and verify duck typing, and 3.0-and-later function annotations let you interface with such a checking system (third-party, since Python adopts no such system for the foreseeable future). For example, this recipe provides a 3.0-and-later way to perform "kinda but not quite" type checking based on function annotations -- though I doubt it goes anywhere as deep as you desire, but then, it's early times for function annotations, and most of us Pythonistas feel so little craving for such checking that we're unlikely to run flat out to implement such monumental systems in lieu of actually useful code;-).
A: Short answer, no, you have to create your own function.
Long answer: it's not pythonic to do what you're asking. There might be some special cases (e.g, marshalling a dict to xmlrpc), but by and large, assume the objects will act like what they're documented to be. If they don't, let the AttributeError bubble up. If you are ok with coercing values, then use str() and int() to convert them. They could, after all, implement __str__, __add__, etc that makes them not descendants of int/str, but still usable.

def dict_of_string_and_ints(obj):
    assert isinstance(obj, dict)
    for key, value in obj.iteritems(): # py2.4
        assert isinstance(key, basestring)
        assert isinstance(value, list)
        assert sum(isinstance(x, int) for x in value) == len(value)

A: Since Python emphasizes things just working, your best bet is to just assert as you go and trust the users of your library to feed you proper data. Let exceptions happen, if you must; that's on your clients for not reading your docstring.
In your case, something like this:

def myfunction(arrrrrgs):
    assert isinstance(arrrrrgs, dict), "Need a dictionary!"
    for key in arrrrrgs:
        assert type(key) is str, "Need a string!"
        val = arrrrrgs[key]
        assert type(val) is list, "Need a list!"

And so forth.
Really, it isn't worth the effort, and let your program blow up if you express yourself clearly in the docstring, or throw well-placed exceptions to guide the late-night debugger.
A: I will take a shot and propose a helper function that can do something like that for you in a more generic+elegant way:

def check_type(value, type_def):
    """
    This validates an object instance <value> against a type template <type_def>
    presented as a simplified object.
    E.g.
    if value is a list of dictionaries that have strings as keys and integers
    as values:
    >> check_type(value, [{'':0}])
    if value is a list of dictionaries, no restriction on keys/values
    >> check_type(value, [{}])
    """
    if type(value) != type(type_def):
        return False
    if hasattr(value, '__iter__'):
        if len(type_def) == 0:
            return True
        type_def_val = iter(type_def).next()
        for key in value:
            if not check_type(key, type_def_val):
                return False
        if type(value) is dict:
            if not check_type(value.values(), type_def.values()):
                return False
    return True

The comment explains a sample of usage, but you can always go pretty deep, e.g.

>>> check_type({1:['a', 'b'], 2:['c', 'd']}, {0:['']})
True
>>> check_type({1:['a', 'b'], 2:['c', 3]}, {0:['']})
False

P.S. Feel free to modify it if you want one-by-one tuple validation (e.g. validation against ([], '', {0:0}), which is not handled as it is expected now)
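For the specific structure in the question - string keys mapping to lists of integers - a compact non-recursive check is enough (a sketch, Python 2 style):

def is_str_to_int_list(obj):
    # A dict whose keys are strings and whose values are lists of ints.
    return (isinstance(obj, dict) and
            all(isinstance(k, basestring) and
                isinstance(v, list) and
                all(isinstance(i, int) for i in v)
                for k, v in obj.iteritems()))

print is_str_to_int_list({'a': [1, 2], 'b': []})   # True
print is_str_to_int_list({'a': [1, 'x']})          # False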
Verifying that an object in python adheres to a specific structure
Is there some simple method that can check if an input object to some function adheres to a specific structure? For example, I want only a dictionary of string keys and values that are a list of integers. One method would be to write a recursive function that you pass in the object and you iterate over it, checking at each level it is what you expect. But I feel that there should be a more elegant way to do it than this in python.
[ "Why would you expect Python to provide an \"elegant way\" to check types, since the whole idea of type-checking is so utterly alien to the Pythonic way of conceiving the world and interacting with it?! Normally in Python you'd use duck typing -- so \"an integer\" might equally well be an int, a long, a gmpy.mpz -- types with no relation to each other except they all implement the same core signature... just as \"a dict\" might be any implementation of mapping, and so forth.\nThe new-in-2.6-and-later concept of \"abstract base classes\" provides a more systematic way to implement and verify duck typing, and 3.0-and-later function annotations let you interface with such a checking system (third-party, since Python adopts no such system for the foreseeable future). For example, this recipe provides a 3.0-and-later way to perform \"kinda but not quite\" type checking based on function annotations -- though I doubt it goes anywhere as deep as you desire, but then, it's early times for function annotations, and most of us Pythonistas feel so little craving for such checking that we're unlikely to run flat out to implement such monumental systems in lieu of actually useful code;-).\n", "Short answer, no, you have to create your own function.\nLong answer: its not pythonic to do what you're asking. There might be some special cases (e.g, marshalling a dict to xmlrpc), but by and large, assume the objects will act like what they're documented to be. If they don't, let the AttributeError bubble up. If you are ok with coercing values, then use str() and int() to convert them. They could, afterall, implement __str__, __add__, etc that makes them not descendants of int/str, but still usable.\ndef dict_of_string_and_ints(obj):\n assert isinstance(obj, dict)\n for key, value in obj.iteritems(): # py2.4\n assert isinstance(key, basestring)\n assert isinstance(value, list)\n assert sum(isinstance(x, int) for x in value) == len(list)\n\n", "Since Python emphasizes things just working, your best bet is to just assert as you go and trust the users of your library to feed you proper data. 
Let exceptions happen, if you must; that's on your clients for not reading your docstring.\nIn your case, something like this:\ndef myfunction(arrrrrgs):\n assert issubclass(dict, type(arrrrrgs)), \"Need a dictionary!\"\n for key in arrrrrgs:\n assert type(key) is str, \"Need a string!\"\n val = arrrrrgs[key]\n assert type(val) is list, \"Need a list!\"\n\nAnd so forth.\nReally, it isn't worth the effort, and let your program blow up if you express yourself clearly in the docstring, or throw well-placed exceptions to guide the late-night debugger.\n", "I will take a shot and propose a helper function that can do something like that for you in a more generic+elegant way:\ndef check_type(value, type_def):\n \"\"\"\n This validates an object instanct <value> against a type template <type_def>\n presented as a simplified object.\n E.g.\n if value is list of dictionaries that have string values as key and integers \n as values:\n >> check_type(value, [{'':0}])\n if value is list of dictionaries, no restriction on key/values\n >> check_type(value, [{}])\n \"\"\"\n if type(value) != type(type_def):\n return False\n if hasattr(value, '__iter__'):\n if len(type_def) == 0:\n return True\n type_def_val = iter(type_def).next()\n for key in value:\n if not check_type(key, type_def_val):\n return False\n if type(value) is dict:\n if not check_type(value.values(), type_def.values()):\n return False\n return True\n\nThe comment explains a sample of usage, but you can always go pretty deep, e.g.\n>>> check_type({1:['a', 'b'], 2:['c', 'd']}, {0:['']})\nTrue\n>>> check_type({1:['a', 'b'], 2:['c', 3]}, {0:['']})\nFalse\n\nP.S. Feel free to modify it if you want one-by-one tuple validation (e.g. validation against ([], '', {0:0}) which is not handled as it is expected now)\n" ]
[ 4, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001665260_python.txt
Q: BeautifulSoup with Jython I just tried to run BeautifulSoup (3.1.0.1) with Jython (2.5.1) and I was amazed to see how much slower it was than CPython. Parsing a page (http://www.fixprotocol.org/specifications/fields/5000-5999) with CPython took just under a second (0.844 seconds to be exact). With Jython it took 564 seconds - almost 700 times as much. Can anyone confirm this result? It doesn't seem reasonable for Jython to run 700 times slower than CPython. Perhaps something is wrong with my setup. [Edit] Here's the code I used to test this (naturally I downloaded the above mentioned HTML file):

import time
from BeautifulSoup import BeautifulSoup

data = open("fix-5000-5999.html").read()
start = time.time()
soup = BeautifulSoup(data)
print time.time() - start

A: I can confirm similar findings.
Intel Mac, OS X 10.6.1, Java 1.6.0_15 64-bit, Jython 2.5.1.
Running your code with CPython 2.6.1 takes 0.1–0.2 seconds, but running it with Jython takes at least tens of seconds; I didn't wait more than 30. It also uses a lot of CPU.
I tried Beautiful Soup 3.0.7a, because it uses a different parser, but had the same results.
Interestingly, I tried running your code on a different HTML file and it worked fine. But it still seemed much slower than CPython: Jython took 1.02–1.3 seconds; CPython took 0.019–0.020.
I don't have any suggestions at this point except that you should consider asking this question on the jython-users list; I've found the community there, which includes the lead developer, to be responsive and helpful.
Good luck!
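Before taking it to the jython-users list, it can help to locate the hot spot yourself. A sketch using the pure-Python profile module, which runs under both CPython and Jython:

import profile
import pstats
from BeautifulSoup import BeautifulSoup

data = open("fix-5000-5999.html").read()

# Profile a single parse and dump the stats to a file.
profile.run("BeautifulSoup(data)", "parse.prof")

stats = pstats.Stats("parse.prof")
stats.sort_stats("cumulative").print_stats(20)   # top 20 entries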
BeautifulSoup with Jython
I just tried to run BeautifulSoup (3.1.0.1) with Jython (2.5.1) and I was amazed to see how much slower it was than CPython. Parsing a page (http://www.fixprotocol.org/specifications/fields/5000-5999) with CPython took just under a second (0.844 seconds to be exact). With Jython it took 564 seconds - almost 700 times as much. Can anyone confirm this result? It doesn't seem reasonable for Jython to run 700 times slower than CPython. Perhaps something is wrong with my setup. [Edit] Here's the code I used to test this (naturally I downloaded the above mentioned HTML file): import time from BeautifulSoup import BeautifulSoup data = open("fix-5000-5999.html").read() start = time.time() soup = BeautifulSoup(data) print time.time() - start
[ "I can confirm similar findings.\nIntel Mac, OS X 10.6.1, Java 1.6.0_15 64-bit, Jython 2.5.1.\nRunning your code with CPython 2.6.1 takes 0.1–0.2 seconds, but running it with Jython takes at least tens of seconds; I didn't wait more than 30. It also uses a lot of CPU.\nI tried Beautiful Soup 3.0.7a, because it uses a different parser, but had the same results.\nInterestingly, I tried running your code on a different HTML file and it worked fine. But it still seemed much slower than CPython: Jython took 1.02–1.3 seconds; CPython took 0.019–0.020.\nI don't have any suggestions at this point except that you should consider asking this question on the jython-users list; I've found the community there, which includes the lead developer, to be responsive and helpful.\nGood luck!\n" ]
[ 6 ]
[]
[]
[ "beautifulsoup", "jython", "python" ]
stackoverflow_0001661310_beautifulsoup_jython_python.txt
Q: How to create an image from a string in python I'm currently having trouble creating an image from a binary string of data in my Python program. I receive the binary data via a socket, but when I try the methods I read about on here, like this:

buff = StringIO.StringIO() #buffer where image is stored
#Then I concatenate data by doing a
buff.write(data) #the data from the socket
im = Image.open(buff)

I get an exception to the effect of "image type not recognized". I know that I am receiving the data correctly, because if I write the image to a file and then open that file it works:

buff = StringIO.StringIO() #buffer where image is stored
buff.write(data) #data is from the socket
output = open("tmp.jpg", 'wb')
output.write(buff.getvalue())
output.close()
im = Image.open("tmp.jpg")
im.show()

I figure I am probably doing something wrong in using the StringIO class, but I'm not sure.
A: I suspect that you're not seek-ing back to the beginning of the buffer before you pass the StringIO object to PIL. Here's some code that demonstrates the problem and solution:

>>> buff = StringIO.StringIO()
>>> buff.write(open('map.png', 'rb').read())
>>> 
>>> #seek back to the beginning so the whole thing will be read by PIL
>>> buff.seek(0)
>>>
>>> Image.open(buff)
<PngImagePlugin.PngImageFile instance at 0x00BD7DC8>
>>> 
>>> #that worked.. but if we try again:
>>> Image.open(buff)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "c:\python25\lib\site-packages\pil-1.1.6-py2.5-win32.egg\Image.py", line 1916, in open
    raise IOError("cannot identify image file")
IOError: cannot identify image file

Make sure you call buff.seek(0) before reading any StringIO objects. Otherwise you'll be reading from the end of the buffer, which will look like an empty file and is likely causing the error you're seeing.
A: You have to either call buff.seek(0) or, better, initialize the memory buffer with the data: StringIO.StringIO(data).
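Putting the fix together, the shortest correct form builds the buffer directly from the received bytes (a sketch; data is assumed to be the string already read from the socket):

import StringIO
import Image  # PIL

# Initializing StringIO with the data leaves the read position at 0,
# so no explicit seek is needed before PIL reads it.
buff = StringIO.StringIO(data)
im = Image.open(buff)
im.show()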
How to create an image from a string in python
I'm currently having trouble creating an image from a binary string of data in my Python program. I receive the binary data via a socket but when I try the methods I read about on here like this: buff = StringIO.StringIO() #buffer where image is stored #Then I concatenate data by doing a buff.write(data) #the data from the socket im = Image.open(buff) I get an exception to the effect of "image type not recognized". I know that I am receiving the data correctly because if I write the image to a file and then open a file it works: buff = StringIO.StringIO() #buffer where image is stored buff.write(data) #data is from the socket output = open("tmp.jpg", 'wb') output.write(buff) output.close() im = Image.open("tmp.jpg") im.show() I figure I am probably doing something wrong in using the StringIO class but I'm not sure
[ "I suspect that you're not seek-ing back to the beginning of the buffer before you pass the StringIO object to PIL. Here's some code the demonstrates the problem and solution:\n>>> buff = StringIO.StringIO()\n>>> buff.write(open('map.png', 'rb').read())\n>>> \n>>> #seek back to the beginning so the whole thing will be read by PIL\n>>> buff.seek(0)\n>>>\n>>> Image.open(buff)\n<PngImagePlugin.PngImageFile instance at 0x00BD7DC8>\n>>> \n>>> #that worked.. but if we try again:\n>>> Image.open(buff)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"c:\\python25\\lib\\site-packages\\pil-1.1.6-py2.5-win32.egg\\Image.py\", line 1916, in open\n raise IOError(\"cannot identify image file\")\nIOError: cannot identify image file\n\nMake sure you call buff.seek(0) before reading any StringIO objects. Otherwise you'll be reading from the end of the buffer, which will look like an empty file and is likely causing the error you're seeing.\n", "You have either call buff.seek(0) or, better, initialize memory buffer with data StringIO.StringIO(data).\n" ]
[ 29, 7 ]
[]
[]
[ "image", "python", "sockets", "string" ]
stackoverflow_0001664861_image_python_sockets_string.txt
Q: Python, Pygame, Pyro: How to send a surface over a network? I am working on a project in python using pygame and pyro. I can send data, functions, classes, and the like easily. However, I cannot send a surface across the wire without it dying on me in transit. The server makes a surface in the def __init__ of the class being accessed across the wire:

self.screen = pygame.display.set_mode(SCREENRECT.size, NOFRAME)

On the server, the screen prints as Surface(800x800x32 SW) but when retrieved by the client it is Surface(Dead Display). Something to note though. I get a dead display when I use an accessor function to get my screen. If I use print Player.screen to get the variable I instead get what seems to be a pyro pointer to the screen: <Pyro.core._RemoteMethod instance at 0x02B7B7B0>. Maybe I can dereference this? More than likely I am being thick, does anyone have some insight? Thanks. :)
A: A pygame Surface is a wrapper around an underlying SDL surface, which I suspect can't be serialized by Pyro. If you want to copy its contents across the wire, you would be better off doing something like this:

on the server use Surface.get_buffer() to get access to the underlying pixels.
make a note of the Surface's dimensions, colour depth, etc.
send the data obtained from steps 1 and 2 over the wire to the client.
on the client create a new surface using the dimensions, colour depth, etc, from step 2.
set the new Surface's pixels using Surface.get_buffer() and copying in the pixels from step 1.

Edit:
It just occurred to me that I'm overcomplicating it. To serialise your Surface, use pygame.image.tostring(), and to reload it, use pygame.image.fromstring().
A: Generally speaking, you don't want to send a Surface (I'm assuming that a Surface is a device-dependent display) across the network. Most of the time, your client will be responsible for managing the drawing on its local Surface, and your server is responsible for telling the client what it needs to draw. A server may not even have a display that's capable of showing graphics!
A: Try pickling the object and sending the file...
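A sketch of the tostring()/fromstring() round trip suggested in the first answer - serialize on the server, rebuild on the client (the pygame calls are real; the Pyro plumbing around them is assumed):

import pygame

def pack_surface(surface):
    # Server side: reduce the surface to plain bytes plus metadata
    # that Pyro can serialize.
    fmt = "RGB"
    return pygame.image.tostring(surface, fmt), surface.get_size(), fmt

def unpack_surface(data, size, fmt):
    # Client side: rebuild an ordinary in-memory surface.
    return pygame.image.fromstring(data, size, fmt)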
Python, Pygame, Pyro: How to send a surface over a network?
I am working on a project in python using pygame and pyro. I can send data, functions, classes, and the like easily. However, I cannot send a surface across the wire without it dying on me in transit. The server makes a surface in the def __init__ of the class being accessed across the wire: self.screen = pygame.display.set_mode(SCREENRECT.size, NOFRAME) On the server, the screen prints as Surface(800x800x32 SW) but when retrieved by the client it is Surface(Dead Display). Something to note though. I get a dead display when I use an accessor function to get my screen. If I use print Player.screen to get the variable I instead get what seems to be a pyro pointer to the screen: <Pyro.core._RemoteMethod instance at 0x02B7B7B0>. Maybe I can dereference this? More than likely I am being thick, does anyone have some insight? Thanks. :)
[ "A pygame Surface is a wrapper around an underlying SDL surface, which I suspect can't be serialized by Pyro. If you want to copy its contents across the wire, you would be better off doing something like this:\n\non the server use Surface.get_buffer() to get\naccess to the underlying pixels.\nmake a note of the Surface's dimensions, colour depth, etc.\nsend the data obtained from steps 1 and 2 over the wire to the client.\non the client create a new surface using the dimensions, colour depth, etc, from step 2.\nset the new Surface's pixels using Surface.get_buffer() and copying in the pixels from step 1.\n\nEdit:\nIt just occurred to me that I'm overcomplicating it. To serialise your Surface, use pygame.image.tostring(), and to reload it, use pygame.image.fromstring().\n", "Generally speaking, you don't want to send a Surface (I'm assuming that a Surface is a device-dependent display) across the network. Most of the time, your client will be responsible for managing the drawing on its local Surface, and your server is responsible for telling the client what it needs to draw. A server may not even have a display that's capable of showing graphics!\n", "Try pickling the object and send the file...\n" ]
[ 6, 1, 0 ]
[]
[]
[ "network_programming", "pygame", "pyro", "python" ]
stackoverflow_0001665376_network_programming_pygame_pyro_python.txt
Q: Forwarding command line arguments to a process in Python I'm using a crude IDE (Microchip MPLAB) with the C30 toolchain on Windows XP. The C compiler has a very noisy output that I'm unable to control, and it's very hard to spot actual warnings and errors in the output window. I want to write a python script that would receive arguments for the compiler, call the compiler with the same arguments, filter the results and output them to stdout. Then I can replace the compiler executable with my script in the toolchain settings. The IDE calls my script and receives filtered compiler output. My code for executing the compiler looks like this:

arguments = ' '.join(sys.argv[1:])
cmd = '%s %s' % (compiler_path, arguments)
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

The problem is that quotes from arguments are consumed on script execution, so if the IDE calls my script with the following arguments:

main.c -o"main.o"

the value of arguments is

main.c -omain.o

The most obvious solution is to put the whole argument list in quotes, but this would require modification of the compiler calling code in the IDE. I also tried using a batch file, but it can only accept nine parameters (%1 to %9), and the compiler is called with 15+ parameters. Is there a way to forward exactly the same arguments to a process from a script?
A: Give the command arguments to Popen as a list:

arguments = sys.argv[1:]
cmd = [compiler_path] + arguments
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

A: As ChristopheD said, the shell removes the quotes.
But you don't need to create the string yourself when using Popen: it can handle that for you automatically. You can do this instead:

import sys, subprocess
process = subprocess.Popen(sys.argv[1:], executable=compiler_path, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

The subprocess module hopefully will pass the arguments correctly for you.
A: Your shell is eating the quotes (the python script never even receives them) so I suppose it's not very easy to get them 'unaltered'.
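Building on the list-form answers, a sketch of the complete wrapper: forward the arguments untouched, then filter the compiler's output before echoing it. The compiler path and the filter condition are placeholders to adapt:

import sys
import subprocess

compiler_path = r"c:\pic30-tools\bin\pic30-gcc.exe"  # assumed location

process = subprocess.Popen([compiler_path] + sys.argv[1:],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT)
out, _ = process.communicate()

for line in out.splitlines():
    # Placeholder filter: only echo lines that look like diagnostics.
    if "error" in line.lower() or "warning" in line.lower():
        print line

sys.exit(process.returncode)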
Forwarding command line arguments to a process in Python
I'm using a crude IDE (Microchip MPLAB) with C30 toolchain on Windows XP. The C compiler has a very noisy output that I'm unable to control, and it's very hard to spot actual warnings and errors in output window. I want to write a python script that would receive arguments for compiler, call the compiler with same arguments, filter results and output them to stdout. Then I can replace the compiler executable with my script in toolchain settings. The IDE calls my script and receives filtered compiler output. My code for executing the compiler looks like this: arguments = ' '.join(sys.argv[1:]) cmd = '%s %s' % (compiler_path, arguments) process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) The problem is that quotes from arguments are consumed on script execution, so if IDE calls my script with following arguments: main.c -o"main.o" the value of arguments is main.c -omain.o The most obvious solution is to put whole argument list in quotes, but this would require modification in compiler calling code in IDE. I also tried using batch file, but it can only accept nine parameters (%1 to %9), and compiler is called with 15+ parameters. Is there a way to forward exactly the same arguments to a process from script?
[ "Give the command arguments to Popen as a list:\narguments = sys.argv[1:]\ncmd = [compiler_path] + arguments\nprocess = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n", "As ChristopheD said the shell removes the quotes.\nBut you don't need to create the string yourself when using Popen: it can handle that for you automatically. You can do this instead:\nimport sys, subprocess\nprocess = subprocess.Popen(sys.argv[1:], executable=compiler_path, stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\nThe subprocess module hopefully will pass the arguments correctly for you.\n", "Your shell is eating the quotes (the python script never even receives them) so I suppose it's not very easy to get them 'unaltered'.\n" ]
[ 4, 3, 0 ]
[]
[]
[ "arguments", "command_line_arguments", "process", "python", "quotes" ]
stackoverflow_0001665917_arguments_command_line_arguments_process_python_quotes.txt
Q: What is the issubclass equivalent of isinstance in python? Given an object, how do I tell if it's a class, and a subclass of a given class Foo? e.g.

class Bar(Foo):
    pass

isinstance(Bar(), Foo) # => True
issubclass(Bar, Foo) # <--- how do I do that?

A: It works exactly as one would expect it to work...

class Foo():
    pass

class Bar(Foo):
    pass

class Bar2():
    pass

print issubclass(Bar, Foo) # True
print issubclass(Bar2, Foo) # False

If you want to know whether an instance's class derived from a given base class, you could use:

bar_instance = Bar()
print issubclass(bar_instance.__class__, Foo)
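If the object might not be a class at all (the question also asks how to tell), guard the issubclass call first, since it raises TypeError for non-classes. A sketch using the inspect module, reusing the Foo/Bar definitions above:

import inspect

def is_foo_subclass(obj):
    return inspect.isclass(obj) and issubclass(obj, Foo)

print is_foo_subclass(Bar)    # True
print is_foo_subclass(Bar())  # False -- an instance, not a class
print is_foo_subclass(42)    # False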
What is the issubclass equivalent of isinstance in python?
Given an object, how do I tell if it's a class, and a subclass of a given class Foo? e.g. class Bar(Foo): pass isinstance(Bar(), Foo) # => True issubclass(Bar, Foo) # <--- how do I do that?
[ "It works exactly as one would expect it to work...\nclass Foo():\n pass\n\nclass Bar(Foo):\n pass\n\nclass Bar2():\n pass\n\nprint issubclass(Bar, Foo) # True\nprint issubclass(Bar2, Foo) # False\n\nIf you want to know if an instance of a class derived from a given base class, you could use:\nbar_instance = Bar()\nprint issubclass(bar_instance.__class__, Foo)\n\n" ]
[ 22 ]
[]
[]
[ "introspection", "python" ]
stackoverflow_0001666079_introspection_python.txt
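As a complement to the answer above, a small sketch covering the other half of the question ("how do I tell if it's a class" at all): inspect.isclass handles that part, and issubclass the rest. The helper name is invented for illustration:

import inspect

class Foo(object):
    pass

class Bar(Foo):
    pass

def is_class_derived_from(obj, base):
    # True only when obj is itself a class object derived from base.
    return inspect.isclass(obj) and issubclass(obj, base)

print is_class_derived_from(Bar, Foo)    # True
print is_class_derived_from(Bar(), Foo)  # False: an instance, not a class
print is_class_derived_from(42, Foo)     # False: not a class at all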
Q: Does Django's Unit Testing Raise Warnings to Exceptions? I am using Django's unit testing apparatus (manage.py test), which is throwing an error and halting when the code generates a warning. This same code, when tested with the standard Python unittest module, generates warnings but continues code execution through them. A little research shows that Python can be set to raise warnings to exceptions, which I suppose would cause the testing framework to think an error had occurred. Unfortunately, the Django documentation on testing is a little light on the definition of an "error", or how to modify the handling of warnings. So: Is the Django unit testing framework set up to raise warnings to errors by default? Is there some facility in Django for changing this behavior? If not, does anyone have any suggestions for how I can get Django to print out the errors but continue code execution? Or have I completely misdiagnosed the problem? UPDATE: The test code is halting on warnings thrown by calls on MySQLdb. Those calls are made by a module which throws the same warnings when tested under the Python unittest framework, but does not halt. I'll think about an efficient way of trying to replicate the situation in code terse enough to post. ANSWER: A little more research reveals this behavior is related to Django's MySQL backend: /usr/...django/.../mysql/base.py: if settings.DEBUG: ... filterwarnings("error", category=Database.Warning) When I change settings.py so DEBUG = False, the code throws the warning but does not halt. I hadn't previously encountered this behavior in Django because my database calls are generated by a backend of my own. Since I didn't call the Django backend, I didn't reset the handling of the warnings, and the code continued despite the warnings. The Django test framework surely calls the Django backend -- it does all sorts of things with the database -- and that call would reset the warning handling before my code is called. A: Given the updated info, I'm inclined to say that this is the right thing for Django to be doing; MySQL's warnings can indicate any number of things up to and including loss of data (e.g., MySQL will warn and silently truncate if you try to insert a value larger than a column can hold), and that's the sort of thing you'd want to find out about when testing. So probably your best bet is to look at the warnings it's generating and change your code so that it no longer causes those warnings to happen.
Does Django's Unit Testing Raise Warnings to Exceptions?
I am using Django's unit testing apparatus (manage.py test), which is throwing an error and halting when the code generates a warning. This same code, when tested with the standard Python unittest module, generates warnings but continues code execution through them. A little research shows that Python can be set to raise warnings to exceptions, which I suppose would cause the testing framework to think an error had occurred. Unfortunately, the Django documentation on testing is a little light on the definition of an "error", or how to modify the handling of warnings. So: Is the Django unit testing framework set up to raise warnings to errors by default? Is there some facility in Django for changing this behavior? If not, does anyone have any suggestions for how I can get Django to print out the errors but continue code execution? Or have I completely misdiagnosed the problem? UPDATE: The test code is halting on warnings thrown by calls on MySQLdb. Those calls are made by a module which throws the same warnings when tested under the Python unittest framework, but does not halt. I'll think about an efficient way of trying to replicate the situation in code terse enough to post. ANSWER: A little more research reveals this behavior is related to Django's MySQL backend: /usr/...django/.../mysql/base.py: if settings.DEBUG: ... filterwarnings("error", category=Database.Warning) When I change settings.py so DEBUG = False, the code throws the warning but does not halt. I hadn't previously encountered this behavior in Django because my database calls are generated by a backend of my own. Since I didn't call the Django backend, I didn't reset the handling of the warnings, and the code continued despite the warnings. The Django test framework surely calls the Django backend -- it does all sorts of things with the database -- and that call would reset the warning handling before my code is called.
[ "Given the updated info, I'm inclined to say that this is the right thing for Django to be doing; MySQL's warnings can indicate any number of things up to and including loss of data (e.g., MySQL will warn and silently truncate if you try to insert a value larger than a column can hold), and that's the sort of thing you'd want to find out about when testing. So probably your best bet is to look at the warnings it's generating and change your code so that it no longer causes those warnings to happen.\n" ]
[ 2 ]
[]
[]
[ "django", "python", "unit_testing" ]
stackoverflow_0001658265_django_python_unit_testing.txt
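For completeness, a hedged sketch of relaxing the filter the asker found: since Django's MySQL backend promotes Database.Warning to an error when DEBUG is True, a test that deliberately triggers warnings can locally restore the default behaviour. The assumption here is that Database is MySQLdb, as in Django's MySQL backend quoted above:

import warnings
import MySQLdb

# Turn MySQL warnings back into ordinary printed warnings instead of
# exceptions, e.g. in a TestCase.setUp().
warnings.filterwarnings('default', category=MySQLdb.Warning)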
Q: IDE for Python + Django Template Highlight + JQuery I tried NetBeans 6.7 for Python, but it doesn't have good Django template highlighting or jQuery code completion... I did find a Django-for-NetBeans project via Google, but they don't explain how to set it up... I also tried Eclipse with PyDev, but had some problems with code completion on my classes... I like NetBeans 6.7 very much... I mainly need better JavaScript code completion! The template highlighting is just to help my web designer. A: Aptana is great in HTML/CSS/Javascript editing. You may also refer to this question: Which text editor has the most useful autocomplete for web page editing
IDE for Python + Django Template Highlight + JQuery
I tried NetBeans 6.7 for Python, but it doesn't have good Django template highlighting or jQuery code completion... I did find a Django-for-NetBeans project via Google, but they don't explain how to set it up... I also tried Eclipse with PyDev, but had some problems with code completion on my classes... I like NetBeans 6.7 very much... I mainly need better JavaScript code completion! The template highlighting is just to help my web designer.
[ "Aptana is great in HTML/CSS/Javascript editing.\nYou may also refer to this question:\nWhich text editor has the most useful autocomplete for web page editing\n" ]
[ 0 ]
[]
[]
[ "django_templates", "jquery", "python" ]
stackoverflow_0001665468_django_templates_jquery_python.txt
Q: MOD_WSGI difficulties on Mac OS X Snow Leopard I've been trying to get MOD_WSGI working on Apache via XAMPP on my Mac OS X Snow Leopard all day today without any success. I've followed all the instructions, searched the internet for solutions, etc., but no luck so far. Below are my exact steps and details. When I run localhost all I get is a white screen. When I remove "LoadModule wsgi_module modules/mod_wsgi.so" from httpd.conf localhost runs as expected. Downloaded and installed Xcode. XAMPP is already installed and working. I don't need to install Python as OS X already has Python 2.6 in 64-bit mode. Download and unpack mod_wsgi-2.6.tar.gz to the desktop. Terminal "./configure --with-apxs=/Applications/XAMPP/xamppfiles/bin/apxs --with-python=/System/Library/Frameworks/Python.framework/Versions/2.6/bin/python2.6" (no errors) Terminal "make" (message "make: Nothing to be done for `all'.") Terminal "sudo make install" (no errors) Add to XAMPP's httpd.conf file: LoadModule wsgi_module modules/mod_wsgi.so AddType text/html .py WSGIScriptAlias /app-sample "/Applications/xampp/xamppfiles/htdocs/app-sample/main.py" <Directory "/Applications/xampp/xamppfiles/htdocs/app-sample"> Order deny,allow Allow from all </Directory> Restart Apache via XAMPP A: First off, run 'make distclean' and then redo configure/make/make install for mod_wsgi. Where you have 'Terminal "make" (message "make: Nothing to be done for `all'.")' indicates there were prior build results in directory and nothing got built for that execution of make. Next, use '.wsgi' extension instead of '.py' to ensure that you don't have a conflict with an existing definition saying that '.py' files should be executed as CGI scripts. This is one common reason for blank responses. The Apache error logs should give you clues as to this being the problem. Also, what does your sample application do? Have you tried with a simple hello world program as per the documentation on the mod_wsgi site rather than jump to using your own program. If using your own program only, then you may possibly be causing Apache processes to crash due to some shared library conflict between Apache and Python modules being used, something else that will cause blank responses. Again, carefully check the Apache error logs for information logged at time request is made. Finally your program could just be buggy and have bad syntax in returned HTML response causing it to not be displayed. Ask the browser to show the source for the page returned by the request and make sure it isn't malformed HTML.
MOD_WSGI difficulties on Mac OS X Snow Leopard
I've been trying to get MOD_WSGI working on Apache via XAMPP on my Mac OS X Snow Leopard all day today without any success. I've followed all the instructions, searched the internet for solutions, etc., but no luck so far. Below are my exact steps and details. When I run localhost all I get is a white screen. When I remove "LoadModule wsgi_module modules/mod_wsgi.so" from httpd.conf localhost runs as expected. Downloaded and installed Xcode. XAMPP is already installed and working. I don't need to install Python as OS X already has Python 2.6 in 64-bit mode. Download and unpack mod_wsgi-2.6.tar.gz to the desktop. Terminal "./configure --with-apxs=/Applications/XAMPP/xamppfiles/bin/apxs --with-python=/System/Library/Frameworks/Python.framework/Versions/2.6/bin/python2.6" (no errors) Terminal "make" (message "make: Nothing to be done for `all'.") Terminal "sudo make install" (no errors) Add to XAMPP's httpd.conf file: LoadModule wsgi_module modules/mod_wsgi.so AddType text/html .py WSGIScriptAlias /app-sample "/Applications/xampp/xamppfiles/htdocs/app-sample/main.py" <Directory "/Applications/xampp/xamppfiles/htdocs/app-sample"> Order deny,allow Allow from all </Directory> Restart Apache via XAMPP
[ "First off, run 'make distclean' and then redo configure/make/make install for mod_wsgi. Where you have 'Terminal \"make\" (message \"make: Nothing to be done for `all'.\")' indicates there were prior build results in directory and nothing got built for that execution of make.\nNext, use '.wsgi' extension instead of '.py' to ensure that you don't have a conflict with an existing definition saying that '.py' files should be executed as CGI scripts. This is one common reason for blank responses. The Apache error logs should give you clues as to this being the problem.\nAlso, what does your sample application do? Have you tried with a simple hello world program as per the documentation on the mod_wsgi site rather than jump to using your own program. If using your own program only, then you may possibly be causing Apache processes to crash due to some shared library conflict between Apache and Python modules being used, something else that will cause blank responses. Again, carefully check the Apache error logs for information logged at time request is made.\nFinally your program could just be buggy and have bad syntax in returned HTML response causing it to not be displayed. Ask the browser to show the source for the page returned by the request and make sure it isn't malformed HTML.\n" ]
[ 3 ]
[]
[]
[ "apache", "macos", "mod_wsgi", "python", "xampp" ]
stackoverflow_0001665994_apache_macos_mod_wsgi_python_xampp.txt
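A minimal "hello world" application of the kind the answer suggests testing with first, saved for example as /Applications/xampp/xamppfiles/htdocs/app-sample/main.wsgi (the path follows the httpd.conf snippet above; the .wsgi extension avoids clashing with any handler registered for .py files):

def application(environ, start_response):
    # The smallest valid WSGI application, per the mod_wsgi documentation.
    status = '200 OK'
    output = 'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]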
Q: Public API with Private Elements in Python I'm working on a web mapping service and would like to provide my users with a Python API that they can use to create custom plugins. These plugins would be running on my server so I'm trying to lock down Python as much as possible. To ensure that users can't access files they are not supposed to, I'm planning on running the plugins inside of a PyPy sandboxed interpreter. The last large hurdle that I'm trying to overcome is the API itself. I have a database API that will allow the user to make controlled queries to the database. For example, I have a select(column,condition) function that will allow users to retrieve items from the database but they will be restricted to a table. Then, there is a private function _query(sql) that takes raw SQL commands and executes them on the server. The problem is that Python (at least to my knowledge) provides no way of preventing users from calling the query function directly and nuking my database. In order to prevent this, I was thinking of breaking my API into two parts: ------------- pipe -------------- ------------ | publicAPI | -------> | privateAPI | ---> | database | ------------- -------------- ------------ So publicAPI would basically be a proxy API communicating with the main API via a pipe. publicAPI would only contain proxies to the public API functions, users would be unable to get a hold of the private elements (such as _query(sql)). Do you think this is a viable solution? Am I making way too much work for myself by overlooking a simpler solution? Thanks so much for your help! A: This looks like a clean way to implement this to me. I believe it's also sometimes referred to as the "Facade" design pattern. In python this is very easy to implement using explicit method delegation (a short snippet to give you a general idea): class FacadingAPI(): def __init__(fullapi_instance): self.select = fullapi_instance.select # method delegating ... A: There are many ways that you can do this. I tend to have a tuple with all the function names that are available and then check if the function being called is in the tuple, if not then throw an error. e.g. funcs = ("available","functions","here") if (calledFunction in funcs): #do something else: #throw error or you can take the approach that google have on their App Engine page http://code.google.com/appengine/articles/rpc.html
Public API with Private Elements in Python
I'm working on a web mapping service and would like to provide my users with a Python API that they can use to create custom plugins. These plugins would be running on my server so I'm trying to lock down Python as much as possible. To ensure that users can't access files they are not supposed to, I'm planning on running the plugins inside of a PyPy sandboxed interpreter. The last large hurdle that I'm trying to overcome is the API itself. I have a database API that will allow the user to make controlled queries to the database. For example, I have a select(column,condition) function that will allow users to retrieve items from the database but they will be restricted to a table. Then, there is a private function _query(sql) that takes raw SQL commands and executes them on the server. The problem is that Python (at least to my knowledge) provides no way of preventing users from calling the query function directly and nuking my database. In order to prevent this, I was thinking of breaking my API into two parts: ------------- pipe -------------- ------------ | publicAPI | -------> | privateAPI | ---> | database | ------------- -------------- ------------ So publicAPI would basically be a proxy API communicating with the main API via a pipe. publicAPI would only contain proxies to the public API functions, users would be unable to get a hold of the private elements (such as _query(sql)). Do you think this is a viable solution? Am I making way too much work for myself by overlooking a simpler solution? Thanks so much for your help!
[ "This looks like a clean way to implement this to me. I believe it's also sometimes referred to as the \"Facade\" design pattern.\nIn python this is very easy to implement using explicit method delegation (a short snippet to give you a general idea):\nclass FacadingAPI():\n def __init__(fullapi_instance):\n self.select = fullapi_instance.select # method delegating\n ...\n\n", "There are many ways that you can do this. I tend to have a tuple with all the function names that are available and then check if the function being called is in the tuple, if not then throw an error.\ne.g.\nfuncs = (\"available\",\"functions\",\"here\")\n\nif (calledFunction in funcs):\n #do something\nelse:\n #throw error\n\nor you can take the approach that google have on their App Engine page\nhttp://code.google.com/appengine/articles/rpc.html\n" ]
[ 2, 0 ]
[]
[]
[ "api", "class", "pipe", "python", "security" ]
stackoverflow_0001666533_api_class_pipe_python_security.txt
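A slightly fuller sketch of the whitelist/facade idea from the answers, using the select/_query names from the question; the PrivateAPI internals and the table name are placeholders. Note this only keeps honest plugins honest: within one interpreter, determined code can still dig the private object out of a bound method, which is why the question's pipe-separated process is the stronger isolation.

class PrivateAPI(object):
    def select(self, column, condition):
        # Restricted to one table; the table name here is an assumption.
        return self._query('SELECT %s FROM plugin_data WHERE %s'
                           % (column, condition))

    def _query(self, sql):
        raise NotImplementedError('real database access goes here')

class PublicAPI(object):
    _exposed = ('select',)  # the whitelist

    def __init__(self, private_api):
        # Copy only the whitelisted bound methods onto the facade.
        for name in self._exposed:
            setattr(self, name, getattr(private_api, name))

api = PublicAPI(PrivateAPI())
# api.select(...) works; api._query does not exist on the facade.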
Q: Python: advantages and disadvantages of _mysql vs MySQLdb? Two libraries for MySQL. I've always used _mysql because it's simpler. Can anyone tell me the difference, and which one I should use on which occasions? A: MySQLdb uses DB-API (described in PEP 249) which should be preferred, since it's common to all database drivers. IMHO there is no advantage in going low-level with _mysql. I'd rather think of using higher level libraries, like SQLAlchemy, instead. A: Alternatively, you can use MySQL Connector/Python: MySQL Connector/Python is implementing the MySQL Client/Server protocol completely in Python. This means you don't have to compile anything or MySQL doesn't even have to be installed on the machine. A: _mysql is the one-to-one mapping of the rough mysql API. On top of it, the DB-API is built, handling things using cursors and so on. If you are used to the low-level mysql API provided by libmysqlclient, then the _mysql module is what you need, but as another answer says, there's no real need to go so low-level. You can work with the DB-API and behave just fine, with the added benefit that the DB-API is backend-independent.
Python: advantages and disadvantages of _mysql vs MySQLdb?
Two libraries for MySQL. I've always used _mysql because it's simpler. Can anyone tell me the difference, and which one I should use on which occasions?
[ "MySQLdb uses DB-API (described in PEP 249) which should be preferred, since it's common to all database drivers. IMHO there is no advantage in going low-level with _mysql. I'd rather think of using higher level libraries, like SQLAlchemy, instead.\n", "Alternatively, you can use MySQL Connector/Python:\n\nMySQL Connector/Python is implementing the MySQL Client/Server protocol completely in Python. This means you don't have to compile anything or MySQL doesn't even have to be installed on the machine.\n\n", "_mysql is the one-to-one mapping of the rough mysql API. On top of it, the DB-API is built, handling things using cursors and so on. \nIf you are used to the low-level mysql API provided by libmysqlclient, then the _mysql module is what you need, but as another answer says, there's no real need to go so low-level. You can work with the DB-API and behave just fine, with the added benefit that the DB-API is backend-independent.\n" ]
[ 14, 9, 5 ]
[]
[]
[ "mysql", "python" ]
stackoverflow_0001620575_mysql_python.txt
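For reference, a brief sketch of the DB-API style the answers recommend, using MySQLdb; the connection parameters and table are placeholders:

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='someuser',
                       passwd='secret', db='test')
cursor = conn.cursor()
# The driver handles parameter substitution, not Python string formatting.
cursor.execute('SELECT name FROM users WHERE id = %s', (1,))
for (name,) in cursor.fetchall():
    print name
cursor.close()
conn.close()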
Q: How to count the number of times something occurs inside a certain string? In python, I remember there is a function to do this. .count? "The big brown fox is brown" brown = 2. A: why not read the docs first, it's very simple: >>> "The big brown fox is brown".count("brown") 2 A: One thing worth learning if you're a Python beginner is how to use interactive mode to help with this. The first thing to learn is the dir function which will tell you the attributes of an object. >>> mystring = "The big brown fox is brown" >>> dir(mystring) ['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__ ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__g t__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__ ', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', ' __rmul__', '__setattr__', '__str__', 'capitalize', 'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'index', 'isalnum', 'isalpha', 'isdi git', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lst rip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit' , 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', ' translate', 'upper', 'zfill'] Remember, in Python, methods are also attributes. So now he use the help function to inquire about one of the methods that looks promising: >>> help(mystring.count) Help on built-in function count: count(...) S.count(sub[, start[, end]]) -> int Return the number of non-overlapping occurrences of substring sub in string S[start:end]. Optional arguments start and end are interpreted as in slice notation. This displays the docstring of the method - some help text which you should get in to the habit of putting in your own methods too.
How to count the number of times something occurs inside a certain string?
In python, I remember there is a function to do this. .count? "The big brown fox is brown" brown = 2.
[ "why not read the docs first, it's very simple:\n>>> \"The big brown fox is brown\".count(\"brown\")\n2\n\n", "One thing worth learning if you're a Python beginner is how to use interactive mode to help with this. The first thing to learn is the dir function which will tell you the attributes of an object.\n>>> mystring = \"The big brown fox is brown\"\n>>> dir(mystring)\n['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__\nge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__g\nt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__\n', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '\n__rmul__', '__setattr__', '__str__', 'capitalize', 'center', 'count', 'decode',\n'encode', 'endswith', 'expandtabs', 'find', 'index', 'isalnum', 'isalpha', 'isdi\ngit', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lst\nrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit'\n, 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', '\ntranslate', 'upper', 'zfill']\n\nRemember, in Python, methods are also attributes. So now he use the help function to inquire about one of the methods that looks promising:\n>>> help(mystring.count)\nHelp on built-in function count:\n\ncount(...)\n S.count(sub[, start[, end]]) -> int\n\n Return the number of non-overlapping occurrences of substring sub in\n string S[start:end]. Optional arguments start and end are interpreted\n as in slice notation.\n\nThis displays the docstring of the method - some help text which you should get in to the habit of putting in your own methods too.\n" ]
[ 27, 19 ]
[]
[]
[ "python" ]
stackoverflow_0001666700_python.txt
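One caveat worth illustrating from the docstring above: count() returns non-overlapping occurrences. A small sketch of counting overlapping matches as well, using the find() method's start argument:

def count_overlapping(haystack, needle):
    count = start = 0
    while True:
        start = haystack.find(needle, start)
        if start == -1:
            return count
        count += 1
        start += 1  # advance one character so overlapping matches count

print "aaaa".count("aa")               # 2 (non-overlapping)
print count_overlapping("aaaa", "aa")  # 3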
Q: file size is dramatically increased after pickle I'm reading in a file and sending the data (once encrypted) to a dictionary, with a hash of the data before and after encryption. I then pickle the dictionary but find the file size is massive compared to the source file size. If I write the encrypted data straight to a file, the size is identical to the source. Any idea why my pickled file is so large? #Encrypt data and get hashes def encryptAndExportFile(self, key, inFile, outFile): openInFile = open(inFile,"rb") inFileSize = os.path.getsize(inFile) inFileData = openInFile.readlines() openInFile.close() """ initialise cipher """ cipher = AES.new(key, AES.MODE_CFB) """ initialise MD5 """ m = hashlib.md5() #hash h = hashlib.md5() #hash of encrypted data encryptedData = [] for data in inFileData: m.update(data) encData = cipher.encrypt(data) h.update(encData) encryptedData.append(encData) hashResult = m.digest() encHashResult = h.digest() return hashResult, encryptedData, encHashResult def storeEncryptedObject(self, obj, path): outFile = open(path, 'wb') pickle.dump(obj, outFile) outFile.close() A: Try using a binary pickle by specifying protocol=2 as a keyword argument to pickle.dump. It should be much more efficient.
file size is dramatically increased after pickle
I'm reading in a file and sending the data (once encrypted) to a dictionary, with a hash of the data before and after encryption. I then pickle the dictionary but find the file size is massive compared to the source file size. If I write the encrypted data straight to a file, the size is identical to the source. Any idea why my pickled file is so large? #Encrypt data and get hashes def encryptAndExportFile(self, key, inFile, outFile): openInFile = open(inFile,"rb") inFileSize = os.path.getsize(inFile) inFileData = openInFile.readlines() openInFile.close() """ initialise cipher """ cipher = AES.new(key, AES.MODE_CFB) """ initialise MD5 """ m = hashlib.md5() #hash h = hashlib.md5() #hash of encrypted data encryptedData = [] for data in inFileData: m.update(data) encData = cipher.encrypt(data) h.update(encData) encryptedData.append(encData) hashResult = m.digest() encHashResult = h.digest() return hashResult, encryptedData, encHashResult def storeEncryptedObject(self, obj, path): outFile = open(path, 'wb') pickle.dump(obj, outFile) outFile.close()
[ "Try using a binary pickle by specifying protocol=2 as a keyword argument to pickle.dump. It should be much more efficient.\n" ]
[ 6 ]
[]
[]
[ "aes", "encryption", "pickle", "python" ]
stackoverflow_0001667144_aes_encryption_pickle_python.txt
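A quick illustration of the answer; the random bytes here stand in for the encrypted data, which is equally incompressible. Protocol 0 (the default) is ASCII-based and escapes unprintable bytes as text, so it balloons; protocol 2 is binary and stays close to the raw size:

import os
import pickle

data = {'payload': os.urandom(100000)}  # 100 KB of binary bytes

for proto in (0, 2):
    path = 'test_proto%d.pickle' % proto
    outFile = open(path, 'wb')
    pickle.dump(data, outFile, protocol=proto)
    outFile.close()
    print proto, os.path.getsize(path)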
Q: How to delete a certain IE cookie from python? How can I delete IE 8 cookies for a certain site from Python? A: It is probably cleaner and less error-prone to use the Python standard library module cookielib; this provides functions to manipulate cookies in various ways. Unfortunately, to use this with IE you will need the third-party extension to this module: ClientCookie. This module contains various "cookie jars", such as MSIECookieJar (which is what you probably want) but also MozillaCookieJar. This module does not necessarily work with all versions of IE, but is worth a look. A: I've found out that under Windows 7, cookies are stored in C:\Users\(user)\AppData\Roaming\Microsoft\Windows\Cookies in the format (user)@host[\d].txt, so I guess just deleting the corresponding file is the way to go.
How to delete a certain IE cookie from python?
How can I delete IE 8 cookies for a certain site from Python?
[ "It is probably cleaner and less error prone to use the Python standard library module: cookielib this provides functions to manipulate cookies in various ways. \nUnfortunately to use this with IE consider the third party extension to this module: Client Cookie. This module contains various \"cookie jars\" such as MSIECookieJar which is what you probably want but also MozillaCookieJar. This module does not necessarily work with all version of IE but is worth a look.\n", "I've found out that under Windows seven cookies are stored in \n\nC:\\Users(user)\\AppData\\Roaming\\Microsoft\\Windows\\Cookies\n\nin format (user)@host[\\d].txt, so I guess just deleting the corresponding file is way to go.\n" ]
[ 1, 0 ]
[]
[]
[ "cookies", "internet_explorer", "python" ]
stackoverflow_0001666989_cookies_internet_explorer_python.txt
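A heavily hedged sketch of the file-deletion approach from the second answer: remove the cookie files for one host from the Windows 7 cookie folder. The folder location and the user@host[N].txt naming come from that answer and are not verified here; IE should be closed when you do this:

import os
import glob

cookie_dir = os.path.join(os.environ['APPDATA'],
                          'Microsoft', 'Windows', 'Cookies')
host = 'example.com'  # placeholder site

# Escape the literal brackets in the glob pattern with character classes.
pattern = os.path.join(cookie_dir, '*@%s[[]*[]].txt' % host)
for path in glob.glob(pattern):
    os.remove(path)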
Q: Accessing POST params with same name in python I need to get the values of these checkboxes with the same name through HTTP "POST". <input type="checkbox" id="dde" name="dept[]" value="dde"/> <input type="checkbox" id="dre" name="dept[]" value="dre"/> <input type="checkbox" id="iid" name="dept[]" value="iid"/> How can I get these values in Python using the self.request.get() method? A: You can use request.get_all(). According to the docs it "Returns a list of values of all of the query (URL) or POST arguments with the given name, possibly an empty list."
Accessing POST params with same name in python
I need to get the values of these checkboxes with the same name through HTTP "POST". <input type="checkbox" id="dde" name="dept[]" value="dde"/> <input type="checkbox" id="dre" name="dept[]" value="dre"/> <input type="checkbox" id="iid" name="dept[]" value="iid"/> How can I get these values in Python using the self.request.get() method?
[ "You can use request.get_all(). \nAccording to the docs it \"Returns a list of values of all of the query (URL) or POST arguments with the given name, possibly an empty list.\"\n" ]
[ 4 ]
[]
[]
[ "google_app_engine", "http", "python" ]
stackoverflow_0001667349_google_app_engine_http_python.txt
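A short sketch of the get_all() approach in an App Engine webapp handler; the handler name is invented, while the form field name mirrors the question:

from google.appengine.ext import webapp

class DeptHandler(webapp.RequestHandler):
    def post(self):
        # Returns every checked value, e.g. ['dde', 'iid'].
        depts = self.request.get_all('dept[]')
        self.response.out.write(', '.join(depts))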
Q: Best way to convert HTML to plaintext using Python I'm working on a project that involves converting a large amount of HTML content to plain/text. I have a custom-written module that does the job OK, but I'm wondering if there's some standard tools to help get the job done. A: Html2Text seems to be a good option A: Here's a python library which does HTML parsing: lxml.html BeautifulSoup is another option.
Best way to convert HTML to plaintext using Python
I'm working on a project that involves converting a large amount of HTML content to plain/text. I have a custom-written module that does the job OK, but I'm wondering if there's some standard tools to help get the job done.
[ "Html2Text seems to be a good option\n", "Here's a python library which does HTML parsing:\n\nlxml.html\n\nBeautifulSoup is another option.\n" ]
[ 10, 4 ]
[]
[]
[ "html", "plaintext", "python" ]
stackoverflow_0001668081_html_plaintext_python.txt
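For a rough feel of the suggested libraries, two hedged snippets (both third-party; interfaces as of their 2009-era releases):

# html2text converts HTML to Markdown-flavoured plain text.
import html2text
plain = html2text.html2text(u'<p>Hello, <b>world</b>!</p>')

# BeautifulSoup (version 3) can simply collect all the text nodes.
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup('<p>Hello, <b>world</b>!</p>')
plain2 = ''.join(soup.findAll(text=True))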
Q: Visibility_notify event in pyGTK I am on Windows and I am developing a PyGTK app. I need to know when a window is visible or hidden by another window, in order to stop a heavy drawing process. http://www.pygtk.org/docs/pygtk/class-gtkwidget.html#signal-gtkwidget--visibility-notify-event I use the visibility_notify_event to be notified when the window's visibility state changes. I should get gtk.gdk.VISIBILITY_FULLY_OBSCURED, gtk.gdk.VISIBILITY_PARTIAL or gtk.gdk.VISIBILITY_UNOBSCURED http://www.pygtk.org/docs/pygtk/class-gdkevent.html Here is a sample that displays a message when an event occurs. #!/usr/bin/env python import pygtk pygtk.require('2.0') import gtk class EventBoxExample: def __init__(self): window = gtk.Window(gtk.WINDOW_TOPLEVEL) window.set_title("Test") window.connect("destroy", lambda w: gtk.main_quit()) window.set_border_width(10) # Create an EventBox and add it to our toplevel window self.event_box = gtk.EventBox() window.add(self.event_box) self.event_box.show() #we want all events self.event_box.set_events(gtk.gdk.ALL_EVENTS_MASK) #connect events self.event_box.connect ("map_event", self.Map) self.event_box.connect ("unmap_event", self.unMap) self.event_box.connect ("configure_event", self.Configure) self.event_box.connect ("expose_event", self.Expose) self.event_box.connect ("visibility_notify_event", self.Visibility) self.event_box.connect ("key_press_event", self.KeyPress) self.event_box.connect ("button_press_event", self.ButtonPress) self.event_box.connect ("button_release_event", self.ButtonRelease) self.event_box.connect ("motion_notify_event", self.MouseMotion) self.event_box.connect ("destroy", self.Destroy) self.event_box.connect ("enter_notify_event", self.Enter) self.event_box.connect ("leave_notify_event", self.Leave) self.event_box.connect ("delete_event", self.Destroy) window.show() def Map (self, *args): print "Map ", args return True def unMap (self, *args): print "unMap ", args return True def Configure (self, *args): print "Configure" return True def Expose (self, *args): print "Expose" return True def Visibility (self, *args): print "Visibility" return True def KeyPress (self, *args): print "KeyPress" return True def ButtonPress (self, *args): print "ButtonPress" return True def ButtonRelease (self, *args): print "ButtonRelease" return True def MouseMotion (self, *args): print "MouseMotion" return True def Enter (self, *args): print "Enter" self.event_box.grab_focus () return True def Leave (self, *args): print "Leave" return True def Destroy (self, *args): print "Destroy" def main(): gtk.main() return 0 if __name__ == "__main__": EventBoxExample() main() Does anyone have any idea why I can't get visibility_notify_event? Thanks A: It's quite likely that the underlying GDK layer simply isn't "good enough" on Windows. The GTK+ toolkit's port to Windows is known to be a bit lagging in functionality and polish. If you can try the same program on a Linux machine, and it works there, you can be pretty certain this is a limitation of the Windows port.
Visibility_notify event in pyGTK
I am on Windows and I am developing a PyGTK app. I need to know when a window is visible or hidden by another window, in order to stop a heavy drawing process. http://www.pygtk.org/docs/pygtk/class-gtkwidget.html#signal-gtkwidget--visibility-notify-event I use the visibility_notify_event to be notified when the window's visibility state changes. I should get gtk.gdk.VISIBILITY_FULLY_OBSCURED, gtk.gdk.VISIBILITY_PARTIAL or gtk.gdk.VISIBILITY_UNOBSCURED http://www.pygtk.org/docs/pygtk/class-gdkevent.html Here is a sample that displays a message when an event occurs. #!/usr/bin/env python import pygtk pygtk.require('2.0') import gtk class EventBoxExample: def __init__(self): window = gtk.Window(gtk.WINDOW_TOPLEVEL) window.set_title("Test") window.connect("destroy", lambda w: gtk.main_quit()) window.set_border_width(10) # Create an EventBox and add it to our toplevel window self.event_box = gtk.EventBox() window.add(self.event_box) self.event_box.show() #we want all events self.event_box.set_events(gtk.gdk.ALL_EVENTS_MASK) #connect events self.event_box.connect ("map_event", self.Map) self.event_box.connect ("unmap_event", self.unMap) self.event_box.connect ("configure_event", self.Configure) self.event_box.connect ("expose_event", self.Expose) self.event_box.connect ("visibility_notify_event", self.Visibility) self.event_box.connect ("key_press_event", self.KeyPress) self.event_box.connect ("button_press_event", self.ButtonPress) self.event_box.connect ("button_release_event", self.ButtonRelease) self.event_box.connect ("motion_notify_event", self.MouseMotion) self.event_box.connect ("destroy", self.Destroy) self.event_box.connect ("enter_notify_event", self.Enter) self.event_box.connect ("leave_notify_event", self.Leave) self.event_box.connect ("delete_event", self.Destroy) window.show() def Map (self, *args): print "Map ", args return True def unMap (self, *args): print "unMap ", args return True def Configure (self, *args): print "Configure" return True def Expose (self, *args): print "Expose" return True def Visibility (self, *args): print "Visibility" return True def KeyPress (self, *args): print "KeyPress" return True def ButtonPress (self, *args): print "ButtonPress" return True def ButtonRelease (self, *args): print "ButtonRelease" return True def MouseMotion (self, *args): print "MouseMotion" return True def Enter (self, *args): print "Enter" self.event_box.grab_focus () return True def Leave (self, *args): print "Leave" return True def Destroy (self, *args): print "Destroy" def main(): gtk.main() return 0 if __name__ == "__main__": EventBoxExample() main() Does anyone have any idea why I can't get visibility_notify_event? Thanks
[ "It's quite likely that the underlying GDK layer simply isn't \"good enough\" on Windows. The GTK+ toolkit's port to Windows is known to be a bit lagging in functionality and polish.\nIf you can try the same program on a Linux machine, and it works there, you can be pretty certain this is a limitation of the Windows port.\n" ]
[ 2 ]
[]
[]
[ "pygtk", "python" ]
stackoverflow_0001667525_pygtk_python.txt
Q: How do I get the string representation of a variable in python? I have a variable x in python. How can I find the string 'x' from the variable. Here is my attempt: def var(v,c): for key in c.keys(): if c[key] == v: return key def f(): x = '321' print 'Local var %s = %s'%(var(x,locals()),x) x = '123' print 'Global var %s = %s'%(var(x,locals()),x) f() The results are: Global var x = 123 Local var x = 321 The above recipe seems a bit un-pythonesque. Is there a better/shorter way to achieve the same result? A: Q: I have a variable x in python. How can I find the string 'x' from the variable. A: If I am understanding your question properly, you want to go from the value of a variable to its name. This is not really possible in Python. In Python, there really isn't any such thing as a "variable". What Python really has are "names" which can have objects bound to them. It makes no difference to the object what names, if any, it might be bound to. It might be bound to dozens of different names, or none. Consider this example: foo = 1 bar = foo baz = foo Now, suppose you have the integer object with value 1, and you want to work backwards and find its name. What would you print? Three different names have that object bound to them, and all are equally valid. print(bar is foo) # prints True print(baz is foo) # prints True In Python, a name is a way to access an object, so there is no way to work with names directly. You might be able to search through locals() to find the value and recover a name, but that is at best a parlor trick. And in my above example, which of foo, bar, and baz is the "correct" answer? They all refer to exactly the same object. P.S. The above is a somewhat edited version of an answer I wrote before. I think I did a better job of wording things this time. A: I believe the general form of what you want is repr() or the __repr__() method of an object. with regards to __repr__(): Called by the repr() built-in function and by string conversions (reverse quotes) to compute the “official” string representation of an object. See the docs here: object.repr(self) A: stevenha has a great answer to this question. But, if you actually do want to poke around in the namespace dictionaries anyway, you can get all the names for a given value in a particular scope / namespace like this: def foo1(): x = 5 y = 4 z = x print names_of1(x, locals()) def names_of1(var, callers_namespace): return [name for (name, value) in callers_namespace.iteritems() if var is value] foo1() # prints ['x', 'z'] If you're working with a Python that has stack frame support (most do, CPython does), it isn't required that you pass the locals dict into the names_of function; the function can retrieve that dictionary from its caller's frame itself: def foo2(): xx = object() yy = object() zz = xx print names_of2(xx) def names_of2(var): import inspect callers_namespace = inspect.currentframe().f_back.f_locals return [name for (name, value) in callers_namespace.iteritems() if var is value] foo2() # ['xx', 'zz'] If you're working with a value type that you can assign a name attribute to, you can give it a name, and then use that: class SomeClass(object): pass obj = SomeClass() obj.name = 'obj' class NamedInt(int): __slots__ = ['name'] x = NamedInt(321) x.name = 'x' Finally, if you're working with class attributes and you want them to know their names (descriptors are the obvious use case), you can do cool tricks with metaclass programming like they do in the Django ORM and SQLAlchemy declarative-style table definitions: class AutonamingType(type): def __init__(cls, name, bases, attrs): for (attrname, attrvalue) in attrs.iteritems(): if getattr(attrvalue, '__autoname__', False): attrvalue.name = attrname super(AutonamingType,cls).__init__(name, bases, attrs) class NamedDescriptor(object): __autoname__ = True name = None def __get__(self, instance, instance_type): return self.name class Foo(object): __metaclass__ = AutonamingType bar = NamedDescriptor() baaz = NamedDescriptor() lilfoo = Foo() print lilfoo.bar # prints 'bar' print lilfoo.baaz # prints 'baaz' A: There are three ways to get "the" string representation of an object in python: 1: str() >>> foo={"a":"z","b":"y"} >>> str(foo) "{'a': 'z', 'b': 'y'}" 2: repr() >>> foo={"a":"z","b":"y"} >>> repr(foo) "{'a': 'z', 'b': 'y'}" 3: string interpolation: >>> foo={"a":"z","b":"y"} >>> "%s" % (foo,) "{'a': 'z', 'b': 'y'}" In this case all three methods generated the same output, the difference is that str() calls dict.__str__(), while repr() calls dict.__repr__(). str() is used on string interpolation, while repr() is used by Python internally on each object in a list or dict when you print the list or dict. As Tendayi Mawushe mentions above, the string produced by repr isn't necessarily human-readable. Also, the default implementation of .__str__() is to call .__repr__(), so if the class does not have its own overrides to .__str__(), the value returned from .__repr__() is used.
How do I get the string representation of a variable in python?
I have a variable x in python. How can I find the string 'x' from the variable. Here is my attempt: def var(v,c): for key in c.keys(): if c[key] == v: return key def f(): x = '321' print 'Local var %s = %s'%(var(x,locals()),x) x = '123' print 'Global var %s = %s'%(var(x,locals()),x) f() The results are: Global var x = 123 Local var x = 321 The above recipe seems a bit un-pythonesque. Is there a better/shorter way to achieve the same result?
[ "Q: I have a variable x in python. How can i find the string 'x' from the variable.\nA: If I am understanding your question properly, you want to go from the value of a variable to its name. This is not really possible in Python.\nIn Python, there really isn't any such thing as a \"variable\". What Python really has are \"names\" which can have objects bound to them. It makes no difference to the object what names, if any, it might be bound to. It might be bound to dozens of different names, or none.\nConsider this example:\nfoo = 1\nbar = foo\nbaz = foo\n\nNow, suppose you have the integer object with value 1, and you want to work backwards and find its name. What would you print? Three different names have that object bound to them, and all are equally valid.\nprint(bar is foo) # prints True\nprint(baz is foo) # prints True\n\nIn Python, a name is a way to access an object, so there is no way to work with names directly. You might be able to search through locals() to find the value and recover a name, but that is at best a parlor trick. And in my above example, which of foo, bar, and baz is the \"correct\" answer? They all refer to exactly the same object.\nP.S. The above is a somewhat edited version of an answer I wrote before. I think I did a better job of wording things this time.\n", "I believe the general form of what you want is repr() or the __repr__() method of an object.\nwith regards to __repr__():\n\nCalled by the repr() built-in function\n and by string conversions (reverse\n quotes) to compute the “official”\n string representation of an object.\n\nSee the docs here: object.repr(self)\n", "stevenha has a great answer to this question. But, if you actually do want to poke around in the namespace dictionaries anyway, you can get all the names for a given value in a particular scope / namespace like this:\ndef foo1():\n x = 5\n y = 4\n z = x\n print names_of1(x, locals())\n\ndef names_of1(var, callers_namespace):\n return [name for (name, value) in callers_namespace.iteritems() if var is value]\n\nfoo1() # prints ['x', 'z']\n\nIf you're working with a Python that has stack frame support (most do, CPython does), it isn't required that you pass the locals dict into the names_of function; the function can retrieve that dictionary from its caller's frame itself:\ndef foo2():\n xx = object()\n yy = object()\n zz = xx\n print names_of2(xx)\n\ndef names_of2(var):\n import inspect\n callers_namespace = inspect.currentframe().f_back.f_locals\n return [name for (name, value) in callers_namespace.iteritems() if var is value]\n\nfoo2() # ['xx', 'zz']\n\nIf you're working with a value type that you can assign a name attribute to, you can give it a name, and then use that:\nclass SomeClass(object):\n pass\n\nobj = SomeClass()\nobj.name = 'obj'\n\n\nclass NamedInt(int):\n __slots__ = ['name']\n\nx = NamedInt(321)\nx.name = 'x'\n\nFinally, if you're working with class attributes and you want them to know their names (descriptors are the obvious use case), you can do cool tricks with metaclass programming like they do in the Django ORM and SQLAlchemy declarative-style table definitions:\nclass AutonamingType(type):\n def __init__(cls, name, bases, attrs):\n for (attrname, attrvalue) in attrs.iteritems():\n if getattr(attrvalue, '__autoname__', False):\n attrvalue.name = attrname\n super(AutonamingType,cls).__init__(name, bases, attrs)\n\nclass NamedDescriptor(object):\n __autoname__ = True\n name = None\n def __get__(self, instance, instance_type):\n return self.name\n\nclass Foo(object):\n 
__metaclass__ = AutonamingType\n\n bar = NamedDescriptor()\n baaz = NamedDescriptor()\n\nlilfoo = Foo()\nprint lilfoo.bar # prints 'bar'\nprint lilfoo.baaz # prints 'baaz'\n\n", "There are three ways to get \"the\" string representation of an object in python:\n1: str()\n>>> foo={\"a\":\"z\",\"b\":\"y\"}\n>>> str(foo)\n\"{'a': 'z', 'b': 'y'}\"\n\n2: repr()\n>>> foo={\"a\":\"z\",\"b\":\"y\"}\n>>> repr(foo)\n\"{'a': 'z', 'b': 'y'}\"\n\n3: string interpolation:\n>>> foo={\"a\":\"z\",\"b\":\"y\"}\n>>> \"%s\" % (foo,)\n\"{'a': 'z', 'b': 'y'}\"\n\nIn this case all three methods generated the same output, the difference is that str() calls dict.__str__(), while repr() calls dict.__repr__(). str() is used on string interpolation, while repr() is used by Python internally on each object in a list or dict when you print the list or dict.\nAs Tendayi Mawushe mentiones above, string produced by repr isn't necessarily human-readable.\nAlso, the default implementation of .__str__() is to call .__repr__(), so if the class does not have it's own overrides to .__str__(), the value returned from .__repr__() is used.\n" ]
[ 14, 7, 3, 1 ]
[]
[]
[ "introspection", "python" ]
stackoverflow_0001665833_introspection_python.txt
Q: qt - pyqt QTableView not populating when changing databases I'm trying to allow my users to pick which database to open. Each database will have the same schema. For some reason though I can't get my QTableView to populate after I open the database. I'm paraphrasing the example code but this should give you an idea of what I'm trying to do. works: class aMainWindow(QMainWindow, Ui_MainWindow): def __init__(self): QMainWindow.__init__(self) self.db = QSqlDatabase.addDatabase("QSQLITE") self.db.setDatabaseName('testdb.db') self.db.open() # Set up the user interface from Designer. self.setupUi(self) #self.db.setDatabaseName('testdb.db') self.model = QSqlTableModel(self) self.model.setTable("records") self.model.setSort(FILEORDER, Qt.AscendingOrder) self.model.setHeaderData(ID, Qt.Horizontal, QVariant("ID")) self.model.setHeaderData(FILEORDER, Qt.Horizontal, QVariant("File Order")) self.model.setHeaderData(RECORDTYPE, Qt.Horizontal, QVariant("Type")) self.model.setHeaderData(NAME, Qt.Horizontal, QVariant("Name")) self.model.setHeaderData(PRESORTNAME, Qt.Horizontal, QVariant("Presort Name")) self.model.setHeaderData(RECORD, Qt.Horizontal, QVariant("Record")) self.model.select() self.tableView.setModel(self.model) #self.view.setSelectionMode(QTableView.SingleSelection) #self.view.setSelectionBehavior(QTableView.SelectRows) self.tableView.setColumnHidden(ID, True) self.tableView.setColumnHidden(PRESORTNAME, True) self.tableView.setColumnHidden(RECORD, True) doesn't work: class aMainWindow(QMainWindow, Ui_MainWindow): def __init__(self): QMainWindow.__init__(self) # Set up the user interface from Designer. self.setupUi(self) #self.db.setDatabaseName('testdb.db') self.model = QSqlTableModel(self) self.model.setTable("records") self.model.setSort(FILEORDER, Qt.AscendingOrder) self.model.setHeaderData(ID, Qt.Horizontal, QVariant("ID")) self.model.setHeaderData(FILEORDER, Qt.Horizontal, QVariant("File Order")) self.model.setHeaderData(RECORDTYPE, Qt.Horizontal, QVariant("Type")) self.model.setHeaderData(NAME, Qt.Horizontal, QVariant("Name")) self.model.setHeaderData(PRESORTNAME, Qt.Horizontal, QVariant("Presort Name")) self.model.setHeaderData(RECORD, Qt.Horizontal, QVariant("Record")) self.model.select() self.tableView.setModel(self.model) #self.view.setSelectionMode(QTableView.SingleSelection) #self.view.setSelectionBehavior(QTableView.SelectRows) self.tableView.setColumnHidden(ID, True) self.tableView.setColumnHidden(PRESORTNAME, True) self.tableView.setColumnHidden(RECORD, True) #slot of the open db action def on_actionOpen_DB_triggered(self, checked=None): if checked is None: return filename = QFileDialog.getOpenFileName(self, 'open a database', '/home/', "Databases (*.db)", #All Files (*.*) "Databases (*.db)") if not filename: pass self.db = QSqlDatabase.addDatabase("QSQLITE") if self.db.isOpen(): sys.stdout.write('db still open?') self.db.setDatabaseName(filename) self.dbname = filename self.db.open() self.model.select() #self.tableView.update() if self.db.isOpen(): sys.stdout.write('db opened') A: I can't remember today exactly where I found it but as I was researching something else I found some forum posting that said the connection must be made before making the model. I suspect there must be some code in the model construct that's touching the db. I changed my on_actionOpen_DB_triggered to create the model after making the connection and it works just fine.
qt - pyqt QTableView not populating when changing databases
I'm trying to allow my users to pick which database to open. Each database will have the same schema. For some reason though I can't get my QTableView to populate after I open the database. I'm paraphrasing the example code but this should give you an idea of what I'm trying to do. works: class aMainWindow(QMainWindow, Ui_MainWindow): def __init__(self): QMainWindow.__init__(self) self.db = QSqlDatabase.addDatabase("QSQLITE") self.db.setDatabaseName('testdb.db') self.db.open() # Set up the user interface from Designer. self.setupUi(self) #self.db.setDatabaseName('testdb.db') self.model = QSqlTableModel(self) self.model.setTable("records") self.model.setSort(FILEORDER, Qt.AscendingOrder) self.model.setHeaderData(ID, Qt.Horizontal, QVariant("ID")) self.model.setHeaderData(FILEORDER, Qt.Horizontal, QVariant("File Order")) self.model.setHeaderData(RECORDTYPE, Qt.Horizontal, QVariant("Type")) self.model.setHeaderData(NAME, Qt.Horizontal, QVariant("Name")) self.model.setHeaderData(PRESORTNAME, Qt.Horizontal, QVariant("Presort Name")) self.model.setHeaderData(RECORD, Qt.Horizontal, QVariant("Record")) self.model.select() self.tableView.setModel(self.model) #self.view.setSelectionMode(QTableView.SingleSelection) #self.view.setSelectionBehavior(QTableView.SelectRows) self.tableView.setColumnHidden(ID, True) self.tableView.setColumnHidden(PRESORTNAME, True) self.tableView.setColumnHidden(RECORD, True) doesn't work: class aMainWindow(QMainWindow, Ui_MainWindow): def __init__(self): QMainWindow.__init__(self) # Set up the user interface from Designer. self.setupUi(self) #self.db.setDatabaseName('testdb.db') self.model = QSqlTableModel(self) self.model.setTable("records") self.model.setSort(FILEORDER, Qt.AscendingOrder) self.model.setHeaderData(ID, Qt.Horizontal, QVariant("ID")) self.model.setHeaderData(FILEORDER, Qt.Horizontal, QVariant("File Order")) self.model.setHeaderData(RECORDTYPE, Qt.Horizontal, QVariant("Type")) self.model.setHeaderData(NAME, Qt.Horizontal, QVariant("Name")) self.model.setHeaderData(PRESORTNAME, Qt.Horizontal, QVariant("Presort Name")) self.model.setHeaderData(RECORD, Qt.Horizontal, QVariant("Record")) self.model.select() self.tableView.setModel(self.model) #self.view.setSelectionMode(QTableView.SingleSelection) #self.view.setSelectionBehavior(QTableView.SelectRows) self.tableView.setColumnHidden(ID, True) self.tableView.setColumnHidden(PRESORTNAME, True) self.tableView.setColumnHidden(RECORD, True) #slot of the open db action def on_actionOpen_DB_triggered(self, checked=None): if checked is None: return filename = QFileDialog.getOpenFileName(self, 'open a database', '/home/', "Databases (*.db)", #All Files (*.*) "Databases (*.db)") if not filename: pass self.db = QSqlDatabase.addDatabase("QSQLITE") if self.db.isOpen(): sys.stdout.write('db still open?') self.db.setDatabaseName(filename) self.dbname = filename self.db.open() self.model.select() #self.tableView.update() if self.db.isOpen(): sys.stdout.write('db opened')
[ "I can't remember today exactly where I found it but as I was researching something else I found some forum posting that said the connection must be made before making the model. I suspect there must be some code in the model construct that's touching the db. I changed my on_actionOpen_DB_triggered to create the model after making the connection and it works just fine. \n" ]
[ 2 ]
[]
[]
[ "pyqt", "python", "qt" ]
stackoverflow_0001659756_pyqt_python_qt.txt
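A hedged sketch of the fix the answer describes: open the connection first, then construct the model, so that QSqlTableModel binds to a live database. Only the ordering is the point; the rest follows the question's code (with the no-filename case returning instead of falling through):

def on_actionOpen_DB_triggered(self, checked=None):
    if checked is None:
        return
    filename = QFileDialog.getOpenFileName(self, 'open a database',
                                           '/home/', "Databases (*.db)")
    if not filename:
        return
    self.db = QSqlDatabase.addDatabase("QSQLITE")
    self.db.setDatabaseName(filename)
    self.db.open()
    # Create the model only now, against the already-open connection.
    self.model = QSqlTableModel(self)
    self.model.setTable("records")
    self.model.select()
    self.tableView.setModel(self.model)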
Q: Reading bytestreams in Python I'm using Python appscript to write artwork to my iTunes songs. I have a file stored in .pict format, and when I use the normal open and read routines, it reads the content as a string (encoded in UTF-8). imFile = open('/Users/kartikaiyer/temp.pict','r') data = imFile.read() it = app('iTunes') sel = it.current_track.get() sel.artworks[1].data_.set(data[513:]) is the code I'm using. It fails with an "object not recognized" error, and I'm guessing it's because the set parameter is a UTF-8 encoded string. Any ideas as to how I can coerce the data to a bytestream and use that as the set parameter? The binascii module doesn't have the functions I need. Any help would be much appreciated. A: Try setting the read mode to binary: imFile = open('/Users/kartikaiyer/temp.pict','rb')
Reading bytestreams in Python
I'm using Python appscript to write artwork to my iTunes songs. I have a file stored in .pict format, and when I use the normal open and read routines, it reads the content as a string (encoded in UTF-8). imFile = open('/Users/kartikaiyer/temp.pict','r') data = imFile.read() it = app('iTunes') sel = it.current_track.get() sel.artworks[1].data_.set(data[513:]) is the code I'm using. It fails with an "object not recognized" error, and I'm guessing it's because the set parameter is a UTF-8 encoded string. Any ideas as to how I can coerce the data to a bytestream and use that as the set parameter? The binascii module doesn't have the functions I need. Any help would be much appreciated.
[ "Try setting the read mode to binary:\nimFile = open('/Users/kartikaiyer/temp.pict','rb')\n\n" ]
[ 7 ]
[]
[]
[ "py_appscript", "python", "sourceforge_appscript" ]
stackoverflow_0001669040_py_appscript_python_sourceforge_appscript.txt
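A short sketch expanding on the answer: open the file in binary mode so no newline or encoding translation happens. A PICT file starts with a 512-byte header, which is presumably what the question's data[513:] slice was trying to skip; whether appscript wants the header stripped is not verified here:

imFile = open('/Users/kartikaiyer/temp.pict', 'rb')
data = imFile.read()
imFile.close()

pict_body = data[512:]  # drop the standard 512-byte PICT header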
Q: Summing Consecutive Ranges Pythonically I have a sumranges() function, which sums all the ranges of consecutive numbers found in a tuple of tuples. To illustrate: def sumranges(nums): return sum([sum([1 for j in range(len(nums[i])) if nums[i][j] == 0 or nums[i][j - 1] + 1 != nums[i][j]]) for i in range(len(nums))]) >>> nums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400)) >>> print sumranges(nums) 7 As you can see, it returns the number of ranges of consecutive digits within the tuple, that is: len((1, 2, 3, 4), (1), (5, 6), (19, 20), (24), (29), (400)) = 7. The tuples are always ordered. My problem is that my sumranges() is terrible. I hate looking at it. I'm currently just iterating through the tuple and each subtuple, assigning a 1 if the number is not (1 + previous number), and summing the total. I feel like I am missing a much easier way to accomplish my stated objective. Does anyone know a more pythonic way to do this? Edit: I have benchmarked all the answers given thus far. Thanks to all of you for your answers. The benchmarking code is as follows, using a sample size of 100K: from time import time from random import randrange nums = [sorted(list(set(randrange(1, 10) for i in range(10)))) for j in range(100000)] for func in sumranges, alex, matt, redglyph, ephemient, ferdinand: start = time() result = func(nums) end = time() print ', '.join([func.__name__, str(result), str(end - start) + ' s']) Results are as follows. Actual answer shown to verify that all functions return the correct answer: sumranges, 250281, 0.54171204567 s alex, 250281, 0.531121015549 s matt, 250281, 0.843333005905 s redglyph, 250281, 0.366822004318 s ephemient, 250281, 0.805964946747 s ferdinand, 250281, 0.405596971512 s RedGlyph does edge out in terms of speed, but the simplest answer is probably Ferdinand's, and probably wins for most pythonic. A: My 2 cents: >>> sum(len(set(x - i for i, x in enumerate(t))) for t in nums) 7 It's basically the same idea as descriped in Alex' post, but using a set instead of itertools.groupby, resulting in a shorter expression. Since sets are implemented in C and len() of a set runs in constant time, this should also be pretty fast. A: Consider: >>> nums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400)) >>> flat = [[(x - i) for i, x in enumerate(tu)] for tu in nums] >>> print flat [[1, 1, 1, 1], [1, 4, 4], [19, 19, 22, 26, 396]] >>> import itertools >>> print sum(1 for tu in flat for _ in itertools.groupby(tu)) 7 >>> we "flatten" the "increasing ramps" of interest by subtracting the index from the value, turning them into consecutive "runs" of identical values; then we identify and could the "runs" with the precious itertools.groupby. This seems to be a pretty elegant (and speedy) solution to your problem. A: Just to show something closer to your original code: def sumranges(nums): return sum( (1 for i in nums for j, v in enumerate(i) if j == 0 or v != i[j-1] + 1) ) The idea here was to: avoid building intermediate lists but use a generator instead, it will save some resources avoid using indices when you already have selected a subelement (i and v above). The remaining sum() is still necessary with my example though. 
A: Here's my attempt:
def ranges(ls):
    for l in ls:
        consec = False
        for (a, b) in zip(l, l[1:] + (None,)):
            if b == a + 1:
                consec = True
            if b is not None and b != a + 1:
                consec = False
            if consec:
                yield 1

'''
>>> nums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400))
>>> print sum(ranges(nums))
7
'''

It looks at the numbers pairwise, checking if they are a consecutive pair (unless it's at the last element of the list). Each time there's a consecutive pair of numbers it yields 1.
A: This could probably be put together in a more compact form, but I think clarity would suffer:
def pairs(seq):
    for i in range(1, len(seq)):
        yield (seq[i-1], seq[i])

def isadjacent(pair):
    return pair[0] + 1 == pair[1]

def sumrange(seq):
    return 1 + sum([1 for pair in pairs(seq) if not isadjacent(pair)])

def sumranges(nums):
    return sum([sumrange(seq) for seq in nums])

nums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400))
print sumranges(nums) # prints 7

A: You could probably do this better if you had an IntervalSet class, because then you would scan through your ranges to build your IntervalSet, then just use the count of set members. Some tasks don't always lend themselves to neat code, particularly if you need to write the code for performance.
A: There is a formula for this: the sum of the first n numbers is 1 + 2 + ... + n = n(n+1)/2. Then, if you want the sum from i to j, it is (j(j+1)/2) - (i(i+1)/2); this I am sure simplifies, but you can work that out. It might not be pythonic, but it is what I would use.
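A standalone sketch combining the two top answers above: subtract the index so each consecutive run collapses to a constant value, then count the groups. The name count_ranges is just a placeholder:
from itertools import groupby

def count_ranges(nums):
    # x - i is constant within a run of consecutive integers,
    # so each groupby group corresponds to exactly one range
    return sum(sum(1 for _ in groupby(x - i for i, x in enumerate(seq)))
               for seq in nums)

nums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400))
print count_ranges(nums)  # prints 7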
Summing Consecutive Ranges Pythonically
I have a sumranges() function, which sums all the ranges of consecutive numbers found in a tuple of tuples. To illustrate: def sumranges(nums): return sum([sum([1 for j in range(len(nums[i])) if nums[i][j] == 0 or nums[i][j - 1] + 1 != nums[i][j]]) for i in range(len(nums))]) >>> nums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400)) >>> print sumranges(nums) 7 As you can see, it returns the number of ranges of consecutive digits within the tuple, that is: len((1, 2, 3, 4), (1), (5, 6), (19, 20), (24), (29), (400)) = 7. The tuples are always ordered. My problem is that my sumranges() is terrible. I hate looking at it. I'm currently just iterating through the tuple and each subtuple, assigning a 1 if the number is not (1 + previous number), and summing the total. I feel like I am missing a much easier way to accomplish my stated objective. Does anyone know a more pythonic way to do this? Edit: I have benchmarked all the answers given thus far. Thanks to all of you for your answers. The benchmarking code is as follows, using a sample size of 100K: from time import time from random import randrange nums = [sorted(list(set(randrange(1, 10) for i in range(10)))) for j in range(100000)] for func in sumranges, alex, matt, redglyph, ephemient, ferdinand: start = time() result = func(nums) end = time() print ', '.join([func.__name__, str(result), str(end - start) + ' s']) Results are as follows. Actual answer shown to verify that all functions return the correct answer: sumranges, 250281, 0.54171204567 s alex, 250281, 0.531121015549 s matt, 250281, 0.843333005905 s redglyph, 250281, 0.366822004318 s ephemient, 250281, 0.805964946747 s ferdinand, 250281, 0.405596971512 s RedGlyph does edge out in terms of speed, but the simplest answer is probably Ferdinand's, and probably wins for most pythonic.
[ "My 2 cents:\n>>> sum(len(set(x - i for i, x in enumerate(t))) for t in nums)\n7\n\nIt's basically the same idea as descriped in Alex' post, but using a set instead of itertools.groupby, resulting in a shorter expression. Since sets are implemented in C and len() of a set runs in constant time, this should also be pretty fast.\n", "Consider:\n>>> nums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400))\n>>> flat = [[(x - i) for i, x in enumerate(tu)] for tu in nums]\n>>> print flat\n[[1, 1, 1, 1], [1, 4, 4], [19, 19, 22, 26, 396]]\n>>> import itertools\n>>> print sum(1 for tu in flat for _ in itertools.groupby(tu))\n7\n>>> \n\nwe \"flatten\" the \"increasing ramps\" of interest by subtracting the index from the value, turning them into consecutive \"runs\" of identical values; then we identify and could the \"runs\" with the precious itertools.groupby. This seems to be a pretty elegant (and speedy) solution to your problem.\n", "Just to show something closer to your original code:\ndef sumranges(nums):\n return sum( (1 for i in nums\n for j, v in enumerate(i)\n if j == 0 or v != i[j-1] + 1) )\n\nThe idea here was to:\n\navoid building intermediate lists but use a generator instead, it will save some resources\navoid using indices when you already have selected a subelement (i and v above).\n\nThe remaining sum() is still necessary with my example though.\n", "Here's my attempt:\ndef ranges(ls):\n for l in ls:\n consec = False\n for (a,b) in zip(l, l[1:]+(None,)):\n if b == a+1:\n consec = True\n if b is not None and b != a+1:\n consec = False\n if consec:\n yield 1\n\n'''\n>>> nums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400))\n>>> print sum(ranges(nums))\n7\n'''\n\nIt looks at the numbers pairwise, checking if they are a consecutive pair (unless it's at the last element of the list). Each time there's a consecutive pair of numbers it yields 1.\n", "This could probably be put together in a more compact form, but I think clarity would suffer:\ndef pairs(seq):\n for i in range(1,len(seq)):\n yield (seq[i-1], seq[i])\n\ndef isadjacent(pair):\n return pair[0]+1 == pair[1]\n\ndef sumrange(seq):\n return 1 + sum([1 for pair in pairs(seq) if not isadjacent(pair)])\n\ndef sumranges(nums):\n return sum([sumrange(seq) for seq in nums])\n\n\nnums = ((1, 2, 3, 4), (1, 5, 6), (19, 20, 24, 29, 400))\nprint sumranges(nums) # prints 7\n\n", "You could probably do this better if you had an IntervalSet class because then you would scan through your ranges to build your IntervalSet, then just use the count of set members.\nSome tasks don't always lend themselves to neat code, particularly if you need to write the code for performance.\n", "There is a formula for this, the sum of the first n numbers, 1+ 2+ ... + n = n(n+1) / 2 . Then if you want to have the sum of i-j then it is (j(j+1)/2) - (i(i+1)/2) this I am sure simplifies but you can work that out. It might not be pythonic but it is what I would use.\n" ]
[ 14, 9, 7, 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001668491_python.txt
Q: Python: importing through function to main namespace (Important: See update below.) I'm trying to write a function, import_something, that will import certain modules. (It doesn't matter which for this question.) The thing is, I would like those modules to be imported at the level from which the function is called. For example:
import_something() # Let's say this imports my_module
my_module.do_stuff() # Is this possible?

Update: Sorry, my original phrasing and example were misleading. I'll try to explain my entire problem. What I have is a package, which has inside it some modules and packages. In its __init__.py I want to import all the modules and packages. So somewhere else in the program, I import the entire package, and iterate over the modules/packages it has imported. (Why? The package is called crunchers, and inside it there are defined all kinds of crunchers, like CruncherThread, CruncherProcess, and in the future perhaps MicroThreadCruncher. I want the crunchers package to automatically have all the crunchers that are placed in it, so later in the program when I use crunchers I know it can tell exactly which crunchers I have defined.) I know I can solve this if I avoid using functions at all, and do all imports on the main level with for loops and such. But it's ugly and I want to see if I can avoid it. If anything more is unclear, please ask in comments.
A: Functions have the ability to return something to where they were called. It's called their return value :p
def import_something():
    # decide what to import
    # ...
    mod = __import__( something )
    return mod

my_module = import_something()
my_module.do_stuff()

Good style, no hassle.
About your update, I think adding something like this to your __init__.py does what you want:
import os

# make a list of all .py files in the same dir that don't start with _
__all__ = installed = [ name for (name, ext) in ( os.path.splitext(fn) for fn in os.listdir(os.path.dirname(__file__))) if ext=='.py' and not name.startswith('_') ]
for name in installed:
    # import them all
    __import__( name, globals(), locals())

Somewhere else:
import crunchers
crunchers.installed # all names
crunchers.cruncherA # actual module object, but you can't use it since you don't know the name when you write the code
# turns out to be pretty much the same as the first solution :p
mycruncher = getattr(crunchers, crunchers.installed[0])

A: You can monkey with the parent frame in CPython to install the modules into the locals for that frame (and only that frame). The downsides are that a) this is really quite hackish and b) sys._getframe() is not guaranteed to exist in other python implementations.
def importer():
    f = sys._getframe(1) # Get the parent frame
    f.f_locals["some_name"] = __import__(module_name, f.f_globals, f.f_locals)

You still have to install the module into f_locals, since import won't actually do that for you - you just supply the parent frame locals and globals for the proper context.
Then in your calling function you can have:
def foo():
    importer() # Magically makes 'some_name' available to the calling function
    some_name.some_func()

A: Are you looking for something like this?
def my_import(*names):
    for name in names:
        sys._getframe(1).f_locals[name] = __import__(name)

then you can call it like this:
my_import("os", "re")

or
namelist = ["os", "re"]
my_import(*namelist)

A: According to __import__'s help:
__import__(name, globals={}, locals={}, fromlist=[], level=-1) -> module

Import a module.
The globals are only used to determine the context; they are not modified. ...

So you can simply get the globals of your parent frame and use that for the __import__ call.
def import_something(s):
    return __import__(s, sys._getframe(1).f_globals)

Note: Pre-2.6, __import__'s signature differed in that it simply had optional parameters instead of using kwargs. Since globals is the second argument in both cases, the way it's called above works fine. Just something to be aware of if you decided to use any of the other arguments.
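For the crunchers use case in the update, a sketch of the auto-import using the standard pkgutil module instead of listing .py files by hand; import_submodules is a name invented here:
import pkgutil

def import_submodules(package):
    # walk the modules sitting next to the package's __init__.py
    # and import each one, returning them keyed by name
    modules = {}
    for importer, name, is_pkg in pkgutil.iter_modules(package.__path__):
        modules[name] = __import__('%s.%s' % (package.__name__, name),
                                   fromlist=[name])
    return modules

# usage elsewhere in the program:
# import crunchers
# installed = import_submodules(crunchers)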
Python: importing through function to main namespace
(Important: See update below.) I'm trying to write a function, import_something, that will import certain modules. (It doesn't matter which for this question.) The thing is, I would like those modules to be imported at the level from which the function is called. For example: import_something() # Let's say this imports my_module my_module.do_stuff() # Is this possible? Update: Sorry, my original phrasing and example were misleading. I'll try to explain my entire problem. What I have is a package, which has inside it some modules and packages. In its __init__.py I want to import all the modules and packages. So somewhere else in the program, I import the entire package, and iterate over the modules/packages it has imported. (Why? The package is called crunchers, and inside it there are defined all kinds of crunchers, like CruncherThread, CruncherProcess, and in the future perhaps MicroThreadCruncher. I want the crunchers package to automatically have all the crunchers that are placed in it, so later in the program when I use crunchers I know it can tell exactly which crunchers I have defined.) I know I can solve this if I avoid using functions at all, and do all imports on the main level with for loops and such. But it's ugly and I want to see if I can avoid it. If anything more is unclear, please ask in comments.
[ "Functions have the ability to return something to where they were called. Its called their return value :p\ndef import_something():\n # decide what to import\n # ...\n mod = __import__( something )\n return mod\nmy_module = import_something()\nmy_module.do_stuff()\n\ngood style, no hassle.\nAbout your update, I think adding something like this to you __init__.py does what you want:\nimport os\n\n# make a list of all .py files in the same dir that dont start with _\n__all__ = installed = [ name for (name,ext) in ( os.path.splitext(fn) for fn in os.listdir(os.path.dirname(__file__))) if ext=='.py' and not name.startswith('_') ]\nfor name in installed:\n # import them all\n __import__( name, globals(), locals())\n\nsomewhere else:\nimport crunchers\ncrunchers.installed # all names\ncrunchers.cruncherA # actual module object, but you can't use it since you don't know the name when you write the code\n# turns out the be pretty much the same as the first solution :p\nmycruncher = getattr(crunchers, crunchers.installed[0]) \n\n", "You can monkey with the parent frame in CPython to install the modules into the locals for that frame (and only that frame). The downsides are that a) this is really quite hackish and b) sys._getframe() is not guaranteed to exist in other python implementations.\ndef importer():\n f = sys._getframe(1) # Get the parent frame\n f.f_locals[\"some_name\"] = __import__(module_name, f.f_globals, f.f_locals)\n\nYou still have to install the module into f_locals, since import won't actually do that for you - you just supply the parent frame locals and globals for the proper context.\nThen in your calling function you can have:\ndef foo():\n importer() # Magically makes 'some_name' available to the calling function\n some_name.some_func()\n\n", "Are you looking for something like this?\ndef my_import(*names):\n for name in names:\n sys._getframe(1).f_locals[name] = __import__(name)\n\nthen you can call it like this:\nmy_import(\"os\", \"re\")\n\nor\nnamelist = [\"os\", \"re\"]\nmy_import(*namelist)\n\n", "According to __import__'s help:\n__import__(name, globals={}, locals={}, fromlist=[], level=-1) -> module\n\nImport a module. The globals are only used to determine the context;\nthey are not modified. ...\n\nSo you can simply get the globals of your parent frame and use that for the __import__ call.\ndef import_something(s):\n return __import__(s, sys._getframe(1).f_globals)\n\nNote: Pre-2.6, __import__'s signature differed in that it simply had optional parameters instead of using kwargs. Since globals is the second argument in both cases, the way it's called above works fine. Just something to be aware of if you decided to use any of the other arguments.\n" ]
[ 4, 1, 1, 0 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001668882_import_python.txt
Q: Python auto define variables I am new to programming and am learning Python as my first language. I have been tasked with writing a script that converts one input file type to another. My problem is this: there is one part of the input files where there can be any number of rows of data. I wrote a loop to determine how many rows there are, but cannot seem to write a loop that defines each line to its own variable, e.g.: rprim1, rprim2, rprim3, etc. This is the code I am using to pull variables from the files:
rprim1 = linecache.getline(infile, 7)

To reiterate, I would like the parser to define however many lines of data there are, X, as rprimX, with each line 7 to 7+X. Any help would be appreciated. Thanks
A: You can dynamically create the variables, but it doesn't make sense unless this is homework.
Instead use
rprim = infile.readlines()

then the lines are
rprim[0], rprim[1], rprim[2], rprim[3], rprim[4], rprim[5], rprim[6]

You can find out how many rows there are with
len(rprim)

A: That is something you really don't want. Suppose you had those variables rprim1, rprim2, etc. How would you know how many of them you have? Read up on lists in Python.
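A short sketch of the list-based approach from the first answer, with the question's "start at line 7" detail folded in; 'input.txt' is a placeholder path:
# read the whole file once, then index lines instead of naming them
infile = open('input.txt')
lines = infile.readlines()
infile.close()

rows = lines[6:]               # everything from line 7 onward
print len(rows)                # how many data rows there are
for i, row in enumerate(rows):
    print i + 1, row.rstrip()  # row 1 is what rprim1 would have held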
Python auto define variables
I am new to programming and am learning Python as my first language. I have been tasked with writing a script that converts one input file type to another. My problem is this: There is one part of the input files where there can be any number of rows of data. I wrote a loop to determine how many rows there are but cannot seem to write a loop that defines each line to its own variable eg: rprim1, rprim2, rprim3, etc. This is the code I am using to pull variables from the files: rprim1=linecache.getline(infile,7) To reiterate, I would like the parser to define however many lines of data there are, X, as rprimx, with each line,7 to 7+X. Any help would be appreciated. Thanks
[ "You can dynamically create the variables, but it doesn't make sense unless this is homework.\ninstead use\nrprim=infile.readlines()\n\nthen the lines are \nrprim[0], rprim[1], rprim[2], rprim[3], rprim[4], rprim[5], rprim[6]\n\nyou can find out how many rows there are with\nlen(rprim)\n\n", "That is something you really don't want. Suppose you have those variables rprim1, rprim2 .. etc how would you know how many of them do you have?\nRead up on lists in python link text\n" ]
[ 13, 1 ]
[]
[]
[ "python", "variables" ]
stackoverflow_0001670252_python_variables.txt
Q: How can I get pyplot images to show on a console app? I'm trying to create an image using matplotlib.pyplot.imshow(). However, when I run the program from my console, it doesn't display anything. This is the code:
import matplotlib.pyplot
myimage = gen_image()
matplotlib.pyplot.gray()
matplotlib.pyplot.imshow(myimage)

But this shows nothing.
A: You have to call the show function to actually display anything, like
matplotlib.pyplot.show()

Unfortunately the matplotlib documentation seems to be currently broken, so I can't provide a link.
Note that for interactive plotting one typically uses IPython, which has special support for matplotlib.
By the way, you can do
import matplotlib.pyplot as plt

to make the typing less tedious (this is pretty much the official standard way).
A: You could try http://fishsoup.net/software/reinteract/
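A minimal end-to-end sketch; gen_image() is stood in for by a random array, since the question doesn't show it:
import numpy as np
import matplotlib.pyplot as plt

myimage = np.random.rand(64, 64)  # stand-in for gen_image()
plt.gray()
plt.imshow(myimage)
plt.show()  # blocks until the figure window is closed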
How can I get pyplot images to show on a console app?
I'm trying to create an image using matplotlib.pyplot.imshow(). However, when I run the program from my console, it doesn't display anything? This is the code: import matplotlib.pyplot myimage = gen_image() matplotlib.pyplot.gray() matplotlib.pyplot.imshow(results) But this shows nothing.
[ "You have to call the show function to actually display anything, like\nmatplotlib.pyplot.show()\n\nUnfortunately the matplotlib documentation seems to be currently broken, so I can't provide a link.\nNote that for interactive plotting one typically uses IPython, which has special support for matplotlib.\nBy the way, you can do\nimport matplotlib.pyplot as plt\n\nto make the typing less tedious (this is pretty much the official standard way).\n", "You could try http://fishsoup.net/software/reinteract/\n" ]
[ 14, 0 ]
[]
[]
[ "console", "image", "matplotlib", "python" ]
stackoverflow_0001670480_console_image_matplotlib_python.txt
Q: Set Max Width for Frame with ScrolledWindow in wxPython I created a Frame object and I want to limit the width it can expand to. The only window in the frame is a ScrolledWindow object, and that contains all other children. I have a lot of objects arranged with a BoxSizer oriented vertically, so the ScrolledWindow object gets pretty tall. There is often a scrollbar to the right so you can scroll up and down. The problem comes when I try to set a max size for the frame. I'm using the scrolled_window.GetBestSize() (or scrolled_window.GetEffectiveMinSize()) functions of ScrolledWindow, but they don't take into account the vertical scrollbar. I end up having a frame that's just a little too narrow, and there's a horizontal scrollbar that will never go away. Is there a method that will compensate for the vertical scrollbar's width? If not, how would I get the scrollbar's width so I can manually add it to my frame's max size? Here's an example with a tall but narrow frame:
class TallFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, parent=None, title='Tall Frame')
        self.scroll = wx.ScrolledWindow(parent=self) # our scroll area where we'll put everything
        scroll_sizer = wx.BoxSizer(wx.VERTICAL)
        # Fill the scroll area with something...
        for i in xrange(10):
            textbox = wx.StaticText(self.scroll, -1, "%d) Some random text" % i, size=(400, 100))
            scroll_sizer.Add(textbox, 0, wx.EXPAND)
        self.scroll.SetSizer(scroll_sizer)
        self.scroll.Fit()
        width, height = self.scroll.GetBestSize()
        self.SetMaxSize((width, -1)) # Trying to limit the width of our frame
        self.scroll.SetScrollbars(1, 1, width, height) # throwing up some scrollbars

If you create this frame you'll see that self.SetMaxSize is set too narrow. There will always be a horizontal scrollbar, since self.scroll.GetBestSize() didn't account for the width of the scrollbar.
A: This is a little ugly, but seems to work on Windows and Linux. There is a difference, though: self.GetVirtualSize() seems to return different values on each platform. At any rate, I think this may help you.
width, height = self.scroll.GetBestSize()
width_2, height_2 = self.GetVirtualSize()
print width
print width_2
dx = wx.SystemSettings_GetMetric(wx.SYS_VSCROLL_X)
print dx
self.SetMaxSize((width + (width - width_2) + dx, -1)) # Trying to limit the width of our frame
self.scroll.SetScrollbars(1, 1, width, height) # throwing up some scrollbars
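Boiled down, the compensation from the answer as a fragment for TallFrame.__init__, assuming classic wxPython where wx.SystemSettings_GetMetric is available:
# inside TallFrame.__init__, replacing the SetMaxSize line:
width, height = self.scroll.GetBestSize()
scrollbar_w = wx.SystemSettings_GetMetric(wx.SYS_VSCROLL_X)  # native vertical scrollbar width
self.SetMaxSize((width + scrollbar_w, -1))  # leave room for the scrollbar
self.scroll.SetScrollbars(1, 1, width, height)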
Set Max Width for Frame with ScrolledWindow in wxPython
I created a Frame object and I want to limit the width it can expand to. The only window in the frame is a ScrolledWindow object and that contains all other children. I have a lot of objects arranged with a BoxSizer oriented vertically so the ScrolledWindow object gets pretty tall. There is often a scrollbar to the right so you can scroll up and down. The problem comes when I try to set a max size for the frame. I'm using the scrolled_window.GetBestSize() (or scrolled_window.GetEffectiveMinSize()) functions of ScrolledWindow, but they don't take into account the vertical scrollbar. I end up having a frame that's just a little too narrow and there's a horizontal scrollbar that will never go away. Is there a method that will compensate for the vertical scrollbar's width? If not, how would I get the scrollbar's width so I can manually add it to my frame's max size? Here's an example with a tall but narrow frame: class TallFrame(wx.Frame): def __init__(self): wx.Frame.__init__(self, parent=None, title='Tall Frame') self.scroll = wx.ScrolledWindow(parent=self) # our scroll area where we'll put everything scroll_sizer = wx.BoxSizer(wx.VERTICAL) # Fill the scroll area with something... for i in xrange(10): textbox = wx.StaticText(self.scroll, -1, "%d) Some random text" % i, size=(400, 100)) scroll_sizer.Add(textbox, 0, wx.EXPAND) self.scroll.SetSizer(scroll_sizer) self.scroll.Fit() width, height = self.scroll.GetBestSize() self.SetMaxSize((width, -1)) # Trying to limit the width of our frame self.scroll.SetScrollbars(1, 1, width, height) # throwing up some scrollbars If you create this frame you'll see that self.SetMaxSize is set too narrow. There will always be a horizontal scrollbar since self.scroll.GetBestSize() didn't account for the width of the scrollbar.
[ "This is a little ugly, but seems to work on Window and Linux. There is difference, though. The self.GetVirtualSize() seems to return different values on each platform. At any rate, I think this may help you.\nwidth, height = self.scroll.GetBestSize()\nwidth_2, height_2 = self.GetVirtualSize()\nprint width\nprint width_2\ndx = wx.SystemSettings_GetMetric(wx.SYS_VSCROLL_X)\nprint dx\nself.SetMaxSize((width + (width - width_2) + dx, -1)) # Trying to limit the width of our frame\nself.scroll.SetScrollbars(1, 1, width, height) # throwing up some scrollbars\n\n" ]
[ 1 ]
[]
[]
[ "python", "user_interface", "wxpython" ]
stackoverflow_0001371510_python_user_interface_wxpython.txt
Q: How to prepopulate ID in Django I have a simple question model:
class Question(Polymorph):
    text = models.CharField(max_length=256)
    poll = models.ForeignKey(Poll)
    index = models.IntegerField()

And I would like to prepopulate (when saving) the index field with the ID value. Of course, before save I don't have the ID value (it's created after it), so I wonder what is the simplest way to do it? Any ideas? I think about django-signal, but then I will have to call the save() method twice.
A: However you do it, you'll have to call save twice. The ID is generated directly by the database server (except for sqlite, I believe) when the new row is INSERTed, so you'll need to do that in any case.
I would ask if you really need to have the ID value in your index field, though. It's always available as obj.id, after all, so even if you want it as part of a longer value you can always calculate that dynamically via a property.
A: Yes, trying to figure the next ID out from somewhere wouldn't be portable. Even simpler than using signals is putting something like this in your save method:
def save(self, *args, **kwargs):
    """Save method for Question model"""
    if not self.id:
        super(Question, self).save(*args, **kwargs)
        # Fill the index field
        self.index = self.id # plus whatever else you need
    return super(Question, self).save(*args, **kwargs)

I guess you'll have to go this way if you can't fully derive the id of the object from the string you get as the input to the filter. But if your string were "something-{id}-something-else" it would be better to get rid of the index field and extract the id value from the string using a regexp and then filter directly by the id with Questions.objects.filter(id=id)
A: One way to circumvent the double commit is to use something besides an autoincrementing primary key. Django doesn't really care what the primary key is so long as there is one, just one, and it is either a string or an integer. You could use something like a guid, which you can generate on the fly in your models at the time they are initialized, or you can let the database set it for you when you don't care what it is (assuming your database supports guids).
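The django-signal route the asker mentions would look roughly like this; as the first answer notes, it still costs a second save:
from django.db.models.signals import post_save

def fill_index(sender, instance, created, **kwargs):
    # runs after the INSERT, so instance.id is populated by now
    if created and instance.index != instance.id:
        instance.index = instance.id
        instance.save()  # the unavoidable second save

post_save.connect(fill_index, sender=Question)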
How to prepopulate ID in Django
I have a simple question model: class Question(Polymorph): text = models.CharField(max_length=256) poll = models.ForeignKey(Poll) index = models.IntegerField() And I would like to prepopulate (when saving) the index field with the ID value. Of course, before save I don't have the ID value (it's created after it), so I wonder what is the simplest way to do it? Any ideas? I think about django-signal, but then I will have to call the save() method twice.
[ "However you do it, you'll have to call save twice. The ID is generated directly by the database server (except for sqlite, I believe) when the new row is INSERTed, so you'll need to do that in any case.\nI would ask if you really need to have the ID value in your index field, though. It's always available as obj.id, after all, so even if you want it as part of a longer value you can always calculate that dynamically via a property.\n", "Yes, trying to figure the next ID out from somewhere wouldn't be portable. Even simpler than using signals is putting something like this in your save method:\ndef save(self, *args, **kwargs):\n \"\"\"Save method for Question model\"\"\"\n if not self.id:\n super(Question, self).save(*args, **kwargs)\n # Fill the index field\n self.index = self.id # plus whatever else you need\n return super(Question, self).save(*args, **kwargs)\n\nI guess you'll have to go this way if you can't fully derive the id of the object from the string you get as the input to the filter. But if your string where \"something-{id}-something-else\" it would be better to get rid of the index field and extract the id value from the string using a regexp and then filter directly by the id with Questions.objects.filter(id=id)\n", "One way to circumvent the double commit is to use something besides an autoincrementing primary key. Django doesn't really care what the primary key is so long as there is one, just one, and it is either a string or an integer. You could use something like a guid, which you can generate on the fly in your models at the time they are initialized, or you can let the database set it for you when you don't care what it is (assuming your database supports guids). \n" ]
[ 5, 0, 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001653828_django_django_models_python.txt
Q: Namespace-respecting relative import in Python I have this folder structure:
package/
    __init__.py
    misc/
        __init__.py
        tools.py
    subpackage/
        __init__.py
        submodule.py

I am in submodule.py, and I would like to import misc.tools. I don't want to use absolute import to import package.misc.tools, because then my package would only work when it's on the PYTHONPATH. So I want to use relative imports. But then, I also want the imported name to be misc.tools, and not just tools. Is it possible?
A: What about...:
from .. import misc
from ..misc import tools as _

print misc.tools.__file__

This makes misc.tools available, as the print confirms, and with the right name and contents.
Inevitably, it also binds the same module to some barename -- I've chosen _ as a typical "throw-away barename", but of course you can del _ right after that, if you wish, and that won't affect misc.tools.
Also, any other attribute of misc set in its __init__.py (or peculiarly in tools.py) will be available, but then, if the barename misc itself is available (as it must be if the compound name misc.tools is required), then it's inevitable that it will have all attributes it builds for itself (or that get externally built for it from other code that executes).
Namespace-respecting relative import in Python
I have this folder structure: package/ __init__.py misc/ __init__.py tools.py subpackage/ __init__.py submodule.py I am in submodule.py, and I would like to import misc.tools. I don't want to use absolute import to import package.misc.tools, because then my package would only work when it's on the PYTHONPATH. So I want to use relative imports. But then, I also want the imported name to be misc.tools, and not just tools. Is it possible?
[ "What about...:\nfrom .. import misc\nfrom ..misc import tools as _\n\nprint misc.tools.__file__\n\nThis makes misc.tools available, as the print confirms, and with the right name and contents.\nInevitably, it also binds the same module to some barename -- I've chosen _ as a typical \"throw-away barename\", but of course you can del _ right after that, if you wish, and that won't affect misc.tools.\nAlso, any other attribute of misc set in its __init__.py (or peculiarly in tools.py) will be available, but then, if the barename misc itself is available (as it must be if compound name misc.tools is required), then it's inevitable that it will have all attributes it builds for itself (or that get externally built for it from other code that executes).\n" ]
[ 6 ]
[]
[]
[ "import", "python", "relative_path" ]
stackoverflow_0001671362_import_python_relative_path.txt
Q: feedparser and Google News I'm trying to download a corpus of news (to try to do some natural language processing) from Google News using the universal feedparser with Python. I really know nothing of XML; I'm just using an example of how to use the feedparser. The problem is that I can't find the content of the news in the dict I get from the RSS feed, just the title. The code I'm currently trying to use is this:
import feedparser

url = 'http://news.google.com.br/news?pz=1&cf=all&ned=us&hl=en&output=rss' # just some GNews feed - I'll use a specific search later
feed = feedparser.parse(url)

for post in feed.entries:
    print post.title
    print post.keys()

The keys I get in this post are just the title, summary, date, etc... there's no content. Is this some issue with Google News or am I doing something wrong? Is there a way to do it?
A: Have you examined the feed from Google News?
There is a root element in each feed which contains a bunch of information and the actual entries dict. Here's a dirty way to see what's available:
import feedparser
d = feedparser.parse('http://news.google.com/news?pz=1&cf=all&ned=ca&hl=en&topic=w&output=rss')

print [field for field in d]

From what we can see we have an entries field which most likely contains .. news entries! If you:
import pprint
pprint.pprint([entry for entry in d['entries']])

We get some more information :) That will show you all the fields related to each entry in a pretty printed manner (that's what pprint is for).
So, to fetch all the titles of our news entries from this feed:
titles = [entry.title for entry in d['entries']]

So, play around with that. Hopefully that's a helpful start.
A: First you need to check out the RSS Specification. And here is a feed parser. That should get you started.
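A small sketch putting the pieces together, using the summary field the asker already sees (Google News RSS items carry only snippet text, not full articles); the URL is the one from the question:
import feedparser

url = 'http://news.google.com.br/news?pz=1&cf=all&ned=us&hl=en&output=rss'
feed = feedparser.parse(url)

corpus = []
for post in feed.entries:
    # fall back to '' because not every feed fills every field
    corpus.append((post.get('title', ''), post.get('summary', '')))

for title, summary in corpus:
    print title
    print summary[:80]  # summary holds HTML-ish snippet text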
feedparser and Google News
I'm trying to download a corpus of news (to try to do some natural language processing) from Google News using the universal feedparser with python. I really know nothing of XML, I'm just using an example of how to use the feedparser. The problem is that I can't find in the dict I get from the RSS feed the content of the news just the title. The code I'm currently trying to use is this: import feedparser url = 'http://news.google.com.br/news?pz=1&cf=all&ned=us&hl=en&output=rss' # just some GNews feed - I'll use a specific search later feed = feedparser.parse(url) for post in feed.entries: print post.title print post.keys() The keys I get in this post are just the title, summary, date, etc... there's no content. Is this some issue with Google News or am I doing anything wrong? Is there a way to do it?
[ "Have you examined the feed from Google News?\nThere is a root element in each feed which contains a bunch of information and the actual entries dict. Here's a dirty way to see what's available:\nimport feedparser\nd = feedparser.parse('http://news.google.com/news?pz=1&cf=all&ned=ca&hl=en&topic=w&output=rss')\n\nprint [field for field in d]\n\nFrom what we can see we have an entries field which most likely contains .. news entries! If you:\nimport pprint\npprint.pprint(entry for entry in d['entries'])\n\nWe get some more information :) That will show you all the fields related to each entry in a pretty printed manner (that's what pprint is for)\nSo, to fetch all the titles of our news entries from this feed:\ntitles = [entry.title for entry in d['entries']\n\nso, play around with that. Hopefully that's a helpful start\n", "First you need to check out RSS Specification. And here is a feed parser. That should get you started. \n" ]
[ 8, 1 ]
[]
[]
[ "feedparser", "google_news", "python", "rss" ]
stackoverflow_0001671428_feedparser_google_news_python_rss.txt
Q: Send and receive messages via (libpurple) messenger protocols I had an idea that would require me to be able to send and receive messages via the standard messenger protocols such as MSN, ICQ, AIM, Skype, etc. I am currently only familiar with PHP and Python and would thus enjoy a library which I can access from said languages. I have found phurple (http://sourceforge.net/projects/phurple/) for PHP and python-purple (http://developer.pidgin.im/wiki/PythonHowTo) which don't seem to be too up to date. What would you guys suggest to do? My goal will be to write a web application in a distant way like meebo.com. The answer should include a tutorial or example implementation and decent documentation; the pidgin.im wiki doesn't really have a useful tutorial. Alternatively you can also just tell me about different kinds of implementations, so that I would build my own class out of an existing ICQ, AIM, MSN, etc. implementation. An example of how to connect to an account (login) and then send one message would be the ultimate help! Come on, guys :)
A: Here is how to connect to the Pidgin DBus server.
#!/usr/bin/env python
import dbus

bus = dbus.SessionBus()

if "im.pidgin.purple.PurpleService" in bus.list_names():
    purple = bus.get_object("im.pidgin.purple.PurpleService",
                            "/im/pidgin/purple/PurpleObject",
                            "im.pidgin.purple.PurpleInterface")

    print "Connected to the pidgin DBus."
    for conv in purple.PurpleGetIms():
        purple.PurpleConvImSend(purple.PurpleConvIm(conv), "Ignore this message.")

else:
    print "Could not find pidgin DBus service, make sure Pidgin is running."

Don't know if you have seen this, but here is the official python DBus tutorial: link.
EDIT: Re-adding link to the pidgin dev wiki. It teaches you everything I posted above, just scroll further down the page. http://developer.pidgin.im/wiki/PythonHowTo
A: A good bet would be to go through the DBus interface: Pidgin (purple) fully supports it and the DBus interface library for Python is quite stable.
A: If you decompress the file from phurple you get some example like this:
<?php
if(!extension_loaded('phurple')) {
    dl('phurple.' . PHP_SHLIB_SUFFIX);
}

class CustomPhurpleClient extends PhurpleClient {
    private $someVar;

    protected function initInternal() {
        $this->someVar = "Hello World";
    }

    protected function writeIM($conversation, $buddy, $message, $flags, $time) {
        if(PhurpleClient::MESSAGE_RECV == $flags) {
            printf( "(%s) %s %s: %s\n",
                $conversation->getName() ? $conversation->getName() : $buddy->getName(),
                date("H:i:s", $time),
                is_object($buddy) ? $buddy->getAlias() : $buddy,
                $message
            );
        }
    }

    protected function onSignedOn($connection) {
        print $this->justForFun($this->someVar);
    }

    public function justForFun($param) {
        return "just for fun, the param is: $param";
    }
}
// end Class CustomPhurpleClient

// Example Code Below:
try {
    $user_dir = "/tmp/phphurple-test";
    if(!file_exists($user_dir) || !is_dir($user_dir)) {
        mkdir($user_dir);
    }

    PhurpleClient::setUserDir($user_dir);
    PhurpleClient::setDebug(true);
    PhurpleClient::setUiId("TestUI");

    $client = CustomPhurpleClient::getInstance();
    $client->addAccount("msn://[email protected]:[email protected]:1863");
    $client->connect();

    $client->runLoop();
} catch (Exception $e) {
    echo "[Phurple]: " . $e->getMessage() . "\n";
    die();
}
?>

A: I use WebIcqLite: ICQ messages sender for the ICQ protocol. It works and the class is easy to understand. I don't know about other protocols, though. What's wrong with the Phurple library?
Send and receive messages via (libpurple) messenger protocols
I had an idea that would require me to be able to send and receive messages via the standard messenger protocols such as MSN, ICQ, AIM, Skype, etc. I am currently only familiar with PHP and Python and would thus enjoy a library which I can access from said languages. I have found phurple (http://sourceforge.net/projects/phurple/) for PHP and python-purple (http://developer.pidgin.im/wiki/PythonHowTo) which don't seem to be too up to date. What would you guys suggest to do? My goal will be to write a web application in a distant way like meebo.com. The answer should include a tutorial or example implementation and decent documentation; the pidgin.im wiki doesn't really have a useful tutorial. Alternatively you can also just tell me about different kinds of implementations, so that I would build my own class out of an existing ICQ, AIM, MSN, etc. implementation. An example of how to connect to an account (login) and then send one message would be the ultimate help! Come on, guys :)
[ "Here is how to connect to the Pidgin DBus server.\n#!/usr/bin/env python\nimport dbus\n\nbus = dbus.SessionBus()\n\nif \"im.pidgin.purple.PurpleService\" in bus.list_names():\n purple = bus.get_object(\"im.pidgin.purple.PurpleService\",\n \"/im/pidgin/purple/PurpleObject\",\n \"im.pidgin.purple.PurpleInterface\")\n\n print \"Connected to the pidgin DBus.\"\n for conv in purple.PurpleGetIms():\n purple.PurpleConvImSend(purple.PurpleConvIm(conv), \"Ignore this message.\")\n\nelse:\n print \"Could not find piding DBus service, make sure Pidgin is running.\"\n\nDon't know if you have seen this, but here is the official python DBus tutorial: link.\nEDIT: Re-adding link to the pidgin dev wiki. It teaches you everything I posted above,\njust scroll further down the page. http://developer.pidgin.im/wiki/PythonHowTo\n", "A good bet would be to go through the DBus interface: Pidgin (purple) fully supports it and the DBus interface library for Python is quite stable.\n", "If you decompress the file from phurple you get some example like this:\n<?php\n if(!extension_loaded('phurple')) {\n dl('phurple.' . PHP_SHLIB_SUFFIX);\n }\n\n class CustomPhurpleClient extends PhurpleClient {\n private $someVar;\n protected function initInternal() {\n $this->someVar = \"Hello World\";\n }\n\n protected function writeIM($conversation, $buddy, $message, $flags, $time) {\n if(PhurpleClient::MESSAGE_RECV == $flags) {\n printf( \"(%s) %s %s: %s\\n\",\n $conversation->getName() ? $conversation->getName() : $buddy->getName(),\n date(\"H:i:s\", $time),\n is_object($buddy) ? $buddy->getAlias() : $buddy,\n $message\n );\n }\n }\n\n protected function onSignedOn($connection) {\n print $this->justForFun($this->someVar);\n }\n\n public function justForFun($param) {\n return \"just for fun, the param is: $param\";\n }\n } \n // end Class CustomPhurpleClient\n\n // Example Code Below:\n try {\n $user_dir = \"/tmp/phphurple-test\";\n if(!file_exists($user_dir) || !is_dir($user_dir)) {\n mkdir($user_dir);\n }\n\n PhurpleClient::setUserDir($user_dir);\n PhurpleClient::setDebug(true);\n PhurpleClient::setUiId(\"TestUI\");\n\n $client = CustomPhurpleClient::getInstance();\n $client->addAccount(\"msn://[email protected]:[email protected]:1863\");\n $client->connect();\n\n $client->runLoop();\n } catch (Exception $e) {\n echo \"[Phurple]: \" . $e->getMessage() . \"\\n\";\n die();\n }\n?>\n\n", "I use WebIcqLite: ICQ messages sender for the ICQ protocol. It works and the class is easy to understand. I don't know about other protocols, though. What's wrong with the Phurple library?\n" ]
[ 11, 2, 1, 0 ]
[]
[]
[ "libpurple", "php", "python" ]
stackoverflow_0001620793_libpurple_php_python.txt
Q: Decorating Instance Methods in Python Here's the gist of what I'm trying to do. I have a list of objects, and I know they have an instance method that looks like:
def render(self, name, value, attrs):
    # Renders a widget...

I want to (essentially) decorate these functions at runtime, as I'm iterating over the list of objects, so that their render functions become this:
def render(self, name, value, attrs):
    self.attrs = attrs
    # Renders a widget...

Two caveats:
The render function is part of Django. I can't put a decorator inside their library (well, I could, but then I have to maintain and migrate this change).
It's an instance method.
An example here: http://wiki.python.org/moin/PythonDecoratorLibrary shows how to add a new instance method to a class. The difference here is I want to fall through to the original method after I've memorized that attrs parameter.
A:
def decorate_method(f):
    def wrapper(self, name, value, attrs):
        self.attrs = attrs
        return f(self, name, value, attrs)
    return wrapper

def decorate_class(c):
    for n in dir(c):
        f = getattr(c, n)
        if hasattr(f, 'im_func'):
            setattr(c, n, decorate_method(f.im_func))

You'll probably need some other test to skip methods with a different signature, but, apart from that, decorate_class(whatever) should do what you want on any given class whatever.
A: The "classic" way is to subclass. This way you don't have to mess with other people's classes.
class someclass(object):
    def render(self, name, value, attrs):
        print hasattr(self, 'attrs')

class my_render(object):
    def render(self, name, value, attrs):
        self.attrs = attrs # kind of decorating the function here
        return super(my_render, self).render(name, value, attrs)

class my_class(my_render, someclass):
    pass

someclass().render(1,2,3) # -> False
my_class().render(1,2,3) # -> True

The reason for MI is that all classes can inherit from my_render. I like the mixin concept ;-)
class my_otherclass(my_render, someotherclass): pass
class my_thirdclass(my_render, thirdclass): pass

# or less explicit
classlist = [ someclass, someotherclass ]
newclasses = [ type('my_'+cls.__name__, (my_render,cls), {}) for cls in classlist ]
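A usage sketch for decorate_method from the first answer, applied to a single hypothetical Widget class rather than scanning a whole class at once (Python 2, hence im_func; assumes decorate_method from the answer above is in scope):
class Widget(object):
    def render(self, name, value, attrs):
        return '<input name="%s" value="%s">' % (name, value)

# patch just the one method instead of every method on the class
Widget.render = decorate_method(Widget.render.im_func)

w = Widget()
print w.render('age', '3', {'class': 'small'})
print w.attrs  # -> {'class': 'small'}, memorized by the wrapper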
Decorating Instance Methods in Python
Here's the gist of what I'm trying to do. I have a list of objects, and I know they have an instance method that looks like: def render(self, name, value, attrs): # Renders a widget... I want to (essentially) decorate these functions at runtime, as I'm iterating over the list of objects. So that their render functions become this: def render(self, name, value, attrs): self.attrs=attrs # Renders a widget... Two caveats: The render function is part of django. I can't put a decorator inside their library (well I could, but then I have to maintain and migrate this change). It's an instance method. An example here: http://wiki.python.org/moin/PythonDecoratorLibrary Shows how to add a new instance method to a class. The difference here is I want to fall through to the original method after I've memorized that attrs parameter.
[ "def decorate_method(f):\n def wrapper(self, name, value, attrs):\n self.attrs = attrs\n return f(self, name, value, attrs)\n return wrapper\n\ndef decorate_class(c):\n for n in dir(c):\n f = getattr(c, n)\n if hasattr(f, 'im_func'):\n setattr(c, n, decorate_method(f.im_func))\n\nYou'll probably need some other test to skip methods with a different signature, but, apart from that, decorate_class(whatever) should do what you want on any given class whatever.\n", "The \"classic\" way is to subclass. This way you don't have to mess with other peoples classes.\nclass someclass(object):\n def render(self, name, value, attrs):\n print hasattr(self, 'attrs')\n\nclass my_render(object):\n def render(self, name, value, attrs):\n self.attrs = attrs # kind of decorating the function here\n return super(my_render, self).render(name, value, attrs)\n\nclass my_class(my_render, someclass): \n pass \n\nsomeclass().render(1,2,3) # -> False\nmy_class().render(1,2,3) # -> True\n\nThe reason for MI is that all classes can inherit from my_render. I like the mixin concept ;-)\nclass my_otherclass(my_render, someotherclass): pass\nclass my_thirdclass(my_render, thirdclass): pass\n\n# or less explicit\nclasslist = [ someclass, someotherclass ]\nnewclasses = [ type('my_'+cls.__name__, (my_render,cls), {}) for cls in classlist ]\n\n" ]
[ 7, 3 ]
[]
[]
[ "class", "decorator", "django", "instance", "python" ]
stackoverflow_0001672064_class_decorator_django_instance_python.txt
Q: text-mine PDF files with Python? Is there a package/library for Python that would allow me to open a PDF and search the text for certain words?
A: Using PyPDF2 you can use the extractText() method to extract PDF text and work on it.
Update: Changed text to refer to PyPDF2, thanks to @Aditya Kumar for the heads up.
A: I don't think you can do it in one step, but you can certainly get the text out of a PDF with pdfminer. Then you can apply whatever text search to that recovered data.
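A hedged sketch of the PyPDF2 route from the first answer; the path is a placeholder, and extractText() quality varies a lot by PDF:
from PyPDF2 import PdfFileReader

def pages_containing(path, word):
    # return the 0-based page numbers whose extracted text mentions word
    reader = PdfFileReader(open(path, 'rb'))
    hits = []
    for i in range(reader.getNumPages()):
        text = reader.getPage(i).extractText()
        if word.lower() in text.lower():
            hits.append(i)
    return hits

print pages_containing('document.pdf', 'python')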
text-mine PDF files with Python?
Is there a package/library for python that would allow me to open a PDF, and search the text for certain words?
[ "Using PyPdf2 you can use extractText() method to extract pdf text and work on it.\nUpdate: Changed text to refer to PyPdf2, thanks to @Aditya Kumar for heads up.\n", "I don't think you can do it in one step, but you can certainly get the text out of a pdf with pdfminer. Then you can apply whatever text search to that recovered data.\n" ]
[ 12, 4 ]
[]
[]
[ "pdf", "python", "text_mining" ]
stackoverflow_0001672202_pdf_python_text_mining.txt
Q: Reading from files in python I need to find out the maximum and minimum value in a line by reading a file, and I should be dividing the maximum value by the minimum value. I am interested in doing this in Python. The contents of the file (file.txt) look like this:
A28102_at,151,263,88,484,118,270,458,872,62,194
AB000114_at,72,21,20,61,20,85,20,25,20,65
AB000115_at,281,250,358,118,197,71,168,296,198,113

The problem I am facing is that I should be neglecting the first value, that is, everything up to the first occurrence of a comma, and I am unable to figure out a method. I am also interested in storing the values in an array and then doing the comparison. Is this approach correct, or is a better method suggested?
A: Python comes with batteries! Use the csv module to parse csv files:
#!/usr/bin/env python
import csv
csvobj=csv.reader(open('file.txt','r'))
for datum in csvobj:
    datum=[float(val) for val in datum[1:]]
    print(datum)
    maximum=max(datum)
    minimum=min(datum)
    print(maximum/minimum)

# [151.0, 263.0, 88.0, 484.0, 118.0, 270.0, 458.0, 872.0, 62.0, 194.0]
# 14.064516129
# [72.0, 21.0, 20.0, 61.0, 20.0, 85.0, 20.0, 25.0, 20.0, 65.0]
# 4.25
# [281.0, 250.0, 358.0, 118.0, 197.0, 71.0, 168.0, 296.0, 198.0, 113.0]
# 5.04225352113

A: As you are a beginner, I won't give a copy/paste code snippet; there is more to learn if you figure the details out yourself. But here's what you could do:
read the file line-by-line in a loop, storing the current line in a string
for each line, split the string on commas, resulting in a list
drop the first element of the list
take the maximum of the rest
Maybe someone else will come up with the actual code.
A: I'm not sure if this is the most "pythonic" way of doing it, but it should work. I'm using lists instead of arrays.
for line in fp:
    tokens = line.split(',') # tokenize the line with comma as the only delimiter
    numbers = map(int, tokens[1:]) # skip the first value and convert to integer values
    maxValue = max(numbers) # max() operates on lists or sequences, I can't recall
    minValue = min(numbers) # so does min()
    print maxValue / minValue # TADA

:)
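One compact variant joining the hints above; float() guards against Python 2 integer division, which the last answer's snippet would otherwise hit:
for line in open('file.txt'):
    values = [int(v) for v in line.split(',')[1:]]  # drop the row label
    print max(values) / float(min(values))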
Reading from files in python
I need to find out the maximum and minimum value in a line by reading a file, and I should be dividing the maximum value by the minimum value. I am interested in doing this in Python. The contents of the file (file.txt) look like this: A28102_at,151,263,88,484,118,270,458,872,62,194 AB000114_at,72,21,20,61,20,85,20,25,20,65 AB000115_at,281,250,358,118,197,71,168,296,198,113 The problem I am facing is that I should be neglecting the first value, that is, everything up to the first occurrence of a comma, and I am unable to figure out a method. I am also interested in storing the values in an array and then doing the comparison. Is this approach correct, or is a better method suggested?
[ "Python comes with batteries! Use the csv module to parse csv files:\n#!/usr/bin/env python\nimport csv\ncsvobj=csv.reader(open('file.txt','r'))\nfor datum in csvobj:\n datum=[float(val) for val in datum[1:]] \n print(datum)\n maximum=max(datum)\n minimum=min(datum)\n print(maximum/minimum)\n\n# [151.0, 263.0, 88.0, 484.0, 118.0, 270.0, 458.0, 872.0, 62.0, 194.0]\n# 14.064516129\n# [72.0, 21.0, 20.0, 61.0, 20.0, 85.0, 20.0, 25.0, 20.0, 65.0]\n# 4.25\n# [281.0, 250.0, 358.0, 118.0, 197.0, 71.0, 168.0, 296.0, 198.0, 113.0]\n# 5.04225352113\n\n", "As you are a beginner, I won't give a copy/paste code snippet; there is more to learn if you figure the details out yourself. But here's what you could do:\n\nread the file line-by-line in a loop, storing the current line in a string\nfor each line, split the string on commas, resulting in a list\ndrop the first element of the list\ntake the maximum of the rest\n\nMaybe someone else will come up with the actual code.\n", "I'm not sure if this is the most \"pythonic\" way of doing it, but it should work. I'm using lists instead of arrays.\nfor line in fp:\n tokens = line.split(',') # tokenize the line with comma as the only delimiter\n numbers = map(int, tokens[1:]) # skip the first value and convert to integer values\n maxValue = max(numbers) # max() operates on lists or sequences, I can't recall\n minValue = min(numbers) # so does min()\n print maxValue / minValue # TADA\n\n:)\n" ]
[ 6, 4, 3 ]
[]
[]
[ "file_io", "python" ]
stackoverflow_0001672360_file_io_python.txt
Q: How to upload huge files from Nokia 95 to webserver? I'm trying to upload a huge file from my Nokia N95 mobile to my webserver using PyS60 Python code. However, the code crashes because I'm trying to load the file into memory and trying to post to an HTTP URL. Any idea how to upload huge files (> 120 MB) to a webserver using PyS60? Following is the code I use to send the HTTP request:
f = open(soundpath + audio_filename)
fields = [('timestamp', str(audio_start_time)), ('test_id', str(test_id)), ('tester_name', tester_name), ('sensor_position', str(sensor_position)), ('sensor', 'audio') ]
files = [('data', audio_filename, f.read())]
post_multipart(MOBILE_CONTEXT_HOST, MOBILE_CONTEXT_SERVER_PORT, '/MobileContext/AudioServlet', fields, files)
f.close()

A: You can't. It's pretty much physically impossible. You'll need to split the file into small chunks and upload it bit by bit, which is very difficult to do quickly and efficiently on that sort of platform.
Jamie
A: You'll need to craft client code to split your source file in small chunks and rebuild the pieces server-side.
A: Where does this post_multipart() function come from? If it is from here, then it should be easy to adapt the code so that it takes a file object as an argument and not the full content of the file, so that post_multipart reads small chunks of data while posting instead of loading the whole file in memory before posting. This is definitely possible.
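A sketch of the chunked approach the answers describe; post_chunk stands in for whatever per-piece HTTP POST you implement, and the server has to reassemble the pieces by index:
CHUNK_SIZE = 256 * 1024  # small enough to hold in a phone's memory

def upload_in_chunks(path, post_chunk):
    f = open(path, 'rb')
    try:
        index = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            post_chunk(index, data)  # one HTTP POST per piece
            index += 1
    finally:
        f.close()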
How to upload huge files from Nokia 95 to webserver?
I'm trying to upload a huge file from my Nokia N95 mobile to my webserver using PyS60 Python code. However, the code crashes because I'm trying to load the file into memory and trying to post to an HTTP URL. Any idea how to upload huge files (> 120 MB) to a webserver using PyS60? Following is the code I use to send the HTTP request: f = open(soundpath + audio_filename) fields = [('timestamp', str(audio_start_time)), ('test_id', str(test_id)), ('tester_name', tester_name), ('sensor_position', str(sensor_position)), ('sensor', 'audio') ] files = [('data', audio_filename, f.read())] post_multipart(MOBILE_CONTEXT_HOST, MOBILE_CONTEXT_SERVER_PORT, '/MobileContext/AudioServlet', fields, files) f.close()
[ "You can't. It's pretty much physically impossible. You'll need to split the file into small chunks and upload it bit by bit, which is very difficult to do quickly and efficiently on that sort of platform.\nJamie\n", "You'll need to craft a client code to split your source file in small chunks and rebuild that pieces server-side.\n", "where does this post_multipart() function comes from ? \nif it is from here, then it should be easy to adapt the code so that it takes a file object in argument and not the full content of the file, so that post_mutipart reads small chunks of data while posting instead of loading the whole file in memory before posting.\nthis is definitely possible.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "file_upload", "http", "post", "pys60", "python" ]
stackoverflow_0001670944_file_upload_http_post_pys60_python.txt
Q: overloading __init__ of unittest.testcase I want to add two variables to my subclass, which is inherited from unittest.TestCase. I have:
import unittest

class mrp_repair_test_case(unittest.TestCase):
    def __init__(self, a=None, b=None, methodName=['runTest']):
        unittest.TestCase.__init__(self)
        self.a = a
        self.b = b

    def test1(self):
        ..........
        .......

def runtest():
    mrp_repair_test_case(a=10, b=20)
    suite = unittest.TestLoader().loadTestsFromTestCase(mrp_repair_test_case)
    res = unittest.TextTestRunner(stream=out, verbosity=2).run(suite)

How can I achieve this? I am getting this error:
ValueError: no such test method in <class 'mrp_repair.unit_test.test.mrp_repair_test_case'>: runTest

thanks
A: At first glance, it looks like you need to create an instance of mrp_repair_test_case. Your current line:
mrp_repair_test_case(a=10,b=20)

doesn't actually do anything.
Try (not tested):
def runtest():
    m = mrp_repair_test_case(a=10, b=20)
    suite = unittest.TestLoader().loadTestsFromTestCase(m)
    res = unittest.TextTestRunner(stream=out, verbosity=2).run(suite)

This assumes you've set up 'out' as a stream already.
Edit: By the way, is there any reason you're not using a setUp method to set these values? That would be normal best practice. Looking at the documentation of loadTestsFromTestCase it looks like it will only accept the Class itself, not an instance of it, which would mean you're rather working against the design of the unittest module.
Edit 2: In response to your further information, I would actually set your uid and cursor values separately at module level before calling the tests. I'm not a huge fan of globals normally, but if I'm understanding you correctly these values will be A) read-only B) always the same for the same customer, which avoids most of the normal pitfalls in using them.
Edit 3: To answer your edit, if you really want to use __init__ you probably can, but you will have to roll your own loadTestsFromTestCase alternative, and possibly your own TestSuite (you'll have to check the internals of how it works). As I said above, you'll be working against the existing design of the module, to the extent that if you decide to do your testing that way it might be easier to roll your own solution completely from scratch than use unittest. Amend: just checked, you'd definitely have to roll your own version of TestSuite, as the existing one creates a new instance of the TestCaseClass for each test.
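The setUp route the answer recommends, sketched minimally; the values 10 and 20 are the ones from the question:
import unittest

class MrpRepairTestCase(unittest.TestCase):
    def setUp(self):
        # runs before every test method; replaces the custom __init__
        self.a = 10
        self.b = 20

    def test_sum(self):
        self.assertEqual(self.a + self.b, 30)

suite = unittest.TestLoader().loadTestsFromTestCase(MrpRepairTestCase)
unittest.TextTestRunner(verbosity=2).run(suite)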
overloading __init__ of unittest.TestCase
I want to add two variables to my subclass which is inherited from unittest.TestCase, like I have: import unittest class mrp_repair_test_case(unittest.TestCase): def __init__(self, a=None, b=None, methodName=['runTest']): unittest.TestCase.__init__(self) self.a = a self.b = b def test1(self): .......... ....... def runtest(): mrp_repair_test_case(a=10,b=20) suite = unittest.TestLoader().loadTestsFromTestCase(mrp_repair_test_case) res = unittest.TextTestRunner(stream=out,verbosity=2).run(suite) How can I achieve this? I am getting this error: ValueError: no such test method in ****<class 'mrp_repair.unit_test.test.mrp_repair_test_case'>:**** runTest Thanks
[ "At first glance, it looks like you need to create an instance of mrp_repair_test_case. Your current line:\nmrp_repair_test_case(a=10,b=20)\n\ndoesn't actually do anything.\nTry (not tested):\ndef runtest():\n m = mrp_repair_test_case(a=10, b=20)\n suite = unittest.TestLoader().loadsTestsFromTestCase(m)\n res = unittest.TextTestRunner(stream=out, verbosity=2).run(suite)\n\nThis assumes you've set up 'out' as a stream already.\nEdit:\nBy the way, is there any reason you're not using a setUp method to set these values? That would be normal best practice. Looking at the documentation of loadTestsFromTestCase it looks like it will only accept the Class itself not an instance of it, which would mean you're rather working against the design of the unittest module.\nEdit 2:\nIn response to your further information, I would actually set your uid and cursor values seperately at module level before calling the tests. I'm not a huge fan of globals normally, but if I'm understanding you correctly these values will be A) read-only B) always the same for the same customer which avoids most of the normal pitfalls in using them.\nEdit 3:\nTo answer your edit, if you really want to use __init__ you probably can, but you will have to roll your own loadsTestsFromTestCase alternative, and possibly your own TestSuite (you'll have to check the internals of how it works). As I said above, you'll be working against the existing design of the module - to the extent that if you decide to do your testing that way it might be easier to roll your own solution completely from scratch than use unittest. Amend: just checked, you'd definately have to roll your own version of TestSuite, as the existing one creates a new instance of the TestCaseClass for each test.\n" ]
[ 6 ]
[]
[]
[ "python", "unit_testing" ]
stackoverflow_0001672520_python_unit_testing.txt
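A minimal sketch of the pattern the answer steers toward — leave __init__ alone and set the per-run values on the class (or at module level) before loadTestsFromTestCase builds the suite; the attribute names here are illustrative:
import unittest

class MrpRepairTestCase(unittest.TestCase):
    # set on the class before the suite is built, instead of via __init__
    a = None
    b = None

    def test_values(self):
        self.assertEqual(self.a + self.b, 30)

def runtest():
    MrpRepairTestCase.a = 10
    MrpRepairTestCase.b = 20
    suite = unittest.TestLoader().loadTestsFromTestCase(MrpRepairTestCase)
    unittest.TextTestRunner(verbosity=2).run(suite)

if __name__ == '__main__':
    runtest()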
Q: web chart with hover events I am after a library with a Python interface to render nice looking charts with hover events for each point. ChartDirector does what I want, but I would prefer an open source solution. OpenFlashChart looks good, although ideally I would want a non-Flash solution. Any other contenders? A: Not strictly Python, but you may want to look at Flot. (assuming by web chart you mean those that are to be embedded on web pages)
web chart with hover events
I am after a library with a Python interface to render nice looking charts with hover events for each point. ChartDirector does what I want, but I would prefer an open source solution. OpenFlashChart looks good, although ideally I would want a non-Flash solution. Any other contenders?
[ "Not strictly Python, but you may want to look at Flot. (assuming by web chart you mean those that are to be embedded on web pages)\n" ]
[ 2 ]
[]
[]
[ "charts", "graph", "hover", "python" ]
stackoverflow_0001671520_charts_graph_hover_python.txt
Q: SQL returning extra data Hey there, was wondering if anyone could help a newbie on SQL and Python. I thought I had a pretty decent grasp of it; however, something odd happened recently. Here is the following code, snipped from a larger portion: try: self.db.query("SELECT * FROM account WHERE email = '{0}' AND pass = '{1}'".format(self.mail.strip(self.bchars),self.pw.strip(self.bchars))) except MySQLdb.Error, e: print "Error %d: %s" % (e.args[0], e.args[1]) exists = self.db.store_result().fetch_row() print "EXISTS",exists It used to print this: EXISTS ((2, '[email protected]', '1234', 1, 0, 2161, '192.168.1.47', 0),) Now, it prints this: EXISTS ((2L, '[email protected]', '1234', 1L, 0L, 2161, '192.168.1.47', 0L),) I have no idea where these L's came from. I checked the SQL database and even reloaded it to be sure. I have reverted all my code for the last day (where all was functioning), but still can't find a solution. I have also tried searching, but I am not even sure what this problem is called, so it's hard to search. Thanks for any help or information anyone can provide. A: I think Python's DB API is supposed to always return integer fields as long. Anyway, 10L, 5L and so on is the way repr (which is used on every item of a tuple in your case) works for longs. One more thing. I see you are using MySQLdb. In that case, I strongly suggest that you stop using the C-API wrapper and instead use the "real" interface, which has all that automatic conversion/escaping and a bunch of other wonderful things (dict-cursors for one). This will save you a lot of grief and WILL make your code more stable/secure. In short, just forget about _mysql. Trust me. Your example can be rewritten as this (using the proper interface): import MySQLdb db = MySQLdb.connect(host=your_host, db=your_db, user=your_user, passwd=your_password) cur = db.cursor() cur.execute("""SELECT * FROM account WHERE email = %s AND pass = %s """, (self.mail, self.pw)) result = cur.fetchall() print "exists:", result This does the same as you are doing (except the error handling), but without manual string-formatting, escaping and so on. I know this is probably going to be downvoted for irrelevance, but if this answer helps even a single person to start using the proper database API, that would be really great. A: 2L is the long version of 2: different object but the same value. >>> print 2L == 2 True >>> print 2L is 2 False Are you running all the same (python, packages, db, OS...)? A: Is this because the data type in the SQL is 'long'? As you can see in the tuple, they still are valid numbers, not strings, so this should not pose any problem. A: The L's indicate that the numbers are of type long. Your field types in the db are probably set to be of this type.
SQL returning extra data
Hey there, was wondering if anyone could help a newbie on SQL and Python. I thought I had a pretty decent grasp of it; however, something odd happened recently. Here is the following code, snipped from a larger portion: try: self.db.query("SELECT * FROM account WHERE email = '{0}' AND pass = '{1}'".format(self.mail.strip(self.bchars),self.pw.strip(self.bchars))) except MySQLdb.Error, e: print "Error %d: %s" % (e.args[0], e.args[1]) exists = self.db.store_result().fetch_row() print "EXISTS",exists It used to print this: EXISTS ((2, '[email protected]', '1234', 1, 0, 2161, '192.168.1.47', 0),) Now, it prints this: EXISTS ((2L, '[email protected]', '1234', 1L, 0L, 2161, '192.168.1.47', 0L),) I have no idea where these L's came from. I checked the SQL database and even reloaded it to be sure. I have reverted all my code for the last day (where all was functioning), but still can't find a solution. I have also tried searching, but I am not even sure what this problem is called, so it's hard to search. Thanks for any help or information anyone can provide.
[ "I think, python's dbapi is supposed to always return integer-fields as long. \nAnyway, 10L, 5L and so on is the way repr (which is used on every item of a tuple in your case) works for longs.\nOne more thing. I see, you are using MySQLdb. In that case, I strongly suggest, that you stop using the c-api wrapper, but instead use the \"real\" interface, that has all that automatic conversion/escaping and a bunch of other wonderful things (dict-cursors for one).\nThis will save you a lot of grief and WILL make your code more stable/secure. In short, just forget about _mysql. Trust me.\nYour example can be rewritten as this (using the proper interface):\nimport MySQLdb\n\ndb = MySQLdb.connect(host=your_host, db=your_db,\n user=your_user, passwd=your_password)\n\ncur = db.cursor()\ncur.execute(\"\"\"SELECT * FROM account WHERE email = %s AND pass = %s \"\"\",\n (self.mail, self.pw))\nresult = cur.fetchall()\nprint \"exists:\", result\n\nThis does the same as you are doing (except the error handling), but without manual string-formatting, escaping and so on.\nI know, this is going to be downvoted for irrelevance, probably, but if this answer helps even a single person to start using the proper database api, that would be really great.\n", "2L is the long version of 2: different object but the same value.\n>>> print 2L == 2\nTrue\n\n>>> print 2L is 2\nFalse\n\nAre you running all the same (python, packages, db, OS...)?\n", "Is this because the data type in the SQL is 'long'?\nAs you can see in the tuple, they still are valid numbers, not strings, so this should not pose any problem.\n", "The L's indicate that the numbers are of the type Long. Your field types in the db are probably set to be of this type.\n" ]
[ 7, 2, 1, 1 ]
[]
[]
[ "mysql", "python", "sql" ]
stackoverflow_0001672814_mysql_python_sql.txt
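A quick illustration of the point made in the answers above — longs compare equal to ints and only their repr differs — plus one way to normalize a fetched row if the trailing L bothers you (Python 2; the row values are made up for the example):
row = (2L, 'user@example.com', '1234', 1L, 0L, 2161, '192.168.1.47', 0L)

# long and int hold the same values; only repr() differs
assert row[0] == 2 and isinstance(row[0], long)

# convert longs back to ints where they fit; leave everything else alone
normalized = tuple(int(v) if isinstance(v, long) else v for v in row)
print normalized  # (2, 'user@example.com', '1234', 1, 0, 2161, '192.168.1.47', 0)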
Q: python chat client lib I'm trying to write a Python lib that will implement the client side of a certain chat protocol. After I connect to the server, I start the main loop where I read from the server and handle received commands, and here I need to call a callback function (like on_message or on_file_received, etc.). How should I go about implementing this? Should I start a new thread for each callback function, since some callbacks may take a while to return and I would otherwise time out? Also, if the main loop where I read from the server is in a thread, can I write to the socket from another thread (to send messages to the server)? Or is there a better approach? Thanks. A: For a python app doing this, I wouldn't use threads. I would use a framework like Twisted. The docs have examples; here's a chat example. A: I would use the select module, or alternately twisted, however select is a bit more portable, and to my mind somewhat more pythonic. A: Threads are just an unnecessary complication here and will lead to obscure bugs if you're not familiar with how to use them correctly. asyncore or asynchat are simple routes to the same goal, however.
python chat client lib
I'm trying to write a Python lib that will implement the client side of a certain chat protocol. After I connect to the server, I start the main loop where I read from the server and handle received commands, and here I need to call a callback function (like on_message or on_file_received, etc.). How should I go about implementing this? Should I start a new thread for each callback function, since some callbacks may take a while to return and I would otherwise time out? Also, if the main loop where I read from the server is in a thread, can I write to the socket from another thread (to send messages to the server)? Or is there a better approach? Thanks.
[ "For a python app doing this, I wouldn't use threads. I would use a framework like Twisted.\nThe docs have examples; here's a chat example.\n", "I would use the select module, or alternately twisted, however select is a bit more portable, and to my mind somewhat more pythonic.\n", "Threads are just an unnecessary complication here and will lead to obscure bugs if you're not familiar with how to use them correctly. asyncore or asynchat are simple routes to the same goal, however.\n" ]
[ 6, 2, 1 ]
[]
[]
[ "chat", "multithreading", "python" ]
stackoverflow_0001670735_chat_multithreading_python.txt
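A minimal sketch of the asyncore/asynchat route the last answer mentions — one event loop, no threads, with complete lines dispatched to a callback (the line terminator, host, and callback names are assumptions about the protocol, not part of the original post):
import asyncore
import asynchat
import socket

class ChatClient(asynchat.async_chat):
    def __init__(self, host, port, on_message):
        asynchat.async_chat.__init__(self)
        self.on_message = on_message
        self.buffer = []
        self.set_terminator('\r\n')  # assumed line-based protocol
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))

    def handle_connect(self):
        pass  # connection established; nothing to do yet

    def collect_incoming_data(self, data):
        self.buffer.append(data)

    def found_terminator(self):
        line = ''.join(self.buffer)
        self.buffer = []
        self.on_message(line)  # callbacks run in the loop -- keep them short

    def say(self, line):
        self.push(line + '\r\n')  # sending also goes through the same loop

# client = ChatClient('chat.example.com', 6667, on_message=lambda l: None)
# asyncore.loop()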
Q: parse.unquote_plus TypeError I'm trying to format a file so that it can be inserted into a database; the file is originally compressed and around 1.3 MB. Each line looks something like this: 398,%7EAnoniem+001%7E,543,480,7525010,1775,0 This is what the code that parses this file looks like: Village = gzip.open(Root+'\\data'+'\\' +str(Newest_Date[0])+'\\' +str(Newest_Date[1])+'\\' +str(Newest_Date[2])\ +'\\'+str(Newest_Date[3])+' village.gz'); Village_Parsed = str for line in Village: Village_Parsed = Village_Parsed + urllib.parse.unquote_plus(line); print(Village.readline()); When I run the program I get this error: Village_Parsed = Village_Parsed + urllib.parse.unquote_plus(line); file "C:\Python31\lib\urllib\parse.py", line 404, in unquote_plus string = string.replace('+', ' ') TypeError: expected an object with the buffer interface Any idea what is wrong here? Thanks in advance for any help :) A: PROBLEM 1 is that urllib.unquote_plus doesn't like the line that you have fed it. The message should be "Please supply a str object" :-) I suggest that you fix problem 2 below, and insert: print('line', type(line), repr(line)) immediately after your for statement so that you can see what you are getting in line. You will find that it returns bytes objects: >>> [line for line in gzip.open('test.gz')] [b'nudge nudge\n', b'wink wink\n'] Using a mode of 'r' has scant effect: >>> [line for line in gzip.open('test.gz', 'r')] [b'nudge nudge\n', b'wink wink\n'] I suggest that instead of passing line to the parsing routine you pass line.decode('UTF-8') ... or whatever encoding was used when the gz file was written. PROBLEM 2 is in this line: Village_Parsed = str str is a type. You need an empty str object. To get that, you could call the type i.e. str() which is formally correct but impractical/unusual/scoffable/weird when compared to using a string constant '' ... so do this: Village_Parsed = '' You also have PROBLEM 3: your last statement is trying to read the gz file after EOF. A: import gzip, os, urllib.parse archive_relpath = os.sep.join(map(str, Newest_Date[:4])) + ' village.gz' archive_path = os.path.join(Root, 'data', archive_relpath) with gzip.open(archive_path) as Village: Village_Parsed = ''.join(urllib.parse.unquote_plus(line.decode('ascii')) for line in Village) print(Village_Parsed) Output: 398,~Anoniem 001~,543,480,7525010,1775,0 NOTE: RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax says: This specification does not mandate any particular character encoding for mapping between URI characters and the octets used to store or transmit those characters. When a URI appears in a protocol element, the character encoding is defined by that protocol; without such a definition, a URI is assumed to be in the same character encoding as the surrounding text. Therefore 'ascii' in the line.decode('ascii') fragment should be replaced by whatever character encoding you've used to encode your text.
parse.unquote_plus TypeError
I'm trying to format a file so that it can be inserted into a database; the file is originally compressed and around 1.3 MB. Each line looks something like this: 398,%7EAnoniem+001%7E,543,480,7525010,1775,0 This is what the code that parses this file looks like: Village = gzip.open(Root+'\\data'+'\\' +str(Newest_Date[0])+'\\' +str(Newest_Date[1])+'\\' +str(Newest_Date[2])\ +'\\'+str(Newest_Date[3])+' village.gz'); Village_Parsed = str for line in Village: Village_Parsed = Village_Parsed + urllib.parse.unquote_plus(line); print(Village.readline()); When I run the program I get this error: Village_Parsed = Village_Parsed + urllib.parse.unquote_plus(line); file "C:\Python31\lib\urllib\parse.py", line 404, in unquote_plus string = string.replace('+', ' ') TypeError: expected an object with the buffer interface Any idea what is wrong here? Thanks in advance for any help :)
[ "PROBLEM 1 is that urllib.unquote_plus doesn't like the line that you have fed it. The message should be \"Please supply a str object\" :-) I suggest that you fix problem 2 below, and insert:\nprint('line', type(line), repr(line))\n\nimmediately after your for statement so that you can see what you are getting in line.\nYou will find that it returns bytes objects:\n>>> [line for line in gzip.open('test.gz')]\n[b'nudge nudge\\n', b'wink wink\\n']\n\nUsing a mode of 'r' has scant effect:\n>>> [line for line in gzip.open('test.gz', 'r')]\n[b'nudge nudge\\n', b'wink wink\\n']\n\nI suggest that instead of passing line to the parsing routine you pass line.decode('UTF-8') ... or whatever encoding was used when the gz file was written.\nPROBLEM 2 is in this line:\nVillage_Parsed = str\n\nstr is a type. You need an empty str object. To get that, you could call the type i.e. str() which is formally correct but impractical/unusual/scoffable/weird when compared to using a string constant '' ... so do this:\nVillage_Parsed = ''\n\nYou also have PROBLEM 3: your last statement is trying to read the gz file after EOF.\n", "import gzip, os, urllib.parse\n\narchive_relpath = os.sep.join(map(str, Newest_Date[:4])) + ' village.gz' \narchive_path = os.path.join(Root, 'data', archive_relpath)\n\nwith gzip.open(archive_path) as Village:\n Village_Parsed = ''.join(urllib.parse.unquote_plus(line.decode('ascii'))\n for line in Village)\n print(Village_Parsed)\n\nOutput:\n\n398,~Anoniem 001~,543,480,7525010,1775,0\n\nNOTE: RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax says:\n\nThis specification does not mandate\nany particular character encoding\nfor mapping between URI characters and\nthe octets used to store or\ntransmit those characters. When a URI\nappears in a protocol element, the\ncharacter encoding is defined by that\nprotocol; without such a\ndefinition, a URI is assumed to be in\nthe same character encoding as the\nsurrounding text.\n\nTherefore 'ascii' in the line.decode('ascii') fragment should be replaced by whatever character encoding you've used to encode your text.\n" ]
[ 2, 0 ]
[]
[]
[ "parsing", "python", "typeerror", "urllib" ]
stackoverflow_0001672621_parsing_python_typeerror_urllib.txt
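A compact sketch of the fix both answers converge on — keep Village_Parsed a real str and decode each bytes line before unquoting (the path and encoding are assumptions):
import gzip
import urllib.parse

village = gzip.open('village.gz')  # illustrative path
village_parsed = ''                # an empty str object, not the str type
for line in village:
    # gzip yields bytes in Python 3; decode before unquote_plus
    village_parsed += urllib.parse.unquote_plus(line.decode('utf-8'))
village.close()
print(village_parsed)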
Q: Setting a lambda function as a property Consider these two classes: class Test(int): difference = property(lambda self: self.__sub__) class Test2(int): difference=lambda self: self.__sub__ Is there any difference between these two classes? New: If so, what is the purpose of using the property to store a lambda function that returns another function? Update: Changed the question to what I should have asked in the first place. Sorry. Even though I now know the solution from the answers, it would be unfair for me to do a self answer in these circumstances. (without leaving the answer for a few days at least). Update 2: Sorry, I wasn't clear enough again. The question was about the particular construction, not properties in general. A: For Test, you could use .difference - for Test2, you'd need to use .difference() instead. As for why you might use it, a potential use would be to replace something that was previously directly stored as a property with a dynamic calculation instead. For instance, if you used to store property obj.a, but then you expanded your implementation so that it knew instead properties obj.b and obj.c that could be used to calculate a, but could also be used to calculate different things. If you still wanted to provide backwards-compat with things that used the previous object form, you could implement obj.a as a property() that calculated a based on b and c and it'd behave to those older code fragments as it previously did, with no other code modification needed. A: People have given their opinions without analyzing it; it can be better settled by Python itself. Below is the code to check the difference: import difflib from pprint import pprint s1 = """ class Test(int): difference=property(lambda self: self.__sub__) """ s2 = """ class Test(int): difference=lambda self: self.__sub__ """ d = difflib.Differ() print "and the difference is..." for c in d.compare(s1, s2): if c[0] in '+-': print c[1:], And, as expected, it says and the difference is... p r o p e r t y ( ) A: Edit: Ah, I see. You are asking why anybody would do exactly the code above. It's not, in fact, a question about why to make a lambda or a property at all, it's not a question of the differences between the two examples, and not even why you want to make a property out of a lambda. Your question is "Why would anybody make a property of a lambda that just returns self.__sub__". And the answer is: One wouldn't. Let's assume somebody wants to do this: >>> foo = MyInt(8) >>> print foo.difference(7) 1 So he tries to accomplish it by this class: class MyInt(int): def difference(self, i): return self - i But that's two lines, and since he is a Ruby programmer and believes that good code is code that has few lines of code, he changes it to: class MyInt(int): difference = int.__sub__ To save one line of code. But apparently, things are still too easy. He learned in Ruby that a problem is not properly solved unless you use anonymous code blocks, so he will try to use Python's nearest equivalent, lambdas, for absolutely no reason: class MyInt(int): difference=lambda self, i: self - i All of these work. But things are still WAY too uncomplicated, so instead he decides to make things more complex, by not doing the calculation, but returning the sub method: class MyInt(int): difference=lambda self: self.__sub__ Ah, but that doesn't work, because he needs to call difference to get the sub-method: >>> foo = MyInt(8) >>> print foo.difference()(7) 1 So he makes it a property: class MyInt(int): difference=property(lambda self: self.__sub__) There. Now he has found the maximum complexity to solve a non-problem. But normal people wouldn't do any of this; they would just do: >>> foo = 8 >>> print foo - 7 1 A: Yes, in one case difference is a property. If you are asking what a property is, you can see it as a method that gets automatically called. A: Yes, in one case difference is a property. A: Purpose of property can be 1. To provide get/set hooks while accessing an attribute e.g. if you used to have a class with attribute a, and later on you want to do something else when it is set, you can convert that attribute to a property without affecting the interface or how users use your class. So in the example below, classes A and B are exactly the same for a user, but internally in B you can do many things in get/setX: class A(object): def __init__(self): self.x = 0 a = A() a.x = 1 class B(object): def __init__(self): self.x = 0 def getX(self): return self._x def setX(self, x): self._x = x x = property(getX, setX) b = B() b.x = 1 2. As implied in 1, property is a better alternative to get/set calls, so instead of getX/setX the user uses the less verbose self.x and self.x = 1, though personally I never make a property just for getting or setting an attribute; if the need arises it can be done later on, as shown in #1. As far as the difference is concerned, property provides you with get/set/del for an attribute, but in the example you have given a method (lambda or proper function) can only be used to do one of get/set or del, so you will need three such lambdas: differenceSet, differenceGet, differenceDel
Setting a lambda function as a property
Consider these two classes: class Test(int): difference = property(lambda self: self.__sub__) class Test2(int): difference=lambda self: self.__sub__ Is there any difference between these two classes? New: If so, what is the purpose of using the property to store a lambda function that returns another function? Update: Changed the question to what I should have asked in the first place. Sorry. Even though I now know the solution from the answers, it would be unfair for me to do a self answer in these circumstances. (without leaving the answer for a few days at least). Update 2: Sorry, I wasn't clear enough again. The question was about the particular construction, not properties in general.
[ "For Test1, you could use .difference - for Test2, you'd need to use .difference() instead.\nAs for why you might use it, a potential use would be to replace something that was previously directly stored as a property with a dynamic calculation instead.\nFor instance, if you used to store property obj.a, but then you expanded your implementation so that it knew instead properties obj.b and obj.c that could be used to calculate a, but could also be used to calculate different things. If you still wanted to provide backwards-compat with things that used the previous object form, you could implement obj.a as a property() that calculated a based on b and c and it'd behave to those older code fragments as it previously did, with no other code modification needed.\n", "People have given there opinion without analyzing it, it can be better solved by python itself, below is the code to check the difference\nimport difflib\nfrom pprint import pprint\n\ns1 = \"\"\"\nclass Test(int):\n difference=property(lambda self: self.__sub__)\n\"\"\"\n\ns2 = \"\"\"\nclass Test(int):\n difference=lambda self: self.__sub__\n\"\"\"\n\nd = difflib.Differ()\n\nprint \"and the difference is...\"\nfor c in d.compare(s1, s2):\n if c[0] in '+-': print c[1:],\n\nand as expected it says\nand the difference is...\n p r o p e r t y ( )\n\n", "Edit: Ah, I see. You are asking why anybody would do exactly the code above. It's not, in fact a question about why to make a lambda or a property at all, it's not a question of the differences between the two examples, and not even why you want to make a property out of a lambda.\nYour question is \"Why would anybody make a property of a lambda that just returns self.__sub__\".\nAnd the answer is: One wouldn't.\nLet's assume somebody wants to do this:\n>>> foo = MyInt(8)\n>>> print foo.difference(7)\n1\n\nSo he tries to accomplish it by this class:\nclass MyInt(int):\n def difference(self, i):\n return self - i\n\nBut that's two lines, and since he is a Ruby programmer and believes that good code is code that has few lines of code, he changes it to:\nclass MyInt(int):\n difference = int.__sub__\n\nTo save one line of code. But apparently, things are still too easy. He learned in Ruby that a problem is not properly solved unless you use anonymous code blocks, so he will try to use Pythons nearest equivalent, lambdas, for absolutely no reason:\nclass MyInt(int):\n difference=lambda self, i: self - i\n\nAll these works. But things are still WAY to uncomplicated, so instead he decides to make things more complex, by not doing the calculation, but returning the sub method:\nclass MyInt(int):\n difference=lambda self: self.__sub__\n\nAh, but that doesn't work, because he needs to call difference to get the sub-method:\n>>> foo = MyInt(8)\n>>> print foo.difference()(7)\n1\n\nSo he makes it a property:\nclass MyInt(int):\n difference=property(lambda self: self.__sub__)\n\nThere. Now he has found the maximum complexity to solve a non-problem.\nBut normal people wouldn't do any of these, but do:\n>>> foo = 8\n>>> print foo - 7\n1\n\n", "Yes, in one case difference is a property. If you are asking what a property is, you can see it as a method that gets automatically called.\n", "Yes, in one case difference is a property\n", "Purpose of property can be\n1.\nTo provide get/set hooks while accessing an attribute\ne.g. 
if you used to have class with attribute a, later on you want to do something else when it is set, you can convert that attribute to property without affecting the interface or how users use your class. So in the example below class A and B are exactly same for a user but internally in B you can do many things in get/setX\nclass A(object):\n def __init__(self):\n self.x = 0\n\na = A()\na.x = 1\n\nclass B(object):\n def __init__(self):\n self.x = 0\n\n def getX(self): return self._x\n def setX(self, x): self._x = x \n x = property(getX, setX)\n\nb = B()\nB.x = 1\n\n2.\nAs implied in 1, property is a better alternative to get/set calls, so instead of getX, setX user uses less verbose self.x and self.x = 1, though personally I never make a property just for getting or setting a attribute, if need arises it can be done later on as shown in #1/\nas far as difference in concerned, property provide you with get/set/del for an atribute, but in the example you have given a method(lambda or proper function) can only be used to do one of get/set or del, so you will need three such lambdas differenceSet, differenceGet, differenceDel\n" ]
[ 8, 3, 3, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001665789_python.txt
Q: Combining 2 lists in python I have 2 lists, each of equal size, and am interested in combining these two lists and writing them to a file. alist=[1,2,3,5] blist=[2,3,4,5] The resulting list should be like [(1,2), (2,3), (3,4), (5,5)]. After that I want it to be written to a file. How can I accomplish this? A: # combine the lists zipped = zip(alist, blist) # write to a file (in append mode) file = open("filename", 'a') for item in zipped: file.write("%d,%d\n" % item) file.close() The resulting output in the file will be: 1,2 2,3 3,4 5,5 A: For the sake of completeness, I'll add to Ben's solution that itertools.izip is preferable especially for larger lists if the result is used iteratively, as the final result is not an actual list but a generator: from itertools import izip zipped = izip(alist, blist) with open("output.txt", "wt") as f: for item in zipped: f.write("{0},{1}\n".format(*item)) The documentation for izip can be found here.
Combining 2 lists in python
I have 2 lists, each of equal size, and am interested in combining these two lists and writing them to a file. alist=[1,2,3,5] blist=[2,3,4,5] The resulting list should be like [(1,2), (2,3), (3,4), (5,5)]. After that I want it to be written to a file. How can I accomplish this?
[ "# combine the lists\nzipped = zip(alist, blist)\n\n# write to a file (in append mode)\nfile = open(\"filename\", 'a') \nfor item in zipped:\n file.write(\"%d, %d\\n\" % item) \nfile.close()\n\nThe resulting output in the file will be:\n 1,2\n 2,3\n 3,4\n 5,5\n\n", "For the sake of completeness, I'll add to Ben's solution that itertools.izip is preferable especially for larger lists if the result is used iteratively, as the final result is not an actual list but a generator:\nfrom itertools import izip\nzipped = izip(alist, blist)\nwith open(\"output.txt\", \"wt\") as f:\n for item in zipped:\n f.write(\"{0},{1}\\n\".format(*item))\n\nThe documentation for izip can be found here.\n" ]
[ 13, 6 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001673005_list_python.txt
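For completeness, a sketch of the same combine-and-write step using the csv module, which takes care of separators and line endings (the filename is illustrative; on Python 2 the file must be opened in binary mode for csv):
import csv

alist = [1, 2, 3, 5]
blist = [2, 3, 4, 5]

f = open('pairs.csv', 'wb')  # 'wb' is what the csv module expects on Python 2
csv.writer(f).writerows(zip(alist, blist))
f.close()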
Q: python and pyPdf - how to extract text from the pages so that there are spaces between lines Currently, if I make a page object of a PDF page with pyPdf and call extractText(), what happens is that lines are concatenated together. For example, if line 1 of the page says "hello" and line 2 says "world", the resulting text returned from extractText() is "helloworld" instead of "hello world". Does anyone know how to fix this, or have suggestions for a workaround? I really need the text to have spaces between the lines, because I'm doing text mining on this PDF text and not having spaces between lines kills it. A: This is a common problem with pdf parsing. You can also expect trailing dashes that you will have to fix in some cases. I came up with a workaround for one of my projects which I will describe here shortly: I used pdfminer to extract XML from PDF and also found concatenated words in the XML. I extracted the same PDF as HTML and the HTML can be described by lines of the following regex: <span style="position:absolute; writing-mode:lr-tb; left:[0-9]+px; top:([0-9]+)px; font-size:[0-9]+px;">([^<]*)</span> The spans are positioned absolutely and have a top-style that you can use to determine if a line break happened. If a line break happened and the last word on the last line does not have a trailing dash you can separate the last word on the last line and the first word on the current line. It can be tricky in the details, but you might be able to fix almost all text parsing errors. Additionally you might want to run a dictionary library like enchant over your text, find errors and if the fix suggested by the dictionary is like the error word but with a space somewhere, the error word is likely to be a parsing error and can be fixed with the dictionaries suggestion. Parsing PDF sucks and if you find a better source, use it.
python and pyPdf - how to extract text from the pages so that there are spaces between lines
Currently, if I make a page object of a PDF page with pyPdf and call extractText(), what happens is that lines are concatenated together. For example, if line 1 of the page says "hello" and line 2 says "world", the resulting text returned from extractText() is "helloworld" instead of "hello world". Does anyone know how to fix this, or have suggestions for a workaround? I really need the text to have spaces between the lines, because I'm doing text mining on this PDF text and not having spaces between lines kills it.
[ "This is a common problem with pdf parsing. You can also expect trailing dashes that you will have to fix in some cases. I came up with a workaround for one of my projects which I will describe here shortly:\nI used pdfminer to extract XML from PDF and also found concatenated words in the XML. I extracted the same PDF as HTML and the HTML can be described by lines of the following regex:\n<span style=\"position:absolute; writing-mode:lr-tb; left:[0-9]+px; top:([0-9]+)px; font-size:[0-9]+px;\">([^<]*)</span>\n\nThe spans are positioned absolutely and have a top-style that you can use to determine if a line break happened. If a line break happened and the last word on the last line does not have a trailing dash you can separate the last word on the last line and the first word on the current line. It can be tricky in the details, but you might be able to fix almost all text parsing errors.\nAdditionally you might want to run a dictionary library like enchant over your text, find errors and if the fix suggested by the dictionary is like the error word but with a space somewhere, the error word is likely to be a parsing error and can be fixed with the dictionaries suggestion.\nParsing PDF sucks and if you find a better source, use it.\n" ]
[ 2 ]
[]
[]
[ "formatting", "pypdf", "python", "text" ]
stackoverflow_0001672466_formatting_pypdf_python_text.txt
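A rough sketch of the HTML-based workaround the answer outlines — reassemble the text from pdfminer's absolutely positioned spans, inserting a space whenever the top coordinate changes (it deliberately ignores the trailing-dash case and assumes one span per line, matching the pattern quoted above):
import re

SPAN_RE = re.compile(
    r'<span style="position:absolute; writing-mode:lr-tb; '
    r'left:[0-9]+px; top:([0-9]+)px; font-size:[0-9]+px;">([^<]*)</span>')

def join_lines(html):
    parts, last_top = [], None
    for top, text in SPAN_RE.findall(html):
        if last_top is not None and top != last_top:
            parts.append(' ')  # a line break happened in the PDF
        parts.append(text)
        last_top = top
    return ''.join(parts)

print join_lines('<span style="position:absolute; writing-mode:lr-tb; '
                 'left:0px; top:10px; font-size:12px;">hello</span>'
                 '<span style="position:absolute; writing-mode:lr-tb; '
                 'left:0px; top:22px; font-size:12px;">world</span>')
# prints: hello world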
Q: What do backticks mean to the Python interpreter? Example: `num` I'm playing around with list comprehensions and I came across this little snippet on another site: return ''.join([`num` for num in xrange(loop_count)]) I spent a few minutes trying to replicate the function (by typing) before realising the `num` bit was breaking it. What does enclosing a statement in those characters do? From what I can see it is the equivalent of str(num). But when I timed it: return ''.join([str(num) for num in xrange(10000000)]) It takes 4.09 seconds whereas: return ''.join([`num` for num in xrange(10000000)]) takes 2.43 seconds. Both give identical results, but one is a lot slower. What is going on here? Oddly... repr() gives slightly slower results than `num`. 2.99 seconds vs 2.43 seconds. I am using Python 2.6 (haven't tried 3.0 yet). A: Backticks are a deprecated alias for repr(). Don't use them any more; the syntax was removed in Python 3.0. Using backticks seems to be faster than using repr(num) or num.__repr__() in version 2.x. I guess it's because additional dictionary lookup is required in the global namespace (for repr), or in the object's namespace (for __repr__), respectively. Using the dis module proves my assumption: def f1(a): return repr(a) def f2(a): return a.__repr__() def f3(a): return `a` Disassembling shows: >>> import dis >>> dis.dis(f1) 3 0 LOAD_GLOBAL 0 (repr) 3 LOAD_FAST 0 (a) 6 CALL_FUNCTION 1 9 RETURN_VALUE >>> dis.dis(f2) 6 0 LOAD_FAST 0 (a) 3 LOAD_ATTR 0 (__repr__) 6 CALL_FUNCTION 0 9 RETURN_VALUE >>> dis.dis(f3) 9 0 LOAD_FAST 0 (a) 3 UNARY_CONVERT 4 RETURN_VALUE f1 involves a global lookup for repr, f2 an attribute lookup for __repr__, whereas the backtick operator is implemented in a separate opcode. Since there is no overhead for dictionary lookup (LOAD_GLOBAL/LOAD_ATTR) nor for function calls (CALL_FUNCTION), backticks are faster. I guess that the Python folks decided that having a separate low-level operation for repr() is not worth it, and having both repr() and backticks violates the principle "There should be one-- and preferably only one --obvious way to do it" so the feature was removed in Python 3.0. A: Backtick quoting is generally non-useful and is gone in Python 3. For what it's worth, this: ''.join(map(repr, xrange(10000000))) is marginally faster than the backtick version for me. But worrying about this is probably a premature optimisation. A: My guess is that num doesn't define the method __str__(), so str() has to do a second lookup for __repr__. The backticks look directly for __repr__. If that's true, then using repr() instead of the backticks should give you the same results.
What do backticks mean to the Python interpreter? Example: `num`
I'm playing around with list comprehensions and I came across this little snippet on another site: return ''.join([`num` for num in xrange(loop_count)]) I spent a few minutes trying to replicate the function (by typing) before realising the `num` bit was breaking it. What does enclosing a statement in those characters do? From what I can see it is the equivalent of str(num). But when I timed it: return ''.join([str(num) for num in xrange(10000000)]) It takes 4.09 seconds whereas: return ''.join([`num` for num in xrange(10000000)]) takes 2.43 seconds. Both give identical results, but one is a lot slower. What is going on here? Oddly... repr() gives slightly slower results than `num`. 2.99 seconds vs 2.43 seconds. I am using Python 2.6 (haven't tried 3.0 yet).
[ "Backticks are a deprecated alias for repr(). Don't use them any more; the syntax was removed in Python 3.0.\nUsing backticks seems to be faster than using repr(num) or num.__repr__() in version 2.x. I guess it's because additional dictionary lookup is required in the global namespace (for repr), or in the object's namespace (for __repr__), respectively.\n\nUsing the dis module proves my assumption:\ndef f1(a):\n return repr(a)\n\ndef f2(a):\n return a.__repr__()\n\ndef f3(a):\n return `a`\n\nDisassembling shows:\n>>> import dis\n>>> dis.dis(f1)\n 3 0 LOAD_GLOBAL 0 (repr)\n 3 LOAD_FAST 0 (a)\n 6 CALL_FUNCTION 1\n 9 RETURN_VALUE\n>>> dis.dis(f2)\n 6 0 LOAD_FAST 0 (a)\n 3 LOAD_ATTR 0 (__repr__)\n 6 CALL_FUNCTION 0\n 9 RETURN_VALUE\n>>> dis.dis(f3)\n 9 0 LOAD_FAST 0 (a)\n 3 UNARY_CONVERT\n 4 RETURN_VALUE\n\nf1 involves a global lookup for repr, f2 an attribute lookup for __repr__, whereas the backtick operator is implemented in a separate opcode. Since there is no overhead for dictionary lookup (LOAD_GLOBAL/LOAD_ATTR) nor for function calls (CALL_FUNCTION), backticks are faster.\nI guess that the Python folks decided that having a separate low-level operation for repr() is not worth it, and having both repr() and backticks violates the principle\n\n\"There should be one-- and preferably only one --obvious way to do it\"\n\nso the feature was removed in Python 3.0.\n", "Backtick quoting is generally non-useful and is gone in Python 3.\nFor what it's worth, this:\n''.join(map(repr, xrange(10000000)))\n\nis marginally faster than the backtick version for me. But worrying about this is probably a premature optimisation.\n", "My guess is that num doesn't define the method __str__(), so str() has to do a second lookup for __repr__.\nThe backticks look directly for __repr__. If that's true, then using repr() instead of the backticks should give you the same results.\n" ]
[ 134, 10, 1 ]
[]
[]
[ "list_comprehension", "python" ]
stackoverflow_0001673071_list_comprehension_python.txt
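If you want to reproduce the timing comparison above yourself, a small timeit sketch (Python 2.x only, since the backtick syntax is gone in 3.0; the range size and repeat count are illustrative):
import timeit

setup = 'r = xrange(100000)'
for stmt in ["''.join([str(n) for n in r])",
             "''.join([repr(n) for n in r])",
             "''.join([`n` for n in r])",
             "''.join(map(repr, r))"]:
    t = timeit.timeit(stmt, setup=setup, number=10)
    print '%-40s %.3fs' % (stmt, t)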
Q: How to deal with query parameter's encoding? I assumed that any data being sent to my parameter strings would be UTF-8, since that is what my whole site uses throughout. Lo and behold, I was wrong. For example, this page has the character ä as UTF-8 in the document (from the query string), but proceeds to send B\xe4ule (which is either ISO-8859-1 or Windows-1252) when you click submit. It also fires off an AJAX request which also fails when trying to decode the non-UTF-8 character. And in Django, my request.POST is really screwed up: >>> print request.POST <QueryDict: {u'alias': [u'eu.wowarmory.com/character-sheet.xml?r=Der Rat von Dalaran&cn=B\ufffde']}> How can I just make all these headaches go away and work in UTF-8? A: Since Django 1.0 all values you get from form submission are unicode objects, not bytestrings like in Django 0.96 and earlier. To get utf-8 from your values encode them with utf-8 codec: request.POST['somefield'].encode('utf-8') To get query parameters decoded properly, they have to be properly encoded first: In [3]: urllib.quote('ä') Out[3]: '%C3%A4' I think your problem comes from bad encoding of query parameters. A: You should also add accept-charset="UTF-8" to the <form/> tag. A: Although it's AFAIK not specified anywhere, all browsers use the character encoding of the HTML page, on which the form is embedded as the encoding for submitting the form back to the server. So if you want the URL parameters to be UTF-8-encoded, you have to make sure that the HTML page, on which the form is embedded, is also UTF-8 encoded. A: According to Get non-UTF-8-form fields as UTF-8 in PHP?, you'll need to make sure the page itself is served up using UTF8 encoding. A: Getting an utf-8 string from the submitted form should just be a matter of encoding the unicode object: next = request.POST['next'].encode('utf-8') For the AJAX request, can you confirm that that request is also being sent as utf-8 and declared as utf-8 in the headers?
How to deal with query parameter's encoding?
I assumed that any data being sent to my parameter strings would be UTF-8, since that is what my whole site uses throughout. Lo and behold, I was wrong. For example, this page has the character ä as UTF-8 in the document (from the query string), but proceeds to send B\xe4ule (which is either ISO-8859-1 or Windows-1252) when you click submit. It also fires off an AJAX request which also fails when trying to decode the non-UTF-8 character. And in Django, my request.POST is really screwed up: >>> print request.POST <QueryDict: {u'alias': [u'eu.wowarmory.com/character-sheet.xml?r=Der Rat von Dalaran&cn=B\ufffde']}> How can I just make all these headaches go away and work in UTF-8?
[ "Since Django 1.0 all values you get from form submission are unicode objects, not bytestrings like in Django 0.96 and earlier. To get utf-8 from your values encode them with utf-8 codec:\nrequest.POST['somefield'].encode('utf-8')\n\nTo get query parameters decoded properly, they have to be properly encoded first:\nIn [3]: urllib.quote('ä')\nOut[3]: '%C3%A4'\n\nI think your problem comes from bad encoding of query parameters.\n", "You should also add accept-charset=\"UTF-8\" to the <form/> tag.\n", "Although it's AFAIK not specified anywhere, all browsers use the character encoding of the HTML page, on which the form is embedded as the encoding for submitting the form back to the server. So if you want the URL parameters to be UTF-8-encoded, you have to make sure that the HTML page, on which the form is embedded, is also UTF-8 encoded.\n", "According to Get non-UTF-8-form fields as UTF-8 in PHP?, you'll need to make sure the page itself is served up using UTF8 encoding. \n", "Getting an utf-8 string from the submitted form should just be a matter of encoding the\nunicode object:\nnext = request.POST['next'].encode('utf-8')\nFor the AJAX request, can you confirm that that request is also being sent as utf-8 and declared as utf-8 in the headers?\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ "django", "python", "unicode", "utf_8" ]
stackoverflow_0001526965_django_python_unicode_utf_8.txt
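A tiny demonstration of the round trip the first answer describes — Django hands you unicode, and query parameters must be UTF-8 encoded before percent-quoting (Python 2; the sample word is taken from the question):
import urllib

name = u'B\xe4ule'                  # a unicode object, as Django provides
utf8_bytes = name.encode('utf-8')   # 'B\xc3\xa4ule'
quoted = urllib.quote(utf8_bytes)   # 'B%C3%A4ule' -- safe in a query string
roundtrip = urllib.unquote(quoted).decode('utf-8')
print roundtrip == name             # True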
Q: Need help on making the recursive parser using pyparsing I am trying Python's pyparsing for parsing. I got stuck while making the recursive parser. Let me explain the problem: I want to make the Cartesian product of the elements. The syntax is cross({elements},{element}) To put it more specifically: cross({a},{c1}) or cross({a,b},{c1}) or cross({a,b,c,d},{c1}) and so on. So in the general form, the first group will have n elements (a,b,c,d) and the second group will have one element, so the final output will be the Cartesian product. The syntax is to be made recursive because it can go n levels deep, like cross(cross({a,b},{c1}),{c2}) This means: cross a,b with c1; let's say the outcome is y; we then cross y with c2. This can go on to n levels: cross(cross(cross(cross...... What I want is to have objects initialized using setParseAction, so I will have 2 classes: class object1(object): This will be used by a,b,c,d class object2(object): This will hold the cross elements I need help with this; I am not able to make the recursive parser. A: You should look at definitions of other languages to see how this is usually handled. For example, look at how multiplication is defined. It isn't {expression} * {expression} Because the recursion is hard to deal with, and there's no implied left-to-right ordering. What you see more often are things like {term} + {factor} {factor} * {unary-expression} Which puts priorities and a left-to-right ordering around the operators. Look at something like http://www.cs.man.ac.uk/~pjj/bnf/c_syntax.bnf for examples of how things like this are commonly structured. A: I agree with @S.Lott you should reconsider your grammar. Recursive definitions can be introduced using Forward(): from pyparsing import (Literal, Word, OneOrMore, Forward, nums, alphas) def BNF(): """ element :: id elements :: '{' element [ ',' element ]+ '}' | 'cross' '(' elements ',' '{' element '}' ')' """ lcb, rcb, lb, rb, comma = [Literal(c).suppress() for c in '{}(),'] element = Word(alphas, alphas+nums+"_") # id elements = Forward() elements << ((lcb + element + OneOrMore(comma + element) + rcb) | (Literal('cross') + lb + elements + comma + lcb + element + rcb + rb)) return elements print BNF().parseString("cross(cross({a,b},{c1}),{c2})") Output: ['cross', 'cross', 'a', 'b', 'c1', 'c2'] A: I don't know if this is any help, but here is how you would do what you want in lepl. Since the grammar appears to be correct, I assume that it would be easy to translate to pyparsing. from lepl import * def compile_parser(): class Cross(Node): pass word = Token('[a-z0-9]+') par, en, bra, ket = [~Token('\\'+c) for c in '(){}'] comma = ~Token(',') cross = Delayed() vector = bra & word[1:,comma] & ket > list arg = vector | cross cross += ~word('cross') & par & arg[2,comma] & en > Cross parser = cross.string_parser() return lambda expr: parser(expr)[0] if __name__ == '__main__': parser = compile_parser() print parser('cross({a},{c1})') print parser('cross({a,b},{c1})') print parser('cross({a,b,c,d},{c1})') print parser('cross(cross({a,b},{c1}),{c2})') The output is: Cross +- [u'a'] `- [u'c1'] Cross +- [u'a', u'b'] `- [u'c1'] Cross +- [u'a', u'b', u'c', u'd'] `- [u'c1'] Cross +- Cross | +- [u'a', u'b'] | `- [u'c1'] `- [u'c2']
Need help on making the recursive parser using pyparsing
I am trying Python's pyparsing for parsing. I got stuck while making the recursive parser. Let me explain the problem: I want to make the Cartesian product of the elements. The syntax is cross({elements},{element}) To put it more specifically: cross({a},{c1}) or cross({a,b},{c1}) or cross({a,b,c,d},{c1}) and so on. So in the general form, the first group will have n elements (a,b,c,d) and the second group will have one element, so the final output will be the Cartesian product. The syntax is to be made recursive because it can go n levels deep, like cross(cross({a,b},{c1}),{c2}) This means: cross a,b with c1; let's say the outcome is y; we then cross y with c2. This can go on to n levels: cross(cross(cross(cross...... What I want is to have objects initialized using setParseAction, so I will have 2 classes: class object1(object): This will be used by a,b,c,d class object2(object): This will hold the cross elements I need help with this; I am not able to make the recursive parser.
[ "You should look at definitions of other languages to see how this is usually handled.\nFor example, look at how multiplication is defined.\nIt isn't\n{expression} * {expression}\n\nBecause the recursion is hard to deal with, and there's no implied left-to-right ordering. What you see more often are things like\n{term} + {factor}\n{factor} * {unary-expression}\n\nWhich puts priorities and a left-to-right ordering around the operators.\nLook at something like http://www.cs.man.ac.uk/~pjj/bnf/c_syntax.bnf for examples of how things like this are commonly structured.\n", "I agree with @S.Lott you should reconsider your grammar.\nRecursive definitions can be introduced using Forward():\nfrom pyparsing import (Literal, Word, OneOrMore, Forward, nums, alphas)\n\ndef BNF():\n \"\"\"\n element :: id\n elements :: '{' element [ ',' element ]+ '}' \n | 'cross' '(' elements ',' '{' element '}' ')'\n \"\"\"\n lcb, rcb, lb, rb, comma = [Literal(c).suppress() for c in '{}(),']\n element = Word(alphas, alphas+nums+\"_\") # id\n elements = Forward()\n elements << ((lcb + element + OneOrMore(comma + element) + rcb) \n | (Literal('cross') + lb + elements + comma\n + lcb + element + rcb + rb))\n return elements\n\nprint BNF().parseString(\"cross(cross({a,b},{c1}),{c2})\")\n\nOutput:\n['cross', 'cross', 'a', 'b', 'c1', 'c2']\n\n", "I don't know if this is any help, but here is how you would do what you want in lepl. Since the grammar appears to be correct I assume that it would be easy to translate to pyparsing.\nfrom lepl import *\n\ndef compile_parser():\n\n class Cross(Node): pass\n\n word = Token('[a-z0-9]+')\n par, en, bra, ket = [~Token('\\\\'+c) for c in '(){}']\n comma = ~Token(',')\n\n cross = Delayed()\n vector = bra & word[1:,comma] & ket > list\n arg = vector | cross\n cross += ~word('cross') & par & arg[2,comma] & en > Cross\n\n parser = cross.string_parser()\n return lambda expr: parser(expr)[0]\n\n\nif __name__ == '__main__':\n\n parser = compile_parser()\n print parser('cross({a},{c1})')\n print parser('cross({a,b},{c1})')\n print parser('cross({a,b,c,d},{c1})')\n print parser('cross(cross({a,b},{c1}),{c2})')\n\nThe output is:\nCross\n +- [u'a']\n `- [u'c1']\n\nCross\n +- [u'a', u'b']\n `- [u'c1']\n\nCross\n +- [u'a', u'b', u'c', u'd']\n `- [u'c1']\n\nCross\n +- Cross\n | +- [u'a', u'b']\n | `- [u'c1']\n `- [u'c2']\n\n" ]
[ 6, 4, 3 ]
[]
[]
[ "parsing", "pyparsing", "python", "recursion" ]
stackoverflow_0000634432_parsing_pyparsing_python_recursion.txt
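Since the question also asks how to initialize objects with setParseAction, here is a minimal sketch grafted onto the grammar from the second answer — passing a class as a parse action makes pyparsing construct an instance from the matched tokens; the class names mirror the question's object1/object2 and are otherwise illustrative:
from pyparsing import Literal, Word, OneOrMore, Forward, nums, alphas

class Element(object):             # the question's object1
    def __init__(self, tokens):
        self.name = tokens[0]

class CrossProduct(object):        # the question's object2
    def __init__(self, tokens):
        self.args = tokens.asList()[1:]  # drop the leading 'cross' keyword

lcb, rcb, lb, rb, comma = [Literal(c).suppress() for c in '{}(),']
element = Word(alphas, alphas + nums + '_').setParseAction(Element)
elements = Forward()
group = lcb + element + OneOrMore(comma + element) + rcb
cross = Literal('cross') + lb + elements + comma + lcb + element + rcb + rb
cross.setParseAction(CrossProduct)
elements << (group | cross)

product = elements.parseString('cross({a,b},{c1})')[0]
print [e.name for e in product.args]  # prints ['a', 'b', 'c1']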
Q: DBus Python Problems When I try to get the idle time of the GNOME screensaver in seconds through D-Bus, Python throws a TypeError. In the documentation I found for the screensaver's GetSessionIdleTime, it returns an unsigned integer. http://www.gnome.org/~mccann/gnome-screensaver/docs/gnome-screensaver.html#gs-method-GetSessionIdle However, when I'm in the Python shell, the output is converted to a string, while I can't seem to be able to cast it as a string in the program. gs = bus.get_object('org.gnome.ScreenSaver','/org/gnome/ScreenSaver') message = str(gs.GetSessionIdleTime()) A: str(gs.GetSessionIdleTime()) casts the integer into a string. And after that, using + on the string variable incorporated it into another D-Bus call that was made with the output.
DBus Python Problems
When I try to get the idle time of the GNOME screensaver in seconds through D-Bus, Python throws a TypeError. In the documentation I found for the screensaver's GetSessionIdleTime, it returns an unsigned integer. http://www.gnome.org/~mccann/gnome-screensaver/docs/gnome-screensaver.html#gs-method-GetSessionIdle However, when I'm in the Python shell, the output is converted to a string, while I can't seem to be able to cast it as a string in the program. gs = bus.get_object('org.gnome.ScreenSaver','/org/gnome/ScreenSaver') message = str(gs.GetSessionIdleTime())
[ "str(gs.GetSessionIdleTime()) cast the integer into a string.\nAnd after that, using + in a string variable incorporated it into another dbus call that was called by the output.\n" ]
[ 0 ]
[]
[]
[ "dbus", "gnome", "python" ]
stackoverflow_0001672113_dbus_gnome_python.txt
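A short sketch of the working pattern for reference — dbus-python's numeric return types (such as dbus.UInt32) subclass Python's own integers, so plain int() and str() conversions behave as expected; the explicit interface keyword is optional and shown only for clarity:
import dbus

bus = dbus.SessionBus()
gs = bus.get_object('org.gnome.ScreenSaver', '/org/gnome/ScreenSaver')
idle = gs.GetSessionIdleTime(dbus_interface='org.gnome.ScreenSaver')

seconds = int(idle)  # dbus.UInt32 -> plain integer
message = 'idle for %d seconds' % seconds
print message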
Q: How to match alphabetical chars without numeric chars with Python regexp? Using Python module re, how to get the equivalent of the "\w" (which matches alphanumeric chars) WITHOUT matching the numeric characters (those which can be matched by "[0-9]")? Notice that the basic need is to match any character (including all unicode variation) without numerical chars (which are matched by "[0-9]"). As a final note, I really need a regexp as it is part of a greater regexp. Underscores should not be matched. EDIT: I hadn't thought about underscores state, so thanks for warnings about this being matched by "\w" and for the elected solution that addresses this issue. A: You want [^\W\d]: the group of characters that is not (either a digit or not an alphanumeric). Add an underscore in that negated set if you don't want them either. A bit twisted, if you ask me, but it works. Should be faster than the lookahead alternative. A: (?!\d)\w A position that is not followed by a digit, and then \w. Effectively cancels out digits but allows the \w range by using a negative look-ahead. The same could be expressed as a positive look-ahead and \D: (?=\D)\w To match multiple of these, enclose in parens: (?:(?!\d)\w)+
How to match alphabetical chars without numeric chars with Python regexp?
Using Python module re, how to get the equivalent of the "\w" (which matches alphanumeric chars) WITHOUT matching the numeric characters (those which can be matched by "[0-9]")? Notice that the basic need is to match any character (including all unicode variation) without numerical chars (which are matched by "[0-9]"). As a final note, I really need a regexp as it is part of a greater regexp. Underscores should not be matched. EDIT: I hadn't thought about underscores state, so thanks for warnings about this being matched by "\w" and for the elected solution that addresses this issue.
[ "You want [^\\W\\d]: the group of characters that is not (either a digit or not an alphanumeric). Add an underscore in that negated set if you don't want them either.\nA bit twisted, if you ask me, but it works. Should be faster than the lookahead alternative.\n", "(?!\\d)\\w\n\nA position that is not followed by a digit, and then \\w. Effectively cancels out digits but allows the \\w range by using a negative look-ahead.\nThe same could be expressed as a positive look-ahead and \\D:\n(?=\\D)\\w\n\nTo match multiple of these, enclose in parens:\n(?:(?!\\d)\\w)+\n\n" ]
[ 37, 9 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001673749_python_regex.txt
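A quick check of both suggested patterns against a mixed string (Python 2; re.UNICODE widens \w to the full unicode range the question asks about):
import re

text = u'abc_123 \xe9\xe9 42'
print re.findall(r'[^\W\d_]+', text, re.UNICODE)      # [u'abc', u'\xe9\xe9']
print re.findall(r'(?:(?!\d)\w)+', text, re.UNICODE)  # [u'abc_', u'\xe9\xe9'] -- keeps underscores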
Q: How to generate graphical sitemap of large website I would like to generate a graphical sitemap for my website. There are two stages, as far as I can tell: crawl the website and analyse the link relationship to extract the tree structure generate a visually pleasing render of the tree Does anyone have advice or experience with achieving this, or know of existing work I can build on (ideally in Python)? I came across some nice CSS for rendering the tree, but it only works for 3 levels. Thanks A: The only automatic way to create a sitemap is to know the structure of your site and write a program which builds on that knowledge. Just crawling the links won't usually work because links can be between any pages so you get a graph (i.e. connections between nodes). There is no way to convert a graph into a tree in the general case. So you must identify the structure of your tree yourself and then crawl the relevant pages to get the titles of the pages. As for "but it only works for 3 levels": Three levels is more than enough. If you try to create more levels, your sitemap will become unusable (too big, too wide). No one will want to download a 1MB sitemap and then scroll through 100'000 pages of links. If your site grows that big, then you must implement some kind of search. A: Here is a python web crawler, which should make a good starting point. Your general strategy is this: you need to take care that outbound links are never followed, including links on the same domain but higher up than your starting point. as you spider, the site collect a hash of page urls mapped to a list of all the internal urls included in each page. take a pass over this list, assigning a token to each unique url. use your hash of {token => [tokens]} to generate a graphviz file that will lay out a graph for you convert the graphviz output into an imagemap where each node links to its corresponding webpage The reason you need to do all this is, as leonm noted, that websites are graphs, not trees, and laying out graphs is a harder problem than you can do in a simple piece of javascript and css. Graphviz is good at what it does. A: Please see http://aaron.oirt.rutgers.edu/myapp/docs/W1100_2200.TreeView on how to format tree views. You can also probably modify the example application http://aaron.oirt.rutgers.edu/myapp/DirectoryTree/index to scrape your pages if they are organized as directories of HTML files.
How to generate graphical sitemap of large website
I would like to generate a graphical sitemap for my website. There are two stages, as far as I can tell: (1) crawl the website and analyse the link relationships to extract the tree structure, and (2) generate a visually pleasing render of the tree. Does anyone have advice or experience with achieving this, or know of existing work I can build on (ideally in Python)? I came across some nice CSS for rendering the tree, but it only works for 3 levels. Thanks
[ "The only automatic way to create a sitemap is to know the structure of your site and write a program which builds on that knowledge. Just crawling the links won't usually work because links can be between any pages so you get a graph (i.e. connections between nodes). There is no way to convert a graph into a tree in the general case.\nSo you must identify the structure of your tree yourself and then crawl the relevant pages to get the titles of the pages.\nAs for \"but it only works for 3 levels\": Three levels is more than enough. If you try to create more levels, your sitemap will become unusable (too big, too wide). No one will want to download a 1MB sitemap and then scroll through 100'000 pages of links. If your site grows that big, then you must implement some kind of search.\n", "Here is a python web crawler, which should make a good starting point. Your general strategy is this:\n\nyou need to take care that outbound links are never followed, including links on the same domain but higher up than your starting point. \nas you spider, the site collect a hash of page urls mapped to a list of all the internal urls included in each page. \ntake a pass over this list, assigning a token to each unique url.\nuse your hash of {token => [tokens]} to generate a graphviz file that will lay out a graph for you\nconvert the graphviz output into an imagemap where each node links to its corresponding webpage\n\nThe reason you need to do all this is, as leonm noted, that websites are graphs, not trees, and laying out graphs is a harder problem than you can do in a simple piece of javascript and css. Graphviz is good at what it does.\n", "Please see http://aaron.oirt.rutgers.edu/myapp/docs/W1100_2200.TreeView\non how to format tree views. You can also probably modify the example application\nhttp://aaron.oirt.rutgers.edu/myapp/DirectoryTree/index to scrape your\npages if they are organized as directories of HTML files.\n" ]
[ 4, 3, 1 ]
[]
[]
[ "python", "sitemap", "web", "web_crawler" ]
stackoverflow_0001672532_python_sitemap_web_web_crawler.txt
Q: How can I get the full list of running processes on a Mac from a python app I want to get the list of running processes on the Mac, similar to what you get from 'ps -ea'. I have tried os.popen('ps -ea'), but this only lists a small subset of the processes, presumably those owned by the owning shell. Other options I have tried are: 'sh -c /bin/ps -ea'; 'bash -c /bin/ps -ea'; 'csh -c /bin/ps -ea'; running as root via sudo; and data = subprocess.Popen(['ps','ea'], stdout=subprocess.PIPE).stdout.readlines(). What other methods are there that might give me the full process information listing? This is for a wxPython app to monitor specific processes and spot when they die. A: os.popen('ps aux') looks like it's listing all processes for me.
How can I get the full list of running processes on a Mac from a python app
I want to get the list of running processes on the Mac, similar to what you get from 'ps -ea'. I have tried os.popen('ps -ea'), but this only lists a small subset of the processes, presumably those owned by the owning shell. Other options I have tried are: 'sh -c /bin/ps -ea'; 'bash -c /bin/ps -ea'; 'csh -c /bin/ps -ea'; running as root via sudo; and data = subprocess.Popen(['ps','ea'], stdout=subprocess.PIPE).stdout.readlines(). What other methods are there that might give me the full process information listing? This is for a wxPython app to monitor specific processes and spot when they die.
[ "os.popen('ps aux') looks like it's listing all processes for me.\n" ]
[ 8 ]
[]
[]
[ "macos", "process", "python" ]
stackoverflow_0001673874_macos_process_python.txt