Dataset schema:
content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: Splitting up a list into parts of balanced lengths

I need an algorithm which, given a list L and a number N, returns a list of N smaller lists where the sublists are "balanced". Examples:

algo(range(1, 8), 3)  -> [[1,2,3], [4,5], [6,7]]
algo(range(1, 6), 4)  -> [[1,2], [3], [4], [5]]
algo(range(1, 12), 5) -> [[1,2,3], [4,5], [6,7], [8,9], [10, 11]]

As you can see, the algorithm should "prefer" the first lists in the output. I've been trying for hours, but I can't figure out a nice and terse algorithm for it. This will be implemented in Python, by the way, but it's really the algorithm that I'm after here. This is not homework; this is for a website which will display contents in a list in three columns (Django). I got the best answer from #python on freenode and it is as follows:

def split_up(l, n):
    q, r = divmod(len(l), n)
    def division_point(i):
        return i * q + min(i, r)
    return [l[division_point(i):division_point(i+1)] for i in range(n)]

Don't ask me why it works, though. :) I'll give the correct answer to the one with the most votes.

A: This is the code I came up with, without the sorting. Just slap on a lst.sort() if the input is not sorted. I think this came out nicely, using iterators and using islice to cut off the next piece.

import itertools

def partlst(lst, n):
    """Partition @lst in @n balanced parts, in given order"""
    parts, rest = divmod(len(lst), n)
    lstiter = iter(lst)
    for j in xrange(n):
        plen = parts + (1 if rest > 0 else 0)
        rest -= 1
        yield list(itertools.islice(lstiter, plen))

parts = list(partlst(range(1, 15), 5))
print len(parts)
print parts

A: Assuming you want the output to contain lists of equal length when possible, and otherwise to give preference to the lists at the beginning, with the difference between the lengths of sub-lists never more than one:

>>> l = [0, 1, 2, 3, 4, 5, 6]
>>> def algo(li, n):
        a, b = divmod(len(li), n)
        c = [a + 1] * b + [a] * (n - b)
        s = 0
        for i, j in enumerate(c):
            c[i] = li[s:s+j]
            s += j
        return c

>>> algo(l, 3)
[[0, 1, 2], [3, 4], [5, 6]]
>>> algo(l, 4)
[[0, 1], [2, 3], [4, 5], [6]]

A: If I understand your problem, for algo(range(a, b), n) you only have to add one extra item to each of the first few lists. So you should:

Have b - a > n
Calculate b - a = n*x + y (Python's % operator gives you y directly: y = (b - a) % n)
The first y lists will have ((b - a) / n) + 1 elements, and the other lists will have (b - a) / n

A: Here's a tribute to functional lovers:

def algo(l, n):
    if n == 1: return [l]
    q, r = divmod(len(l), n)
    if r: q += 1
    return [l[:q]] + algo(l[q:], n - 1)

This one is a little bit smaller:

def algo(l, n):
    k = l[:]
    q, r = divmod(len(l), n)
    return [[k.pop(0) for _ in [0] * m]
            for m in [q + 1] * r + [q] * (n - r)]

A: Bit late to the party, but...

def algo(l, n):
    return [l[-(-len(l)*i//n):-(-len(l)*(i+1)//n)] for i in range(n)]

Use / instead of // in older versions of Python.
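A note on why division_point works, since the asker left it unexplained (this explanation is an addition, not from the thread): divmod(len(l), n) yields the base chunk size q and remainder r, and the min(i, r) term shifts each cut point right by one for the first r chunks, so exactly the first r sublists absorb the leftover elements. A quick sanity check, assuming Python 3 (hence the list() around range()):

def split_up(l, n):
    q, r = divmod(len(l), n)
    def division_point(i):
        # the first r cut points each shift right by one extra element
        return i * q + min(i, r)
    return [l[division_point(i):division_point(i + 1)] for i in range(n)]

# chunk i has length q + 1 when i < r, and q otherwise
assert split_up(list(range(1, 8)), 3) == [[1, 2, 3], [4, 5], [6, 7]]
assert split_up(list(range(1, 6)), 4) == [[1, 2], [3], [4], [5]]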
Splitting up a list into parts of balanced lengths
I need an algorithm which given a list L and a number N, returns a list of N smaller lists where the sublists are "balanced". Examples: algo(range(1, 8), 3) -> [[1,2,3], [4,5], [6,7]] algo(range(1, 6), 4) -> [[1,2], [3], [4], [5]] algo(range(1, 12), 5) -> [[1,2,3], [4,5], [6,7], [8,9], [10, 11]] As you can see, the algorithm should "prefer" the first list in the output. I've been trying for hours, but I can't figure out a nice and terse algorithm for it. This will be implemented in Python, by the way, but it's really the algorithm that I'm after here. This is not homework, this is for a website which will display contents in a list in three columns (Django). I got the best answer from #python on freenode and it is as follows: def split_up(l, n): q, r = divmod(len(l), n) def division_point(i): return i * q + min(i, r) return [l[division_point(i):division_point(i+1)] for i in range(n)] Don't ask me why it works though. :) I'll give the correct answer to the one with most votes though.
[ "This is the code I came up with, without the sorting. Just slap on a lst.sort() if the input is not sorted.\nI think this came out nicely, using iterators and using islice to cut off the next piece.\nimport itertools\n\ndef partlst(lst, n):\n \"\"\"Partition @lst in @n balanced parts, in given order\"\"\"\n parts, rest = divmod(len(lst), n)\n lstiter = iter(lst)\n for j in xrange(n):\n plen = len(lst)/n + (1 if rest > 0 else 0)\n rest -= 1\n yield list(itertools.islice(lstiter, plen))\n\nparts = list(partlst(range(1, 15), 5))\nprint len(parts)\nprint parts\n\n", "Assuming you want output to contain lists of equal length when possible, otherwise give preference to lists in the beginning. Difference between lengths of sub-lists no more than one.\n>>> l = [0, 1, 2, 3, 4, 5, 6]\n>>> def algo(li, n):\n a, b = divmod(len(li), n)\n c = [a + 1] * b + [a] * (n-b)\n s = 0\n for i, j in enumerate(c):\n c[i] = li[s:s+j]\n s += j\n return c\n\n>>> algo(l, 3)\n[[0, 1, 2], [3, 4], [5, 6]]\n>>> algo(l, 4)\n[[0, 1], [2, 3], [4, 5], [6]]\n\n", "If I understand your problem... you would only have to add one item for each list under mod(n), where you have algo (range(a,b), n)\nSo you should:\n\nHave b-a > n\nCalculate b-a = n*x + y (I dont really know if the operator % exists on python, so you should get y)\nThe first y lists will have (b-a/n + 1) elements and the other lists will have (b-a/n)\n\n", "Here's a tribute to functional lovers:\ndef algo(l, n):\n if n == 1: return [l]\n q, r = divmod(len(l),n)\n if r: q += 1\n return [l[:q]] + algo(l[q:], n - 1)\n\nThis one is a little bit smaller:\ndef algo(l, n):\n k = l[:]\n q, r = divmod(len(l),n)\n return [[k.pop(0) for _ in [0] * m]\n for m in [q + 1] * r + [q] * (n - r)]\n\n", "Bit late to the party, but...\ndef algo(l, n):\n return [l[-(-len(l)*i//n):-(-len(l)*(i+1)//n)] for i in range(n)]\n\nUse / instead of // in older versions of Python.\n" ]
[ 5, 1, 0, 0, 0 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0001380162_algorithm_python.txt
Q: Dynamically decompose list into variables in Python

I have a 2-dimensional list created at runtime (the number of entries in either dimension is unknown). For example:

long_list = [ [2, 3, 6], [3, 7, 9] ]

I want to iterate through it by getting the ith entry from each list inside the long_list:

for entry in long_list.iter():
    # entry will be [2, 3] then [3, 7] then [6, 9]

I know that Python's itertools.izip_longest() method does this, except it takes a different variable for each list:

itertools.izip_longest(var1, var2, var3, ...)

So, how do I split my long_list into a different variable for each list and then call izip_longest() with all those variables at runtime?

A: >>> long_list = [ [2, 3, 6], [3, 7, 9] ]
>>> import itertools
>>> for i in itertools.izip_longest(*long_list): # called zip_longest in py3k
        print(i)

(2, 3)
(3, 7)
(6, 9)

Basically, you need to use the unpacking feature here. It would work similarly for zip.
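For reference on current interpreters (an addition, not from the original answer): in Python 3 the same unpacking works with itertools.zip_longest, which also takes a fillvalue for inner lists of uneven length:

import itertools

long_list = [[2, 3, 6], [3, 7, 9, 1]]
# * unpacks the outer list, so each inner list becomes a separate argument
for entry in itertools.zip_longest(*long_list, fillvalue=None):
    print(entry)  # (2, 3), (3, 7), (6, 9), (None, 1)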
Dynamically decompose list into variables in Python
I have 2 dimensional list created at runtime (the number of entries in either dimension is unknown). For example: long_list = [ [2, 3, 6], [3, 7, 9] ] I want to iterate through it by getting the ith entry from each list inside the long_list: for entry in long_list.iter(): #entry will be [2, 3] then [3, 7] then [6, 9] I know that Python's itertools.izip_longest() method does this. Except it takes in a different variable for each list. itertools.izip_longest(var1, var2, var3 ...) So, how do I split my long_list into a different variable for each list and then call izip_longest() with all those variable at runtime?
[ ">>> long_list = [ [2, 3, 6], [3, 7, 9] ]\n>>> import itertools\n>>> for i in itertools.izip_longest(*long_list): # called zip_longest in py3k\n print(i)\n\n\n(2, 3)\n(3, 7)\n(6, 9)\n\nBasically, you need to use unpacking feature here. It would work similarly for zip.\n" ]
[ 4 ]
[]
[]
[ "list", "python", "python_itertools", "reflection" ]
stackoverflow_0001674621_list_python_python_itertools_reflection.txt
Q: How to store callback methods?

I am trying to store some method callbacks, but referring to a method keeps the bound object alive, so I tried to keep a weakref to the method instead; that doesn't seem to be possible. So:

Why can't I keep a weak reference to a method (see the example below)?
What is the best way to keep a method reference? Is there anything in the standard library, or will I have to keep the function and object references separately?

Example:

import weakref

class A(object):
    def m(self):
        pass

a = A()
rm = weakref.ref(a.m)
print "is weak ref to method dead?", rm() is None
print "Q1. why can't i keep weakref to bound method?"

ra = weakref.ref(a)
m = a.m
print "delete object"
del a
print "is object dead?", ra() is None
print "delete method"
del m
print "is object dead?", ra() is None
print "Q2. hmmm so i am stuck i can't keep a ref as it stops the object from gc, but weakref to method isn't working?"

A: Since the method is bound to the object, what would you expect to do with it if the object doesn't exist? What would self contain? If you don't need the object in the method, make it a classmethod. Then your object will be GC'd even if you have a normal reference to the method.

A: Recipe 6.10 in the Python Cookbook, "Keeping References to Bound Methods Without Inhibiting Garbage Collection", offers a pretty thorough though concise discussion and solutions. You can read it online (on Google Books) here; we give credit for that recipe to Knapka, Jolliton and Nicodemus (partly from the original ActiveState cookbook recipe that another answer already mentioned), though of course, as usual in the Cookbook, we (me, my wife Anna, and David Ascher) are the ones responsible for the overall flow of discussion and the exact code version chosen for printing, so, if something's wrong with those, it's our fault ;-).

A: I have asked the same question here! In my question, I talk about GObject, but recognize it is a general problem in any kind of Python! I got help from lioro there, and what I use in my current code is below. Some important points:

You can't weakref the method object. You have to weakref the instance and its function attribute, or simply the method name (as I do in my code below).
You can add some mechanism to unregister the callback when your connected object goes away; if you don't do this, the WeakCallback object will live on instead and execute an empty method when the event occurs.

class WeakCallback (object):
    """A Weak Callback object that will keep a reference to
    the connecting object with weakref semantics.

    This allows object A to pass a callback method to object S,
    without object S keeping A alive.
    """
    def __init__(self, mcallback):
        """Create a new Weak Callback calling the method @mcallback"""
        obj = mcallback.im_self
        attr = mcallback.im_func.__name__
        self.wref = weakref.ref(obj, self.object_deleted)
        self.callback_attr = attr
        self.token = None

    def __call__(self, *args, **kwargs):
        obj = self.wref()
        if obj:
            attr = getattr(obj, self.callback_attr)
            attr(*args, **kwargs)
        else:
            self.default_callback(*args, **kwargs)

    def default_callback(self, *args, **kwargs):
        """Called instead of callback when expired"""
        pass

    def object_deleted(self, wref):
        """Called when callback expires"""
        pass

Usage notes:

# illustration how I typically use it
weak_call = WeakCallback(self._something_changed)
long_lived_object.connect("on_change", weak_call)

I use the WeakCallback.token attribute in subclasses I've made to manage disconnecting the callback when the connecter goes away.

A: They have a nice solution in http://code.activestate.com/recipes/81253/ (take a look at the last example, posted by Anonymous).
How to store callback methods?
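Worth noting for readers on current interpreters (an addition, not part of the archived answers): since Python 3.4 the standard library covers exactly this case with weakref.WeakMethod, which weakly references both the instance and the function, so the WeakCallback recipe above is mainly a Python 2 workaround:

import weakref

class A:
    def m(self):
        print("called")

a = A()
wm = weakref.WeakMethod(a.m)
wm()()                 # wm() yields the bound method while `a` is alive
del a
print(wm() is None)    # True: the method reference died with the instance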
i am trying to store some method callbacks but referring to it will keep the bound object alive, so i tried to keep a weakref to method but that doesn't seems to be possible? so Why can't i keep a weak ref. to method (see example below) What is the best way to keep method ref? any thing in standard lib? Or I will have to keep function and object ref. separate? example: import weakref class A(object): def m(self): pass a = A() import weakref class A(object): def m(self): pass a = A() rm = weakref.ref(a.m) print "is weak ref to method dead?",rm() is None print "Q1. why can't i keep weakref to bound method?" ra = weakref.ref(a) m = a.m print "delete object" del a print "is object dead?",ra() is None print "delete method" del m print "is object dead?",ra() is None print "Q2. hmmm so i am stuck i can't keep a ref as it stops the object from gc, but weakref to method isn't working?"
[ "Since the method is bound to the object, what would you expect to do with it if the object doesnt exist? What would self contain?\nIf you dont need the object in the method, make it a classmethod. Then your object will be GC:d even if you have a normal reference to the method.\n", "Recipe 6.10 in Python Cookbook, \"Keeping References to Bound Methods Without Inhibiting Garbage Collection\", offers a pretty thorough though concise discussion and solutions. You can read it online (on Google Books) here; we give credit for that recipe to Knapka, Jolliton and Nicodemus (partly from the original activestate cookbook recipe that another answer already mentioned) though of course, as usual in the Cookbook, we (me, my wife Anna, and David Ascher) are the ones responsible for the overall flow of discussion and the exact code version chosen for printing, so, if something's wrong with those, it's our fault;-).\n", "I have asked the same question here! In my question, I talk about GObject, but recognize it is a general problem in any kind of Python! I got help by lioro there, and what I use in my current code is below. Some important points:\n\nYou can't weakref the method object. You have to weakref the instance and its function attribute, or simply the method name (as I do in my code below)\nYou can add some mechanism to unregister the callback when your connected object goes away, if you don't do this, you will have the WeakCallback object live on instead, and exectute an empty method when the even occurs.\n\n.\nclass WeakCallback (object):\n \"\"\"A Weak Callback object that will keep a reference to\n the connecting object with weakref semantics.\n\n This allows object A to pass a callback method to object S,\n without object S keeping A alive.\n \"\"\"\n def __init__(self, mcallback):\n \"\"\"Create a new Weak Callback calling the method @mcallback\"\"\"\n obj = mcallback.im_self\n attr = mcallback.im_func.__name__\n self.wref = weakref.ref(obj, self.object_deleted)\n self.callback_attr = attr\n self.token = None\n\n def __call__(self, *args, **kwargs):\n obj = self.wref()\n if obj:\n attr = getattr(obj, self.callback_attr)\n attr(*args, **kwargs)\n else:\n self.default_callback(*args, **kwargs)\n\n def default_callback(self, *args, **kwargs):\n \"\"\"Called instead of callback when expired\"\"\"\n pass\n\n def object_deleted(self, wref):\n \"\"\"Called when callback expires\"\"\"\n pass\n\nUsage notes:\n# illustration how I typically use it\nweak_call = WeakCallback(self._something_changed)\nlong_lived_object.connect(\"on_change\", weak_call)\n\nI use the WeakCallback.token attribute in subclasses I've made to manage disconnecting the callback when the connecter goes away\n", "They have a nice solution in:\nhttp://code.activestate.com/recipes/81253/\nTake a look at the last example, posted by Anonymous. \n" ]
[ 3, 2, 0, 0 ]
[]
[]
[ "methods", "python", "weak_references" ]
stackoverflow_0001673483_methods_python_weak_references.txt
Q: Dictionary Operations... Index / Iterate / Validate

I'd like to:

Check a key / value at position i
Check to see if a key / value contains a string
Delete, or store in another variable, either the key or the value

The equivalent of this Java code:

// Some list...
ArrayList<String> example;
...
// Index into data structure
example.get(i);
// Check for some string...
if (example.get(i).contains("someText")){
    somestuff;
}
// Store in some other variable
exam = example.get(i)

That's what I'm effectively trying to do in Java; however, I'd like to be able to do that with Python dictionaries. I'm not sure if this is possible, as I find the Python documentation rather hard to read.

A: Python dictionaries are implemented as hash tables, so there is no intrinsic ordering; therefore, "position i" is a totally nonsensical concept for a dict -- it's like asking for the dict entry that's most yellow, or that least resembles a llama... those concepts just don't apply to dict entries, and "position i" is just as totally inapplicable. Where does that i come from, i.e., what's the real issue you're trying to solve? If your requirement is to iterate over the dictionary, you do that directly, without the crutch of a "numeric index". Or, if you do need to keep some specific order or other, then you don't use a dict, but rather some different data structure. If you explain exactly the purpose you're trying to address, I'm sure we can help you.

A: Direct translation (for an ArrayList<String>, you do not want a dictionary, you want a list):

example = ["foo", "bar", "baz"]
str = example[i]
if "someText" in str:
    somestuff()

Get used to the for keyword, though, it's awesome in Python:

for str in example:
    if "someText" in str:
        someStuff()

Here's an example using dictionaries:

fruits = {
    "apple": "red",
    "orange": "orange",
    "banana": "yellow",
    "pear": "green"
}

for key in fruits:
    if key == "apple":
        print "An apple is my favorite fruit, and it is", fruits[key]
    else:
        print "A", key, "is not my favorite fruit, and it is", fruits[key]

Iteration using for on a dictionary yields the keys; it's still up to you to index the item itself. As Alex pointed out, we're really off-base answering you with so little information, and it sounds like you're not well-rooted in data structures (dictionaries will probably yield a different order every time you iterate them).

A: You can do this to reproduce the same behavior as your example in Java.

# Some list
example = {} # or example = dict()
...
# Index into data structure.
example[example.keys()[i]]
# Check for some string...
if example[example.keys()[i]] == 'someText':
    pass
# Store in some other variable...
exam = example[example.keys()[i]]
del example[example.keys()[i]]
# ...or
exam = example.pop(example.keys()[i])

A: What's nice about Python is that you can try code interactively. So we create a list, which is like a Java List:

>>> mylist = ["python","java","ruby"]
>>> mylist
['python', 'java', 'ruby']

We can get an entry in the list via its index:

>>> mylist[0]
'python'

And use the find function to search for substrings:

>>> mylist[1].find("av")
1
>>> mylist[1].find("ub")
-1

It returns -1 if the string isn't found.

Copying an entry to a new variable is done just how you'd expect:

>>> newvalue = mylist[2]
>>> newvalue
'ruby'

Or we can create a dict, which is like a Java Map, storing by key rather than index; these work very similarly to lists in Python:

>>> mydict = { 'python':'Guido', 'java':'James', 'ruby':'Yukihiro' }
>>> mydict['java']
'James'
>>> othervalue = mydict['ruby']
>>> othervalue
'Yukihiro'
>>> mydict['python'].find('uid')
1
>>> mydict['python'].find('hiro')
-1
>>> mydict['ruby'].find('hiro')
4
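One update worth adding (not from the thread): the "no intrinsic ordering" point was true when this was written, but CPython 3.7+ guarantees insertion order for dicts, so "position i" can be emulated when genuinely needed. A sketch assuming Python 3.7+:

fruits = {"apple": "red", "orange": "orange", "banana": "yellow"}

i = 1
key = list(fruits)[i]        # 'orange': keys come back in insertion order
if "ran" in key:             # substring test on the key
    value = fruits.pop(key)  # delete the entry and keep its value
    print(key, value)        # orange orange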
Dictionary Operations... Index / Iterate / Validate
I'd like to: Check a key / value at position i Check to see if key / value contains a string delete / store in another variable either the key / value The equivelant of this Java code: //Some list... ArrayList<String> example; ... //Index into data structure example.get(i); //Check for some string... if (example.get(i).contains("someText")){ somestuff; } //Store in some other variable exam = example.get(i) That's what I'm effectively trying to in Java, however I'd like to be able to do that with Python dictionarties however I'm not sure if this is possible, as I find the Python documentation rather hard to read.
[ "Python dictionaries are implemented as hash tables, so there is no intrinsic ordering; therefore, \"position i\" is a totally nonsensical concept for a dict -- it's like asking for the dict entry that's most yellow, or that least resembles a llama... those concepts just don't apply to dict entries, and \"position i\" is just as totally inapplicable.\nWhere does that i come from, i.e., what's the real issue you're trying to solve? If your requirement is to iterate over the dictionary, you do that directly, without the crutch of a \"numeric index\". Or, if you do need to keep some specific order or other, then you don't use a dict, but rather some different data structure. If you explain exactly the purpose you're trying to address, I'm sure we can help you.\n", "Direct translation (for an ArrayList<String>, you do not want a dictionary, you want a list):\nexample = [\"foo\", \"bar\", \"baz\"]\nstr = example[i]\nif \"someText\" in str:\n somestuff()\n\nGet used to the for keyword, though, it's awesome in Python:\nfor str in example:\n if \"someText\" in str:\n someStuff()\n\n\nHere's an example using dictionaries:\nfruits = {\n \"apple\": \"red\",\n \"orange\": \"orange\",\n \"banana\": \"yellow\",\n \"pear\": \"green\"\n}\n\nfor key in fruits:\n if fruits[key] == \"apple\":\n print \"An apple is my favorite fruit, and it is\", fruits[key]\n else:\n print \"A\", key, \"is not my favorite fruit, and it is\", fruits[key]\n\nIteration using for on a dictionary results in the keys, it's still up to you to index the item itself. As Alex pointed out, we're really off-base answering you with so little information, and it sounds like you're not well-rooted in data structures (dictionaries will probably yield a different order every time you iterate it).\n", "Yo can do that to reproduce the same behavior that your example in Java.\n# Some list\nexample = {} # or example = dict()\n...\n# Index into data estructure.\nexample[example.keys(i)]\n# Check for some string...\nif example[example.keys(i)] == 'someText' :\n pass\n# Store in some other variable...\nexam = example[example.keys(i)]\ndel example[example.keys(i)]\n# ...or\nexam = example.pop(example.keys(i))\n\n", "What's nice about Python is that you can try code interactively.\nSo we create a list which is like a Java List:\n>>> mylist = [\"python\",\"java\",\"ruby\"]\n>>> mylist\n['python', 'java', 'ruby']\n\nWe can get an entry in the list via its index:\n>>> mylist[0]\n'python'\n\nAnd use the find function to search for substrings:\n>>> mylist[1].find(\"av\")\n1\n>>> mylist[1].find(\"ub\")\n-1\n\nIt returns -1 if the string isn't found.\nCopying an entry to a new variable is done just how you'd expect:\n>>> newvalue = mylist[2]\n>>> newvalue\n'ruby'\n\nOr we can create a dict which is like a Java Map, storing by key rather than index, but these work very similarly to lists in Python:\n>>> mydict = { 'python':'Guido', 'java':'James', 'ruby':'Yukihiro' }\n>>> mydict['java']\n'James'\n>>> othervalue = mydict['ruby']\n>>> othervalue\n'Yukihiro'\n>>> mydict['python'].find('uid')\n1\n>>> mydict['python'].find('hiro')\n-1\n>>> mydict['ruby'].find('hiro')\n4\n\n" ]
[ 5, 3, 1, 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0001674683_dictionary_python.txt
Q: Can I add Runtime Properties to a Python App Engine App?

Coming from a Java background, I'm used to having a bunch of properties files I can swap around at runtime depending on which server I'm running on, e.g. dev/production. Is there a method in Python to do something similar, specifically on Google's App Engine framework? At the minute I have them defined in .py files; obviously I'd like a better separation.

A: You can:

edit records in the datastore through the dashboard (if you really have to)
upload new scripts / files (you can access files in READ-ONLY)
export a web service API to configuration records in the datastore (probably not what you had in mind)
access a page somewhere through an HTTP end-point

A: I don't see what is wrong with using Python files to configure your application (apart from cultural issues :) ). In fact, I have an issue with frameworks which don't allow me to script the configuration parameters. That said, please have a look at http://aaron.oirt.rutgers.edu/myapp/docs/W1100_2300.GAEDeploy for a discussion of how to configure WHIFF application resources to configure applications to work in and out of the GAE framework in a portable manner.
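One common pattern for the dev/production split (my own sketch, not from the answers: checking SERVER_SOFTWARE was a widespread App Engine idiom, but treat it, and the settings_dev/settings_prod module names, as assumptions): pick a settings module based on the environment.

import os

# the App Engine dev server is assumed to report itself via SERVER_SOFTWARE
if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
    import settings_dev as settings   # hypothetical module
else:
    import settings_prod as settings  # hypothetical module

print(settings.DB_NAME)  # DB_NAME is a placeholder property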
Can I add Runtime Properties to a Python App Engine App?
Coming from a java background I'm used to having a bunch of properties files I can swap round at runtime dependent on what server I'm running on e.g. dev/production. Is there a method in python to do similar, specifically on Google's App Engine framework? At the minute I have them defined in .py files, obviously I'd like a better separation.
[ "You can:\n\nedit records in the datastore through the dashboard ( if you really have to )\nupload new scripts / files ( you can access files in READ-ONLY )\nexport a WEB Service API to configuration records in the datastore ( probably not what you had in mind )\naccess a page somewhere through an HTTP end-point\n\n", "I don't see what is wrong with using python files to configure your application (apart from cultural issues :) ). In fact I have an issue with frameworks which don't allow me to script the configuration parameters.\nThat said, please have a look http://aaron.oirt.rutgers.edu/myapp/docs/W1100_2300.GAEDeploy for\na discussion of how to configure WHIFF application resources to configure applications to work in and out of the GAE framework in a portable manner.\n" ]
[ 1, 1 ]
[]
[]
[ "google_app_engine", "properties", "python", "runtime" ]
stackoverflow_0001674764_google_app_engine_properties_python_runtime.txt
Q: Delineating a Read File

Not really too sure how to word this question, so if you don't particularly understand it then I can try again. I have a file called example.txt and I'd like to import it into my Python program, where I will do some calculations with what it contains, and other things that are irrelevant. Instead of me importing this file, going through it line-by-line and extracting the information I want, can Python do it instead? That is, if I structure the .txt correctly (whether it be key / value pairs separated by an equals sign on each line), is there a current Python 'way' which can handle it all so that I can work with that?

A: with open("example.txt") as f:
    for line in f:
        key, value = line.strip().split("=")
        do_something(key, value)

looks like a starting point if I understand you correctly. You need Python 2.6 or 3.x for this. Another place to look is the csv module, which can parse comma-separated value files, and you can tell it to use = as a separator instead. This will abstract away some of the "manual work" of the previous example, but it seems your example doesn't especially need that kind of abstraction. Another idea:

with open("example.txt") as f:
    d = dict([line.strip().split("=") for line in f])

Now that's concise and pythonic :)

A: for line in open("file"):
    key, value = line.strip().split("=")
    key = key.strip()
    value = value.strip()
    do_something(key, value)

A: There's also another method: you can create a valid Python file (let it be a list, a dict definition, or whatever else), read its content using

f = open('file.txt', 'r')
content = f.read()  # assuming file isn't too long

And then just parse it:

parsedContent = eval(content)

You can pass any environment to eval (see the docs), so it might not have access to your globals and locals. This is evil and wrong, but in a small program that won't be distributed and won't get 'file.txt' from the network or from a so-called malicious user, you can use it.
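If the file can follow INI conventions (a [section] header above the key = value pairs), the standard library already parses this; a small sketch, added to the answers above (the module is named ConfigParser on Python 2 and configparser on Python 3):

import ConfigParser  # 'configparser' on Python 3

# example.txt would need a section header for this to work, e.g.:
# [main]
# colour = red
# size = 4
parser = ConfigParser.ConfigParser()
parser.read('example.txt')
for key, value in parser.items('main'):
    print key, value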
Delineating a Read File
Not really too sure how to word this question, therefore if you don't particularly understand it then I can try again. I have a file called example.txt and I'd like to import this into my Python program. Here I will do some calculations with what it contains and other things that are irrelevant. Instead of me importing this file, going through it line-by-line and extracting the information I want.. can Python do it instead? As in, if I structure the .txt correctly (whether it be key / value pairs seperated by an equals on each line), is there a current Python 'way' where it can handle it all and I work with that?
[ "with open(\"example.txt\") as f:\n for line in f:\n key, value = line.strip().split(\"=\")\n do_something(key,value)\n\nlooks like a starting point if I understand you correctly. You need Python 2.6 or 3.x for this.\nAnother place to look is the csv module that can parse comma-separated value files - and you can tell it to use = as a separator instead. This will abstract away some of the \"manual work\" in that previous example - but it seems your example doesn't especially need that kind of abstraction.\nAnother idea:\nwith open(\"example.txt\") as f:\n d = dict([line.strip().split(\"=\") for line in f])\n\nNow that's concise and pythonic :)\n", "for line in open(\"file\")\n key, value = line.strip().split(\"=\")\n key=key.strip() \n value=value.strip() \n do_something(key,value)\n\n", "There's also another method - you can create a valid python file (let it be a list, dict definition or whatever else), read its content using \nf = open('file.txt', r)\ncontent = f.read() #assuming file isn't too long\n\nAnd then just parse it:\nparsedContent = eval(content)\n\nYou can pass any environment to eval (see docs), so it might not have access to your globals and locals. This is evil and wrong, but in small program that won't be distributed and won't get 'file.txt' from network or from so called malicious user - you can use it.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "file_handling", "python" ]
stackoverflow_0001673740_file_handling_python.txt
Q: Why does python -V write to the error stream?

I was writing a script to inspect Python's version on my system, and I've noticed that python -V writes to the error stream, while python -h, for instance, uses standard output. Is there a good reason for this behavior?

A: The -h option also used to print to stderr, because it is not part of the output of your program, i.e. the output is not produced by your Python script but by the Python interpreter itself. As for why they changed -h to use stdout? Try typing python -h with your terminal window set to the standard 24 lines: it scrolls off the screen. Now most people would react by trying python -h | less, but that only works if you send the output of -h to stdout instead of stderr. So there was a good reason for making -h go to stdout, but no good reason for changing -V.

A: -h used to print to stderr too, as you see here from main.c:

usage(int exitcode, char* program)
{
    fprintf(stderr, usage_line, program);
    fprintf(stderr, usage_top);
    fprintf(stderr, usage_mid);
    fprintf(stderr, usage_bot, DELIM, DELIM, PYTHONHOMEHELP);
    exit(exitcode);
    /*NOTREACHED*/
}

...

if (help)
    usage(0, argv[0]);

if (version) {
    fprintf(stderr, "Python %s\n", PY_VERSION);
    exit(0);

The current main.c has changed the way usage is defined:

usage(int exitcode, char* program)
{
    FILE *f = exitcode ? stderr : stdout;

    fprintf(f, usage_line, program);
    if (exitcode)
        fprintf(f, "Try `python -h' for more information.\n");
    else {
        fputs(usage_1, f);
        fputs(usage_2, f);
        fputs(usage_3, f);
        fprintf(f, usage_4, DELIM);
        fprintf(f, usage_5, DELIM, PYTHONHOMEHELP);
    }

So usage uses stdout for -h and stderr for -Q. I can't see any evidence of a good reason one way or the other. Possibly it cannot be changed now without breaking backward compatibility.

A: Why? Because it's not the actual output of your actual script. That's the long-standing, standard, common, typical, ordinary use for standard error: everything NOT output from your script.

A: Probably for no good reason. Some digging revealed the patch adding the options, but I can't find any references to why the different streams are used in the discussion about the patch.
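A practical consequence for version-checking scripts (an addition, not from the original answers): merge stderr into stdout when capturing, so the script works whichever stream -V uses. A sketch using the subprocess module (Python 2.7+ for check_output):

import subprocess

# -V historically printed to stderr, so fold stderr into stdout before capturing
output = subprocess.check_output(['python', '-V'], stderr=subprocess.STDOUT)
print(output.strip())  # e.g. "Python 2.6.4"; the exact string depends on your interpreter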
Why does python -V write to the error stream?
I was writing a script to inspect python's version on my system and I've noticed that python -V writes to the error stream, while python -h, for instance, uses the standard output. Is there a good reason for this behavior?
[ "The -h option also used to print to stderr because it is not part of the output of your program, i.e. the output is not produced by your Python script but by the Python interpreter itself. \nAs for why they changed the -h to use stdout? Try typing python -h with your terminal window set to the standard 24 lines. It scrolls off the screen.\nNow most people would react by trying python -h |less but that only works if you send the output of -h to the stdout instead of stderr. So there was a good reason for making -h go to stdout, but no good reason for changing -V.\n", "-h used to print to stderr too as you see here from main.c\nusage(int exitcode, char* program)\n{\nfprintf(stderr, usage_line, program);\nfprintf(stderr, usage_top);\nfprintf(stderr, usage_mid);\nfprintf(stderr, usage_bot, DELIM, DELIM, PYTHONHOMEHELP);\nexit(exitcode);\n/*NOTREACHED*/\n}\n\n...\n\nif (help)\n usage(0, argv[0]);\n\nif (version) {\n fprintf(stderr, \"Python %s\\n\", PY_VERSION);\n exit(0);\n\nThe current main.c has changed the way usage is defined \nusage(int exitcode, char* program)\n{\nFILE *f = exitcode ? stderr : stdout;\n\nfprintf(f, usage_line, program);\nif (exitcode)\n fprintf(f, \"Try `python -h' for more information.\\n\");\nelse {\n fputs(usage_1, f);\n fputs(usage_2, f);\n fputs(usage_3, f);\n fprintf(f, usage_4, DELIM);\n fprintf(f, usage_5, DELIM, PYTHONHOMEHELP);\n}\n\nSo usage uses stdout for -h and stderr for -Q.\nI can't see any evidence of a good reason one way of the other. Possibly it cannot be changed now without breaking backward compatibility\n", "Why?\nBecause it's not the actual output of your actual script.\nThat's the long-standing, standard, common, typical, ordinary use for standard error: everything NOT output from your script.\n", "Probably for no good reason, some digging revealed the the patch adding the options, but I can find any references to why the different streams are used in the discussion about the patch.\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001672650_python.txt
Q: How to interpret status code in Python commands.getstatusoutput()

In a related question, I asked where to find the documentation for the C function "wait". This was an attempt to figure out return codes for the commands.getstatusoutput() module. Stack Overflow came through, but the documentation didn't help. Here's what puzzles me:

#!/usr/bin/python
import commands
goodcommand = 'ls /'
badcommand = 'ls /fail'
status, output = commands.getstatusoutput(goodcommand)
print('Good command reported status of %s' % status)
status, output = commands.getstatusoutput(badcommand)
print('Bad command reported status of %s' % status)

When run on OS X (Leopard) I get the following output (which matches the documentation):

$ python waitest.py
Good command reported status of 0
Bad command reported status of 256

On OS X, doing "ls /fail ; echo $?" gets the following output:

$ ls /fail ; echo $?
ls: /fail: No such file or directory
1

When run on Linux (Ubuntu Hardy) I get the following output:

$ python waitest.py
Good command reported status of 0
Bad command reported status of 512

On Ubuntu, doing "ls /fail" gets a 2:

$ ls /fail ; echo $?
ls: cannot access /fail: No such file or directory
2

So Python appears to be multiplying status codes by 256. Huh? Is this documented somewhere?

A: There is a set of functions in the os module (os.WIFCONTINUED, os.WIFSTOPPED, os.WTERMSIG, os.WCOREDUMP, os.WIFEXITED, os.WEXITSTATUS, os.WIFSIGNALED, os.WSTOPSIG) which correspond to the macros from the wait(2) manual. You should use them to interpret the status code. For example, to get the exit code you should use os.WEXITSTATUS(status). A better idea would be to switch to the subprocess module.

A: Wow. The insight that it was multiplying by 256 got me there. Searching for "python commands +256" got me to a Python Module Of The Week article which explains what's going on. Here's a snippet from that page:

The function getstatusoutput() runs a command via the shell and returns the exit code and the text output (stdout and stderr combined). The exit codes are the same as for the C function wait() or os.wait(). The code is a 16-bit number. The low byte contains the signal number that killed the process. When the signal is zero, the high byte is the exit status of the program. If a core file was produced, the high bit of the low byte is set.

And some of Doug's code:

from commands import *

def run_command(cmd):
    print 'Running: "%s"' % cmd
    status, text = getstatusoutput(cmd)
    exit_code = status >> 8
    signal_num = status % 256
    print 'Signal: %d' % signal_num
    print 'Exit : %d' % exit_code
    print 'Core? : %s' % bool(exit_code / 256)
    print 'Output:'
    print text
    print

run_command('ls -l *.py')
run_command('ls -l *.notthere')
run_command('echo "WAITING TO BE KILLED"; read input')

A: Looking at commands.py:

def getstatusoutput(cmd):
    """Return (status, output) of executing cmd in a shell."""
    import os
    pipe = os.popen('{ ' + cmd + '; } 2>&1', 'r')
    text = pipe.read()
    sts = pipe.close()
    if sts is None: sts = 0
    if text[-1:] == '\n': text = text[:-1]
    return sts, text

We see sts holds the value of os.popen(...).close(). Looking at that documentation, os.popen(...).close() returns the value of os.wait:

os.wait()
Wait for completion of a child process, and return a tuple containing its pid and exit status indication: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced. Availability: Unix.

Emphasis was mine. I agree that this "encoding" isn't terribly intuitive, but at least it was fairly obvious at a glance that it was being multiplied/bit-shifted.

A: I think the core detection is incorrect. "If a core file was produced, the high bit of the low byte is set." means 128, so I think the core line should be:

print 'Core? : %s' % bool(status & 128)
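To make the first answer concrete (an added sketch, not from the thread): the os module macros decode the 16-bit status without manual shifting, and they sidestep the core-dump bug the last answer points out:

import os
import commands

status, text = commands.getstatusoutput('ls /fail')
if os.WIFEXITED(status):
    print 'exit code:', os.WEXITSTATUS(status)   # the high byte
if os.WIFSIGNALED(status):
    print 'killed by signal:', os.WTERMSIG(status)
print 'core dumped:', os.WCOREDUMP(status)       # checks the high bit of the low byte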
How to interpret status code in Python commands.getstatusoutput()
In a related question, I asked where to find the documentation for the C function "wait." This was an attempt to figure out return codes for the commands.getstatusoutput() module. Stackoverflow came through, but the documentation didn't help. Here's what puzzles me: #!/usr/bin/python import commands goodcommand = 'ls /' badcommand = 'ls /fail' status, output = commands.getstatusoutput(goodcommand) print('Good command reported status of %s' % status) status, output = commands.getstatusoutput(badcommand) print('Bad command reported status of %s' % status) When run on OS X (Leopard) I get the following output: (Which matches the documentation.) $ python waitest.py Good command reported status of 0 Bad command reported status of 256 On OS X, doing an "ls /fail ; echo $?" gets the following output: $ ls /fail ; echo $? ls: /fail: No such file or directory 1 When run on Linux (Ubuntu Hardy) I get the following output: $ python waitest.py Good command reported status of 0 Bad command reported status of 512 On Ubuntu, doing "ls /fail" gets a 2: $ ls /fail ; echo $? ls: cannot access /fail: No such file or directory 2 So Python appears to be multiplying status codes by 256. Huh? Is this documented somewhere?
[ "There is a set of functions in os module (os.WIFCONTINUED, os.WIFSTOPPED, os.WTERMSIG, os.WCOREDUMP, os.WIFEXITED, os.WEXITSTATUS, os.WIFSIGNALED, os.WSTOPSIG), which correspond to macros from wait(2) manual. You should use them to interpret the status code.\nFor example, to get the exit code you should use os.WEXITSTATUS(status)\nA better idea would be to switch to subprocess module.\n", "Wow. The insight that it was multiplying by 256 got me there. Searching for \"python commands +256\" got me to a Python Module Of The Week article which explains what's going on.\nHere's a snippet from that page:\n\nThe function getstatusoutput() runs a\n command via the shell and returns the\n exit code and the text output (stdout\n and stderr combined). The exit codes\n are the same as for the C function\n wait() or os.wait(). The code is a\n 16-bit number. The low byte contains\n the signal number that killed the\n process. When the signal is zero, the\n high byte is the exit status of the\n program. If a core file was produced,\n the high bit of the low byte is set.\n\nAnd some of Doug's code:\nfrom commands import *\n\ndef run_command(cmd):\n print 'Running: \"%s\"' % cmd\n status, text = getstatusoutput(cmd)\n exit_code = status >> 8\n signal_num = status % 256\n print 'Signal: %d' % signal_num\n print 'Exit : %d' % exit_code\n print 'Core? : %s' % bool(exit_code / 256)\n print 'Output:'\n print text\n print\n\nrun_command('ls -l *.py')\nrun_command('ls -l *.notthere')\nrun_command('echo \"WAITING TO BE KILLED\"; read input')\n\n", "Looking at commands.py:\ndef getstatusoutput(cmd):\n \"\"\"Return (status, output) of executing cmd in a shell.\"\"\"\n import os\n pipe = os.popen('{ ' + cmd + '; } 2>&1', 'r')\n text = pipe.read()\n sts = pipe.close()\n if sts is None: sts = 0\n if text[-1:] == '\\n': text = text[:-1]\n return sts, text\n\nWe see sts holds the value of os.popen(...).close(). Looking at that documentation, os.popen(...).close() returns the value of os.wait:\n\nos.wait()\nWait for completion of a child process, and return a tuple containing its pid and exit status indication: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced. Availability: Unix.\n\nEmphasis was mine. I agree that this \"encoding\" isn't terribly intuitive, but at least it was fairly obvious at a glance that it was being multiplied/bit-shifted.\n", "I think the code detection is incorrect.\n\"If a core file was produced, the high bit of the low byte is set.\" means 128.\nso I think the core line should be\nprint 'Core? : %s' % bool(status & 128)\n\n" ]
[ 11, 4, 3, 0 ]
[]
[]
[ "command", "exit_code", "python", "subprocess" ]
stackoverflow_0001535672_command_exit_code_python_subprocess.txt
Q: Fastest Way To Remove Duplicates In Lists Python

I have two very large lists, and looping through them once takes at least a second, and I need to do it 200,000 times. What's the fastest way to remove duplicates in two lists to form one?

A: This is the fastest way I can think of:

import itertools
output_list = list(set(itertools.chain(first_list, second_list)))

Slight update: As jcd points out, depending on your application, you probably don't need to convert the result back to a list. Since a set is iterable by itself, you might be able to just use it directly:

output_set = set(itertools.chain(first_list, second_list))
for item in output_set:
    # do something

Beware though that any solution involving the use of set() will probably reorder the elements in your list, so there's no guarantee that elements will be in any particular order. That said, since you're combining two lists, it's hard to come up with a good reason why you would need a particular ordering over them anyway, so this is probably not something you need to worry about.

A: I'd recommend something like this:

def combine_lists(list1, list2):
    s = set(list1)
    s.update(list2)
    return list(s)

This eliminates the problem of creating a monster list of the concatenation of the first two. Depending on what you're doing with the output, don't bother to convert back to a list. If ordering is important, you might need some sort of decorate/sort/undecorate shenanigans around this.

A: As Daniel states, a set cannot contain duplicate entries - so concatenate the lists:

list1 + list2

Then convert the new list to a set:

set(list1 + list2)

Then back to a list:

list(set(list1 + list2))

A: result = list(set(list1).union(set(list2)))

That's how I'd do it. I am not so sure about performance, though, but it is certainly better than doing it by hand.
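If the original ordering matters, one more option (an addition; relies on dicts preserving insertion order, guaranteed from Python 3.7): dict.fromkeys removes duplicates in O(n) while keeping first-seen order:

list1 = [3, 1, 2, 3]
list2 = [2, 5, 1]

combined = list(dict.fromkeys(list1 + list2))  # keys are unique, order preserved
print(combined)  # [3, 1, 2, 5]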
Fastest Way To Remove Duplicates In Lists Python
I have two very large lists and to loop through it once takes at least a second and I need to do it 200,000 times. What's the fastest way to remove duplicates in two lists to form one?
[ "This is the fastest way I can think of:\nimport itertools\noutput_list = list(set(itertools.chain(first_list, second_list)))\n\nSlight update: As jcd points out, depending on your application, you probably don't need to convert the result back to a list. Since a set is iterable by itself, you might be able to just use it directly:\noutput_set = set(itertools.chain(first_list, second_list))\nfor item in output_set:\n # do something\n\nBeware though that any solution involving the use of set() will probably reorder the elements in your list, so there's no guarantee that elements will be in any particular order. That said, since you're combining two lists, it's hard to come up with a good reason why you would need a particular ordering over them anyway, so this is probably not something you need to worry about.\n", "I'd recommend something like this:\ndef combine_lists(list1, list2):\n s = set(list1)\n s.update(list2)\n return list(s)\n\nThis eliminates the problem of creating a monster list of the concatenation of the first two.\nDepending on what you're doing with the output, don't bother to convert back to a list. If ordering is important, you might need some sort of decorate/sort/undecorate shenanigans around this.\n", "As Daniel states, a set cannot contain duplicate entries - so concatenate the lists:\nlist1 + list2\n\nThen convert the new list to a set:\nset(list1 + list2)\n\nThen back to a list:\nlist(set(list1 + list2))\n\n", "result = list(set(list1).union(set(list2)))\n\nThat's how I'd do it. I am not so sure about performance, though, but it is certainly better, than doing it by hand.\n" ]
[ 23, 11, 8, 3 ]
[]
[]
[ "list", "python", "sorting" ]
stackoverflow_0001675321_list_python_sorting.txt
Q: SMTP and XMPP deployment/workflow

I'm developing a website that incorporates an XMPP bot and a custom SMTP server (mainly these services process commands and reply). I'd like to set up a system where I can develop locally, push changes to a staging server, and finally to a production system. (Essentially I'm developing on the live server currently.) I'm using Python, and I'm reading a bit about fabric, but I'm running into a mental block.

I am using sqlalchemy-migrate to manage database versions and have the basic DNS stuff set up for the host. Additionally, I have a library that I'm currently working on that these two services both use (in my global site-packages directory). I deploy this egg after I change anything. This would ideally also be deployable, but only available to the correct version. Would I need two versions, stage-lib and live-lib? Is this possible with Python eggs?

Would I need another host to act as a staging server for these services? Or is there a way to tell DNS that [email protected] goes to a different port than 25?

I have a fabfile right now that has a bunch of methods like stage_smtp, stage_xmpp, live_smtp, live_xmpp.

A: Partial answer: DNS has no way to tell you to connect to a non-standard SMTP port, even with SRV records. (XMPP does.) So, for sending email, you'll have to do something like:

import smtplib
server = smtplib.SMTP('localhost:2525')
server.sendmail(fromaddr, toaddrs, msg)
server.quit()
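A minor footnote to the answer above (an addition): smtplib also accepts the port as a separate argument, which avoids relying on 'host:port' string parsing; fromaddr, toaddrs, and msg are placeholders as in the answer:

import smtplib

server = smtplib.SMTP('localhost', 2525)  # explicit non-standard port
server.sendmail(fromaddr, toaddrs, msg)
server.quit()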
SMTP and XMPP deployment/workflow
I'm developing a website that incorporates an XMPP bot and a custom SMTP server (mainly these services process commands and reply). I'd like to set up a system where I can develop locally, push changes to a staging server, and finally to a production system. (Essentially I'm developing on the live server currently.) I'm using python, and I'm reading a bit about fabric, but I'm running into a mental block. I am using sqlalchemy-migrate to manage database versions and have the basic DNS stuff set up for the host. Additionally, I have a library that I'm currently working on that these two services both use (in my global site-packages directory). I deploy this egg after I change anything. This would ideally also be deployable, but only available to the correct version. Would I need two versions, stage-lib and live-lib? Is this possible with python eggs? Would I need another host to act as a staging server for these services? Or is there a way to tell DNS that [email protected] goes to a different port than 25? I have a fabfile right now that has a bunch of methods like stage_smtp, stage_xmpp, live_smtp, live_xmpp.
[ "Partial answer: DNS has no way to tell you to connect to a non-standard SMTP port, even with SRV records. (XMPP does.)\nSo, for sending email, you'll have to do something like:\nimport smtplib\nserver = smtplib.SMTP('localhost:2525')\nserver.sendmail(fromaddr, toaddrs, msg)\nserver.quit()\n\n" ]
[ 2 ]
[]
[]
[ "deployment", "python", "smtp", "xmpp" ]
stackoverflow_0001552417_deployment_python_smtp_xmpp.txt
Q: Using Google AppEngine as a "cache" for personal websites (wordpress blogs, wikis)

I read an article by an indie game developer who is using Google App Engine to cache his main site and blog, to provide high availability during traffic spikes (the Digg/Slashdot effect): Wolfire Blog - Google App Engine for Indie Developers.

There's not a lot of detail on exactly what they developed in Python on Google App Engine to cache the site. The only details I could find were about the App Engine python app reading the backend wordpress articles through an RSS feed:

Wordpress runs on a dedicated server, and we import it into www.wolfire.com via RSS, which is the App Engine part. Dumping Wordpress entirely is on my list though of things to do someday. ;)

Does anyone know of any open source Python or Java web frameworks that you can use to customize caching a site, which you could build and deploy on Google App Engine to act as a "scalable" provider for your web content? I'm using an "OK" shared hosting service called Bluehost to host my wordpress blog; I'd like instead to put my blog on a separate domain (blog.ddaniels.net) and host Google App Engine on www.ddaniels.net, which would point to blog.ddaniels.net. This could be extended to almost any type of website; you would still need links to dynamic content to point to the original host (for things like comments and editing wiki pages, basically any HTTP PUT type operations).

I'd assume you'd basically need a Java or Python framework that you could:

Configure your back end host with, e.g. blog.yourname.com
Configure the Google App Engine framework as www.yourname.com (details for Google App Engine mapping to your domain; the key is that you have to use subdomains, and "www" is a subdomain)
On first access of a page (or after the expiration time), HTTP GET the page from the backing host and cache it on Google App Engine

A: You could start by taking the code for DryDrop, which mirrors static pages from a repository hosted on GitHub, and making it a more general reverse proxy. For example, you'd need to ensure that POST requests or logged-in users get passed through directly to the proxy.
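A minimal sketch of the cache-on-first-access idea (my own illustration, assuming the App Engine urlfetch and memcache APIs of that era; BACKEND and the 300-second TTL are arbitrary choices, and the request-handler wiring is omitted):

from google.appengine.api import memcache, urlfetch

BACKEND = 'http://blog.yourname.com'  # hypothetical origin host

def get_page(path, ttl=300):
    html = memcache.get(path)
    if html is None:
        # first access, or the cached copy expired: pull from the origin
        html = urlfetch.fetch(BACKEND + path).content
        memcache.set(path, html, time=ttl)
    return html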
Using Google AppEngine as a "cache" for personal websites (wordpress blogs, wikis)
I read an article of an indie game developer who is using Google AppEngine to cache his main site and blog, to protect provide high-availability during traffic spikes (Digg, Slashdot effect). Wolfire Blog - Google App Engine for Indie Developers There's not a lot of detail on the exactly what they developed in Python on Google AppEngine that they used to cache the site. The only details I could find were about the AppEngine python app reading the backend wordpress articles through an RSS feed: Wordpress runs on a dedicated server, and we import it into www.wolfire.com via RSS, which is the App Engine part. Dumping Wordpress entirely is on my list though of things to do someday. ;) Does anyone know of any open source Python or Java web frameworks that you can use to customize caching a site that you could build and deploy on Google AppEngine to act as a "scalable" provider for your web content? I'm using an "Ok" shared hosting service called bluehost to host my wordpress blog, I'd like to be able to instead put my blog on a separate domain (blog.ddaniels.net) and host google app-engine on www.ddaniels.net that would point to blog.ddaniels.net. This could be extended for almost any type of website, you would still need links to dynamic content to point to the original host (for things like comments and editing wiki pages etc, basically any HTTP PUT type operations). I'd assume you'd basically need a Java or Python framework that you could: Configure your back end host e.g. blog.yourname.com Configure Google App Engine framework as www.yourname.com (details for Google App Engine mapping to your domain, the key is you have to use subdomains and "www" is a subdomain) On first access of page (or after expiration time) HTTP GET the page from backing host and cache it on Google AppEngine
[ "You could start by taking the code for DryDrop, which mirrors static pages from a repository hosted on GitHub, and making it a more general reverse proxy. For example, you'd need to ensure that POST requests or logged-in users get passed through directly to the proxy.\n" ]
[ 9 ]
[]
[]
[ "caching", "google_app_engine", "java", "python" ]
stackoverflow_0001675715_caching_google_app_engine_java_python.txt
Q: Computing article abstracts

I'm looking for a way to automatically produce an abstract, basically the first few sentences/paragraphs of a blog entry, to display in a list of articles (which are written in markdown). Currently, I'm doing something like this:

def abstract(article, paras=3):
    return '\n'.join(article.split('\n')[0:paras])

to just grab the first few lines' worth of text, but I'm not totally happy with the results. What I'm really looking for is to end up with about 1/3 of a screenful of formatted text to display in the list of entries; but with the algorithm above, the amount pulled varies wildly, and abstracts as short as a line or two are frequently mixed in with more ideally sized ones. Is there a library that's good at this kind of thing? If not, do you have any suggestions to improve the output?

A: EDIT: You can do something like this:

from textwrap import wrap

def getAbstract(text, lines=5, screenwidth=100):
    width = len(' '.join([
        line for block in text.splitlines()
        for line in wrap(block, width=screenwidth)
    ][:lines]))
    return text[:width] + '...'

This makes use of the textwrap algorithm to get the ideal text length. It will break the text into screen-sized lines and use them to calculate the length of the desirable number of lines. For example, applying this algorithm to the Python Wikipedia page entry:

print getAbstract(text, lines=7)

will give you this output:

Python is a general-purpose high-level programming language.2 Its design philosophy emphasizes code readability.[3] Python claims to "[combine] remarkable power with very clear syntax",[4] and its standard library is large and comprehensive. Its use of indentation as block delimiters is unusual among popular programming languages.

Python supports multiple programming paradigms (primarily object oriented, imperative, and functional) and features a fully dynamic type system and automatic memory management, similar to Perl, Ruby, Scheme, and Tcl. Like other dynamic languages, Python is often used as a scripting...

Without further details it's hard to help you. But if your problem was that taking the first few lines was too much for some entries, you may need to have a look at textwrap. For example, if you only want 100-character abstracts you can do the following:

import textwrap

abstract = textwrap.wrap(text, 100)[0]

That will also replace newlines with spaces, which might be desirable depending on your requirements.

A: I'm not exactly sure what you want. However, I would suggest cutting the article after X characters and putting "...". Then you have more control over the size of your "abstract" (if that's what bothers you in your current implementation).
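Another angle (an addition, not from the answers): if "1/3 of a screenful" is really a character budget, cutting at a sentence boundary near that budget usually reads better than cutting mid-line. A rough sketch; the 400-character budget is an arbitrary assumption:

import re

def abstract(article, budget=400):
    out = ''
    # split on sentence-ending punctuation followed by whitespace
    for sentence in re.split(r'(?<=[.!?])\s+', article):
        if out and len(out) + len(sentence) > budget:
            break
        out += sentence + ' '
    return out.strip()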
Computing article abstracts
I'm looking for a way to automatically produce an abstract, basically the first few sentances/paragraphs of a blog entry, to display in a list of articles (which are written in markdown). Currently, I'm doing something like this: def abstract(article, paras=3): return '\n'.join(article.split('\n')[0:paras]) to just grab the first few lines worth of text, but i'm not totally happy with the results. What I'm really looking for is to end up with about 1/3 of a screenful of formatted text to display in the list of entries, but using the algorithm above, the amount pulled ends up with wildly varying amounts, as little as a line or two, is frequently mixed with more ideal sized abstracts. Is there a library that's good at this kind of thing? if not, do you have any suggestions to improve the output?
[ "EDIT:\nYou can do something like this:\nfrom textwrap import wrap\n\ndef getAbstract(text, lines=5, screenwidth=100):\n width = len(' '.join([\n line for block in text.splitlines()\n for line in wrap(block, width=screenwidth)\n ][:lines]))\n return text[:width] + '...'\n\nThis makes use of the textwrap algorithm to get the ideal text length. It will break the text into screen-sized lines and use them to calculate the length of the desirable number of lines. \nFor example applying this algorithm on the python wikipedia page entry:\nprint getAbstract(text, lines=7)\n\nwill give you this output:\n\nPython is a general-purpose high-level\n programming language.2 Its design\n philosophy emphasizes code\n readability.[3] Python claims to\n \"[combine] remarkable power with very\n clear syntax\",[4] and its standard\n library is large and comprehensive.\n Its use of indentation as block\n delimiters is unusual among popular\n programming languages.\nPython supports multiple programming\n paradigms (primarily object oriented,\n imperative, and functional) and\n features a fully dynamic type system\n and automatic memory management,\n similar to Perl, Ruby, Scheme, and\n Tcl. Like other dynamic languages,\n Python is often used as a scripting...\n\n\nWithout further details it's hard to help you. But if your problem was that taking the first few lines was too much for some entries you may need to have a look at textwrap\nFor example if you only want 100 character abstracts you can do the following:\nimport textwrap\n\nabstract = textwrap.wrap(text, 100)[0]\n\nThat will also replace newlines with spaces which might be desirable depending on your requirements.\n", "I'm not exactly sure of what you want.\nHowever, I would suggest cutting the article after X characters and put \"...\". Then you have more control over the size of your \"abstract\" (if that's what bothers you in your current implementation).\n" ]
[ 7, 0 ]
[]
[]
[ "markdown", "python" ]
stackoverflow_0001675943_markdown_python.txt
Q: Network analysis and adjacency matrices I want to try and create a network for several hundred shapefiles that consist of polylines. The polylines are snapped to each other and consistent. Then I want to create an adjacency matrix for this network. What is the best way of doing this? I know how to do it on an individual basis by clicking through options within ArcCatalog, but I want to try and explore how to automate this. I do have some VBA that I previously downloaded that creates an adjacency matrix once I have made the network, but I can only run that once the network is loaded into ArcMap, with the layers in a specific order. I appreciate any suggestion or advice about how to do this, in any language. I know this is quite a program-specific question; I have asked it on the ESRI forum too, but my previous questions did not result in an answer that enabled me to achieve this, so I thought I would also ask it here. A: I don't know what exactly you want to achieve but when it comes to network analysis in Python, take a look at networkx.
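To make the networkx suggestion concrete, a minimal sketch, assuming the polylines have already been reduced to (node, node) pairs; the edge list below is made up for illustration.

import networkx as nx

G = nx.Graph()
# hypothetical snapped polyline segments, one edge per segment
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0)])
A = nx.adjacency_matrix(G)  # adjacency matrix of the network
print A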
Network analysis and adjacency matrices
I want to try and create a network for several hundred shapefiles that consist of polylines. The polylines are snapped to each other and consistent. Then I want to create an adjacency matrix for this network. What is the best way of doing this? I know how to do it on an individual basis by clicking through options within ArcCatalog, but I want to try and explore how to automate this. I do have some VBA that I previously downloaded that creates an adjacency matrix once I have made the network, but I can only run that once the network is loaded into ArcMap, with the layers in a specific order. I appreciate any suggestion or advice about how to do this, in any language. I know this is quite a program-specific question; I have asked it on the ESRI forum too, but my previous questions did not result in an answer that enabled me to achieve this, so I thought I would also ask it here.
[ "I don't know what exactly you want to achieve but when it comes to network analysis in python take a look at networkx. \n" ]
[ 2 ]
[]
[]
[ "arcgis", "arcmap", "esri", "python", "vba" ]
stackoverflow_0001675347_arcgis_arcmap_esri_python_vba.txt
Q: Feedparser - retrieve old messages from Google Reader I'm using the feedparser library in Python to retrieve news from a local newspaper (my intent is to do Natural Language Processing over this corpus) and would like to be able to retrieve many past entries from the RSS feed. I'm not very acquainted with the technical issues of RSS, but I think this should be possible (I can see that, e.g., Google Reader and Feedly can do this ''on demand'' as I move the scrollbar). When I do the following: import feedparser url = 'http://feeds.folha.uol.com.br/folha/emcimadahora/rss091.xml' feed = feedparser.parse(url) for post in feed.entries: title = post.title I get only a dozen entries or so. I was thinking about hundreds. Maybe all entries in the last month, if possible. Is it possible to do this only with feedparser? I intend to get from the RSS feed only the link to the news item and parse the full page with BeautifulSoup to obtain the text I want. An alternate solution would be a crawler that follows all local links in the page to get a lot of news items, but I want to avoid that for now. -- One solution that appeared is to use the Google Reader RSS cache: http://www.google.com/reader/atom/feed/http://feeds.folha.uol.com.br/folha/emcimadahora/rss091.xml?n=1000 But to access this I must be logged in to Google Reader. Does anyone know how I can do that from Python? (I really don't know a thing about the web; I usually only mess with numerical calculus.) A: You're only getting a dozen entries or so because that's what the feed contains. If you want historic data, you will have to find a feed/database of said data. Check out this ReadWriteWeb article for some resources on finding open data on the web. Note that Feedparser has nothing to do with this as your title suggests. Feedparser parses what you give it. It can't find historic data unless you find it and pass it into it. It is simply a parser. Hope that clears things up! :) A: To expand on Bartek's answer: You could also start storing all of the entries in the feed that you've already seen, and build up your own historical archive of the feed's content. This would delay your ability to start using it as a corpus (because you'd have to do this for a month to build up a collection of a month's worth of entries), but you wouldn't be dependent on anyone else for the data. I may be mistaken, but I'm pretty sure that's how Google Reader can go back in time: They have each feed's past entries stored somewhere.
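The archive-your-own-history suggestion from the second answer might look something like this minimal sketch. The archive file name is made up, and the script is meant to be run on a schedule (cron or similar) so that history accumulates over time.

import feedparser, json

URL = 'http://feeds.folha.uol.com.br/folha/emcimadahora/rss091.xml'

try:
    archive = json.load(open('archive.json'))  # entries seen on earlier runs
except IOError:
    archive = {}

for post in feedparser.parse(URL).entries:
    key = getattr(post, 'id', post.link)  # prefer the entry id, fall back to its link
    if key not in archive:
        archive[key] = {'title': post.title, 'link': post.link}

json.dump(archive, open('archive.json', 'w'))  # persist for the next run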
Feedparser - retrieve old messages from Google Reader
I'm using the feedparser library in Python to retrieve news from a local newspaper (my intent is to do Natural Language Processing over this corpus) and would like to be able to retrieve many past entries from the RSS feed. I'm not very acquainted with the technical issues of RSS, but I think this should be possible (I can see that, e.g., Google Reader and Feedly can do this ''on demand'' as I move the scrollbar). When I do the following: import feedparser url = 'http://feeds.folha.uol.com.br/folha/emcimadahora/rss091.xml' feed = feedparser.parse(url) for post in feed.entries: title = post.title I get only a dozen entries or so. I was thinking about hundreds. Maybe all entries in the last month, if possible. Is it possible to do this only with feedparser? I intend to get from the RSS feed only the link to the news item and parse the full page with BeautifulSoup to obtain the text I want. An alternate solution would be a crawler that follows all local links in the page to get a lot of news items, but I want to avoid that for now. -- One solution that appeared is to use the Google Reader RSS cache: http://www.google.com/reader/atom/feed/http://feeds.folha.uol.com.br/folha/emcimadahora/rss091.xml?n=1000 But to access this I must be logged in to Google Reader. Does anyone know how I can do that from Python? (I really don't know a thing about the web; I usually only mess with numerical calculus.)
[ "You're only getting a dozen entries or so because that's what the feed contains. If you want historic data you will have to find a feed/database of said data.\nCheck out this ReadWriteWeb article for some resources on finding open data on the web.\nNote that Feedparser has nothing to do with this as your title suggests. Feedparser parses what you give it. It can't find historic data unless you find it and pass it into it. It is simply a parser. Hope that clears things up! :)\n", "To expand on Bartek's answer: You could also start storing all of the entries in the feed that you've already seen, and build up your own historical archive of the feed's content. This would delay your ability to start using it as a corpus (because you'd have to do this for a month to build up a collection of a month's worth of entries), but you wouldn't be dependent on anyone else for the data.\nI may be mistaken, but I'm pretty sure that's how Google Reader can go back in time: They have each feed's past entries stored somewhere.\n" ]
[ 10, 3 ]
[]
[]
[ "feedparser", "google_reader", "python", "rss" ]
stackoverflow_0001676223_feedparser_google_reader_python_rss.txt
Q: How to pass flag to gcc in Python setup.py script? I'm writing a Python extension in C that requires the CoreFoundation framework (among other things). This compiles fine with: gcc -o foo foo.c -framework CoreFoundation -framework Python ("-framework" is an Apple-only gcc extension, but that's okay because I'm using their specific framework anyway) How do I tell setup.py to pass this flag to gcc? I tried this, but it doesn't seem to work (it compiles, but then complains of undefined symbols when I try to run it): from distutils.core import setup, Extension setup(name='foo', version='1.0', author='Me', ext_modules=[Extension('foo', ['foo.c'], extra_compile_args=['-framework CoreFoundation'])]) Edit: This appears to work: from distutils.core import setup, Extension setup(name='foo', version='1.0', author='Me', ext_modules=[Extension('foo', ['foo.c'], extra_link_args=['-framework', 'CoreFoundation'])]) A: Maybe you need to set extra_link_args, too? extra_compile_args is used when compiling the source code, extra_link_args when linking the result.
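For reference, the two hooks side by side in one minimal sketch; the -Wall flag is only an illustrative compile-time flag, not something the question requires.

from distutils.core import setup, Extension

setup(name='foo', version='1.0', author='Me',
      ext_modules=[Extension('foo', ['foo.c'],
                             extra_compile_args=['-Wall'],  # applied when compiling foo.c
                             extra_link_args=['-framework', 'CoreFoundation'])])  # applied at link time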
How to pass flag to gcc in Python setup.py script?
I'm writing a Python extension in C that requires the CoreFoundation framework (among other things). This compiles fine with: gcc -o foo foo.c -framework CoreFoundation -framework Python ("-framework" is an Apple-only gcc extension, but that's okay because I'm using their specific framework anyway) How do I tell setup.py to pass this flag to gcc? I tried this, but it doesn't seem to work (it compiles, but then complains of undefined symbols when I try to run it): from distutils.core import setup, Extension setup(name='foo', version='1.0', author='Me', ext_modules=[Extension('foo', ['foo.c'], extra_compile_args=['-framework CoreFoundation'])]) Edit: This appears to work: from distutils.core import setup, Extension setup(name='foo', version='1.0', author='Me', ext_modules=[Extension('foo', ['foo.c'], extra_link_args=['-framework', 'CoreFoundation'])])
[ "Maybe you need to set extra_link_args, too? extra_compile_args is used when compiling the source code, extra_link_args when linking the result.\n" ]
[ 19 ]
[]
[]
[ "distutils", "python", "python_c_api" ]
stackoverflow_0001676384_distutils_python_python_c_api.txt
Q: Exporting keyframes in blender python I'm trying to export animation from Blender; here's what I've done so far: --- This is just to give you an idea of what I'm doing and I've left out a lot to keep it short. --- If it's too confusing or if it's needed I could post the whole source. # Get the armature arm = ob.getData() # Start at the root bone for bone in bones: if not bone.parent: traceBone(bone) def traceBone(bone): # Get the channel for this bone channel=action.getChannelIpo(bone.name); # Get the loc x, y, z channels c_locx=channel[Ipo.OB_LOCX].bezierPoints frameCount=len(c_locx) # Write each location frame for frameIndex in range(frameCount): frame_x=c_locx[frameIndex].pt frameTime=int(frame_x[0]-1) # Write the time of the frame writeInt(frameTime) # Write the x, y and z coordinates writeFloats(frame_x[1], frame_z[1], frame_y[1]) # I've done the same for rotation c_quatx=channel[Ipo.PO_QUATX].bezierPoints # Write the quaternion w, x, y and z values writeFloats(frame_w[1], frame_x[1], frame_z[1], frame_y[1]) # Go through the children for child in bone.children: traceBone(child) As far as I can tell this all works fine; the problem is that these values are offsets, representing change, but what I need is absolute values representing the location and rotation values of the bone relative to its parent. How do I get the position and rotation relative to its parent? A: The channel data should be applied on top of the bind pose matrix. The complete formula is the following: Mr = Ms * B0*P0 * B1*P1 ... Bn*Pn where: Mr = result matrix for a bone 'n' Ms = skeleton->world matrix Bi = bind pose matrix for bone 'i' Pi = pose actual matrix constructed from stored channels (that you are exporting) 'n-1' is a parent bone for 'n', 'n-2' is parent of 'n-1', ... , '0' is a parent of '1'
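The accumulation described in the answer can be sketched generically. This is illustrative only, not a real Blender API: matmul, bind and pose stand in for a 4x4 matrix multiply and per-bone bind/pose matrix lookups.

def world_matrix(bone, skeleton_to_world, bind, pose, matmul):
    # Walk from this bone up to the armature root, then apply the
    # bind*pose pairs root-first: Mr = Ms * B0*P0 * B1*P1 ... Bn*Pn
    chain = []
    while bone is not None:
        chain.append(bone)
        bone = bone.parent
    m = skeleton_to_world
    for b in reversed(chain):
        m = matmul(m, matmul(bind[b.name], pose[b.name]))
    return m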
Exporting keyframes in blender python
I'm trying to export animation from Blender; here's what I've done so far: --- This is just to give you an idea of what I'm doing and I've left out a lot to keep it short. --- If it's too confusing or if it's needed I could post the whole source. # Get the armature arm = ob.getData() # Start at the root bone for bone in bones: if not bone.parent: traceBone(bone) def traceBone(bone): # Get the channel for this bone channel=action.getChannelIpo(bone.name); # Get the loc x, y, z channels c_locx=channel[Ipo.OB_LOCX].bezierPoints frameCount=len(c_locx) # Write each location frame for frameIndex in range(frameCount): frame_x=c_locx[frameIndex].pt frameTime=int(frame_x[0]-1) # Write the time of the frame writeInt(frameTime) # Write the x, y and z coordinates writeFloats(frame_x[1], frame_z[1], frame_y[1]) # I've done the same for rotation c_quatx=channel[Ipo.PO_QUATX].bezierPoints # Write the quaternion w, x, y and z values writeFloats(frame_w[1], frame_x[1], frame_z[1], frame_y[1]) # Go through the children for child in bone.children: traceBone(child) As far as I can tell this all works fine; the problem is that these values are offsets, representing change, but what I need is absolute values representing the location and rotation values of the bone relative to its parent. How do I get the position and rotation relative to its parent?
[ "The channel data should be applied on top of the bind pose matrix.\nThe complete formula is the following:\nMr = Ms * B0*P0 * B1*P1 ... Bn*Pn\nwhere:\nMr = result matrix for a bone 'n'\nMs = skeleton->world matrix\nBi = bind pose matrix for bone 'i'\nPi = pose actual matrix constructed from stored channels (that you are exporting)\n'n-1' is a parent bone for 'n', 'n-2' is parent of 'n-1', ... , '0' is a parent of '1'\n" ]
[ 2 ]
[]
[]
[ "3d", "blender", "python" ]
stackoverflow_0001273588_3d_blender_python.txt
Q: Pyfacebook from buildout What is the best way to install the latest version of pyfacebook with buildout? The package is hosted on github and is not on pypi. This system doesn't have git installed, so a git-based recipe unfortunately isn't an option. The github URL is http://github.com/sciyoshi/pyfacebook. TIA! A: You can add any Python package hosted on GitHub by adding a find-links URL pointing to the project tarball URL plus a #egg=packagename postfix. For pyfacebook that is: http://github.com/sciyoshi/pyfacebook/tarball/master#egg=pyfacebook So a simple buildout would be: [buildout] parts = whatever find-links = http://github.com/sciyoshi/pyfacebook/tarball/master#egg=pyfacebook eggs = pyfacebook
Pyfacebook from buildout
What is the best way to install the latest version of pyfacebook with buildout? The package is hosted on github and is not on pypi. This system doesn't have git installed, so a git-based recipe unfortunately isn't an option. The github URL is http://github.com/sciyoshi/pyfacebook. TIA!
[ "You can add any python package hosted on git-hub by adding a find-links url pointing to the project tarball URL plus a #egg=packagename postfix. For pyfacebook that is:\nhttp://github.com/sciyoshi/pyfacebook/tarball/master#egg=pyfacebook\n\nSo a simple buildout would be:\n[buildout]\nparts = whatever\nfind-links =\n http://github.com/sciyoshi/pyfacebook/tarball/master#egg=pyfacebook\neggs = pyfacebook\n\n" ]
[ 5 ]
[]
[]
[ "buildout", "facebook", "github", "pypi", "python" ]
stackoverflow_0001676520_buildout_facebook_github_pypi_python.txt
Q: Comparing elements in a list in Python's for-loop What is wrong with the method end in the code? The method end always returns 1 although it should return 0 with the current data. # return 1 if each group of four consecutive elements has the same sum as the other groups # else return 0 # E.g. the current sums "35 34 34 34" should return 0 data = "2|15|14|4|12|6|7|9|8|10|11|5|13|3|2|16" arra = data.split("|"); def do_row ( arra, n ): return arra[4*n:4 + 4*n] def row_summa (row): return sum(map(int,row)) def end ( summat ): # problem here! equality = 1 for i in summat[2:5]: print "Comparing: ", summat[1], " and ", i, ".\n" if summat[1] != i: equality = 0 print equality for i in range(0,4): summat = [] summat.append( row_summa( do_row(arra,i) ) ) print row_summa ( do_row(arra,i) ) summa = 0 end(summat) A: I can't really tell what you're trying to do here, but I can certainly say why end() returns 1 instead of 0. In your last for loop, you reset summat to [] at the start of the loop, so at the end, summat only contains a single value (the one you most recently appended on). So when you ask for summat[2:5] on a list of a single item, Python returns an empty list (as there are no values in that range) - in which case there are no chances for equality to be set to zero because the loop in end never runs. A: I think you may have an off-by-one error. Remember that array indexes in Python start at 0, not 1. So where you do this: for i in summat[2:5]: print "Comparing: ", summat[1], " and ", i, ".\n" if summat[1] != i: equality = 0 you are not looking at summat[0] at all. Try perhaps: for i in summat[1:4]: print "Comparing: ", summat[0], " and ", i, ".\n" if summat[0] != i: equality = 0 A: First off, end doesn't return 1. It returns None. It prints 1. Kind of deceptive if you're running it from the command line. Second, when you call end, summat is equal to [34]. So this: for i in summat[2:5]: never even executes. It won't do anything unless summat contains at least 3 elements. A: You have two problems: initialising summat to [] inside the loop, and also the off-by-one error Greg mentioned data = "2|15|14|4|12|6|7|9|8|10|11|5|13|3|2|16" arra = data.split("|"); def do_row ( arra, n ): return arra[4*n:4 + 4*n] def row_summa (row): return sum(map(int,row)) def end ( summat ): # problem here! equality = 1 for i in summat[1:]: # 1 <=== IS THE SECOND ELEMENT print "Comparing: ", summat[0], " and ", i, ".\n" if summat[0] != i: equality = 0 print equality summat = [] # <=== DO THIS BEFORE THE LOOP for i in range(0,4): summat.append( row_summa( do_row(arra,i) ) ) print row_summa ( do_row(arra,i) ) summa = 0 end(summat) A: You should also study this piece of code data = "2|15|14|4|12|6|7|9|8|10|11|5|13|3|2|16" arra = map(int,data.split("|")) summat = [sum(arra[i:i+4]) for i in range(0,len(arra),4)] print summat print len(set(summat))==1
Comparing elements in a list in Python's for-loop
What is wrong with the method end in the code? The method end always returns 1 although it should return 0 with the current data. # return 1 if each group of four consecutive elements has the same sum as the other groups # else return 0 # E.g. the current sums "35 34 34 34" should return 0 data = "2|15|14|4|12|6|7|9|8|10|11|5|13|3|2|16" arra = data.split("|"); def do_row ( arra, n ): return arra[4*n:4 + 4*n] def row_summa (row): return sum(map(int,row)) def end ( summat ): # problem here! equality = 1 for i in summat[2:5]: print "Comparing: ", summat[1], " and ", i, ".\n" if summat[1] != i: equality = 0 print equality for i in range(0,4): summat = [] summat.append( row_summa( do_row(arra,i) ) ) print row_summa ( do_row(arra,i) ) summa = 0 end(summat)
[ "I can't really tell what you're trying to do here, but I can certainly say why end() returns 1 instead of 0. In your last for loop, you reset summat to [] at the start of the loop, so at the end, summat only contains a single value (the one you most recently appended on). So when you ask for summat[2:5] on a list of a single item, Python returns an empty list (as there are no values in that range) - in which case there are no chances for equality to be set to zero because the loop in end never runs.\n", "I think you may have an off-by-one error. Remember that array indexes in Python start at 0, not 1. So where you do this:\n for i in summat[2:5]:\n print \"Comparing: \", summat[1], \" and \", i, \".\\n\"\n if summat[1] != i:\n equality = 0 \n\nyou are not looking at summat[0] at all. Try perhaps:\n for i in summat[1:4]:\n print \"Comparing: \", summat[0], \" and \", i, \".\\n\"\n if summat[0] != i:\n equality = 0 \n\n", "First off, end doesn't return 1. It returns None. It prints 1. Kind of deceptive if you're running it from the command line.\nSecond, when you call end, summat is equal to [34]. So this:\nfor i in summat[2:5]:\n\nnever even executes. It won't do anything unless summat contains at least 3 elements.\n", "You have two problems. Initialising summat to [] inside the loop, also the off by one error Greg mentioned\ndata = \"2|15|14|4|12|6|7|9|8|10|11|5|13|3|2|16\"\narra = data.split(\"|\");\n\ndef do_row ( arra, n ):\n return arra[4*n:4 + 4*n]\n\ndef row_summa (row):\n return sum(map(int,row))\n\ndef end ( summat ): # problem here!\n equality = 1 \n for i in summat[1:]: # 1 <=== IS THE SECOND ELEMENT\n print \"Comparing: \", summat[0], \" and \", i, \".\\n\"\n if summat[0] != i:\n equality = 0 \n print equality\n\nsummat = [] # <=== DO THIS BEFORE THE LOOP\nfor i in range(0,4):\n summat.append( row_summa( do_row(arra,i) ) ) \n print row_summa ( do_row(arra,i) )\n summa = 0 \n\nend(summat)\n\n", "You should also study this piece of code\ndata = \"2|15|14|4|12|6|7|9|8|10|11|5|13|3|2|16\"\narra = map(int,data.split(\"|\"))\nsummat = [sum(arra[i:i+4]) for i in range(0,len(arra),4)]\nprint summat\nprint len(set(summat))==1\n\n" ]
[ 2, 2, 1, 1, 1 ]
[]
[]
[ "list", "python" ]
stackoverflow_0001675860_list_python.txt
Q: Does python optimize modules when they are imported multiple times? If a large module is loaded by some submodule of your code, is there any benefit to referencing the module from that namespace instead of importing it again? For example: I have a module MyLib, which makes extensive use of ReallyBigLib. If I have code that imports MyLib, should I dig the module out like so import MyLib ReallyBigLib = MyLib.SomeModule.ReallyBigLib or just import MyLib import ReallyBigLib A: Python modules could be considered as singletons... no matter how many times you import them they get initialized only once, so it's better to do: import MyLib import ReallyBigLib Relevant documentation on the import statement: https://docs.python.org/2/reference/simple_stmts.html#the-import-statement Once the name of the module is known (unless otherwise specified, the term “module” will refer to both packages and modules), searching for the module or package can begin. The first place checked is sys.modules, the cache of all modules that have been imported previously. If the module is found there then it is used in step (2) of import. The imported modules are cached in sys.modules: This is a dictionary that maps module names to modules which have already been loaded. This can be manipulated to force reloading of modules and other tricks. Note that removing a module from this dictionary is not the same as calling reload() on the corresponding module object. A: As others have pointed out, Python maintains an internal list of all modules that have been imported. When you import a module for the first time, the module (a script) is executed in its own namespace until the end, the internal list is updated, and execution continues after the import statement. Try this code: # module/file a.py print "Hello from a.py!" import b # module/file b.py print "Hello from b.py!" import a There is no loop: there is only a cache lookup. >>> import b Hello from b.py! Hello from a.py! >>> import a >>> One of the beauties of Python is how everything devolves to executing a script in a namespace. A: It makes no substantial difference. If the big module has already been loaded, the second import in your second example does nothing except adding 'ReallyBigLib' to the current namespace. A: WARNING: Python does not guarantee that a module will not be initialized twice. I've stumbled upon such an issue. See discussion: http://code.djangoproject.com/ticket/8193 A: The internal registry of imported modules is the sys.modules dictionary, which maps module names to module objects. You can look there to see all the modules that are currently imported. You can also pull some useful tricks (if you need to) by monkeying with sys.modules - for example adding your own objects as pseudo-modules which can be imported by other modules. A: It is the same performance-wise. There is no JIT compiler in Python yet.
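The caching is easy to observe directly; a minimal sketch using only the standard library (Python 2 print syntax, to match the thread):

import sys
import json                  # first import: the module is executed and cached
again = __import__('json')   # equivalent to a second "import json"
print json is again          # True - both names refer to the same module object
print 'json' in sys.modules  # True - the module is registered in the cache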
Does python optimize modules when they are imported multiple times?
If a large module is loaded by some submodule of your code, is there any benefit to referencing the module from that namespace instead of importing it again? For example: I have a module MyLib, which makes extensive use of ReallyBigLib. If I have code that imports MyLib, should I dig the module out like so import MyLib ReallyBigLib = MyLib.SomeModule.ReallyBigLib or just import MyLib import ReallyBigLib
[ "Python modules could be considered as singletons... no matter how many times you import them they get initialized only once, so it's better to do:\nimport MyLib\nimport ReallyBigLib\n\nRelevant documentation on the import statement:\nhttps://docs.python.org/2/reference/simple_stmts.html#the-import-statement\n\nOnce the name of the module is known (unless otherwise specified, the term “module” will refer to both packages and modules), searching for the module or package can begin. The first place checked is sys.modules, the cache of all modules that have been imported previously. If the module is found there then it is used in step (2) of import.\n\nThe imported modules are cached in sys.modules:\n\nThis is a dictionary that maps module names to modules which have already been loaded. This can be manipulated to force reloading of modules and other tricks. Note that removing a module from this dictionary is not the same as calling reload() on the corresponding module object.\n\n", "As others have pointed out, Python maintains an internal list of all modules that have been imported. When you import a module for the first time, the module (a script) is executed in its own namespace until the end, the internal list is updated, and execution of continues after the import statement. \nTry this code:\n # module/file a.py\n print \"Hello from a.py!\"\n import b\n\n # module/file b.py\n print \"Hello from b.py!\"\n import a\n\nThere is no loop: there is only a cache lookup.\n>>> import b\nHello from b.py!\nHello from a.py!\n>>> import a\n>>>\n\nOne of the beauties of Python is how everything devolves to executing a script in a namespace.\n", "It makes no substantial difference. If the big module has already been loaded, the second import in your second example does nothing except adding 'ReallyBigLib' to the current namespace.\n", "WARNING: Python does not guarantee that module will not be initialized twice.\nI've stubled upon such issue. See discussion:\nhttp://code.djangoproject.com/ticket/8193\n", "The internal registry of imported modules is the sys.modules dictionary, which maps module names to module objects. You can look there to see all the modules that are currently imported.\nYou can also pull some useful tricks (if you need to) by monkeying with sys.modules - for example adding your own objects as pseudo-modules which can be imported by other modules.\n", "It is the same performancewise. There is no JIT compiler in Python yet.\n" ]
[ 88, 42, 9, 8, 3, 2 ]
[]
[]
[ "python", "python_import" ]
stackoverflow_0000296036_python_python_import.txt
Q: Trying to upgrade Python to 3.0 on Mac OS 10.5.8 I'm having some problems upgrading Python on my Mac. For my first attempt, I downloaded and installed the 2.6.4 dmg MacPython installer from http://python.org/download/mac/. This did install 2.6.4, and when I ran 'python' from the terminal it reports that version. However, I also had a test script where I am doing: import os, json But I get an error that the 'json' library was not found. In the script I included this shebang at the top to make it run from the terminal: #! /usr/bin/python I suspect that Apple's symlinks that point to Python 2.5 were not updated by the 2.6.4 installer, so directly from the terminal 'python' is running the newer version, but my test.py file is executing 2.5. So at this point I read a couple of other SO pages on doing this upgrade, and people recommended using 3rd party packages that sit side-by-side so as not to break the OS-level dependencies on v2.5. I then found ActivePython offered a 3.x installer (which was also recommended on another SO page). I installed that, but 'python' still shows 2.6.4 and my script still can't find the json library. Finally, I'm baffled at how to safely remove MacPython (the Mac installer I mentioned above). There's one sentence on the page that says to remove some things that seem pretty vital to Python on the Mac. Quote: A MacPython 2.5 folder in your Applications folder. In here you find IDLE, the development environment that is a standard part of official Python distributions; PythonLauncher, which handles double-clicking Python scripts from the Finder; and the “Build Applet” tool, which allows you to package Python scripts as standalone applications on your system. A framework /Library/Frameworks/Python.framework, which includes the Python executable and libraries. The installer adds this location to your shell path. To uninstall MacPython, you can simply remove these three things. A symlink to the Python executable is placed in /usr/local/bin/. So now I have 3 versions of Python installed and I'm not sure how to resolve this stupid mess. A: First, /usr/bin/python should always point to the Apple-supplied python and on 10.5 that means python2.5. Don't change this! When you installed the python.org python2.6, by default it installs symlinks in /usr/local/bin/ so one way to invoke it is /usr/local/bin/python2.6 or, most likely, just python2.6. Since json was added to the python library in python 2.6, you'll find the json module is there. One way to solve your original problem then is to change the shebang line to be: #!/usr/bin/env python2.6 Also by default, the python.org installer updates your shell profile to add its bin directory to your $PATH, which is why typing python probably now invokes python2.6. You shouldn't need to but if you really want to remove all traces of the python.org 2.6: Delete the extra lines at the end of your .bash_profile and/or .profile by reverting to .bash_profile.pysave and .profile.pysave. Remove the python2.6 framework directory: sudo rm -r /Library/Frameworks/Python.framework/Versions/2.6 Remove IDLE and the extras installed in /Applications: sudo rm -r /Applications/Python\ 2.6 Also there's nothing wrong with moving on to Python 3. For the moment, both Python 2 and Python 3 are being actively developed; search the archives for the various pros and cons. However, Python 3.0 should not be used. 
Not surprisingly for something that major, Python 3.0 had a number of serious first-time bugs, so with the release of Python 3.1, 3.0 support was immediately dropped.
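A quick way to confirm which interpreter actually runs a given script is a small diagnostic sketch like this one (Python 2 syntax; the shebang follows the answer's suggestion):

#!/usr/bin/env python2.6
import sys
print sys.version     # version string of the interpreter running this script
print sys.executable  # full path to that interpreter
import json           # succeeds on 2.6+, raises ImportError on Apple's 2.5
print json.__file__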
Trying to upgrade Python to 3.0 on Mac OS 10.5.8
I'm having some problems upgrading Python on my Mac. For my first attempt, I downloaded and installed the 2.6.4 dmg MacPython installer from http://python.org/download/mac/. This did install 2.6.4, and when I ran 'python' from the terminal it reports that version. However, I also had a test script where I am doing: import os, json But I get an error that the 'json' library was not found. In the script I included this shebang at the top to make it run from the terminal: #! /usr/bin/python I suspect that Apple's symlinks that point to Python 2.5 were not updated by the 2.6.4 installer, so directly from the terminal 'python' is running the newer version, but my test.py file is executing 2.5. So at this point I read a couple of other SO pages on doing this upgrade, and people recommended using 3rd party packages that sit side-by-side so as not to break the OS-level dependencies on v2.5. I then found ActivePython offered a 3.x installer (which was also recommended on another SO page). I installed that, but 'python' still shows 2.6.4 and my script still can't find the json library. Finally, I'm baffled at how to safely remove MacPython (the Mac installer I mentioned above). There's one sentence on the page that says to remove some things that seem pretty vital to Python on the Mac. Quote: A MacPython 2.5 folder in your Applications folder. In here you find IDLE, the development environment that is a standard part of official Python distributions; PythonLauncher, which handles double-clicking Python scripts from the Finder; and the “Build Applet” tool, which allows you to package Python scripts as standalone applications on your system. A framework /Library/Frameworks/Python.framework, which includes the Python executable and libraries. The installer adds this location to your shell path. To uninstall MacPython, you can simply remove these three things. A symlink to the Python executable is placed in /usr/local/bin/. So now I have 3 versions of Python installed and I'm not sure how to resolve this stupid mess.
[ "First, /usr/bin/python should always point to the Apple-supplied python and on 10.5 that means python2.5. Don't change this!\nWhen you installed the python.org python2.6, by default it installs symlinks in /usr/local/bin/ so one way to invoke it is /usr/local/bin/python2.6 or, most likely, just python2.6. Since json was added to the python library in python 2.6, you'll find the json module is there. One way to solve your orignal problem then is to change the shebang line to be:\n#!/usr/bin/env python2.6\n\nAlso by default, the python.org installer updates your shell profile to add its bin directory to your $PATH, which is why typing python probably now invokes python2.6.\nYou shouldn't need to but if you really want to remove all traces of the python.org 2.6:\n\nDelete the extra lines at the end of your .bash_profile and/or .profile by reverting to .bash_profile.pysave and .profile.pysave.\nRemove the python2.6 framework directory:\nsudo rm -r /Library/Frameworks/Python.framework/Versions/2.6\nRemove IDLE and the extras installed in /Applications:\nsudo rm -r /Applications/Python\\ 2.6\n\nAlso there's nothing wrong with moving on to Python 3. For the moment, both Python 2 and Python 3 are being actively developed; search the archives for the various pros and cons. However, Python 3.0 should not be used. Not surprisingly for something that major, Python 3.0 had a number of serious first-time bugs so, with the release of Python 3.1, 3.0 support was immediately dropped.\n" ]
[ 5 ]
[]
[]
[ "macos", "python" ]
stackoverflow_0001676831_macos_python.txt
Q: Pygtk StatusIcon not loading? I'm currently working on a small script that needs to use gtk.StatusIcon(). For some reason, I'm getting some weird behavior with it. If I go into the Python interactive shell and type: >> import gtk >> statusIcon = gtk.status_icon_new_from_file("img/lin_idle.png") Pygtk does exactly what it should do, and shows an icon (lin_idle.png) in the system tray: However, if I try to do the same task in my script: def gtkInit(self): self.statusIcon = gtk.status_icon_new_from_file("img/lin_idle.png") When gtkInit() gets called, I see this instead: I made sure I ran the script in the same working directory as the interactive Python shell, so I'm pretty sure it's finding the image; I'm stumped... Any ideas anyone? Thanks in advance. Update: For some reason or another, after calling gtk.status_icon_new_from_file() a few times in the script, it does eventually create the icon, but this issue still remains, unfortunately. Does anyone at all have any ideas as to what could be going wrong? As requested: Here's the full script. This is actually an application that I'm in the very early stages of making, but it does work at the moment if you get it set up correctly, so feel free to play around with it if you want (and also help me!), you just need to get an imgur developer key and put it in linup_control.py Linup.py # # Linup - A dropbox alternative for Linux! # Written by Nakedsteve # Released under the MIT License # import os import time import ConfigParser from linup_control import Linup cfg = ConfigParser.RawConfigParser() # See if we have a .linuprc file home = os.path.expanduser("~") if not os.path.exists(home+"/.linuprc"): # Nope, so let's make one cfg.add_section("paths") cfg.set("paths","watch_path", home+"/Desktop/screenshot1.png") # Now write it to the file with open(home+"/.linuprc","wb") as configfile: cfg.write(configfile) else: cfg.read(home+"/.linuprc") linup = Linup() # Create the GUI (status icon, menus, etc.) linup.gtkInit() # Enter the main loop, where we check to see if there's a shot to upload # every 1 second path = cfg.get("paths","watch_path") while 1: if(os.path.exists(path)): linup.uploadImage(path) url = linup.getURL() linup.toClipboard(url) linup.json = "" print "Screenshot uploaded!" os.remove(path) else: # If you're wondering why I'm using time.sleep() # it's because I found that without it, my CPU remained # at 50% at all times while running linup. If you have a better # method for doing this, please contact me about it (I'm relatively new at python) time.sleep(1) linup_control.py import gtk import json import time import pycurl import os class Linup: def __init__(self): self.json = "" def uploadImage(self, path): # Set the status icon to busy self.statusIcon.set_from_file("img/lin_busy.png") # Create new pycurl instance cu = pycurl.Curl() # Set the POST variables to the image and dev key vals = [ ("key","*************"), ("image", (cu.FORM_FILE, path)) ] # Set the URL to send to cu.setopt(cu.URL, "http://imgur.com/api/upload.json") # This lets us get the json returned by imgur cu.setopt(cu.WRITEFUNCTION, self.resp_callback) cu.setopt(cu.HTTPPOST, vals) # Do eet! cu.perform() cu.close() # Set the status icon to done... 
self.statusIcon.set_from_file("img/lin_done.png") # Wait 3 seconds time.sleep(3) # Set the icon to idle self.statusIcon.set_from_file("img/lin_idle.png") # Used for getting the response json from imgur def resp_callback(self, buff): self.json += buff # Extracts the image URL from the json data def getURL(self): js = json.loads(self.json) return js['rsp']['image']['original_image'] # Inserts the text variable into the clipboard def toClipboard(self, text): cb = gtk.Clipboard() cb.set_text(text) cb.store() # Initiates the GUI elements of Linup def gtkInit(self): self.statusIcon = gtk.StatusIcon() self.statusIcon.set_from_file("img/lin_idle.png") A: You need to call the gtk.main function like qba said, however the correct way to call a function every N milliseconds is to use the gobject.timeout_add function. In most cases you would want to have anything that could tie up the gui in a separate thread, however in your case where you just have an icon you don't need to. Unless you are planning on making the StatusIcon have a menu. Here is the part of Linup.py that I changed: # Enter the main loop, where we check to see if there's a shot to upload # every 1 second path = cfg.get("paths","watch_path") def check_for_new(): if(os.path.exists(path)): linup.uploadImage(path) url = linup.getURL() linup.toClipboard(url) linup.json = "" print "Screenshot uploaded!" os.remove(path) # Return True to keep calling this function, False to stop. return True if __name__ == "__main__": gobject.timeout_add(1000, check_for_new) gtk.main() You will have to import gobject somewhere too. I don't know for sure if this works because I can't get pycurl installed. EDIT: In linup_control.py, I would try changing # Wait 3 seconds time.sleep(3) # Set the icon to idle self.statusIcon.set_from_file("img/lin_idle.png") to gobject.timeout_add(3000, self.statusIcon.set_from_file, "img/lin_idle.png") A: You made two mistakes. One is important one is not. At first if you want to use stock icon use .set_from_stock( stock_id ) method. If you want to use your own icon then the .set_from_file(/path/to/img.png) is ok. The other think witch is the probably the main problem is that when you write gtk application you have to call gtk.main() function. This is main gtk loop where all signal handling/window drawing and all other gtk stuff is done. If you don't do this, simply your icon is not drawing. The solution in your case is to make two threads - one for gui, second for your app. In the first one you simply call gtk.main(). In second you put your main program loop. Of course when you call python program you have one thread already started:P If you aren't familiar whit threads there is other solution. Gtk have function which calls function specified by you with some delay: def call_me: print "Hello World!" gtk.timeout_add( 1000 , call_me ) gtk.timeout_add( 1000 , call_me ) gtk.main() But it seems to be deprecated now. Probably they have made a better solution.
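Condensing the two answers into one runnable skeleton, a minimal PyGTK 2.x sketch; the icon path is the one from the question, and the callback body is a placeholder for the upload check:

import gtk, gobject

icon = gtk.StatusIcon()
icon.set_from_file("img/lin_idle.png")  # drawn once the main loop is running

def check_for_new():
    # poll for a new screenshot here instead of a blocking while/sleep loop
    return True  # returning True keeps the 1-second timer alive

gobject.timeout_add(1000, check_for_new)
gtk.main()  # the GTK main loop, which actually draws the icon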
Pygtk StatusIcon not loading?
I'm currently working on a small script that needs to use gtk.StatusIcon(). For some reason, I'm getting some weird behavior with it. If I go into the Python interactive shell and type: >> import gtk >> statusIcon = gtk.status_icon_new_from_file("img/lin_idle.png") Pygtk does exactly what it should do, and shows an icon (lin_idle.png) in the system tray: However, if I try to do the same task in my script: def gtkInit(self): self.statusIcon = gtk.status_icon_new_from_file("img/lin_idle.png") When gtkInit() gets called, I see this instead: I made sure I ran the script in the same working directory as the interactive Python shell, so I'm pretty sure it's finding the image; I'm stumped... Any ideas anyone? Thanks in advance. Update: For some reason or another, after calling gtk.status_icon_new_from_file() a few times in the script, it does eventually create the icon, but this issue still remains, unfortunately. Does anyone at all have any ideas as to what could be going wrong? As requested: Here's the full script. This is actually an application that I'm in the very early stages of making, but it does work at the moment if you get it set up correctly, so feel free to play around with it if you want (and also help me!), you just need to get an imgur developer key and put it in linup_control.py Linup.py # # Linup - A dropbox alternative for Linux! # Written by Nakedsteve # Released under the MIT License # import os import time import ConfigParser from linup_control import Linup cfg = ConfigParser.RawConfigParser() # See if we have a .linuprc file home = os.path.expanduser("~") if not os.path.exists(home+"/.linuprc"): # Nope, so let's make one cfg.add_section("paths") cfg.set("paths","watch_path", home+"/Desktop/screenshot1.png") # Now write it to the file with open(home+"/.linuprc","wb") as configfile: cfg.write(configfile) else: cfg.read(home+"/.linuprc") linup = Linup() # Create the GUI (status icon, menus, etc.) linup.gtkInit() # Enter the main loop, where we check to see if there's a shot to upload # every 1 second path = cfg.get("paths","watch_path") while 1: if(os.path.exists(path)): linup.uploadImage(path) url = linup.getURL() linup.toClipboard(url) linup.json = "" print "Screenshot uploaded!" os.remove(path) else: # If you're wondering why I'm using time.sleep() # it's because I found that without it, my CPU remained # at 50% at all times while running linup. If you have a better # method for doing this, please contact me about it (I'm relatively new at python) time.sleep(1) linup_control.py import gtk import json import time import pycurl import os class Linup: def __init__(self): self.json = "" def uploadImage(self, path): # Set the status icon to busy self.statusIcon.set_from_file("img/lin_busy.png") # Create new pycurl instance cu = pycurl.Curl() # Set the POST variables to the image and dev key vals = [ ("key","*************"), ("image", (cu.FORM_FILE, path)) ] # Set the URL to send to cu.setopt(cu.URL, "http://imgur.com/api/upload.json") # This lets us get the json returned by imgur cu.setopt(cu.WRITEFUNCTION, self.resp_callback) cu.setopt(cu.HTTPPOST, vals) # Do eet! cu.perform() cu.close() # Set the status icon to done... 
self.statusIcon.set_from_file("img/lin_done.png") # Wait 3 seconds time.sleep(3) # Set the icon to idle self.statusIcon.set_from_file("img/lin_idle.png") # Used for getting the response json from imgur def resp_callback(self, buff): self.json += buff # Extracts the image URL from the json data def getURL(self): js = json.loads(self.json) return js['rsp']['image']['original_image'] # Inserts the text variable into the clipboard def toClipboard(self, text): cb = gtk.Clipboard() cb.set_text(text) cb.store() # Initiates the GUI elements of Linup def gtkInit(self): self.statusIcon = gtk.StatusIcon() self.statusIcon.set_from_file("img/lin_idle.png")
[ "You need to call the gtk.main function like qba said, however the correct way to call a function every N milliseconds is to use the gobject.timeout_add function. In most cases you would want to have anything that could tie up the gui in a separate thread, however in your case where you just have an icon you don't need to. Unless you are planning on making the StatusIcon have a menu. Here is the part of Linup.py that I changed:\n# Enter the main loop, where we check to see if there's a shot to upload\n# every 1 second\npath = cfg.get(\"paths\",\"watch_path\")\ndef check_for_new():\n\n if(os.path.exists(path)):\n linup.uploadImage(path)\n url = linup.getURL()\n linup.toClipboard(url)\n linup.json = \"\"\n\n print \"Screenshot uploaded!\"\n os.remove(path)\n # Return True to keep calling this function, False to stop. \n return True\n\nif __name__ == \"__main__\":\n\n gobject.timeout_add(1000, check_for_new)\n\n gtk.main()\n\nYou will have to import gobject somewhere too.\nI don't know for sure if this works because I can't get pycurl installed.\nEDIT: In linup_control.py, I would try changing\n# Wait 3 seconds\ntime.sleep(3)\n# Set the icon to idle\nself.statusIcon.set_from_file(\"img/lin_idle.png\")\n\nto \ngobject.timeout_add(3000, self.statusIcon.set_from_file, \"img/lin_idle.png\")\n\n", "You made two mistakes. One is important one is not.\nAt first if you want to use stock icon use .set_from_stock( stock_id ) method. If you want to use your own icon then the .set_from_file(/path/to/img.png) is ok.\nThe other think witch is the probably the main problem is that when you write gtk application you have to call gtk.main() function. This is main gtk loop where all signal handling/window drawing and all other gtk stuff is done. If you don't do this, simply your icon is not drawing.\nThe solution in your case is to make two threads - one for gui, second for your app. In the first one you simply call gtk.main(). In second you put your main program loop. Of course when you call python program you have one thread already started:P\nIf you aren't familiar whit threads there is other solution. Gtk have function which calls function specified by you with some delay:\ndef call_me:\n print \"Hello World!\"\n gtk.timeout_add( 1000 , call_me )\n\ngtk.timeout_add( 1000 , call_me )\ngtk.main()\n\nBut it seems to be deprecated now. Probably they have made a better solution.\n" ]
[ 4, 1 ]
[]
[]
[ "gtk", "pygtk", "python" ]
stackoverflow_0001659085_gtk_pygtk_python.txt
Q: algorithm for list identification and parsing I have data which in theory is a list, but historically has been input by the user as a free-form text field. Now I need to separate each item of the list so that each element can be analysed. Simplified examples of my data as input by users: one, two, three, four, five one. two. three, four. five. "I start with one, then do two, maybe three and four then five" one two three four five. one, two. three four five one two three four - five "not even a list, no list-elements here! but list item separators may appear. grrr" So, that's more or less what the data looks like. In reality a list item could be several words long. I need to process these lists (of which there are thousands) such that I end up with arrays like this: array[0] = "one" array[1] = "two" array[n] = n I accept that sometimes my algorithm will completely fail to parse the list; I don't need a 100% success rate, and 75% would be good. False positives are going to be very expensive for me, so I would rather reject a list completely than generate a list that does not contain real data - assume some users type in meaningless gibberish. I have some ideas around trying to identify which separator(s) is being used and how regularly data is separated in relation to the size of the content. I prefer Java or Python; however, any solution would be welcome :-) A: The first step to solving this problem is to analyze, in detail, how it is that humans solve this problem. I'd break the problem down into two parts. How do humans distinguish between lists and non-lists? For example, is it because non-lists are grammatical English sentences? If that's the case, you may be able to use one of the available natural language processing toolkits to distinguish between lists and non-lists. How do humans identify the delimiters and list elements in lists? For example, do they recognize the list elements because of some particular domain knowledge? Or do they just recognize one of a small set of common delimiters? Are list elements always single words? If not, in what cases are they multiple words? I'd also take a good look at several hundred samples, to see if there are any COMMON patterns that can be easily identified and parsed. If, for example, 30% of your entries are simple comma-separated lists then a regular expression will trivially identify and parse them. Perhaps a small set of regular expressions will address a large part of your corpus. Finally, I assume that currently the data is not only entered and recognized by humans but also consumed by humans. Is your reason for breaking items into lists so that humans can be removed from the loop, or just to make the work easier for them? If the latter, I'd recommend providing them with BOTH the broken-up list elements and, as a backup, the originally-entered text. In other words, hedge your bets in case you get it wrong. A: If you can't define your data ("The words can be anything; there is no way to know beforehand a dictionary of what any individual list can contain. They will not be only numbers... it could be a list of anything") then you have serious problems. Specifically, if you can't define your data, your problem cannot be solved. You can try playing with nltk. You may be able to discard the "noise words" (",", ".", "i", "start", "with", "then", "do", etc.) What's left may be this undefinable "words can be anything" that's left over. Until you can better define your data, you're probably doomed to a lot of struggle. 
A: I don't know if I understand your problem. If you want to extract alpha-numeric strings from a messed-up string in Python it would be: >>> import re >>> re.split('\W+','abaa, asodf ?. poasid - paosfi sec') ['abaa', 'asodf', 'poasid', 'paosfi', 'sec'] Or if you know the separators: >>> re.split('[,. -]+','abaa, asodf, poasid - paosfi sec') ['abaa', 'asodf', 'poasid', 'paosfi', 'sec'] A: Rather than focusing on the code, how about the method? Building a little on what swillden said... If your lists are consumed by human users, you could ask them to correct you when you make a mistake (this correction is visible either to the person entering the text or to a later user viewing the text). If a given input looks a lot like a list but not enough to be sure, you show them the list and the raw input and ask them to choose. To automatically categorise inputs as lists or as text you could create several metrics to base your decision on: Given the separators (i.e. [' ', '\t', ',', '.', 'and']) how many does this phrase use? Expect one or two. Which ones? Is the input composed of fragments (use some sort of grammar system) - fragments tend to indicate lists. Does this input field (or context in your input) tend to have list items? The words in the list itself (some words might turn out to always mean a sentence or a list in your domain) You then pass this information into a Bayesian filter and train it using your user's suggestions. Most of the items I mention would boil down to special "keywords" that you tag an item with before you pass it into the filter. If the filter has a clear answer either way, treat it as a list or string. If the filter is uncertain, ask the user and use their answer to train the filter. Edit You could always train the system manually (i.e. without exposing your system to the users) by first classifying lists using your existing scripts and then checking them by hand. Take a list of 500 inputs, run a filter looking for comma-separated or other easy lists and classify them as lists. Train the Bayesian filter on those (with everything else non-list) and then check the output by hand for all 500 for further training. Each day someone could receive an email with all the edge cases for that day and could click links in the email to correct the system if necessary. As a side issue (relating to OP comment), in general Bayesian filters are much easier to implement, debug, test, analyse and scale than Neural networks. A: In Java, a string tokenizer will do this (i.e. StringTokenizer(inputString, delimiterList)) StringTokenizer st = new StringTokenizer( "A B|C-D", " |-" ); while ( st.hasMoreTokens() ) { System.out.println( st.nextToken() ); } prints A B C D A: The following will "parse" your input string into sequences of "word" characters separated by non-word characters. String input = ... String[] parts = input.split("\\W+"); I don't know how you are going to distinguish a list from gibberish. I think you will need to explain your problem domain some more ... EDIT: If you cannot define the rules which you (as a human) use to distinguish a list from gibberish, then this problem is essentially unsolvable. Computers cannot do magic you know ... Maybe you should just use the program to deal with the subset that are "definitely" lists, and classify the other ones by hand. A: Either you know your dictionary of words or you have a precedence order for list delimiters. Otherwise the problem is too ill-defined for a computer to handle. I suppose your precedence could be commas, dots, hyphens, spaces. 
So, this means that you split by commas in preference to splitting by dots and so on. Alternatively you could keep on splitting by each successive delimiter, until you hit a delimiter that isn't in the text. A: I'm not sure what the best answer really is, but if you need to have few false positives, then perhaps what you should do is define a few patterns that are very likely to be lists, and strictly reject every other datum. patterns = [ re.compile(r'^\s*(\w+)(\s*,\s*(\w+))*\s*$'), re.compile(r'^\s*(\w+)(\s*\.\s*(\w+))*\s*$'), re.compile(r'^\s*(\w+)(\s*,\s*(\w+))*\s+and\s+(\w+)\s*$') ] acceptSet = [ line for line in candidateSet if any(pattern.match(line) for pattern in patterns)] A: A pyparsing stab at sifting the wheat from the chaff... rawdata = """\ one, two, three, four, five one. two. three, four. five. "I start with one, then do two, maybe three and four then five" one two three four five. one, two. three four five one two three four - five "not even a list, no list-elements here! but list item separators may appear. grrr" a dog with a bone is a beautiful twosome""".splitlines() from pyparsing import oneOf, WordStart, CharsNotIn, alphas, LineEnd options = (WordStart() + oneOf("one two three four five") + (CharsNotIn(alphas)|LineEnd())) for userinput in rawdata: print userinput print [opt[0] for opt in options.searchString(userinput)] print Prints (note the added line with hidden 'one' and 'two' substrings that would not be desirable): one, two, three, four, five ['one', 'two', 'three', 'four', 'five'] one. two. three, four. five. ['one', 'two', 'three', 'four', 'five'] "I start with one, then do two, maybe three and four then five" ['one', 'two', 'three', 'four', 'five'] one ['one'] two ['two'] three ['three'] four ['four'] five. ['five'] one, two. three four five ['one', 'two', 'three', 'four', 'five'] one two three four - five ['one', 'two', 'three', 'four', 'five'] "not even a list, no list-elements here! but list item separators may appear. grrr" [] a dog with a bone is a beautiful twosome []
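One way to turn the question's separator-frequency idea into code, as a deliberately naive sketch: the candidate separators and the rejection rule are illustrative, and it will still accept some prose.

def try_parse(line, seps=',.-'):
    counts = [(line.count(s), s) for s in seps]
    n, sep = max(counts)  # most frequent candidate delimiter
    if n < 2:
        return None       # too few separators: reject as not a list
    items = [item.strip(' .') for item in line.split(sep)]
    return [item for item in items if item] or None

print try_parse("one, two, three, four, five")  # ['one', 'two', 'three', 'four', 'five']
print try_parse("not a list at all")            # None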
algorithm for list identification and parsing
I have data which in theory is a list, but historically has been input by the user as a free-form text field. Now I need to separate each item of the list so that each element can be analysed. Simplified examples of my data as input by users: one, two, three, four, five one. two. three, four. five. "I start with one, then do two, maybe three and four then five" one two three four five. one, two. three four five one two three four - five "not even a list, no list-elements here! but list item separators may appear. grrr" So, that's more or less what the data looks like. In reality a list item could be several words long. I need to process these lists (of which there are thousands) such that I end up with arrays like this: array[0] = "one" array[1] = "two" array[n] = n I accept that sometimes my algorithm will completely fail to parse the list; I don't need a 100% success rate, 75% would be good. False positives are going to be very expensive for me so I would rather reject a list completely than generate a list that does not contain real data - assume some users type in meaningless gibberish. I have some ideas around trying to identify which separator(s) is being used and how regularly data is separated in relation to the size of the content. I prefer Java or Python, however any solution would be welcome :-)
[ "The first step to solving this problem is to analyze, in detail, how it is that humans solve this problem. I'd break the problem down into two parts.\n\nHow do humans distinguish between lists and non-lists? For example, is it because non-lists are grammatical English sentences? If that's the case, you may be able to use one of the available natural language processing toolkits to distinguish between lists and non-lists.\nHow do humans identify the delimiters and list elements in lists? For example, do they recognize the list elements because of some particular domain knowledge? Or do they just recognize one of a small set of common delimiters? Are list elements always single words? If not, in what cases are they multiple words?\n\nI'd also take a good look at several hundred samples, to see if there are any COMMON patterns that can be easily identified and parsed. If, for example, 30% of your entries are simple comma-separated lists then a regular expression will trivially identify and parse them. Perhaps a small set of regular expressions will address a large part of your corpus.\nFinally, I assume that currently the data is not only entered and recognized by humans but also consumed by humans. Is your reason for breaking items into lists so that humans can be removed from the loop, or just to make the work easier for them? If the latter, I'd recommend providing them with BOTH the broken-up list elements and, as a backup, the originally-entered text. In other words, hedge your bets in case you get it wrong.\n", "If you can't define your data (\"The words can be anything, I there is no way to know beforehand a dictionary of what any individual list can contain. They will not be only numbers... it could be a list of anything\") then you have serious problems.\nSpecifically, if you can't define your data, your problem cannot be solved.\nYou can try playing with nltk.\nYou may be able to discard the \"noise words\" (\",\", \".\", \"i\", \"start\", \"with\", \"then\", \"do\", etc.) What's left may be this undefinable \"words can be anything\" that's left over. \nUntil you can better define your data, you're probably doomed to a lot of struggle.\n", "I don't know if I understand your problem. If you want to extract alpha-numeric strings from messed string in python it would be:\n>>> import re\n>>> re.split('\\W+','abaa, asodf ?. poasid - paosfi sec')\n['abaa', 'asodf', 'poasid', 'paosfi', 'sec']\n\nOr if you know the seperators:\n>>> re.split('[,. -]+','abaa, asodf, poasid - paosfi sec')\n['abaa', 'asodf', 'poasid', 'paosfi', 'sec']\n\n", "Rather than focusing on the code, how about the method. Building a little on what swillden said...\nIf your lists are consumed by human users, you could ask them to correct you when you make a mistake (this correction is visible either to the person entering the text or to a later user viewing the text). If a given input looks a lot like a list but not enough to be sure, you show them the list and the raw input and ask them to choose.\nTo automatically categorise inputs as lists or as text you could create several metrics to base your decision on :\n\nGiven the separators (i.e [' ', '\\t', ',', '.', 'and']) how many does this phrase use? expect one or two. 
Which ones?\nIs the input composed of fragments (use some sort of grammar system) - fragments tend to indicate lists.\nDoes this input field (or context in your input) tend to have list items\nThe words in the list itself (some words might turn out to always mean a sentence or a list in your domain)\n\nYou then pass this information into a Bayesian filter and train it using your user's suggestions. Most of the items I mention would boil into special \"keywords\" that you tag an item with before you pass it into the filter. If the filter has a clear answer either way, treat it as a list or string. If the filter is uncertain, ask the user and use their answer to train the filter.\nEdit\nYou could always train the system manually (i.e without exposing your system to the users) by first classifying lists using your existing scripts and then checking them by hand. Take a list of 500 inputs, run a filter looking for , or other easy lists and classify them as lists. Train the Bayesian filter on those (with everything else non-list) and then check the output by hand for all 500 for further training. \nEach day someone could receive an email with all the edge cases for that day and could clink links in the email to correct the system if necessary.\nAs a side issue (relating to OP comment), in general Bayesian filters are much easier to implement, debug, test, analyse and scale than Neural networks. \n", "In java, a string tokenizer will do this (i.e. StringTokenizer(inputString, delimiterList))\nStringTokenizer st = new StringTokenizer( \"A B|C-D\", \" |-\" );\nwhile ( st.hasMoreTokens() ) {\n System.out.println( st.nextToken() );\n}\n\nprints \nA\nB\nC\nD\n", "The following will \"parse\" your input string into sequences of \"word\" characters separated by non-word characters. \nString input = ...\nString[] parts = input.split(\"[^\\w]*\");\n\nI don't know how you are going to distinguish a list from gibberish. I think you will need to explain your problem domain some more ... \nEDIT: If you cannot define the rules which you (as a human) use to distinguish a list from gibberish, then this problem is essentially unsolvable. Computers cannot do magic you know ...\nMaybe you should just use the program to deal with the subset that are \"definitely\" lists, and classify the other ones by hand.\n", "Either you know your dictionary of words or you have a precedence order for list delimiters. Otherwise the problem is too ill defined for a computer to handle. \nI suppose your precedence could be commas, dots, hyphens, spaces. So, this means that you split by commas in preference to splitting by dots and so on.\nAlternatively you could keep on splitting by each successive delimiter, until you hit a delimiter that isn't in the text.\n", "I'm not sure what the best answer really is, But if you need to have few false positives, then perhaps what you should do is define a few patterns that are very likely to be lists, and strictly reject every other datum. \npatterns = [\n re.compile(r'^\\s*(\\w+)(\\s*,\\s*(\\w+))*\\s*$'), \n re.compile(r'^\\s*(\\w+)(\\s*\\.\\s*(\\w+))*\\s*$'), \n re.compile(r'^\\s*(\\w+)(\\s*,\\s*(\\w+))*\\s+and\\s+(\\w+)\\s*^$')\n]\nacceptSet = [ line for line in candidateSet if \n any(pattern.match(line) for pattern in patterns)] \n\n", "A pyparsing stab at sift wheat from the chaff...\nrawdata = \"\"\"\\\none, two, three, four, five\none. two. three, four. five.\n\"I start with one, then do two, maybe three and four then five\"\none \ntwo \nthree \nfour \nfive. \none, two. 
three four five\none two three four - five\n\"not even a list, no list-elements here! but list item separators may appear. grrr\"\na dog with a bone is a beautiful twosome\"\"\".splitlines()\n\nfrom pyparsing import oneOf, WordStart, CharsNotIn, alphas, LineEnd\noptions = (WordStart() + oneOf(\"one two three four five\") + (CharsNotIn(alphas)|LineEnd()))\n\nfor userinput in rawdata:\n print userinput\n print [opt[0] for opt in options.searchString(userinput)]\n print\n\nPrints (note the added line with hidden 'one' and 'two' substrings that would not be desirable):\none, two, three, four, five\n['one', 'two', 'three', 'four', 'five']\n\none. two. three, four. five.\n['one', 'two', 'three', 'four', 'five']\n\n\"I start with one, then do two, maybe three and four then five\"\n['one', 'two', 'three', 'four', 'five']\n\none \n['one']\n\ntwo \n['two']\n\nthree \n['three']\n\nfour \n['four']\n\nfive. \n['five']\n\none, two. three four five\n['one', 'two', 'three', 'four', 'five']\n\none two three four - five\n['one', 'two', 'three', 'four', 'five']\n\n\"not even a list, no list-elements here! but list item separators may appear. grrr\"\n[]\n\na dog with a bone is a beautiful twosome\n[]\n\n" ]
[ 3, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "java", "list", "parsing", "python" ]
stackoverflow_0001673729_java_list_parsing_python.txt
Q: How to enforce unicode arguments for methods? I have a model class with getter and setter methods, and the occasional static method. I would like to enforce the usage of unicode strings as arguments for specific methods and using decorators was the first idea I had. Now I have something like this: import types class require_unicode(object): def __init__(self, function): self.f = function def __call__(self, string): if not isinstance(string, types.UnicodeType): raise ValueError('String is not unicode') return self.f(string) class Foo(object): something = 'bar' @staticmethod @require_unicode def do_another(self, string): return ' '.join(['baz', string]) @require_unicode def set_something(self, string): self.something = string foo = Foo() foo.set_something('ValueError is raised') foo.set_something(u'argument count error') foo.do_another('ValueError is raised') foo.do_another(u'argument count error') In the above code the method call inside decorator's __call__ fails due to wrong argument count (because the 'foo' object ref is missing?). Before doing something stupid I wanted to ask you guys. How should this be done? A: I think your problem is with the @staticmethod decorator, not with your require_unicode decorator. Static methods, unlike class methods, don't receive the reference to the class as the first argument, so your argument signature is wrong. You must either change do_another to be a @classmethod, or remove self from the arguments. EDIT: and, mind you, @classmethod-decorated methods receive the class as the first argument, while instance methods receive the reference to the instance of the class (self). So it is a good idea to name the first argument to a classmethod "cls" or something, not "self" so it doesn't confuse anyone. A: I would think that this was unpythonic - you should never check the type of your arguments, but instead check that they have the necessary methods and attributes. The simplest way to do this is by assuming they are there and getting an exception otherwise, but I guess you could do getattr too. Just don't check the type of things. A: Another option is to use assertions. It depends on whether passing a non-unicode type into your methods should be considered a programming error that should be evident during development. import types class Foo: def set_something(self, string): assert isinstance(string, types.UnicodeType), 'String is not unicode' self.something = string This will raise an AssertionError exception whenever string is not of type unicode, but only when the Python interpreter is run in "debug" mode. If you run Python with the -O option, the assert is efficiently ignored by the interpreter.
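A closure-based variant of the decorator sidesteps the binding problem entirely, because plain functions are descriptors and therefore still become bound methods. This is a sketch of my own, not from the answers: it checks the last positional argument, so it works unchanged on instance methods, static methods, and free functions.

import functools
import types

def require_unicode(function):
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        # The string is the last positional argument whether or not
        # an implicit self is passed in front of it.
        if not isinstance(args[-1], types.UnicodeType):
            raise ValueError('String is not unicode')
        return function(*args, **kwargs)
    return wrapper

class Foo(object):
    something = 'bar'

    @require_unicode
    def set_something(self, string):
        self.something = string

Foo().set_something(u'works')   # fine
Foo().set_something('boom')     # raises ValueError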
How to enforce unicode arguments for methods?
I have a model class with getter and setter methods, and the occasional static methods. I would like to enforce the usage of unicode strings as arguments for specific methods and using decorators was the first idea I had. Now I have something like this: import types class require_unicode(object): def __init__(self, function): self.f = function def __call__(self, string): if not isinstance(string, types.UnicodeType): raise ValueError('String is not unicode') return self.f(string) class Foo(object): something = 'bar' @staticmethod @require_unicode def do_another(self, string): return ' '.join(['baz', string]) @require_unicode def set_something(self, string): self.something = string foo = Foo() foo.set_something('ValueError is raised') foo.set_something(u'argument count error') foo.do_another('ValueError is raised') foo.do_another(u'argument count error') In the above code the method call inside decorator's __call__ fails due to wrong argument count (because the 'foo' object ref is missing?). Before doing something stupid I wanted to ask you guys. How this should be done?
[ "I think, your problem is with the @staticmethod decorator, not with your require_unicode decorator. Staticmethods, unlike classmethods don't receive the reference to the class as the first argument, so your argument signature is wrong.\nYou must either change do_another to be a @classmethod, or remove self from the arguments.\nEDIT: and, mind you, - @classmethod-decorated methods receive the class as the first argument, while instance methods receive the reference to the instance of the class (self). So it is a good idea to name the first argument to a classmethod \"cls\" or something, not \"self\" so it doesn't confuse anyone.\n", "I would think that this was unpythonic - you should never check the type of your arguments, but instead check that they have the necessary methods and attributes. The simplest way to do this is by assuming they are there and getting an exception otherwise, but I guess you could do getattr too. Just don't check the type of things.\n", "Another option is to use assertions. It depends on whether passing a non-unicode type into your methods should be considered a programming error that should be evident during development.\nimport types\nclass Foo:\n def set_something(self, string):\n assert isinstance(string, types.UnicodeType), 'String is not unicode'\n self.something = string\n\nThis will raise an AssertionError exception whenever string is not of type unicode, but only when the Python interpretter is run in \"deubg\" mode. If you run Python with the -O option, the assert is efficiently ignored by the interpretter.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "arguments", "decorator", "python", "unicode" ]
stackoverflow_0001675154_arguments_decorator_python_unicode.txt
Q: How does a classmethod object work? I'm having trouble understanding how a classmethod object works in Python, especially in the context of metaclasses and in __new__. In my special case I would like to get the name of a classmethod member, when I iterate through the members that were given to __new__. For normal methods the name is simply stored in a __name__ attribute, but for a classmethod there is apparently no such attribute. I don't even see how the classmethod is invoked, as there is no __call__ attribute either. Can someone explain to me how a classmethod works or point me to some documentation? Googling led me nowhere. Thanks! A: A classmethod object is a descriptor. You need to understand how descriptors work. In a nutshell, a descriptor is an object which has a method __get__, which takes three arguments: self, an instance, and an instance type. During normal attribute lookup, if a looked-up object A has a method __get__, that method gets called and what it returns is substituted in place for the object A. This is how functions (which are also descriptors) become bound methods when you call a method on an object. class Foo(object): def bar(self, arg1, arg2): print arg1, arg2 foo = Foo() # this: foo.bar(1,2) # prints '1 2' # does about the same thing as this: Foo.__dict__['bar'].__get__(foo, type(foo))(1,2) # prints '1 2' A classmethod object works the same way. When it gets looked up, its __get__ method gets called. The __get__ of a classmethod discards the argument corresponding to the instance (if there was one) and only passes along the instance_type when it calls __get__ on the wrapped function. An illustrative doodle: In [14]: def foo(cls): ....: print cls ....: In [15]: classmethod(foo) Out[15]: <classmethod object at 0x756e50> In [16]: cm = classmethod(foo) In [17]: cm.__get__(None, dict) Out[17]: <bound method type.foo of <type 'dict'>> In [18]: cm.__get__(None, dict)() <type 'dict'> In [19]: cm.__get__({}, dict) Out[19]: <bound method type.foo of <type 'dict'>> In [20]: cm.__get__({}, dict)() <type 'dict'> In [21]: cm.__get__("Some bogus unused string", dict)() <type 'dict'> More info on descriptors can be found here (among other places): http://users.rcn.com/python/download/Descriptor.htm For the specific task of getting the name of the function wrapped by a classmethod: In [29]: cm.__get__(None, dict).im_func.__name__ Out[29]: 'foo'
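Tying this back to the metaclass use case in the question: inside __new__ you can detect classmethod objects with isinstance and unwrap them through __get__ to reach the underlying function and its __name__. A sketch of my own (Python 2, using im_func as in the answer above):

class Meta(type):
    def __new__(mcs, name, bases, namespace):
        for attr, value in namespace.items():
            if isinstance(value, classmethod):
                # Unwrap the descriptor; the owner class passed here is irrelevant
                # because im_func retrieves the raw function either way.
                func = value.__get__(None, object).im_func
                print 'classmethod %s wraps function %r' % (attr, func.__name__)
        return type.__new__(mcs, name, bases, namespace)

class Example(object):
    __metaclass__ = Meta

    @classmethod
    def make(cls):
        return cls()

# prints: classmethod make wraps function 'make'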
How does a classmethod object work?
I'm having trouble to understand how a classmethod object works in Python, especially in the context of metaclasses and in __new__. In my special case I would like to get the name of a classmethod member, when I iterate through the members that were given to __new__. For normal methods the name is simply stored in a __name__ attribute, but for a classmethod there is apparently no such attribute. I don't even see how the classmethod is invoked, as there is no __call__ attribute either. Can someone explain to me how a classmethod works or point me to some documentation? Googling led me nowhere. Thanks!
[ "A classmethod object is a descriptor. You need to understand how descriptors work.\nIn a nutshell, a descriptor is an object which has a method __get__, which takes three arguments: self, an instance, and an instance type.\nDuring normal attribute lookup, if a looked-up object A has a method __get__, that method gets called and what it returns is substituted in place for the object A. This is how functions (which are also descriptors) become bound methods when you call a method on an object.\nclass Foo(object):\n def bar(self, arg1, arg2):\n print arg1, arg2\n\nfoo = Foo()\n# this:\nfoo.bar(1,2) # prints '1 2'\n# does about the same thing as this:\nFoo.__dict__['bar'].__get__(foo, type(foo))(1,2) # prints '1 2'\n\nA classmethod object works the same way. When it gets looked up, its __get__ method gets called. The __get__ of a classmethod discards the argument corresponding to the instance (if there was one) and only passes along the instance_type when it calls __get__ on the wrapped function. \nA illustrative doodle:\nIn [14]: def foo(cls):\n ....: print cls\n ....: \nIn [15]: classmethod(foo)\nOut[15]: <classmethod object at 0x756e50>\nIn [16]: cm = classmethod(foo)\nIn [17]: cm.__get__(None, dict)\nOut[17]: <bound method type.foo of <type 'dict'>>\nIn [18]: cm.__get__(None, dict)()\n<type 'dict'>\nIn [19]: cm.__get__({}, dict)\nOut[19]: <bound method type.foo of <type 'dict'>>\nIn [20]: cm.__get__({}, dict)()\n<type 'dict'>\nIn [21]: cm.__get__(\"Some bogus unused string\", dict)()\n<type 'dict'>\n\nMore info on descriptors can be found here (among other places):\nhttp://users.rcn.com/python/download/Descriptor.htm\nFor the specific task of getting the name of the function wrapped by a classmethod:\nIn [29]: cm.__get__(None, dict).im_func.__name__\nOut[29]: 'foo'\n\n" ]
[ 22 ]
[]
[]
[ "class_method", "metaclass", "python" ]
stackoverflow_0001677468_class_method_metaclass_python.txt
Q: How to sign a document in python with M2Crypto using particular padding technique? I need to digitally sign some text in python using a private key stored in a .pem file. It seems like M2Crypto is the preferred way to do that these days, so that's what I'm using. I think I get most of it, but I'm confused about how to configure padding. To be specific, I need to verify the signature in an iPhone app, using a padding scheme called kSecPaddingPKCS1SHA1 and described like this: Data to be signed is a SHA1 hash. Standard ASN.1 padding will be done, as well as PKCS1 padding of the underlying RSA operation. Not being a crypto expert, I have only a fuzzy idea what this means. I've tried to look at some of the RFCs but found them impenetrable. I see that the encryption/decryption methods of RSA objects take padding types, but I don't see anything similar related to signature verification. Any help, especially with code, will be appreciated. (In some sense this is the converse of this question.) Ok, the answer given below is correct AFAICT. The following code generates a signature for text that validates on the iPhone using the kSecPaddingPKCS1SHA1 padding scheme. from M2Crypto import EVP privkey = EVP.load_key("privkey.pem") privkey.sign_init() privkey.sign_update(text) signature = privkey.sign_final() (Sorry to editorialize, but can I just say that crypto hackers are some of the lousiest documentation writers in the universe?) A: AFAIK M2Crypto adds padding where it's required. PKCS1 padding is the default. But (again, only AFAIK) signatures don't have padding; padding is only added to encrypted data to prevent a possible attack. EDIT: user caf, in a comment, says that padding is essential to a good signature. I'm still recommending you try it with the default M2Crypto behavior; it might add it. On M2Crypto's generated docs you can see that the {public,private}_{encrypt,decrypt} methods have a padding option, which is PKCS1 by default, while the sign method has none. IMO just give it a shot with the default M2Crypto params, it will probably work.
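For completeness, the verifying side in M2Crypto mirrors the signing calls. This is a sketch under the assumptions that the matching RSA public key lives in a file named 'pubkey.pem' and that the default SHA-1 digest is in effect (which is what kSecPaddingPKCS1SHA1 expects):

from M2Crypto import EVP, RSA

pubkey = EVP.PKey()                                 # EVP wrapper, defaults to SHA-1
pubkey.assign_rsa(RSA.load_pub_key('pubkey.pem'))   # assumed key file name
pubkey.verify_init()
pubkey.verify_update(text)                          # the same text that was signed
if pubkey.verify_final(signature) == 1:
    print 'signature is valid'
else:
    print 'signature check failed'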
How to sign a document in python with M2Crypto using particular padding technique?
I need to digitally sign some text in python using a private key stored in a .pem file. It seems like M2Crypto is the preferred way to do that these days, so that's what I'm using. I think I get most of it, but I'm confused about how to configure padding. To be specific, I need to verify the signature in an iPhone app, using a padding scheme called kSecPaddingPKCS1SHA1 and described like this: Data to be signed is a SHA1 hash. Standard ASN.1 padding will be done, as well as PKCS1 padding of the underlying RSA operation. Not being a crypto expert, I have only a fuzzy idea what this means. I've tried to look at some of the RFCs but found them impenetrable. I see that the encryption/decryption methods of RSA objects take padding types, but I don't see anything similar related to signature verification. Any help, especially with code, will be appreciated. (In some sense this is the converse of this question.) Ok, the answer given below is correct AFAICT. The following code generates a signature for text that validates on the iPhone using the kSecPaddingPKCS1SHA1 padding scheme. from M2Crypto import EVP privkey = EVP.load_key("privkey.pem") privkey.sign_init() privkey.sign_update(text) signature = privkey.sign_final() (Sorry to editorialize, but can I just say that crypto hackers are some of the lousiest documentation writers in the universe?)
[ "AFAIK M2Crypto adds padding where it's required. \nPKCS1 padding is the default.\nBut, (again only AFAIK), signatures don't have padding, padding is only added to encrypted data to prevent a possible attack.\nEDIT: user caf, in a comment says that a padding is essnetial to a good signature. I'm still recommending you try it with the default M2Crypto behavior, it might add it.\nOn M2Crypto's generated docs you can see that the {public,private}_{encrypt,decrypt} methods have a padding option, which is PKCS1 by default, while the sign menthod has none.\nIMO just give it a shot with the default M2Crypto params, it will probably work.\n" ]
[ 2 ]
[]
[]
[ "cryptography", "digital_signature", "m2crypto", "python" ]
stackoverflow_0001677594_cryptography_digital_signature_m2crypto_python.txt
Q: Building python 2.6 w/ sqlite3 module if sqlite is installed in non-standard location I am trying to build python2.6 with support for the sqlite3 module. I have successfully built and installed the sqlite-amalgamation to a non-standard path: ./configure --prefix=/my/non/standard/install/path/sqlite/3.6.20/ make make install I would like the python2.6 build to use this path & build the sqlite3 module. I checked './configure --help' but did not see any type of "--with-sqlite-dir=path" option. How can I let Python's configure know where sqlite lives? A: Rather than rebuilding python, the simplest way to get the most recent sqlite3 is to install the pysqlite package which is the more up-to-date version of the standard library's sqlite3 module. It includes support for more recent sqlite3 features and is upwards compatible. More details are here.
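If you do want to rebuild Python against the custom sqlite anyway, the usual trick (an assumption based on common autoconf practice, not something Python's configure documents for sqlite specifically) is to export CPPFLAGS=-I/my/.../include and LDFLAGS=-L/my/.../lib pointing at the custom prefix before running Python's ./configure. Afterwards, a quick sanity check from the built interpreter confirms which library the module actually linked against:

import sqlite3
print sqlite3.sqlite_version   # version of the linked C library, e.g. '3.6.20'
print sqlite3.version          # version of the sqlite3 binding itself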
Building python 2.6 w/ sqlite3 module if sqlite is installed in non-standard location
I am trying to build python2.6 with support for the sqlite3 module. I have successfully built and installed the sqlite-amalgamation to a non standard path: ./configure --prefix=/my/non/standard/install/path/sqlite/3.6.20/ make make install I would like the python2.6 build to use this path & build the sqlite3 module. I checked './configure --help' but did not see any type of "--with-sqlite-dir=path" option. How can I let python's configure know where sqlite lives?
[ "Rather than rebuilding python, the simplest way to get the most recent sqlite3 is to install the pysqlite package which is the more up-to-date version of the standard library's sqlite3 module. It includes support for more recent sqlite3 features and is upwards compatible. More details are here.\n" ]
[ 7 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0001677666_python_sqlite.txt
Q: Python decorators on class members fail when decorator mechanism is a class When creating decorators for use on class methods, I'm having trouble when the decorator mechanism is a class rather than a function/closure. When the class form is used, my decorator doesn't get treated as a bound method. Generally I prefer to use the function form for decorators but in this case I have to use an existing class to implement what I need. This seems as though it might be related to python-decorator-makes-function-forget-that-it-belongs-to-a-class but why does it work just fine for the function form? Here is the simplest example I could make to show all the goings-on. Sorry about the amount of code: def decorator1(dec_param): def decorator(function): print 'decorator1 decorating:', function def wrapper(*args): print 'wrapper(%s) dec_param=%s' % (args, dec_param) function(*args) return wrapper return decorator class WrapperClass(object): def __init__(self, function, dec_param): print 'WrapperClass.__init__ function=%s dec_param=%s' % (function, dec_param) self.function = function self.dec_param = dec_param def __call__(self, *args): print 'WrapperClass.__call__(%s, %s) dec_param=%s' % (self, args, self.dec_param) self.function(*args) def decorator2(dec_param): def decorator(function): print 'decorator2 decorating:', function return WrapperClass(function, dec_param) return decorator class Test(object): @decorator1(dec_param=123) def member1(self, value=1): print 'Test.member1(%s, %s)' % (self, value) @decorator2(dec_param=456) def member2(self, value=2): print 'Test.member2(%s, %s)' % (self, value) @decorator1(dec_param=123) def free1(value=1): print 'free1(%s)' % (value) @decorator2(dec_param=456) def free2(value=2): print 'free2(%s)' % (value) test = Test() print '\n====member1====' test.member1(11) print '\n====member2====' test.member2(22) print '\n====free1====' free1(11) print '\n====free2====' free2(22) Output: decorator1 decorating: <function member1 at 0x3aba30> decorator2 decorating: <function member2 at 0x3ab8b0> WrapperClass.__init__ function=<function member2 at 0x3ab8b0> dec_param=456 decorator1 decorating: <function free1 at 0x3ab9f0> decorator2 decorating: <function free2 at 0x3ab970> WrapperClass.__init__ function=<function free2 at 0x3ab970> dec_param=456 ====member1==== wrapper((<__main__.Test object at 0x3af5f0>, 11)) dec_param=123 Test.member1(<__main__.Test object at 0x3af5f0>, 11) ====member2==== WrapperClass.__call__(<__main__.WrapperClass object at 0x3af590>, (22,)) dec_param=456 Test.member2(22, 2) <<<- Badness HERE! ====free1==== wrapper((11,)) dec_param=123 free1(11) ====free2==== WrapperClass.__call__(<__main__.WrapperClass object at 0x3af630>, (22,)) dec_param=456 free2(22) A: Your WrapperClass needs to be a descriptor (just like a function is!), i.e., supply the appropriate special method __get__ (functions define only __get__, which makes them non-data descriptors). This how-to guide teaches all you need to know about that!-)
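Concretely, the fix the answer describes looks something like this sketch: give WrapperClass a __get__ that returns a partially-applied callable, so attribute lookup on an instance binds it the way a plain function would be bound.

import functools

class WrapperClass(object):
    def __init__(self, function, dec_param):
        self.function = function
        self.dec_param = dec_param

    def __call__(self, *args):
        print 'WrapperClass.__call__%s dec_param=%s' % (args, self.dec_param)
        return self.function(*args)

    def __get__(self, instance, owner):
        if instance is None:
            return self  # accessed on the class: return the wrapper itself
        # accessed on an instance: bind it, mimicking function.__get__
        return functools.partial(self.__call__, instance)

With this in place, test.member2(22) passes the Test instance through to member2 just like member1 does.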
Python decorators on class members fail when decorator mechanism is a class
When creating decorators for use on class methods, I'm having trouble when the decorator mechanism is a class rather than a function/closure. When the class form is used, my decorator doesn't get treated as a bound method. Generally I prefer to use the function form for decorators but in this case I have to use an existing class to implement what I need. This seems as though it might be related to python-decorator-makes-function-forget-that-it-belongs-to-a-class but why does it work just fine for the function form? Here is the simplest example I could make to show all goings on. Sorry about the amount of code: def decorator1(dec_param): def decorator(function): print 'decorator1 decoratoring:', function def wrapper(*args): print 'wrapper(%s) dec_param=%s' % (args, dec_param) function(*args) return wrapper return decorator class WrapperClass(object): def __init__(self, function, dec_param): print 'WrapperClass.__init__ function=%s dec_param=%s' % (function, dec_param) self.function = function self.dec_param = dec_param def __call__(self, *args): print 'WrapperClass.__call__(%s, %s) dec_param=%s' % (self, args, self.dec_param) self.function(*args) def decorator2(dec_param): def decorator(function): print 'decorator2 decoratoring:', function return WrapperClass(function, dec_param) return decorator class Test(object): @decorator1(dec_param=123) def member1(self, value=1): print 'Test.member1(%s, %s)' % (self, value) @decorator2(dec_param=456) def member2(self, value=2): print 'Test.member2(%s, %s)' % (self, value) @decorator1(dec_param=123) def free1(value=1): print 'free1(%s)' % (value) @decorator2(dec_param=456) def free2(value=2): print 'free2(%s)' % (value) test = Test() print '\n====member1====' test.member1(11) print '\n====member2====' test.member2(22) print '\n====free1====' free1(11) print '\n====free2====' free2(22) Output: decorator1 decoratoring: <function member1 at 0x3aba30> decorator2 decoratoring: <function member2 at 0x3ab8b0> WrapperClass.__init__ function=<function member2 at 0x3ab8b0> dec_param=456 decorator1 decoratoring: <function free1 at 0x3ab9f0> decorator2 decoratoring: <function free2 at 0x3ab970> WrapperClass.__init__ function=<function free2 at 0x3ab970> dec_param=456 ====member1==== wrapper((<__main__.Test object at 0x3af5f0>, 11)) dec_param=123 Test.member1(<__main__.Test object at 0x3af5f0>, 11) ====member2==== WrapperClass.__call__(<__main__.WrapperClass object at 0x3af590>, (22,)) dec_param=456 Test.member2(22, 2) <<<- Badness HERE! ====free1==== wrapper((11,)) dec_param=123 free1(11) ====free2==== WrapperClass.__call__(<__main__.WrapperClass object at 0x3af630>, (22,)) dec_param=456 free2(22)
[ "Your WrapperClass needs to be a descriptor (just like a function is!), i.e., supply appropriate special methods __get__ and __set__. This how-to guide teaches all you need to know about that!-)\n" ]
[ 10 ]
[]
[]
[ "decorator", "python" ]
stackoverflow_0001677747_decorator_python.txt
Q: Can you only communicate once with a subprocess? communicate's documentation says: Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. What do you do if you need to send input to a process more than once? For example, I spawn a process, send it some data, the process does something with that, returns some output, and then I have to send input again? How do I handle that? A: Then you can't use .communicate(). You can either poll the streams, use select or some other way that allows you to listen to FD changes (both gtk and Qt have tools for that, for example). A: Take a look at Doug Hellmann's Python Module of the Week writeup about subprocess. Search down until you see "repeater.py". There you will find an example of how to send and receive input/output to a process. A: Don't use subprocess for this, you'll enter a world of pain to do with buffering. I recommend pexpect for this purpose - it works very well. Unfortunately it doesn't work under Windows at the moment, though I did hear about a port (which I can't find any more). A: Here's a module I wrote. Make sure to use the -u argument to avoid buffering problems: import pickle import struct def call_thru_stream(stream,funcname,*args,**kwargs): """Used for calling a function through a stream and no return value is required. It is assumed the receiving program is in the state where it is expecting a function.""" transmit_object(stream,(funcname,args,kwargs)) def function_thru_stream(in_stream,out_stream,funcname,*args,**kwargs): """Used for calling a function through a stream where a return value is required. It is assumed the receiving program is in the state where it is expecting a function.""" transmit_object(in_stream,(funcname,args,kwargs)) return receive_object(out_stream) #--------------------- Object layer ------------------------------------------------------------ def transmit_object(stream,obj): """Transmits an object through a binary stream""" data=pickle.dumps(obj,2) # Uses pickle protocol 2 for compatibility with 2.x transmit_atom(stream,data) def receive_object(stream): """Receive an object through a binary stream""" atom=receive_atom(stream) return pickle.loads(atom) #--------------------- Atom layer -------------------------------------------------------------- def transmit_atom(stream, atom_bytes): """Used for transmitting some bytes which should be treated as an atom through a stream. An integer indicating the size of the atom is prepended to the start.""" header=struct.pack("=L",len(atom_bytes)) stream.write(header) stream.write(atom_bytes) stream.flush() def receive_atom(stream): """Read an atom from a binary stream and return the bytes.""" input_len=stream.read(4) l=struct.unpack("=L",input_len)[0] return stream.read(l)
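For simple line-oriented back-and-forth, a plain Popen with pipes also works, provided both sides flush after every message. The child script name below is hypothetical, and running the child unbuffered via -u avoids the classic pipe-buffering deadlock:

import subprocess

proc = subprocess.Popen(['python', '-u', 'child.py'],   # hypothetical echo-style child
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)

proc.stdin.write('first request\n')
proc.stdin.flush()
print proc.stdout.readline().rstrip()    # blocks until the child answers one line

proc.stdin.write('second request\n')     # a second round trip, unlike communicate()
proc.stdin.flush()
print proc.stdout.readline().rstrip()

proc.stdin.close()
proc.wait()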
Can you only communicate once with a subprocess?
communicate's documentation says: Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. What do you do if you need to send input to a process more than once ? For example, I spawn a process, send it some data, the process does something with that, returns some output, and then I have to send input again? How do I handle that?
[ "Then you can't use .communicate(). You can either poll the streams, use select or some other way that allows you to listen to FD changes (both gtk and Qt have tools for that, for example).\n", "Take a look at Doug Hellman's Python Module of the Week writeup about subprocess. Search down until you see \"repeater.py\".\nThere you will find an example of how to send and receive input/output to a process.\n", "Don't use subprocess for this, you'll enter a world of pain to do with buffering.\nI recommend pexpect for this purpose - it works very well. Unfortunately it doesn't work under windows at the moment, though I did hear about a port (which I can't find any more).\n", "Here's a module I wrote. Make sure to use that -u argument to avoid buffering problems:\nimport os\nimport pickle\nimport subprocess\nfrom subprocess import PIPE\nimport struct\nimport builtins\ndef call_thru_stream(stream,funcname,*args,**kwargs):\n \"\"\"Used for calling a function through a stream and no return value is required. It is assumed\n the receiving program is in the state where it is expecting a function.\"\"\"\n transmit_object(stream,(funcname,args,kwargs))\n\n\ndef function_thru_stream(in_stream,out_stream,funcname,*args,**kwargs):\n \"\"\"Used for calling a function through a stream where a return value is required. It is assumed\n the receiving program is in the state where it is expecting a function.\"\"\"\n transmit_object(in_stream,(funcname,args,kwargs))\n return receive_object(out_stream)\n\n#--------------------- Object layer ------------------------------------------------------------\n\ndef transmit_object(stream,obj):\n \"\"\"Transmits an object through a binary stream\"\"\"\n data=pickle.dumps(obj,2)#Uses pickle version 2 for compatibility with 2x\n transmit_atom(stream,data)\n\n\ndef receive_object(stream):\n \"\"\"Receive an object through a binary stream\"\"\"\n atom=receive_atom(stream)\n return pickle.loads(atom)\n\n#--------------------- Atom layer --------------------------------------------------------------\ndef transmit_atom(stream, atom_bytes):\n \"\"\"Used for transmitting a some bytes which should be treated as an atom through\n a stream. An integer indicating the size of the atom is appended to the start.\"\"\"\n header=struct.pack(\"=L\",len(atom_bytes))\n stream.write(header)\n stream.write(atom_bytes)\n stream.flush()\n\n\ndef receive_atom(stream):\n \"\"\"Read an atom from a binary stream and return the bytes.\"\"\"\n input_len=stream.read(4)\n l=struct.unpack(\"=L\",input_len)[0]\n return stream.read(l) \n\n" ]
[ 3, 3, 2, 1 ]
[]
[]
[ "python", "subprocess" ]
stackoverflow_0001676340_python_subprocess.txt
Q: Convert int64 to uint64 I want to convert an int64 numpy array to a uint64 numpy array, adding 2**63 to the values in the process so that they are still within the valid range allowed by the arrays. So for example if I start from a = np.array([-2**63,2**63-1], dtype=np.int64) I want to end up with np.array([0.,2**64], dtype=np.uint64) Sounds simple at first, but how would you actually do it? A: Use astype() to convert the values to another dtype: import numpy as np (a+2**63).astype(np.uint64) # array([ 0, 18446744073709551615], dtype=uint64) A: I'm not a real numpy expert, but this: >>> a = np.array([-2**63,2**63-1], dtype=np.int64) >>> b = np.array([x+2**63 for x in a], dtype=np.uint64) >>> b array([ 0, 18446744073709551615], dtype=uint64) works for me with Python 2.6 and numpy 1.3.0 I assume you meant 2**64-1, not 2**64, in your expected output, since 2**64 won't fit in a uint64. (18446744073709551615 is 2**64-1)
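A bit-level alternative that avoids any risk of intermediate overflow: flipping the sign bit maps int64 order onto uint64 order exactly, which is the same operation as adding 2**63. This reinterprets the buffer with view (assuming a native-byte-order array), so no value conversion happens before the XOR:

import numpy as np

a = np.array([-2**63, 2**63 - 1], dtype=np.int64)
b = a.view(np.uint64) ^ np.uint64(2**63)   # flip the sign bit
print b
# [ 0 18446744073709551615]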
Convert int64 to uint64
I want to convert an int64 numpy array to a uint64 numpy array, adding 2**63 to the values in the process so that they are still within the valid range allowed by the arrays. So for example if I start from a = np.array([-2**63,2**63-1], dtype=np.int64) I want to end up with np.array([0.,2**64], dtype=np.uint64) Sounds simple at first, but how would you actually do it?
[ "Use astype() to convert the values to another dtype:\nimport numpy as np\n(a+2**63).astype(np.uint64)\n# array([ 0, 18446744073709551615], dtype=uint64)\n\n", "I'm not a real numpy expert, but this:\n>>> a = np.array([-2**63,2**63-1], dtype=np.int64)\n>>> b = np.array([x+2**63 for x in a], dtype=np.uint64)\n>>> b\narray([ 0, 18446744073709551615], dtype=uint64)\n\nworks for me with Python 2.6 and numpy 1.3.0\nI assume you meant 2**64-1, not 2**64, in your expected output, since 2**64 won't fit in a uint64. (18446744073709551615 is 2**64-1)\n" ]
[ 3, 1 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0001677639_numpy_python.txt
Q: Convert Python for-loop to PHP How can you convert the following code to PHP? summat = [sum(arra[i:i+4]) for i in range(0,len(arra),4)] My attempt $summat = array() foreach ( range(0, $arra.length, 4) as $i) { $summat = array ( array_sum( array_slice( $array, $i, $i+5) ) ) // don't know how to append the sums to the array A: $sum = array(); foreach(range(0, count($a) - 1, 4) as $i) $sum []= array_sum(array_slice($a, $i, 4)); "[]=" is an append-to-array operator array_slice's third argument is the slice length, not the last index (note that PHP's range() includes its endpoint, unlike Python's range(), hence the count($a) - 1) or even simpler $sum = array_map('array_sum', array_chunk($a, 4)); A: To append a value to an array, use: $summat[] = array_sum(...); The PHP way of doing ranges is similar to the C way: for($i = 0; $i < count($arra); $i += 4) { // ... }
Convert Python for-loop to PHP
How can you convert the following code to PHP? summat = [sum(arra[i:i+4]) for i in range(0,len(arra),4)] My attempt $summat = array() foreach ( range(0, $arra.length, 4) as $i) { $summat = array ( array_sum( array_slice( $array, $i, $i+5) ) ) // don't know how to append the sums to the array
[ "$sum = array();\nforeach(range(0, count($a), 4) as $i)\n $sum []= array_sum(array_slice($a, $i, 4));\n\n\"[]=\" is an append-to-array operator\nslice's second parameter is slice length, not the last index\nor even simpler\n$sum = array_map('array_sum', array_chunk($a, 4));\n\n", "To append a value to an array, use:\n$summat[] = array_sum(...);\n\nThe PHP way of doing ranges is similar to the C way:\nfor($i = 0; $i < count($arra); $i += 4) {\n // ...\n}\n\n" ]
[ 4, 1 ]
[]
[]
[ "php", "python" ]
stackoverflow_0001678342_php_python.txt
Q: Proper way of having a unique identifier in Python? Basically, I have a list like: [START, 'foo', 'bar', 'spam', 'eggs', END] and the START/END identifiers are necessary so I can compare later on. Right now, I have it set up like this: START = object() END = object() This works fine, but it suffers from the problem of not working with pickling. I tried doing it the following way, but it seems like a terrible method of accomplishing this: class START(object):pass class END(object):pass Could anybody share a better means of doing this? Also, the example I have set up above is just an oversimplification of a different problem. A: If you want an object that's guaranteed to be unique and can also be guaranteed to get restored to exactly the same identity if pickled and unpickled right back, top-level functions, classes, class instances, and if you care about is rather than == also lists (and other mutables), are all fine. I.e., any of: # work for == as well as is class START(object): pass def START(): pass class Whatever(object): pass START = Whatever() # if you don't care for "accidental" == and only check with `is` START = [] START = {} START = set() None of these is terrible, none has any special advantage (depending if you care about == or just is). Probably def wins by dint of generality, conciseness, and lighter weight. A: You can define a Symbol class for handling START and END. class Symbol: def __init__(self, value): self.value = value def __eq__(self, other): return isinstance(other, Symbol) and other.value == self.value def __repr__(self): return "<sym: %r>" % self.value def __str__(self): return str(self.value) START = Symbol("START") END = Symbol("END") # test pickle import pickle assert START == pickle.loads(pickle.dumps(START)) assert END == pickle.loads(pickle.dumps(END)) A: If your list didn't have strings, I'd just use "start", "end" as Python makes the comparison O(1) due to interning. If you do need strings, but not tuples, the complete cheapskate method is: [("START",), 'foo', 'bar', 'spam', 'eggs', ("END",)] PS: I was sure your list was numbers before, not strings, but I can't see any revisions so I must have imagined it. A: Actually, I like your solution. A while back I was hacking on a Python module, and I wanted to have a special magical value that could not appear anywhere else. I spent some time thinking about it and the best I came up with is the same trick you used: declare a class, and use the class object as the special magical value. When you are checking for the sentinel, you should of course use the is operator, for object identity: for x in my_list: if x is START: # handle start of list elif x is END: # handle end of list else: # handle item from list A: I think maybe this would be easier to answer if you were more explicit about what you need this for, but my inclination if faced with a problem like this would be something like: >>> START = os.urandom(16).encode('hex') >>> END = os.urandom(16).encode('hex') Pros of this approach, as I'm seeing it Your markers are strings (can pickle or otherwise easily serialize, e.g. to JSON or a DB, without any special effort) Very unlikely to collide either accidentally or on purpose Will serialize and deserialize to identical values, even across process restarts, which (I think) would not be the case for object() or an empty class. Cons(?) Each time they are newly chosen they will be completely different. (This being good or bad depends on details you have not provided, I would think).
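The pickling property that makes the class-as-sentinel trick work is worth spelling out: top-level classes and functions are pickled by reference (module plus name), so they come back as the very same object and is-comparisons keep working across a dump/load round trip. A small demonstration:

import pickle

class START(object): pass
class END(object): pass

data = [START, 'foo', 'bar', 'spam', 'eggs', END]
restored = pickle.loads(pickle.dumps(data))

assert restored[0] is START   # same object, not a copy
assert restored[-1] is END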
Proper way of having a unique identifier in Python?
Basically, I have a list like: [START, 'foo', 'bar', 'spam', 'eggs', END] and the START/END identifiers are necessary so I can compare later on. Right now, I have it set up like this: START = object() END = object() This works fine, but it suffers from the problem of not working with pickling. I tried doing it the following way, but it seems like a terrible method of accomplishing this: class START(object):pass class END(object):pass Could anybody share a better means of doing this? Also, the example I have set up above is just an oversimplification of a different problem.
[ "If you want an object that's guaranteed to be unique and can also be guaranteed to get restored to exactly the same identify if pickled and unpickled right back, top-level functions, classes, class instances, and if you care about is rather than == also lists (and other mutables), are all fine. I.e., any of:\n# work for == as well as is\nclass START(object): pass\ndef START(): pass\nclass Whatever(object): pass\nSTART = Whatever()\n\n# if you don't care for \"accidental\" == and only check with `is`\nSTART = []\nSTART = {}\nSTART = set()\n\nNone of these is terrible, none has any special advantage (depending if you care about == or just is). Probably def wins by dint of generality, conciseness, and lighter weight.\n", "You can define a Symbol class for handling START and END.\nclass Symbol:\n def __init__(self, value):\n self.value = value\n\n def __eq__(self, other):\n return isinstance(other, Symbol) and other.value == self.value\n\n def __repr__(self):\n return \"<sym: %r>\" % self.value\n\n def __str__(self):\n return str(self.value)\n\nSTART = Symbol(\"START\")\nEND = Symbol(\"END\")\n\n# test pickle\nimport pickle\nassert START == pickle.loads(pickle.dumps(START))\nassert END == pickle.loads(pickle.dumps(END))\n\n", "If your list didn't have strings, I'd just use \"start\", \"end\" as Python makes the comparison O(1) due to interning.\nIf you do need strings, but not tuples, the complete cheapskate method is:\n[(\"START\",), 'foo', 'bar', 'spam', eggs', (\"END\",)]\n\nPS: I was sure your list was numbers before, not strings, but I can't see any revisions so I must have imagined it\n", "Actually, I like your solution.\nA while back I was hacking on a Python module, and I wanted to have a special magical value that could not appear anywhere else. I spent some time thinking about it and the best I came up with is the same trick you used: declare a class, and use the class object as the special magical value.\nWhen you are checking for the sentinel, you should of course use the is operator, for object identity:\nfor x in my_list:\n if x is START:\n # handle start of list\n elif x is END:\n # handle end of list\n else:\n # handle item from list\n\n", "I think maybe this would be easier to answer if you were more explicit about what you need this for, but my inclination if faced with a problem like this would be something like:\n>>> START = os.urandom(16).encode('hex')\n>>> END = os.urandom(16).encode('hex')\n\nPros of this approach, as I'm seeing it\n\nYour markers are strings (can pickle or otherwise easily serialize, eg to JSON or a DB, without any special effort)\nVery unlikely to collide either accidentally or on purpose\nWill serialize and deserialize to identical values, even across process restarts, which (I think) would not be the case for object() or an empty class.\n\nCons(?)\n\nEach time they are newly chosen they will be completely different. (This being good or bad depends on details you have not provided, I would think).\n\n" ]
[ 10, 2, 1, 1, 0 ]
[]
[]
[ "identifier", "python" ]
stackoverflow_0001677726_identifier_python.txt
Q: accessing memcached stats via cmemcache or django returns warning My Django application uses memcached via cmemcache. An issue sprung up when I was trying to monitor its usage: I tried to access stats memcached provides through both Django and cmemcache: django: from django.core.cache import cache cache._cache.get_stats() [[email protected]] mcm_server_stats():3027: unknown stat variable: pointer_size cmemcache: import cmemcache client=cmemcache.Client(['127.0.0.1:62656',]) client.get_stats() [[email protected]] mcm_server_stats():3027: unknown stat variable: pointer_size I can get nothing more than a warning. However, memcached itself provides stats without problems: telnet 127.0.0.1 62656 stats ... The web page of cmemcache mentions that "libmemcache-1.4.0.rc2 is not compatible with memcached 1.2.1, this results in get_stats returning no stats". The app is running on Debian. memcached's version is 1.2.2. I have no idea if there is still an incompatibility problem. Is there anyone who has encountered this issue and has a solution? A: First, you should not run those versions of memcached. They have lots and lots of known bugs and are many years old. Secondly, we add stats to memcached quite frequently, so if these libraries are complaining when they encounter new stats, you should complain to their authors. Also, I don't believe cmemcache is maintained. It's based on a deprecated memcached C library that has several known bugs. Users of that library are encouraged to migrate to libmemcached.
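One way to sidestep the aging libmemcache binding entirely is to query the stats with a different client. The pure-Python python-memcached package exposes the same server stats; a sketch of a monitoring workaround, assuming that package is installed and the server address is as above:

import memcache   # the python-memcached package

mc = memcache.Client(['127.0.0.1:62656'])
for server, stats in mc.get_stats():
    print server
    for key, value in stats.items():
        print '  %s = %s' % (key, value)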
accessing memcached stats via cmemcache or django returns warning
My Django application uses memcached via cmemcache. An issue sprung up when I was trying to monitor its usage: I tried to access stats memcached provides through both Django and cmemcache: django: from django.core.cache import cache cache._cache.get_stats() [[email protected]] mcm_server_stats():3027: unknown stat variable: pointer_size cmemcache: import cmemcache client=cmemcache.Client(['127.0.0.1:62656',]) client.get_stats() [[email protected]] mcm_server_stats():3027: unknown stat variable: pointer_size I can get nothing more than a warning. However, memcached itself provides stats without problems: telnet 127.0.0.1 62656 stats ... The web page of cmemcache mentions that "libmemcache-1.4.0.rc2 is not compatible with memcached 1.2.1, this results in get_stats returning no stats". The app is running on Debian. memcached's version is 1.2.2. I have no idea if there is still an incompatibility problem. Is there anyone who has encountered this issue and has a solution?
[ "First, you should not run those versions of memcached. They have lots and lots of known bugs and are many years old.\nSecondly, we add stats to memcached quite frequently, so if these libraries are complaining when they encounter new stats, you should complain to their authors.\nAlso, I don't believe cmemcache is maintained. It's based on a deprecated memcached C library that has several known bugs. Users of that library are encouraged to migrate to libmemcached.\n" ]
[ 1 ]
[]
[]
[ "django", "memcached", "python" ]
stackoverflow_0001678848_django_memcached_python.txt
Q: Is there a python library that implements both sides of the AWS authentication protocol? I am writing a REST service in Python and Django and wanted to use Amazon's AWS authentication protocol. I was wondering if anyone knew of a Python library that implemented formation of the header for sending and the validation of the header for receiving? A: Try this library. I think it is the library you are searching for: CalNet You can find some Python Code Samples Here A: boto is a Python library for AWS. I don't know however if it supports what you are asking for. A: I think this code does exactly what you want :) I'll be happy to get comments for improvement if needed.
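If no library fits, the sending side of the classic AWS scheme is small enough to hand-roll: it is an HMAC-SHA1 over a canonical string-to-sign, base64-encoded into the Authorization header, and the validating side simply recomputes and compares. A sketch, assuming string_to_sign has already been assembled according to Amazon's canonicalization rules:

import base64
import hashlib
import hmac

def aws_auth_header(access_key_id, secret_key, string_to_sign):
    digest = hmac.new(secret_key, string_to_sign, hashlib.sha1).digest()
    return 'AWS %s:%s' % (access_key_id, base64.b64encode(digest))

# Receiving side: recompute the header from the request and compare.
def is_valid(header, access_key_id, secret_key, string_to_sign):
    return header == aws_auth_header(access_key_id, secret_key, string_to_sign)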
Is there a python library that implements both sides of the AWS authentication protocol?
I am writing a REST service in Python and Django and wanted to use Amazon's AWS authentication protocol. I was wondering if anyone knew of a Python library that implemented formation of the header for sending and the validation of the header for receiving?
[ "Try this Library. I think it is the library you are searching for..\nCalNet\nYou can find some Python Code Samples Here\n", "boto is a Python library for AWS. I don't know however if it supports what you are asking for.\n", "I think this code does exactly what you want :)\nI'll be happy to get comments for improvement if needed. \n" ]
[ 2, 1, 0 ]
[]
[]
[ "amazon_web_services", "python" ]
stackoverflow_0000701789_amazon_web_services_python.txt
Q: Selenium RC: how to capture/handle error? My test uses Selenium to loop through a CSV list of URLs via an HTTP proxy (working script below). As I watch the script run I can see about 10% of the calls produce "Proxy error: 502" ("Bad_Gateway"); however, the errors are not captured by my catch-all "except Exception" clause -- i.e., instead of writing 'error' in the appropriate row of the "output.csv", they get passed to the else clause and produce a short piece of html that starts: "Proxy error: 502 Read from server failed: Unknown error." Also, if I collect all the URLs which returned 502s and re-run the script, they all pass, which leads me to believe that this is a sporadic network path issue. Question: Can the script be made to recognize the 502 errors, sleep a minute, and then retry the URL instead of moving on to the next URL in the list? The only alternative that I can think of is to apply re.search("Proxy error: 502") after "get_html_source" as a way to catch the bad calls. Then, if the RE matches, put the script to sleep for a minute and then retry 'sel.open(row[0])' on the URL which produced the 502. Any advice would be much appreciated. Thanks! #python 2.6 from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://baseDomain.com") self.selenium.start() self.selenium.set_timeout("60000") def test_untitled(self): sel = self.selenium spamReader = csv.reader(open('ListOfSubDomains.csv', 'rb')) for row in spamReader: try: sel.open(row[0]) except Exception: ofile = open('output.csv', 'ab') ofile.write("error" + '\n') ofile.close() else: time.sleep(5) html = sel.get_html_source() ofile = open('output.csv', 'ab') ofile.write(html.encode('utf-8') + '\n') ofile.close() def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() A: I think that the alternative you propose is OK. Rather than get_html_source, you can use the captureNetworkTraffic function to get the HTTP headers, which would be safer because the 502 page can change. Be careful: there is a bug in captureNetworkTraffic in the Selenium Python wrapper that can be worked around. See: http://jira.openqa.org/browse/SRC-758
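The retry idea from the question can be folded into a small helper. This sketch (my own, following the approach the asker proposed) opens the URL, sniffs the page source for the proxy error, and backs off before retrying; it returns None once the retries are exhausted so the caller can log 'error':

import re
import time

def open_with_retry(sel, url, retries=3, delay=60):
    for attempt in range(retries):
        sel.open(url)
        time.sleep(5)
        html = sel.get_html_source()
        if not re.search("Proxy error: 502", html):
            return html          # good page
        time.sleep(delay)        # sporadic 502: back off, then retry
    return None                  # still failing after all retries

# inside the CSV loop: html = open_with_retry(sel, row[0]); write 'error' if html is None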
Selenium RC: how to capture/handle error?
My test uses Selenium to loop through a CSV list of URLs via an HTTP proxy (working script below). As I watch the script run I can see about 10% of the calls produce "Proxy error: 502" ("Bad_Gateway"); however, the errors are not captured by my catch-all "except Exception" clause -- ie: instead of writing 'error' in the appropriate row of the "output.csv", they get passed to the else clause and produce a short piece of html that starts: "Proxy error: 502 Read from server failed: Unknown error." Also, if I collect all the URLs which returned 502s and re-run the script, they all pass, which leads me to believe that this is a sporadic network path issue. Question: Can the script be made to recognize the the 502 errors, sleep a minute, and then retry the URL instead of moving on to the next URL in the list? The only alternative that I can think of is to apply re.search("Proxy error: 502") after "get_html_source" as a way to catch the bad calls. Then, if the RE matches, put the script to sleep for a minute and then retry 'sel.open(row[0]' on the URL which produced the 502. Any advice would be much appreciated. Thanks! #python 2.6 from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://baseDomain.com") self.selenium.start() self.selenium.set_timeout("60000") def test_untitled(self): sel = self.selenium spamReader = csv.reader(open('ListOfSubDomains.csv', 'rb')) for row in spamReader: try: sel.open(row[0]) except Exception: ofile = open('output.csv', 'ab') ofile.write("error" + '\n') ofile.close() else: time.sleep(5) html = sel.get_html_source() ofile = open('output.csv', 'ab') ofile.write(html.encode('utf-8') + '\n') ofile.close() def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main()
[ "I think that the alternative you propose is ok. rather than the get_html_source, You can use the captureNetworkTraffic function to get the HTTP header. That would be safer because the 502 page can change.\nBe careful, there is a bug in the captureNetworkTraffic of the selenium python wrapper that can be hacked. See: http://jira.openqa.org/browse/SRC-758\n" ]
[ 1 ]
[]
[]
[ "csv", "error_handling", "loops", "python", "selenium" ]
stackoverflow_0001678195_csv_error_handling_loops_python_selenium.txt
Q: template fragment caching doesn't seem to work for some custom template tags

I've been implementing caching in my django application, and used per view caching via the cache API and template fragment caching. On some of my pages I use a custom django template tag; this tag is provided by a third party developer, it takes some arguments in its template tags, and then makes a request to a remote server, gets the response back over XML, and then renders the result in my page. Great - I thought I could easily cache this using fragment caching, so I tried:

{% load cache %}
{% cache 500 request.user.username %}
{% load third party custom tags %}
{% expensive custom tag set that gets stuff from a third party server via xml %}
{{ some.stuff}}
{% endcache %}

Trouble is, no matter what I do, the requests still get fired off to that remote server; it seems Django doesn't like to cache these custom template tags. I know memcached is working great; for other views and templates it all works just fine. Am I doing something that is incompatible with the fragment caching? Is there a way round it?

A: If the template fragment you're trying to cache can't be pickled, memcached won't be able to store it and will raise an exception. From what I can gather, exceptions generated when rendering Django templates are suppressed. Since your custom tag is doing HTTP requests, maybe socket objects (which can't be pickled) are getting stored to the template fragment somehow. If this is the case, the only way around it I can think of would be to modify the custom tag to get rid of any leftover socket objects.

A: Have you tried to use a different name for the cache fragment? There could be a problem with using request.user.username for a couple of reasons:

If a user is not signed in, request.user.username could be empty, resulting in a non-named cache fragment.
If a user is signed in, this will call the 3rd party template tag at least once for each user every 3 minutes.

Maybe it's worth trying to rename the cache fragment name to test:

{% cache 500 customxml %}

I'd also try loading of the 3rd party template tag outside the cache tag like so:

{% load cache third_party_custom_tags %}
{% cache 500 request.user.username %}
{% expensive custom tag set that gets stuff from a third party server via xml %}
{{ some.stuff}}
{% endcache %}

What I'm not sure of is if the cache framework caches the results of a template tag. If that doesn't work, I'd take a look at what the template tag is doing under the hood and re-implement the template tag using Django's low-level cache.

A: I think the problem is the custom tag, as you suggested. I disagree that the request.user.username is a problem, as the documentation for the subject actually gives that as an example, and I have used it with internal caching (number of posts, for instance), in testing, and it worked fine. The low level cache is potentially useful, but I would take a look at your custom tag to see what wouldn't be caching. Without the code it's hard to guess, but my guess would be something like the time, or some other variable, is being returned which is causing it to force an update (if the XML pulls any data that changes Django may force an update, depending on other settings). I've had mixed results with Django's caching, so I would have a look at your XML feed(s) to see if it's causing anything to stop caching.

A: I don't think this has anything to do with the custom tag. We ended up rewriting the Django caching tag because we needed more control than was possible with the one that was supplied. You might make a copy of it yourself and stick some debugging print statements into it. In particular, check the filename (assuming you are caching to files) and see what is being generated. It could be that it is changing when it shouldn't (for some unknown reason) and that would mean that it is always needing to re-render the enclosed block. Look in django/templatetags/cache.py. It's only 63 lines of code.
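For the low-level cache route mentioned in the answers, a hedged sketch (the key scheme, the reuse of the 500-second timeout, and fetch_remote_xml are illustrative stand-ins for whatever the third-party tag actually does internally):

from django.core.cache import cache

def cached_remote_fragment(username):
    key = 'remote_fragment_%s' % username  # assumed key scheme, one entry per user
    html = cache.get(key)
    if html is None:
        html = fetch_remote_xml()          # hypothetical: the tag's HTTP/XML call
        cache.set(key, html, 500)          # same 500-second timeout as the fragment
    return html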
template fragment caching doesn't seem to work for some custom template tags
I've been implementing caching in my django application, and used per view caching via the cache API and template fragment caching. On some of my pages I use a custom django template tag; this tag is provided by a third party developer, it takes some arguments in its template tags, and then makes a request to a remote server, gets the response back over XML, and then renders the result in my page. Great - I thought I could easily cache this using fragment caching, so I tried:

{% load cache %}
{% cache 500 request.user.username %}
{% load third party custom tags %}
{% expensive custom tag set that gets stuff from a third party server via xml %}
{{ some.stuff}}
{% endcache %}

Trouble is, no matter what I do, the requests still get fired off to that remote server; it seems Django doesn't like to cache these custom template tags. I know memcached is working great; for other views and templates it all works just fine. Am I doing something that is incompatible with the fragment caching? Is there a way round it?
[ "If the template fragment you're trying to cache can't be pickled, memcached won't be able to store it and will raise an exception. From what I can gather, exceptions generated when rendering Django templates are suppressed. Since your custom tag is doing HTTP requests, maybe socket objects (which can't be pickled) are getting stored to the template fragment somehow.\nIf this is the case, the only way around it I can think of would be to modify the custom tag to get rid of any leftover socket objects.\n", "Have you tried to use a different name for the cache fragment? There could be a problem with using request.user.username for a couple of reasons:\n\nIf a user is not signed in,\nrequest.user.username could be empty,\nresulting in a non-named cache\nfragment\nIf a user is signed in, this will\ncall the 3rd party template tag at\nleast once for each user every 3\nminutes\n\nMaybe it's worth trying to rename the cache fragment name to test:\n{% cache 500 customxml %}\n\nI'd also try loading of the 3rd party template tag outside the cache tag like so:\n{% load cache third_party_custom_tags %}\n{% cache 500 request.user.username %}\n{% expensive custom tag set that gets stuff from a third party server via xml %}\n{{ some.stuff}}\n{% endcache %}\n\nWhat I'm not sure of is if the cache framework caches the results of a template tag. If that doesn't work, I'd take a look at what the template tag is doing under the hood and re-implement the template tag using Django's low-level cache.\n", "I think the problem is the custom tag, as you suggested.\nI disagree that the request.user.username is a problem, as the documentation for the subject actually gives that as an example, and I have used it with internal caching (number of posts, for instance), in testing, and it worked fine.\nThe low level cache is potentially useful, but I would take a look at your custom tag to see what wouldn't be caching. Without the code it's hard to guess, but my guess would be something like the time, or some other variable, is being returned which is causing it to force an update (if the XML pulls any data that changes Django may force an update, depending on other settings). I've had mixed results with Django's caching, so I would have a look at your XML feed(s) to see if it's causing anything to stop caching.\n", "I don't think this has anything to do with the custom tag.\nWe ended up rewriting the Django caching tag because we needed more control than was possible with the one that was supplied. You might make a copy of it yourself and stick some debugging print statements into it. In particular, check the filename (assuming you are caching to files) and see what is being generated. It could be that it is changing when it shouldn't (for some unknown reason) and that would mean that it is always needing to re-render the enclosed block.\nLook in django/templatetags/cache.py. It's only 63 lines of code.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "django", "fragment_caching", "memcached", "python" ]
stackoverflow_0001627131_django_fragment_caching_memcached_python.txt
Q: Overriding the newline generation behaviour of Python's print statement

I have a bunch of legacy code for encoding raw emails that contains a lot of print statements such as

print >>f, "Content-Type: text/plain"

This is all well and good for emails, but we're now leveraging the same code for outputting HTTP requests. The problem is that the Python print statement outputs '\n' whilst HTTP requires '\r\n'. It looks like Python (2.6.4 at least) generates a trailing PRINT_NEWLINE byte code for a print statement which is implemented as

ceval.c:1582: err = PyFile_WriteString("\n", w);

Thus it appears there's no easy way to override the default newline behaviour of print. I have considered the following solutions:

1. After writing the output simply do a .replace('\n', '\r\n'). This will interfere with HTTP messages that use multipart encoding.
2. Create a wrapper around the destination file object and proxy the .write method:

def write(self, data):
    if data == '\n':
        data = '\r\n'
    return self._file.write(data)

3. Write a regular expression that translates print >>f, text to f.write(text + line_end) where line_end can be '\n' or '\r\n'.

I believe the third option would be the most appropriate. It would be interesting to hear what your Pythonic approach to the problem would be.

A: You should solve your problem now and for forever by defining a new output function. Were print a function, this would have been much easier. I suggest writing a new output function, mimicking as much of the modern print function signature as possible (because reusing a good interface is good), for example:

def output(*items, end="\n", file=sys.stdout):
    pass

Once you have replaced all prints in question, you no longer have this problem -- you can always change the behavior of your function instead! This is a big reason why print was made a function in Python 3 -- because in Python 2.x, "all" projects invariably go through the stage where all the print statements are no longer flexible, and there is no easy way out.

A: (Not sure how/if this fits with the wrapper you intend to use, but in case...) In Python 2.6 (and many preceding versions), you can suppress the newline by adding a comma at the end of the print statement, as in:

data = 'some msg\r\n'
print data, # note the comma

The downside of using this approach however is that the print syntax and behavior is changed in Python3.

A: In python2.x, I think you can do:

print >>f, "some msg\r\n",

to suppress the trailing new line. In python3.x, it's a lot simpler:

print("some msg", end = "\r\n", file = f)

A: I think I would define a new function writeline in an inherited file/stream class and update the code to use writeline instead of print. The file object itself can hold the line ending style as a member. That should give you some flexibility in behavior and also make the code a little clearer i.e. f.writeline(text) as opposed to f.write(text+line_end).

A: I also prefer your third solution, but no need to use f.write, any user written function/callable would do. Thus the next changes would become easy. If you use an object you may even hide the target file inside it, thus removing some syntactic noise like the file or the kind of newline. Too bad print is a statement in python 2.x; with python 3.x print could simply be overloaded by something user defined.

A: Python has modules both to handle email and http headers in an easy compliant way. I suggest you use them instead of solving already solved problems again.
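Since the output signature in the first answer uses keyword-only arguments, which are Python 3-only syntax, a Python 2.6-compatible sketch of the same idea might look like this (the function name and the CRLF default are assumptions):

import sys

def output(*items, **kwargs):
    # Python 2 has no keyword-only arguments, so pop them out of **kwargs.
    end = kwargs.pop('end', '\r\n')
    sep = kwargs.pop('sep', ' ')
    f = kwargs.pop('file', sys.stdout)
    f.write(sep.join(str(item) for item in items) + end)

A legacy line such as print >>f, "Content-Type: text/plain" then becomes output("Content-Type: text/plain", file=f), and the line ending can be changed in one place.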
Overriding the newline generation behaviour of Python's print statement
I have a bunch of legacy code for encoding raw emails that contains a lot of print statements such as

print >>f, "Content-Type: text/plain"

This is all well and good for emails, but we're now leveraging the same code for outputting HTTP requests. The problem is that the Python print statement outputs '\n' whilst HTTP requires '\r\n'. It looks like Python (2.6.4 at least) generates a trailing PRINT_NEWLINE byte code for a print statement which is implemented as

ceval.c:1582: err = PyFile_WriteString("\n", w);

Thus it appears there's no easy way to override the default newline behaviour of print. I have considered the following solutions:

1. After writing the output simply do a .replace('\n', '\r\n'). This will interfere with HTTP messages that use multipart encoding.
2. Create a wrapper around the destination file object and proxy the .write method:

def write(self, data):
    if data == '\n':
        data = '\r\n'
    return self._file.write(data)

3. Write a regular expression that translates print >>f, text to f.write(text + line_end) where line_end can be '\n' or '\r\n'.

I believe the third option would be the most appropriate. It would be interesting to hear what your Pythonic approach to the problem would be.
[ "You should solve your problem now and for forever by defining a new output function. Were print a function, this would have been much easier.\nI suggest writing a new output function, mimicking as much of the modern print function signature as possible (because reusing a good interface is good), for example:\ndef output(*items, end=\"\\n\", file=sys.stdout):\n pass\n\nOnce you have replaced all prints in question, you no longer have this problem -- you can always change the behavior of your function instead! This is a big reason why print was made a function in Python 3 -- because in Python 2.x, \"all\" projects invariably go through the stage where all the print statements are no longer flexible, and there is no easy way out.\n", "(Not sure how/if this fits with the wrapper you intend to use, but in case...)\nIn Python 2.6 (and many preceding versions), you can suppress the newline by adding a comma at the end of the print statement, as in:\ndata = 'some msg\\r\\n'\nprint data, # note the comma\n\nThe downside of using this approach however is that the print syntax and behavior is changed in Python3.\n", "In python2.x, I think you can do:\nprint >>f, \"some msg\\r\\n\",\n\nto suppress the trailing new line.\nIn python3.x, it's a lot simpler:\nprint(\"some msg\", end = \"\\r\\n\", file = f)\n\n", "I think I would define a new function writeline in an inherited file/stream class and update the code to use writeline instead of print. The file object itself can hold the line ending style as a member. That should give you some flexibility in behavior and also make the code a little clearer i.e. f.writeline(text) as opposed to f.write(text+line_end).\n", "I also prefer your third solution, but no need to use f.write, any user written function/callable would do. Thus the next changes would become easy. If you use an object you may even hide target file inside it thus removing some syntactic noise like file or kind of newline.\nToo bad print is a statement in python 2.x, with python 3.x print could simply be overloaded by something user defined.\n", "Python has modules both to handle email and http headers in an easy compliant way. I suggest you use them instead of solving already solved problems again.\n" ]
[ 10, 8, 4, 0, 0, 0 ]
[]
[]
[ "cpython", "printing", "python" ]
stackoverflow_0001677424_cpython_printing_python.txt
Q: How do I make Python pick the correct module without manually modifying sys.path? I have made some changes in a python module in my checked out copy of a repository, and need to test them. However, when I try to run a script that uses the module, it keeps importing the module from the trunk of the repository, which is of no use to me. I tried setting PYTHONPATH, which did nothing at all. After some searching around, I found that anything in the .pth files under site-packages directory will be put in even before PYTHONPATH (which to me defeats the purpose of having it). I believe this is the cause for my module not being picked. Am I correct? If so, what is the way to override this (without modifying the script to have a sys.path.insert(0,path) )? Edit: In reply to NicDumz - the original repository was under /projects/spam. The python modules were part of this in /projects/spam/sources/python/a/b/. However, these are 'built' every night using a homegrown make variant which then puts them into /projects/spam/build/lib/python/a/b/. The script is using the module under this last path only. I have checked out the entire repository to under /home/sundar/spam, and made changes in /home/sundar/spam/sources/python/a/b/mymodule.py. I've set my PYTHONPATH to /home/sundar/spam/sources/python and tried to import a.b.mymodule with no success. A: It sounds like you need to install virtualenv and use it to set up different environments for different purposes. In one environment, you would import modules from the trunk of the repository, but in another environment you would have a mixture of trunk modules and test modules. By keeping everything separate like this you make it easier to rollback changes (just delete the whole virtual environment folder) and you greatly reduce the risk that your test rigging will end up being committed to the repository. A: You could write a small script, such as the one below, that prefixes sys.path, then set PYTHONSTARTUP to use that script. import sys sys.path.insert(0, 'c:/temp') For example... C:\temp>set PYTHONSTARTUP=c:\temp\tst.py C:\temp>C:\Python26\python Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> sys.path ['c:/temp', '', 'C:\\Python26\\lib\\site-packages\\setuptools-0.6c9-py2.6.egg', 'C:\\Python26\\lib\\site-packages\\pyyaml-3.08-py2.6-win32.egg', 'C:\\Python26\\ lib\\site-packages\\pyglet-1.1.3-py2.6.egg', 'C:\\Python26\\lib\\site-packages\\ simpy-2.0.1-py2.6.egg', 'C:\\Python26\\lib\\site-packages\\nose-0.11.1-py2.6.egg ', 'C:\\Python26\\lib\\site-packages\\mercurial-unknown-py2.6-win32.egg', 'c:\\t emp', 'C:\\WINDOWS\\system32\\python26.zip', 'C:\\Python26\\DLLs', 'C:\\Python26 \\lib', 'C:\\Python26\\lib\\plat-win', 'C:\\Python26\\lib\\lib-tk', 'C:\\Python2 6', 'C:\\Python26\\lib\\site-packages', 'C:\\Python26\\lib\\site-packages\\win32 ', 'C:\\Python26\\lib\\site-packages\\win32\\lib', 'C:\\Python26\\lib\\site-pack ages\\Pythonwin'] A: Your current working directory is first in the sys.path. Anything there trumps anything else on the path. Copy the "test version" to some place closer to the front of the list of directories in sys.path, like your current working directory. A: You can create a setup script with setuptools or distribute, then do a python setup.py develop. It will add a link to your working copy in a .pth file, overriding any installed version. When you are done, you can simply delete the link in the .pth file.
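To make the setup.py develop suggestion from the last answer concrete, a minimal sketch (the file location, project name and version are guesses based on the paths in the question; the a/ and a/b/ directories need __init__.py files, which they presumably already have):

# /home/sundar/spam/sources/python/setup.py -- assumed location
from setuptools import setup, find_packages

setup(
    name='spam',               # illustrative project name
    version='0.0',
    packages=find_packages(),  # picks up the a.b package tree
)

Running python setup.py develop from that directory then installs a link to the working copy via a .pth entry, which the last answer says overrides any installed version.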
How do I make Python pick the correct module without manually modifying sys.path?
I have made some changes in a python module in my checked out copy of a repository, and need to test them. However, when I try to run a script that uses the module, it keeps importing the module from the trunk of the repository, which is of no use to me. I tried setting PYTHONPATH, which did nothing at all. After some searching around, I found that anything in the .pth files under site-packages directory will be put in even before PYTHONPATH (which to me defeats the purpose of having it). I believe this is the cause for my module not being picked. Am I correct? If so, what is the way to override this (without modifying the script to have a sys.path.insert(0,path) )? Edit: In reply to NicDumz - the original repository was under /projects/spam. The python modules were part of this in /projects/spam/sources/python/a/b/. However, these are 'built' every night using a homegrown make variant which then puts them into /projects/spam/build/lib/python/a/b/. The script is using the module under this last path only. I have checked out the entire repository to under /home/sundar/spam, and made changes in /home/sundar/spam/sources/python/a/b/mymodule.py. I've set my PYTHONPATH to /home/sundar/spam/sources/python and tried to import a.b.mymodule with no success.
[ "It sounds like you need to install virtualenv and use it to set up different environments for different purposes. In one environment, you would import modules from the trunk of the repository, but in another environment you would have a mixture of trunk modules and test modules. \nBy keeping everything separate like this you make it easier to rollback changes (just delete the whole virtual environment folder) and you greatly reduce the risk that your test rigging will end up being committed to the repository.\n", "You could write a small script, such as the one below, that prefixes sys.path, then set PYTHONSTARTUP to use that script.\nimport sys\nsys.path.insert(0, 'c:/temp')\n\nFor example...\nC:\\temp>set PYTHONSTARTUP=c:\\temp\\tst.py\nC:\\temp>C:\\Python26\\python\nPython 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on\nwin32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import sys\n>>> sys.path\n['c:/temp', '', 'C:\\\\Python26\\\\lib\\\\site-packages\\\\setuptools-0.6c9-py2.6.egg',\n'C:\\\\Python26\\\\lib\\\\site-packages\\\\pyyaml-3.08-py2.6-win32.egg', 'C:\\\\Python26\\\\\nlib\\\\site-packages\\\\pyglet-1.1.3-py2.6.egg', 'C:\\\\Python26\\\\lib\\\\site-packages\\\\\nsimpy-2.0.1-py2.6.egg', 'C:\\\\Python26\\\\lib\\\\site-packages\\\\nose-0.11.1-py2.6.egg\n', 'C:\\\\Python26\\\\lib\\\\site-packages\\\\mercurial-unknown-py2.6-win32.egg', 'c:\\\\t\nemp', 'C:\\\\WINDOWS\\\\system32\\\\python26.zip', 'C:\\\\Python26\\\\DLLs', 'C:\\\\Python26\n\\\\lib', 'C:\\\\Python26\\\\lib\\\\plat-win', 'C:\\\\Python26\\\\lib\\\\lib-tk', 'C:\\\\Python2\n6', 'C:\\\\Python26\\\\lib\\\\site-packages', 'C:\\\\Python26\\\\lib\\\\site-packages\\\\win32\n', 'C:\\\\Python26\\\\lib\\\\site-packages\\\\win32\\\\lib', 'C:\\\\Python26\\\\lib\\\\site-pack\nages\\\\Pythonwin']\n\n", "Your current working directory is first in the sys.path. Anything there trumps anything else on the path.\nCopy the \"test version\" to some place closer to the front of the list of directories in sys.path, like your current working directory.\n", "You can create a setup script with setuptools or distribute, then do a python setup.py develop. It will add a link to your working copy in a .pth file, overriding any installed version.\nWhen you are done, you can simply delete the link in the .pth file.\n" ]
[ 3, 2, 1, 0 ]
[]
[]
[ "import", "module", "path", "python" ]
stackoverflow_0001679673_import_module_path_python.txt
Q: Error while exiting cherrypy server

I am getting the following error while exiting the cherrypy server. What is this error about?

2009-11-04 09:32:35,015 WARNING Error in atexit._run_exitfuncs:
2009-11-04 09:32:35,015 WARNING
2009-11-04 09:32:35,015 WARNING Traceback (most recent call last):
2009-11-04 09:32:35,015 WARNING File "atexit.pyc", line 24, in _run_exitfuncs
2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 1486, in shutdown
2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 746, in flush
2009-11-04 09:32:35,015 WARNING IOError: [Errno 9] Bad file descriptor
2009-11-04 09:32:35,015 WARNING Error in sys.exitfunc:
2009-11-04 09:32:35,015 WARNING Traceback (most recent call last):
2009-11-04 09:32:35,015 WARNING File "atexit.pyc", line 24, in _run_exitfuncs
2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 1486, in shutdown
2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 746, in flush
2009-11-04 09:32:35,015 WARNING IOError
2009-11-04 09:32:35,015 WARNING :
2009-11-04 09:32:35,015 WARNING [Errno 9] Bad file descriptor
2009-11-04 09:32:35,015 WARNING

A: You probably log to console and then close it.

A: You closed your log file before exiting. The logging shutdown code wants to flush the log file before exiting. What you see here looks like bug #3126 in Python's logging module. It was fixed with:

r64338 | vinay.sajip | 2008-06-17 13:02:14 +0200 (Tue, 17 Jun 2008) | 1 line
Bug #3126: StreamHandler and FileHandler check before calling "flush" and "close" that the stream object has these, using hasattr (thanks to bobf for the patch).

Which version of Python do you have? Looks like 2.4.6 and 2.5.3 or newer should have the correct code, if this is really the problem.
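If the cause is a log handler whose stream gets closed before the atexit hook runs logging.shutdown(), one hedged workaround (a sketch only; your handler setup will differ) is to flush and close the handlers yourself as the very last logging-related step before exiting:

import logging

def close_log_handlers():
    root = logging.getLogger()
    for handler in root.handlers[:]:  # iterate over a copy, since we mutate the list
        handler.flush()
        root.removeHandler(handler)
        handler.close()  # close() also unregisters the handler internally, so the
                         # atexit logging.shutdown() no longer flushes a dead stream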
Error while exiting cherrypy server
I am getting the following error while exiting the cherrypy server. What is this error about?

2009-11-04 09:32:35,015 WARNING Error in atexit._run_exitfuncs:
2009-11-04 09:32:35,015 WARNING
2009-11-04 09:32:35,015 WARNING Traceback (most recent call last):
2009-11-04 09:32:35,015 WARNING File "atexit.pyc", line 24, in _run_exitfuncs
2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 1486, in shutdown
2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 746, in flush
2009-11-04 09:32:35,015 WARNING IOError: [Errno 9] Bad file descriptor
2009-11-04 09:32:35,015 WARNING Error in sys.exitfunc:
2009-11-04 09:32:35,015 WARNING Traceback (most recent call last):
2009-11-04 09:32:35,015 WARNING File "atexit.pyc", line 24, in _run_exitfuncs
2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 1486, in shutdown
2009-11-04 09:32:35,015 WARNING File "logging\__init__.pyc", line 746, in flush
2009-11-04 09:32:35,015 WARNING IOError
2009-11-04 09:32:35,015 WARNING :
2009-11-04 09:32:35,015 WARNING [Errno 9] Bad file descriptor
2009-11-04 09:32:35,015 WARNING
[ "You probably log to console and then close it.\n", "You closed your log file before exiting. The logging shutdown code wants to flush the log file before exiting. What you see here looks like bug #3126 in Python's logging module. It was fixed with:\n\nr64338 | vinay.sajip | 2008-06-17\n 13:02:14 +0200 (Tue, 17 Jun 2008) | 1\n line\nBug #3126: StreamHandler and\n FileHandler check before calling\n \"flush\" and \"close \" that the stream\n object has these, using hasattr\n (thanks to bobf for the patch) .\n\nWhich version of Python do you have? Looks like 2.4.6 and 2.5.3 or newer should have the correct code, if this is really the problem.\n" ]
[ 0, 0 ]
[]
[]
[ "cherrypy", "logging", "python" ]
stackoverflow_0001675441_cherrypy_logging_python.txt
Q: copy files from IIS 6.0 server to client machine without showing file dialog window on click of button in ASP.NET 3.5

The File.VBS file should be copied from the IIS 6.0 server (File.VBS will be deployed in IIS along with the ASP.NET 3.5 application) to the client's "TEMP" folder without opening the file download dialog box. Thanks!

A: As indicated in the comment by Cheeso, this is not possible! This would constitute a very dangerous security hole! Although brief on this topic, RFC 2616 is nonetheless explicit on this point, in particular with regards to the User Agent's (read the "Web Browser") duties in that regard.

The receiving user agent SHOULD NOT respect any directory path information present in the filename-parm parameter, which is the only parameter believed to apply to HTTP implementations at this time. The filename SHOULD be treated as a terminal component only.

If this header is used in a response with the application/octet-stream content-type, the implied suggestion is that the user agent should not display the response, but directly enter a `save response as...' dialog.
copy files from IIS 6.0 server to client machine without showing file dialog window on click of button in ASP.NET 3.5
The File.VBS file should be copied from the IIS 6.0 server (File.VBS will be deployed in IIS along with the ASP.NET 3.5 application) to the client's "TEMP" folder without opening the file download dialog box. Thanks!
[ "As indicated in the comment by Cheeso,\n this is not possible!\nThis would constitute a very dangerous security hole!\nAlthough brief on this topic, the RFC 2616 is none the less explicit on this point, in particular with regards to the User Agent's (read the \"Web Browser\") duties in that regard.\n\nThe receiving user agent SHOULD NOT respect any directory path information \npresent in the filename-parm parameter, which is the only parameter believed\nto apply to HTTP implementations at this time. The filename SHOULD be treated\nas a terminal component only.\n\nIf this header is used in a response with the application/octet- stream \ncontent-type, the implied suggestion is that the user agent should not display\nthe response, but directly enter a `save response as...' dialog.\n\n" ]
[ 1 ]
[]
[]
[ "c#", "python", "ruby" ]
stackoverflow_0001680233_c#_python_ruby.txt
Q: Python: How much space does each element of a list take?

I need a very large list, and am trying to figure out how big I can make it so that it still fits in 1-2GB of RAM. I am using the CPython implementation, on 64 bit (x86_64). Edit: thanks to bua's answer, I have filled in some of the more concrete answers.

What is the space (memory) usage of (in bytes):

the list itself: sys.getsizeof([]) == 72
each list entry (not including the data): sys.getsizeof([0, 1, 2, 3]) == 104, so 8 bytes overhead per entry.
the data if it is an integer (but it varies according to integer size):
    sys.getsizeof(2**62) == 24
    sys.getsizeof(2**63) == 40
    sys.getsizeof(2**128) == 48
    sys.getsizeof(2**256) == 66
the data if it is an object (sizeof(PyObject), I guess): sys.getsizeof(C()) == 72 (C is an empty user-space object)

If you can share more general data about the observed sizes, that would be great. For example: Are there special cases (I think immutable values might be shared, so maybe a list of bools doesn't take any extra space for the data)? Perhaps small lists take X bytes overhead but large lists take Y bytes overhead?

A: point to start:

>>> import sys
>>> a=list()
>>> type(a)
<type 'list'>
>>> sys.getsizeof(a)
36
>>> b=1
>>> type(b)
<type 'int'>
>>> sys.getsizeof(b)
12

and from python help:

>>> help(sys.getsizeof)
Help on built-in function getsizeof in module sys:

getsizeof(...)
    getsizeof(object, default) -> int

    Return the size of object in bytes.

A: If you want lists of numerical values, the standard array module provides optimized arrays (that have an append method). The non-standard, but commonly used NumPy module gives you fixed-size efficient arrays.
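A rough sketch for measuring this empirically on your own build (the numbers vary by platform and Python version, and this only approximates: getsizeof does not follow references, and shared objects such as small cached ints get over-counted):

import sys

def list_footprint(items):
    """Rough upper bound: size of the list object plus each element."""
    return sys.getsizeof(items) + sum(sys.getsizeof(x) for x in items)

print list_footprint(range(1000))  # a few tens of KB on a 64-bit CPython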
Python: How much space does each element of a list take?
I need a very large list, and am trying to figure out how big I can make it so that it still fits in 1-2GB of RAM. I am using the CPython implementation, on 64 bit (x86_64). Edit: thanks to bua's answer, I have filled in some of the more concrete answers.

What is the space (memory) usage of (in bytes):

the list itself: sys.getsizeof([]) == 72
each list entry (not including the data): sys.getsizeof([0, 1, 2, 3]) == 104, so 8 bytes overhead per entry.
the data if it is an integer (but it varies according to integer size):
    sys.getsizeof(2**62) == 24
    sys.getsizeof(2**63) == 40
    sys.getsizeof(2**128) == 48
    sys.getsizeof(2**256) == 66
the data if it is an object (sizeof(PyObject), I guess): sys.getsizeof(C()) == 72 (C is an empty user-space object)

If you can share more general data about the observed sizes, that would be great. For example: Are there special cases (I think immutable values might be shared, so maybe a list of bools doesn't take any extra space for the data)? Perhaps small lists take X bytes overhead but large lists take Y bytes overhead?
[ "point to start: \n>>> import sys\n>>> a=list()\n>>> type(a)\n<type 'list'>\n>>> sys.getsizeof(a)\n36\n>>> b=1\n>>> type(b)\n<type 'int'>\n>>> sys.getsizeof(b)\n12\n\nand from python help:\n>>> help(sys.getsizeof)\nHelp on built-in function getsizeof in module sys:\n\ngetsizeof(...)\n getsizeof(object, default) -> int\n\n Return the size of object in bytes.\n\n", "If you want lists of numerical values, the standard array module provides optimized arrays (that have an append method).\nThe non-standard, but commonly used NumPy module gives you fixed-size efficient arrays.\n" ]
[ 11, 7 ]
[]
[]
[ "list", "memory", "performance", "python" ]
stackoverflow_0001680436_list_memory_performance_python.txt
Q: Custom exception handling in Python

I have two modules, main and notmain. I declared my custom exception in the main module and want to catch it. This exception is raised in the notmain module. The problem is I can't catch my exception raised in the notmain module.

main.py:

class MyException(Exception):
    pass

m = __import__('notmain')

try:
    m.func()
except MyException as e:
    print(type(e))
    print('ops')

notmain.py:

def func():
    import main                                    # 1
    # from main import MyException                 # 2
    # from main import MyException as MyException  # 3

    raise main.MyException                         # 1
    # raise MyException                            # 2, 3

I tried different import methods, with the same result. When I run main.py, I see this:

<class 'main.MyException'>
ops
Traceback (most recent call last):
  File "D:\exception\main.py", line 6, in <module>
    m.func()
  File "D:\exception\notmain.py", line 6, in func
    raise main.MyException # 1
main.MyException

This means the exception is somehow caught, but why do I see a traceback? And why does the exception class have the "main.MyException" name? Now if I modify main.py a little bit and run it:

try:
    raise MyException
except MyException as e:
    print(type(e))
    print('ops')

I'll see what's expected:

<class '__main__.MyException'>
ops

I don't know why this class MyException has different names in main.py and in notmain.py, or why Python can't catch it as expected. Thank you :)

A: Your module main is imported twice (as main and __main__), each having its own class MyException. You should consider redesigning your application to avoid circular imports.

A: The __main__ name, with underscores, is an automatic namespace for the program being called. A workaround would be to declare the exception in a third file (or have a third file that is the user-called program, and just a single method in your "real" program). Also, the way you import notmain may have something to do with it. Why not just "from notmain import func"?
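A sketch of the third-module fix suggested in the answers (the file name errors.py is illustrative): moving the exception out of main means both importers share a single class object.

# errors.py -- holds the shared exception
class MyException(Exception):
    pass

# notmain.py
from errors import MyException

def func():
    raise MyException

# main.py
from errors import MyException
import notmain

try:
    notmain.func()
except MyException as e:
    print(type(e))  # matches now: both modules share errors.MyException
    print('ops')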
Custom exception handling in Python
I have two modules, main and notmain. I declared my custom exception in the main module and want to catch it. This exception is raised in the notmain module. The problem is I can't catch my exception raised in the notmain module.

main.py:

class MyException(Exception):
    pass

m = __import__('notmain')

try:
    m.func()
except MyException as e:
    print(type(e))
    print('ops')

notmain.py:

def func():
    import main                                    # 1
    # from main import MyException                 # 2
    # from main import MyException as MyException  # 3

    raise main.MyException                         # 1
    # raise MyException                            # 2, 3

I tried different import methods, with the same result. When I run main.py, I see this:

<class 'main.MyException'>
ops
Traceback (most recent call last):
  File "D:\exception\main.py", line 6, in <module>
    m.func()
  File "D:\exception\notmain.py", line 6, in func
    raise main.MyException # 1
main.MyException

This means the exception is somehow caught, but why do I see a traceback? And why does the exception class have the "main.MyException" name? Now if I modify main.py a little bit and run it:

try:
    raise MyException
except MyException as e:
    print(type(e))
    print('ops')

I'll see what's expected:

<class '__main__.MyException'>
ops

I don't know why this class MyException has different names in main.py and in notmain.py, or why Python can't catch it as expected. Thank you :)
[ "Your module main is imported twice (as main and __main__), each having its own class MyException. You should consider redesigning your application to avoid circular imports.\n", "The __main__ name, with underscores, is an automatic namespace for the program being called. A workaround would be to declare the exception in a third file (or have a third file that is the user-called program, and just a single method in your \"real\" program).\nAlso, the way you import notmain may have something to do with it. Why not just \"from notmain import func\" ?\n" ]
[ 8, 1 ]
[]
[]
[ "exception", "exception_handling", "python" ]
stackoverflow_0001681036_exception_exception_handling_python.txt
Q: How can I replace a class with another class from another module in a lot of files without a lot of manual editing?

Basically, I have a lot of Python classes (representing our database schema) that look something like this:

from foo import xyz, b, c

class bar(object):
    x = xyz()
    y = b()
    z = c()

...and I want to change it to this:

from foo import b, c
from baz import foobar

class bar(object):
    x = foobar()
    y = b()
    z = c()

Essentially, I just want to replace all instances of xyz with foobar. It's acceptable to me to leave the import of a in, so this would also be fine:

from foo import a, b, c
from baz import foobar

class bar(object):
    x = foobar()
    y = b()
    z = c()

It seems trivial to do a sed s/xyz/foobar/ on this, but then I'd still have to go back and change the import statements. I'm fine with doing some manual work, but I'd like to learn new ways to minimize the amount of it. So how would you do this change? Is there anything I can do with sed to do this? Or rope (I don't see anything obvious that would help me here)?

A: Monkey-patching would be the quick and dirty way to do it -- before you do any other import, perform the following preliminary:

import foo
import baz
foo.a = baz.m

now, every subsequent use of attribute a of module foo will actually be using class m of module baz, as required, rather than the original class a of module foo. Not particularly clean, but potentially quite effective. Just DO make sure the monkey-patching happens BEFORE any other import (you can also chase throughout the object graph to locate every reference to foo.a that was put in place before the patching and change it into a baz.m, but that's way heavier and trickier).

A: sed s/a/m would be disastrous since bar would be changed to bmr. If the variable names are truly short and/or non-unique, non-regex-able, then perhaps the easiest thing to do would be to insert

from baz import m as a

Then you don't have to change any other code further down in the file. You could use sed to change

from foo import a,b,c

to

from foo import b,c

though

from foo import a,b,c
from baz import m as a

would work too, since the last import wins.

A: You might try this sed script:

sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\1\3/; T; a\from baz import foobar'

or, equivalently:

sed 's/\(^from foo import.*\)\(xyz, \|, xyz\)\(.*\)/\1\3/; T; a\from baz import foobar'

If you try it like this, you'll get the results shown:

$ echo "from foo import xyz, b, c"|sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\1\3/; T; a\from baz import foobar'
from foo import b, c
from baz import foobar

$ echo "from foo import b, xyz, c"|sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\1\3/; T; a\from baz import foobar'
from foo import b, c
from baz import foobar

$ echo "from foo import b, c, xyz"|sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\1\3/; T; a\from baz import foobar'
from foo import b, c
from baz import foobar

The T command in sed branches to a label (or the end if no label is given) if no substitution is made. In this example, the "from baz" line is only appended once:

$ echo "from foo import d, e, f
from foo import xyz, b, c
from bar import g, h, i"|sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\1\3/;a\from baz import foobar'
from foo import d, e, f
from foo import b, c
from baz import foobar
from bar import g, h, i

A: I have not used rope, but can't you move a to baz then rename baz.a to baz.m? You can in other refactoring tools for other languages, and the rope page suggests it can. For minimal edits - but probably worse style and maintainability - make foo.a call baz.m.

A: I wouldn't use monkeypatching, I'd implement my own functions:

import foo

def xyz():
    return foo.xyz()

def b():
    return foo.b()

def c():
    return foo.c()

Now I can change xyz() to make it do anything that I want, and if I ever want to explicitly call foo.xyz(), I can. Also, if I stick that code in a module, I can globally replace from foo import with from my_foo import in all of the modules that presently use foo.
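If you do want to script the whole edit (imports plus usages) rather than monkey-patch, a hedged Python sketch using fileinput (it assumes the exact from foo import ... style shown in the question, that xyz is never the only imported name, and that no unrelated identifier is called xyz -- all worth checking before running it in place):

import fileinput, glob, re, sys

for line in fileinput.input(glob.glob('*.py'), inplace=True):
    if line.startswith('from foo import'):
        # drop xyz from the import list, then append the replacement import
        line = re.sub(r'xyz,\s*|,\s*xyz', '', line)
        line = line + 'from baz import foobar\n'
    else:
        line = re.sub(r'\bxyz\b', 'foobar', line)
    sys.stdout.write(line)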
How can I replace a class with another class from another module in a lot of files without a lot of manual editing?
Basically, I have a lot of Python classes (representing our database schema) that look something like this: from foo import xyz, b, c class bar(object): x = xyz() y = b() z = c() ...and I want to change it to this: from foo import b, c from baz import foobar class bar(object): x = foobar() y = b() z = c() Essentially, I just want to replace all instances of xyz with foobar. It's acceptable to me to leave the import of a in, so this would also be fine: from foo import a, b, c from baz import foobar class bar(object): x = foobar() y = b() z = c() It seems trivial to do a sed s/xyz/foobar/ on this, but then I'd still have to go back and change the import statements. I'm fine with doing some manual work, but I'd like to learn new ways to minimize the amount of it. So how would you do this change? Is there anything I can do with sed to do this? Or rope (I don't see anything obvious that would help me here)?
[ "Monkey-patching would be the quick and dirty way to do it -- before you do any other import, perform the following preliminary:\nimport foo\nimport baz\nfoo.a = baz.m\n\nnow, every subsequent use of attribute a of module foo will actually be using class m of module baz, as required, rather than the original class a of module foo. Not particularly clean, but potentially quite effective. Just DO make sure the monkey-patching happens BEFORE any other import (you can also chase throughout the object graph to locate every reference to foo.a that was put in place before the patching and change it into a baz.m, but that's way heavier and trickier).\n", "sed s/a/m would be disastrous since bar would be changed to bmr.\nIf the variable names are truly short and/or non-unique, non-regex-able,\nthen perhaps the easiest thing to do would be to insert\nfrom baz import m as a\n\nThen you don't have to change any other code further down in the file.\nYou could use sed to change\nfrom foo import a,b,c \n\nto\nfrom foo import b,c\n\nthough \nfrom foo import a,b,c \nfrom baz import m as a\n\nwould work too, since the last import wins.\n", "You might try this sed script:\nsed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\\1\\3/; T; a\\from baz import foobar'\n\nor, equivalently:\nsed 's/\\(^from foo import.*\\)\\(xyz, \\|, xyz\\)\\(.*\\)/\\1\\3/; T; a\\from baz import foobar'\n\nIf you try it like this, you'll get the results shown:\n$ echo \"from foo import xyz, b, c\"|sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\\1\\3/; T; a\\from baz import foobar'\nfrom foo import b, c\nfrom baz import foobar\n\n$ echo \"from foo import b, xyz, c\"|sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\\1\\3/; T; a\\from baz import foobar'\nfrom foo import b, c\nfrom baz import foobar\n\n$ echo \"from foo import b, c, xyz\"|sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\\1\\3/; T; a\\from baz import foobar'\nfrom foo import b, c\nfrom baz import foobar\n\nThe T command in sed branches to a label (or the end if no label is given) if no substitution is made. In this example, the \"from baz\" line is only appended once:\n$ echo \"from foo import d, e, f\nfrom foo import xyz, b, c\nfrom bar import g, h, i\"|sed -r 's/(^from foo import.*)(xyz, |, xyz)(.*)/\\1\\3/;a\\from baz import foobar'\nfrom foo import d, e, f\nfrom foo import b, c\nfrom baz import foobar\nfrom bar import g, h, i\n\n", "I have not used rope but can't you move a to baz then rename baz.a to baz.m You can in other refactoring tools for other languages and the rope page suggests it can.\nFor minimal edits - but probably worse style and maintainability make foo.a call baz.m\n", "I wouldn't use monkeypatching, I'd implement my own functions:\nimport foo\n\ndef xyz():\n return foo.xyz()\n\ndef b():\n return foo.b()\n\ndef c():\n return foo.c()\n\nNow I can change xyz() to make it do anything that I want, and if I ever want to explicitly call foo.xyz(), I can.\nAlso, if I stick that code in a module, I can globally replace from foo import with from my_foo import in all of the modules that presently use foo.\n" ]
[ 3, 1, 1, 0, 0 ]
[]
[]
[ "automation", "python", "rope", "sed" ]
stackoverflow_0001674791_automation_python_rope_sed.txt
Q: How can I change a list of strings into CSV in Python?

Example strings:

uji708
uhodih
utus29
agamu4
azi340
ekon62

I need to change them into a CSV list like this:

uji708,uhodih,utus29,
agamu4,azi340,ekon62,

My code so far:

email = 'mail_list.txt'
handle = open(email)
for line in handle:
    try:
        email = line.split()[0].replace('\n', '')
        l = line.split()
        print '\n'.join((','.join(x) for x in zip(l[::3], l[1::3], l[2::3])))
    except:
        print 'error'

How can I do this in Python?

A: Use csv.writer:

import csv
import sys

writer = csv.writer(sys.stdout)
writer.writerow(iterable_containing_my_strings)

A: Here is a very specific answer to a very specific question; when you clarify/generalize your question I may update my answer.

s = """
uji708
uhodih
utus29
agamu4
azi340
ekon62
"""
l = s.split()
print '\n'.join((','.join(x) for x in zip(l[::3], l[1::3], l[2::3])))
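Putting the pieces together, a sketch that reads mail_list.txt and writes rows of three to output.csv with the csv module (the group size of 3 is copied from the question's zip slicing):

import csv

with open('mail_list.txt') as src:
    names = src.read().split()

with open('output.csv', 'wb') as dst:  # 'wb' keeps the Python 2 csv module happy
    writer = csv.writer(dst)
    for i in range(0, len(names), 3):
        writer.writerow(names[i:i+3])  # the last row may hold fewer than 3 names

Note that csv.writer does not emit the trailing comma shown in the sample output; if that trailing empty field is really required, write names[i:i+3] + [''] instead.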
How can I change a list of strings into CSV in Python?
Example strings:

uji708
uhodih
utus29
agamu4
azi340
ekon62

I need to change them into a CSV list like this:

uji708,uhodih,utus29,
agamu4,azi340,ekon62,

My code so far:

email = 'mail_list.txt'
handle = open(email)
for line in handle:
    try:
        email = line.split()[0].replace('\n', '')
        l = line.split()
        print '\n'.join((','.join(x) for x in zip(l[::3], l[1::3], l[2::3])))
    except:
        print 'error'

How can I do this in Python?
[ "Use csv.writer:\nimport csv\nimport sys\n\nwriter = csv.writer(sys.stdout)\nwriter.writerow(iterable_containing_my_strings)\n", "Here is a very specific answer to a very specific question\nwhen you will clarify/generalize your question I may update my answer\ns = \"\"\"\nuji708\nuhodih\nutus29\nagamu4\nazi340\nekon62\n\"\"\"\nl = s.split()\nprint '\\n'.join((','.join(x) for x in zip(l[::3], l[1::3], l[2::3])))\n\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001671786_python.txt
Q: Cross-Platform Programming Language with a decent gui toolkit?

For the program idea I have, it requires that the software be written in one binary that is executeable by all major desktop platforms, meaning it needs an interpreted language or a language within a JVM. Either is fine with me, but the programming language has to balance power & simplicity (e.g. Python).

I know of wxPython but I have read that its support on Mac OS X is fairly limited. Java sounds good & it looks good, but it seems almost too difficult to program in. Any help?

A: I used Python with wxPython for quite a while and found it very easy to use. I now use Java with both Swing and SWT. I prefer Java but that's just a personal preference so you shouldn't let that sway you. I didn't find the transition from Python to Java that difficult. In terms of GUI, they both have the layout manager paradigm - the managers are different but not so different you'll have trouble switching. Java has an absolutely huge class library to the point where you probably don't need to write your own version of anything, just string together the components. I never really got that deep into Python but it may well be similar. One thing I did notice is that all the really good stuff I used in Python (e.g., s[-4:-1]) could still be done quite easily in Java. Both languages were a step up from C where I had to manage strings with my own libraries. If you think wxPython is limited on MacOS, you should try Java. I run my Java code on Windows, Linux and other UNIXes without compatibility problems. Sadly, not Mac, so I can't really advise you there. My advice: pick a smallish project - do it in both Python and Java - see how it runs on all the platforms you're interested in.

A: Python with PyQt or the eventually-to-be-equivalent-but-gratis PySide seems the way to go -- after all, few languages are easier to program in than Java (which you consider "almost too difficult to program in"), Python is one of those few, Qt arguably the best cross-platform GUI toolkit in any language, and PyQt (now, but GPL or for-$$$) or PySide (eventually, gratis even if you want to close-source your own code) are powerful interfaces between Python and Qt.

A: You can use any of the languages targeting the JVM, e.g. Jython (Python impl) and JRuby (Ruby impl). You can try using Qt bindings for Python; Qt seems to support many Mac OSX specifics.

A: Consider Tcl/Tk. I'm not sure how you define "one binary that is executeable [sic] by all major desktop platforms" but Tcl probably meets this as well as java, and likely better than any other scripting language. Using the tcl packaging technology of starkits you can either a) create a single file that can be run on any platform that has an appropriate runtime engine (and they are available for all major and many minor platforms), or you can package that platform-specific runtime engine and a cross-platform starkit into a single file executable for each platform. The starkit technology is something other languages should aspire to. What you get is a complete, fully functional virtual file system within a single file. This lets you easily package up sound files, dll/.so files (which must be copied to disk for obvious (?) reasons), images, data, etc along with your executable code. Tk, the graphical library, is very mature and has really good support on all platforms. Some people think it looks dated but those impressions are usually based on information that is at least 5 years old. Modern Tk looks quite good. For some examples see the tkdocs website. It's not clear whether you're more concerned with eye candy or functionality, but if it's functionality you're interested in then Tk is something to seriously consider. Most agree that Tcl is an acquired taste but those that use it professionally usually swear by it. I've been doing wxPython programming the last several months and would switch back to tcl/tk in a heartbeat if given the opportunity.

A: You could use Groovy to work around the Java complexities. Still, you'll need good foundations of Swing. While the learning curve may be steep, the trade-off of not having to completely re-write the whole application again for the next platform will be a good reward. Bear in mind that even though it is cross platform, you should consider that different platforms still have different idioms (e.g. Copy/Paste in Windows is ctrl+c, ctrl+v while on Mac it is cmd+c, cmd+v).

A: I work on a program that has to run on Windows, Linux and OS X (and OS X is my development platform), and wxPython is what we use. If I had a chance to start again, I'd probably go with PyQT (based on advice from friends), but wxPython will get the job done.

A: I think wxPython is pretty good, though I am not sure what you mean by "support on Mac OS X is fairly limited"; I have been porting a wxPython app (www.mockupscreens.com) to Mac and it wasn't that difficult with a few tweaks, e.g. some UI elements may not come up as you expected, as wxPython uses native UI elements, which can be an advantage or disadvantage based on your requirements. Another good option is PyQT, which will give you a consistent look on all platforms.

A: Java seems better for what you want. Well, what about a web application in Javascript?

A: How about SWT?

Cross Platform
Native Look and feel
Huge community
Constantly maintained/upgraded (IBM backed)
At least one mega successful cross platform project

A: I would suggest going the wxPython route. I know that wxWidgets (which is what wxPython is using) can be made to have great looking Mac apps (look at PgAdmin3 from postgresql). While PgAdmin3 is not done in python, it was done with wxWidgets and looks fine on a mac.

A: I use three cross-platform tools regularly: Realbasic from Realsoftware, which is what Visual Basic v6 would have been if allowed to grow; Revolution from Runrev, which is what Hypercard would have been if allowed to survive (and it's neat using a scripting language whose syntax is basically English); and finally, Delphi Prism with Mono. All are quite mature and yet expanding at a great rate. For instance, Revolution is just introducing a web-application feature to its language that is really easy to use.
Cross-Platform Programming Language with a decent gui toolkit?
For the program idea I have, it requires that the software be written in one binary that is executeable by all major desktop platforms, meaning it needs an interpreted language or a language within a JVM. Either is fine with me, but the programming language has to balance power & simplicity (e.g. Python).

I know of wxPython but I have read that its support on Mac OS X is fairly limited. Java sounds good & it looks good, but it seems almost too difficult to program in. Any help?
[ "I used Python with wxPython for quite a while and found it very easy to use. I now use Java with both Swing and SWT.\nI prefer Java but that's just a personal preference so you shouldn't let that sway you.\nI didn't find the transition from Python to Java that difficult. In terms of GUI, they both have the layout manager paradigm - the managers are different but not so different you'll have trouble switching.\nJava has an absolutely huge class library to the point where you probably don't need to write your own version of anything, just string together the components. I never really got that deep into Python but it may well be similar. One thing I did notice is that all the really good stuff I used in Python (e.g., s[-4:-1]) could still be done quite easily in Java. Both languages were a step up from C where I had to manage strings with my own libraries.\nIf you think wxPython is limited on MacOS, you should try Java. I run my Java code on Windows, Linux and other UNIXes without compatibility problems. Sadly, not Mac, so I can't really advise you there.\nMy advice, pick a smallish project - do it in both Python and Java - see how it runs on all the platforms you're interested in.\n", "Python with PyQt or the eventually-to-be-equivalent-but-gratis PySide seems the way to go -- after all, few languages are easier to program in than Java (which you consider \"almost too difficult to program in\"), Python is one of those few, Qt arguably the best cross-platform GUI toolkit in any language, and PyQt (now, but GPL or for-$$$) or PySide (eventually, gratis even if you want to close-source your own code) are powerful interfaces between Python and Qt.\n", "\nYou can use any of the languages targeting the JVM, e.g. Jython (Python impl) and JRuby (Ruby impl).\nYou can try using Qt bindings for Python, Qt seems to support many of Mac OSX specifics.\n\n", "Consider Tcl/Tk. I'm not sure how you define \"one binary that is executeable [sic] by all major desktop platforms\" but Tcl probably meets this as well as java, and likely better than any other scripting language.\nUsing the tcl packaging technology of starkits you can either a) create a single file that can be run on any platform that has an appropriate runtime engine (and they are available for all major and many minor platforms), or you can package that platform-specific runtime engine and a cross-platform starkit into a single file executable for each platform.\nThe starkit technology is something other languages should aspire to. What you get is a complete, fully functional virtual file system within a single file. This lets you easily package up sound files, dll/.so files (which must be copied to disk for obvious (?) reasons), images, data, etc along with your executable code.\nTk, the graphical library, is very mature and has really good support on all platforms. Some people think it looks dated but those impressions are usually based on information that is at least 5 years old. Modern Tk looks quite good. For some examples see the tkdocs website. It's not clear whether you're more concerned with eye candy or functionality, but if it's functionality you're interested in then Tk is something to seriously consider. \nMost agree that Tcl is an acquired taste but those that use it professionally usually swear by it. I've been doing wxPython programming the last several months and would switch back to tcl/tk in a heartbeat if given the opportunity.\n", "You could use Groovy to work around the Java complexities.\nStill you'll need good foundations of Swing. \nWhile the learning curve may be steep, the trade-off of not having to completely re-write the whole application again for the next platform will be a good reward. \nBear in mind, that even though it is cross platform, you should consider different platforms still have different idioms ( e.g. Copy/Paste in Windows is ctrl+c, ctrl+v while in Mac it is cmd+c, cmd+v ) \n", "I work on a program that has to run on Windows, Linux and OS X (and OS X is my development platform), and wxPython is what we use.\nIf I had a chance to start again, I'd probably go with PyQT (based on advice from friends), but wxPython will get the job done.\n", "I think wxPython is pretty good, though I am not sure what you mean by \"support on Mac OS X is fairly limited\" but I have been porting a wxPython app (www.mockupscreens.com) to Mac and it wasn't that difficult with few tweaks e.g. some UI elements may not come up as you expected, as wxPython uses native UI elements, which can be an advantage or disadvantage based on your requirements.\nAnother good option is PyQT which will give you consistent look on all platforms.\n", "Java seems better for what you want.\nWell, what about a web application in Javascript?\n", "How about SWT \n\nCross Platform\nNative Look and feel\nHuge community \nConstantly maintained/upgraded (IBM backed)\nAt least one mega successful cross platform project \n\n", "I would suggest going the wxPython route, I know that wxWidgets (which is what wxPython is using) can be made to have great looking Mac apps (look at PgAdmin3 from postgresql). While PgAdmin3 is not done in python, it was done with wxWidgets and looks fine on a mac.\n", "I use three cross-platform tools regularly: Realbasic from Realsoftware which is what Visual Basic v6 would have been if allowed to grow; Revolution from Runrev which is what Hypercard would have been if allowed to survive (and it's neat using a scripting language whose syntax is basically English); and finally, Delphi Prism with Mono.\nAll are quite mature and yet expanding at a great rate. For instance, Revolution is just introducing a web-application feature to its language that is really easy to use. \n" ]
[ 6, 5, 3, 3, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "cross_platform", "java", "multiplatform", "python", "wxpython" ]
stackoverflow_0001653419_cross_platform_java_multiplatform_python_wxpython.txt
Q: Nested Python C Extensions/Modules? How do I compile a C-Python module such that it is local to another? E.g. if I have a module named "bar" and another module named "mymodule", how do I compile "bar" so that it is imported via "import mymodule.bar"? (Sorry if this is poorly phrased, I wasn't sure what the proper term for it was.) I tried the following in setup.py, but it doesn't seem to work: from distutils.core import setup, Extension setup(name='mymodule', version='1.0', author='Me', ext_modules=[Extension('mymodule', ['mymodule-module.c']), Extension('bar', ['bar-module.c'])]) Edit Thanks Alex. So this is what I ended up using: from distutils.core import setup, Extension PACKAGE_NAME = 'mymodule' setup(name=PACKAGE_NAME, version='1.0', author='Me', packages=[PACKAGE_NAME], ext_package=PACKAGE_NAME, ext_modules=[Extension('foo', ['mymodule-foo-module.c']), Extension('bar', ['mymodule-bar-module.c'])]) with of course a folder named "mymodule" containing __init__.py. A: The instructions are here: Extension('foo', ['src/foo1.c', 'src/foo2.c']) describes an extension that lives in the root package, while Extension('pkg.foo', ['src/foo1.c', 'src/foo2.c']) describes the same extension in the pkg package. The source files and resulting object code are identical in both cases; the only difference is where in the filesystem (and therefore where in Python’s namespace hierarchy) the resulting extension lives. Remember, a package is always a directory (or zipfile) containing a module __init__. To create a module that's a package body, that module will be called __init__ and live under the package's directory (or zipfile). I've never done that in C; if it doesn't work to do it directly, name the module e.g. _init instead, and in __init__.py do from _init import * (one of the very few legitimate uses of from ... import *;-).
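A: To make the accepted fix concrete, here is a minimal sketch of the layout that the corrected setup.py expects, plus a quick check that the nesting works; the file and package names are the hypothetical ones from the question:
mymodule/
    __init__.py              # makes 'mymodule' a package (may be empty)
mymodule-foo-module.c        # C source whose Python 2 init function is initfoo
mymodule-bar-module.c        # C source whose Python 2 init function is initbar
setup.py                     # the version with packages=[...] and ext_package=...

# python setup.py build_ext --inplace
import mymodule.foo
import mymodule.bar
print mymodule.foo.__name__  # -> 'mymodule.foo'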
Nested Python C Extensions/Modules?
How do I compile a C-Python module such that it is local to another? E.g. if I have a module named "bar" and another module named "mymodule", how do I compile "bar" so that it is imported via "import mymodule.bar"? (Sorry if this is poorly phrased, I wasn't sure what the proper term for it was.) I tried the following in setup.py, but it doesn't seem to work: from distutils.core import setup, Extension setup(name='mymodule', version='1.0', author='Me', ext_modules=[Extension('mymodule', ['mymodule-module.c']), Extension('bar', ['bar-module.c'])]) Edit Thanks Alex. So this is what I ended up using: from distutils.core import setup, Extension PACKAGE_NAME = 'mymodule' setup(name=PACKAGE_NAME, version='1.0', author='Me', packages=[PACKAGE_NAME], ext_package=PACKAGE_NAME, ext_modules=[Extension('foo', ['mymodule-foo-module.c']), Extension('bar', ['mymodule-bar-module.c'])]) with of course a folder named "mymodule" containing __init__.py.
[ "The instructions are here:\n\nExtension('foo', ['src/foo1.c',\n'src/foo2.c'])\ndescribes an extension that lives in\nthe root package, while\nExtension('pkg.foo', ['src/foo1.c',\n'src/foo2.c'])\ndescribes the same extension in the\npkg package. The source files and\nresulting object code are identical in\nboth cases; the only difference is\nwhere in the filesystem (and therefore\nwhere in Python’s namespace hierarchy)\nthe resulting extension lives.\n\nRemember, a package is always a directory (or zipfile) containing a module __init__. To create a module that's a package body, that module will be called __init__ and live under the package's directory (or zipfile). I've never done that in C; if it doesn't work to do it directly, name the module e.g. _init instead, and in __init__.py do from _init import * (one of the very few legitimate uses of from ... import *;-).\n" ]
[ 5 ]
[]
[]
[ "distutils", "python", "python_c_api" ]
stackoverflow_0001681281_distutils_python_python_c_api.txt
Q: Detailed explanation about Python's "freeze" Is there anywhere a detailed explanation about Python's "freeze" thing? I saw the PyPi page, but I don't think it's comprehensive enough. A: There is documentation about freeze on the wiki and the source docstring is pretty good. There is an alternative, cx_Freeze. For Windows there is py2exe. For Macs, py2app. Unless you are trying to make a single-download type program for Windows, it is often easier to rely on eggs or source packages installed via setup.py/setuptools/distribute/pip.
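A: Since freeze itself is thinly documented, a minimal cx_Freeze setup script may be the quickest way to experiment; this is a sketch against the classic cx_Freeze API, and main.py is a hypothetical entry point:
from cx_Freeze import setup, Executable

setup(name='myapp',
      version='0.1',
      description='Frozen example',
      executables=[Executable('main.py')])

# Build the frozen app into build/ with:
#   python setup.py build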
Detailed explanation about Python's "freeze"
Is there anywhere a detailed explanation about Python's "freeze" thing? I saw the PyPi page, but I don't think it's comprehensive enough.
[ "There is documentation about freeze on the wiki and the source docstring is pretty good.. There is an alternative, cx_Freeze. For windows there is py2exe. For Macs, py2app. \nUnless you are trying to make a single-download type program for windows, it is often easier to rely on eggs or source packages installed via setup.py/setuptools/distribute/pip.\n" ]
[ 6 ]
[]
[]
[ "freeze", "python" ]
stackoverflow_0001681021_freeze_python.txt
Q: Komodo Python auto complete: type inference by variable metadata? I'm using Komodo Edit for Python development, and I want to get the best out of the auto complete. If I do this: a = A() a. I can see a list of members of A. But if I do this: a = [A()] b = a[0] b. It does not work. I want to be able to do this: a = [A()] b = a[0] """b Type: A """ b. So how can I tell the auto complete that b is of type A? A: This doesn't really answer your question, but with Wing IDE you can give hints to the type analyzer with assert isinstance(b, A). See here. I haven't found a way to do it with Komodo, though apparently it's possible when writing PHP or JavaScript. Update: I've found a way to trick Komodo into doing this: if 0: b=A() This works (at least on Komodo 5.2) and has no side effects, but is sure to confuse whoever reads your code. A: I don't think that you'll have much luck with this. The problem is that it's really quite difficult to statically infer the type of variables in Python except in the simplest of cases. Often the type isn't known until run-time and so auto completion isn't possible. The IDE does some static analysis to work out the obvious and best guesses, but I'll bet it isn't even trying for elements in a container. Although we can work out that b is of type A even small variations to your code can make it unknowable, especially as it's in a mutable container. Incidentally I've tried this on the full Komodo IDE and it's no better. I hear that Wing IDE has excellent code completion, but I wouldn't be sure it could do any better either.
Komodo Python auto complete: type inference by variable metadata?
I'm using Komodo Edit for Python development, and I want to get the best out of the auto complete. If I do this: a = A() a. I can see a list of members of A. But if I do this: a = [A()] b = a[0] b. It does not work. I want to be able to do this: a = [A()] b = a[0] """b Type: A """ b. So how can I tell the auto complete that b is of type A?
[ "This doesn't really answer your question, but with Wing IDE you can give hints to the type analyzer with assert isinstance(b, A). See here. I haven't found a way to do it with Komodo, though apparently it's possible when writing PHP or JavaScript.\nUpdate:\nI've found a way to trick Komodo into doing this:\nif 0: b=A()\n\nThis works (at least on Komodo 5.2) and has no side effects, but is sure to confuse whoever reads your code.\n", "I don't think that you'll have much luck with this. The problem is that it's really quite difficult to statically infer the type of variables in Python except in the simplest of cases. Often the type isn't known until run-time and so auto completion isn't possible.\nThe IDE does some static analysis to work out the obvious and best guesses, but I'll bet it isn't even trying for elements in a container. Although we can work out that b is of type A even small variations to your code can make it unknowable, especially as it's in a mutable container.\nIncidentally I've tried this on the full Komodo IDE and it's no better. I hear that Wing IDE has excellent code completion, but I wouldn't be sure it could do any better either.\n" ]
[ 8, 3 ]
[]
[]
[ "autocomplete", "komodo", "python" ]
stackoverflow_0001678953_autocomplete_komodo_python.txt
Q: How do I find out where an icon was clicked (relative to itself) using python? Essentially, what I want to do is have an icon that has different symbols for various programs at the bottom of it (for example, a python file might have a symbol for command prompt, a text editor, and a debugger, all little squares at the bottom of the icon), and when the user double clicks on one of these, that program is used to open it up. How can I put these symbols there (do I have to make a special icon, or is there an easier way to do it) and how can I see if they double click on these? Also, how can I stop the normal program from running when they double click, so only the program I want will work? Some background: I'm using Python 2.5 and I'm on a Windows Vista, but if possible I'd like for this to be able to run on XP or 7 as well. A: You use some GUI system, like GTK+ or wxPython or something. A: Sounds like you want to write an extension to windows explorer. This would be a significant project and I'm not sure Python would be the best language (you'd be accessing the win32 api a lot).
How do I find out where an icon was clicked (relative to itself) using python?
Essentially, what I want to do is have an icon that has different symbols for various programs at the bottom of it (for example, a python file might have a symbol for command prompt, a text editor, and a debugger, all little squares at the bottom of the icon), and when the user double clicks on one of these, that program is used to open it up. How can I put these symbols there (do I have to make a special icon, or is there an easier way to do it) and how can I see if they double click on these? Also, how can I stop the normal program from running when they double click, so only the program I want will work? Some background: I'm using Python 2.5 and I'm on a Windows Vista, but if possible I'd like for this to be able to run on XP or 7 as well.
[ "You use some GUI system, like GTK+ or wxPython or something.\n", "Sounds like you want to write an extension to windows explorer. This would be a significant project and I'm not sure Python would be the best language (you'd be accessing the win32 api a lot).\n" ]
[ 1, 1 ]
[]
[]
[ "icons", "python" ]
stackoverflow_0001681339_icons_python.txt
Q: How insecure is / replacement for tmpnam? I considered using tmpnam to set the output file name of a QPrinter. But the Python documentation recommends against using it. os.tmpnam() Return a unique path name that is reasonable for creating a temporary file. ... Applications are responsible for properly creating and managing files created using paths returned by tmpnam(); no automatic cleanup is provided. Warning Use of tmpnam() is vulnerable to symlink attacks; consider using tmpfile() (section File Object Creation) instead. Windows: Microsoft’s implementation of tmpnam() always creates a name in the root directory of the current drive, and that’s generally a poor location for a temp file (depending on privileges, you may not even be able to open a file using this name). Is this really insecure if my application doesn't need any special privileges? What are secure alternatives considering that I can only set a path as the output file name of the QPrinter? A: Please read http://docs.python.org/library/tempfile.html Use that instead. A: Depending on how your QPrinter deals with a file that already exists, you could use QTemporaryFile to create a file, then close the file and keep the reference to the QTemporaryFile object around until you are done with it. (This will also clean up the file for you when you destroy the object.)
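A: To make the tempfile suggestion concrete: tempfile.mkstemp creates the file itself (securely, with no symlink race) and hands back a real path you can give to QPrinter; printer here is assumed to be an existing QPrinter instance, and cleanup stays your job:
import os
import tempfile

fd, path = tempfile.mkstemp(suffix='.pdf')  # file is created atomically
os.close(fd)                  # we only need the name; QPrinter reopens it
printer.setOutputFileName(path)
# ... run the print job ...
# os.remove(path) when you are done with the output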
How insecure is / replacement for tmpnam?
I considered using tmpnam to set the output file name of a QPrinter. But the Python documentation recommends against using it. os.tmpnam() Return a unique path name that is reasonable for creating a temporary file. ... Applications are responsible for properly creating and managing files created using paths returned by tmpnam(); no automatic cleanup is provided. Warning Use of tmpnam() is vulnerable to symlink attacks; consider using tmpfile() (section File Object Creation) instead. Windows: Microsoft’s implementation of tmpnam() always creates a name in the root directory of the current drive, and that’s generally a poor location for a temp file (depending on privileges, you may not even be able to open a file using this name). Is this really insecure if my application doesn't need any special privileges? What are secure alternatives considering that I can only set a path as the output file name of the QPrinter?
[ "Please read http://docs.python.org/library/tempfile.html\nUse that instead.\n", "Depending on how your QPrinter deals with a file that already exists, you could use QTemporaryFile to create a file, then close the file and keep the reference to the QTemporaryFile object around until you are done with it. (This will also clean up the file for you when you destroy the object.)\n" ]
[ 7, 0 ]
[]
[]
[ "pyqt", "python", "qt", "security" ]
stackoverflow_0001679844_pyqt_python_qt_security.txt
Q: Cross-platform way to terminate a process in python When I try to kill a process in Windows with the subprocess.Popen.terminate() or kill() commands, I get an access denied error. I really need a cross-platform way to terminate the process if the file no longer exists (Yes, I know it's not the most elegant way of doing what I'm doing), I don't want to have to use platform calls or import win32api if at all possible. Also - Once I kill the task, I should be able to just delete the iteration of that portion of the library, no? (I remember reading something about having to use slice if I plan on working on something and modifying it while working on it?) #!/usr/bin/env python #import sys import time import os import subprocess import platform ServerRange = range(7878, 7890) #Range of ports you want your server to use. cmd = 'VoiceChatterServer.exe' #********DO NOT EDIT BELOW THIS LINE******* def Start_IfConfExist(i): if os.path.exists(str(i) + ".conf"): Process[i] = subprocess.Popen(" " + cmd + " --config " + str(i) + ".conf", shell=True) Process = {} for i in ServerRange: Start_IfConfExist(i) while True: for i in ServerRange: if os.path.exists(str(i) + ".conf"): res = Process[i].poll() if not os.path.exists(str(i) + ".conf"): #This is the problem area res = Process[i].terminate() #This is the problem area. if res is not None: Start_IfConfExist(i) print "\nRestarting: " + str(i) + "\n" time.sleep(1) A: You can easily make a platform independent call by doing something trivial like: try: import win32 def kill(param): # the code from S.Lott's link except ImportError: def kill(param): # the unix way Why this doesn't exist in Python by default I don't know, but there are very similar problems in other areas like file change notifications where it really isn't that hard to make a platform independent lib (or at least win+mac+linux). I guess it's open source so you have to fix it yourself :P
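A: A concrete version of that try/except idea, sketched with only the standard library (ctypes ships with Python 2.5+); the SIGTERM and exit-code choices are arbitrary:
import os
import sys

if sys.platform == 'win32':
    import ctypes
    def kill(pid):
        PROCESS_TERMINATE = 1
        handle = ctypes.windll.kernel32.OpenProcess(PROCESS_TERMINATE, False, pid)
        ctypes.windll.kernel32.TerminateProcess(handle, -1)
        ctypes.windll.kernel32.CloseHandle(handle)
else:
    import signal
    def kill(pid):
        os.kill(pid, signal.SIGTERM)

# with the question's dict of Popen objects:
# kill(Process[i].pid)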
Cross-platform way to terminate a process in python
When I try to kill a process in Windows with the subprocess.Popen.terminate() or kill() commands, I get an access denied error. I really need a cross-platform way to terminate the process if the file no longer exists (Yes, I know it's not the most elegant way of doing what I'm doing), I don't want to have to use platform calls or import win32api if at all possible. Also - Once I kill the task, I should be able to just delete the iteration of that portion of the library, no? (I remember reading something about having to use slice if I plan on working on something and modifying it while working on it?) #!/usr/bin/env python #import sys import time import os import subprocess import platform ServerRange = range(7878, 7890) #Range of ports you want your server to use. cmd = 'VoiceChatterServer.exe' #********DO NOT EDIT BELOW THIS LINE******* def Start_IfConfExist(i): if os.path.exists(str(i) + ".conf"): Process[i] = subprocess.Popen(" " + cmd + " --config " + str(i) + ".conf", shell=True) Process = {} for i in ServerRange: Start_IfConfExist(i) while True: for i in ServerRange: if os.path.exists(str(i) + ".conf"): res = Process[i].poll() if not os.path.exists(str(i) + ".conf"): #This is the problem area res = Process[i].terminate() #This is the problem area. if res is not None: Start_IfConfExist(i) print "\nRestarting: " + str(i) + "\n" time.sleep(1)
[ "You can easily make a platform independent call by doing something trivial like:\ntry:\n import win32\n def kill(param):\n # the code from S.Lotts link\nexcept ImportError:\n def kill(param):\n # the unix way\n\nWhy this doesn't exist in python by default I don't know, but there are very similar problems in other areas like file change notifications where it really isn't that hard to make a platform independent lib (or at least win+mac+linux). I guess it's open source so you have to fix it yourself :P\n" ]
[ 2 ]
[]
[]
[ "python", "terminate" ]
stackoverflow_0001682447_python_terminate.txt
Q: How to write a stub for a classmethod in Python I have a method which calls a classmethod of another class def get_interface_params_by_mac(self, host, mac_unified): lines = RemoteCommand.remote_command(host, cls.IFCONFIG) ... class RemoteCommand(object): @classmethod def remote_command(cls, host, cmd, sh = None): ... I'm going to write a unit test for the get_interface_params_by_mac method, in which I'd like to change the implementation of remote_command (I think this is called a stub - correct me if I'm wrong) What's the right way to do this in Python? A: Your unit-test code (maybe in its setUp method, if this is needed across several test methods and thus qualifies as a fixture) should do: def fake_command(cls, host, cmd, sh=None): pass # whatever you want in here self.save_remote_command = somemodule.RemoteCommand.remote_command somemodule.RemoteCommand.remote_command = classmethod(fake_command) and then undo this monkey-patching (e.g. in the tearDown method if the patching is done in setUp) by somemodule.RemoteCommand.remote_command = self.save_remote_command It's not always necessary to put things back after a test, but it's good general practice. A more elegant approach would be to design your code for testability via the Dependency Injection (DI) pattern: def __init__(self, ...): ... self.remote_command = RemoteCommand.remote_command ... def set_remote_command_function(self, thefunction): self.remote_command = thefunction def get_interface_params_by_mac(self, host, mac_unified): lines = self.remote_command(host, cls.IFCONFIG) DI buys you a lot of flexibility (testability-wise, but also in many other contexts) at very little cost, which makes it one of my favorite design patterns (I'd much rather avoid monkey patching wherever I possibly can). Of course, if you design your code under test to use DI, all you need to do in your test is appropriately prepare that instance by calling the instance's set_remote_command_function with whatever fake-function you want to use!
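A: Wrapping the monkey-patching from the answer above into a complete unittest skeleton; mymod is a hypothetical module holding RemoteCommand and the code under test, and the stub's return value is made up:
import unittest
import mymod

class GetInterfaceParamsTest(unittest.TestCase):
    def setUp(self):
        def fake_command(cls, host, cmd, sh=None):
            return ['eth0 Link encap:Ethernet HWaddr 00:11:22:33:44:55']
        self.saved = mymod.RemoteCommand.remote_command
        mymod.RemoteCommand.remote_command = classmethod(fake_command)

    def tearDown(self):
        mymod.RemoteCommand.remote_command = self.saved

    def test_params_parsed(self):
        pass  # call get_interface_params_by_mac here; it will hit the stub

if __name__ == '__main__':
    unittest.main()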
How to write a stub for a classmethod in Python
I have a method which calls a classmethod of another class def get_interface_params_by_mac(self, host, mac_unified): lines = RemoteCommand.remote_command(host, cls.IFCONFIG) ... class RemoteCommand(object): @classmethod def remote_command(cls, host, cmd, sh = None): ... I'm going to write a unit test for the get_interface_params_by_mac method, in which I'd like to change the implementation of remote_command (I think this is called a stub - correct me if I'm wrong) What's the right way to do this in Python?
[ "Your unit-test code (maybe in its setUp method, if this is needed across several test methods and thus qualifies as a fixture) should do:\ndef fake_command(cls, host, cmd, sh=None):\n pass # whatever you want in here\nself.save_remote_command = somemodule.RemoteCommand.remote_command\nsomemodule.RemoteCommand.remote_command = classmethod(fake_command)\n\nand then undo this monkey-patching (e.g. in the tearDown method if the patching is done in setUp) by \nsomemodule.RemoteCommand.remote_command = self.save_remote_command \n\nIt's not always necessary to put things back after a test, but it's good general practice.\nA more elegant approach would be to design your code for testability via the Dependency Injection (DI) pattern:\ndef __init__(self, ...):\n ...\n self.remote_command = RemoteCommand.remote_command\n ...\n\ndef set_remote_command_function(self, thefunction):\n self.remote_command = thefunction\n\ndef get_interface_params_by_mac(self, host, mac_unified):\n lines = self.remote_command(host, cls.IFCONFIG)\n\nDI buys you a lot of flexibility (testability-wise, but also in many other contexts) at very little cost, which makes it one of my favorite design patterns (I'd much rather avoid monkey patching wherever I possibly can). Of course, if you design your code under test to use DI, all you need to do in your test is appropriately prepare that instance by calling the instance's set_remote_command_function with whatever fake-function you want to use!\n" ]
[ 7 ]
[]
[]
[ "python", "unit_testing" ]
stackoverflow_0001682504_python_unit_testing.txt
Q: What are cycles ? in relation to python I'm using the fantastic eric4 IDE to code Python; it's got a tool built in called 'cyclops', which is apparently looking for cycles. After running it, it gives me a bunch of big bold red letters declaring there to be a multitude of cycles in my code. The problem is that the output is nearly indecipherable; there's no way I'm going to understand what a cycle is by reading its output. I've browsed the web for hours and can't seem to find so much as a blog post. When the cycles pile up to a certain point the profiler and debugger stop working :(. My questions are: what are cycles, how do I know when I'm making a cycle, and how do I avoid making cycles in Python? Thanks. A: A cycle (or "references loop") is two or more objects referring to each other, e.g.: alist = [] anoth = [alist] alist.append(anoth) or class Child(object): pass class Parent(object): pass c = Child() p = Parent() c.parent = p p.child = c Of course, these are extremely simple examples with cycles of just two items; real-life examples are often longer and harder to spot. There's no magic bullet telling you that you just made a cycle -- you just need to watch for it. The gc module (whose specific job is to garbage-collect unreachable cycles) can help you diagnose existing cycles (when you set the appropriate debug flags). The weakref module can help you to avoid building cycles when you do need (e.g.) a child and parent to know about each other without creating a reference cycle (make just one of the two mutual references into a weak ref or proxy, or use the handy weak-dictionary containers that the module supplies). A: All Cyclops tells you is whether there are objects in your code that refer to themselves through a chain of other objects. This used to be an issue in python, because the garbage collector wouldn't handle these kinds of objects correctly. That problem has since been, for the most part, fixed. Bottom line: if you're not observing a memory leak, you don't need to worry about the output of Cyclops in most instances.
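A: To hunt cycles yourself, the gc module mentioned above can show you exactly which objects are involved; a minimal sketch:
import gc

gc.set_debug(gc.DEBUG_SAVEALL)  # keep, rather than free, what the collector finds
gc.collect()                    # force a full collection pass
for obj in gc.garbage:          # unreachable objects, including cycle members
    print type(obj), repr(obj)[:80]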
What are cycles ? in relation to python
I'm using the fantastic eric4 IDE to code Python; it's got a tool built in called 'cyclops', which is apparently looking for cycles. After running it, it gives me a bunch of big bold red letters declaring there to be a multitude of cycles in my code. The problem is that the output is nearly indecipherable; there's no way I'm going to understand what a cycle is by reading its output. I've browsed the web for hours and can't seem to find so much as a blog post. When the cycles pile up to a certain point the profiler and debugger stop working :(. My questions are: what are cycles, how do I know when I'm making a cycle, and how do I avoid making cycles in Python? Thanks.
[ "A cycle (or \"references loop\") is two or more objects referring to each other, e.g.:\nalist = []\nanoth = [alist]\nalist.append(anoth)\n\nor\nclass Child(object): pass\n\nclass Parent(object): pass\n\nc = Child()\np = Parent()\nc.parent = p\np.child = c\n\nOf course, these are extremely simple examples with cycles of just two items; real-life examples are often longer and harder to spot. There's no magic bullet telling you that you just made a cycle -- you just need to watch for it. The gc module (whose specific job is to garbage-collect unreachable cycles) can help you diagnose existing cycles (when you set the appropriate debug flags). The weakref module can help you to avoid building cycles when you do need (e.g.) a child and parent to know about each other without creating a reference cycle (make just one of the two mutual references into a weak ref or proxy, or use the handy weak-dictionary containers that the module supplies).\n", "All Cyclops tells you is whether there are objects in your code that refer to themselves through a chain of other objects. This used to be an issue in python, because the garbage collector wouldn't handle these kinds of objects correctly. That problem has since been, for the most part, fixed.\nBottom line: if you're not observing a memory leak, you don't need to worry about the output of Cyclops in most instances.\n" ]
[ 4, 1 ]
[]
[]
[ "cycle", "python" ]
stackoverflow_0001682657_cycle_python.txt
Q: Getting Easting & Northing Values from geopy I have a table full of longitude/ latitude pairs in decimal format (e.g., -41.547, 23.456). I want to display the values in "Easting and Northing"/ UTM format. Does geopy provide a way to convert from decimal to UTM? I see in the code that it will parse UTM values, but I don't see how to get them back out and the geopy Google Group has gone the way of all things. A: Nope. You need to reproject your points, and geopy isn't going to do that for you. What you need is libgdal and some Python bindings. I always use the bindings in GeoDjango, but there are other alternatives. EDIT: It is just a mathematical formula, but it's non-trivial. There are thousands of different ways to represent the surface of the Earth. See here for a huge but incomplete list. There are two parts to a geographic projection of the Earth-- a coordinate system and a datum. The latter is essentially a three-dimensional model of the planet. When you say you want to convert latitude/longitude points to UTM values, you're missing a couple of pieces of the puzzle. Let's assume that your lat/long points are based on the WGS84 datum, because that's a pretty common standard for lat/long points these days. You want to convert those points to a UTM coordinate system. But to which UTM coordinate system? There are 60 of them. A: I think I may have over-complicated things. All I wanted was the dms values (so 42.519540, -70.896716 becomes 42º31'10.34" N 70º53'48.18" W). You can get this by creating a geopy point object with your long and lat, then calling format(). However, as of this writing, format() is broken and requires the patch here.
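A: To make the reprojection answer concrete, a sketch using the GDAL/OGR Python bindings; EPSG 32734 (WGS84 / UTM zone 34S) matches the question's example point, but you must pick the zone your own points actually fall in, and classic GDAL builds expect (lon, lat) axis order:
from osgeo import osr

wgs84 = osr.SpatialReference()
wgs84.ImportFromEPSG(4326)   # lat/long on the WGS84 datum
utm = osr.SpatialReference()
utm.ImportFromEPSG(32734)    # WGS84 / UTM zone 34S
transform = osr.CoordinateTransformation(wgs84, utm)

easting, northing, _ = transform.TransformPoint(23.456, -41.547)  # (lon, lat)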
Getting Easting & Northing Values from geopy
I have a table full of longitude/ latitude pairs in decimal format (e.g., -41.547, 23.456). I want to display the values in "Easting and Northing"/ UTM format. Does geopy provide a way to convert from decimal to UTM? I see in the code that it will parse UTM values, but I don't see how to get them back out and the geopy Google Group has gone the way of all things.
[ "Nope. You need to reproject your points, and geopy isn't going to do that for you.\nWhat you need is libgdal and some Python bindings. I always use the bindings in GeoDjango, but there are other alternatives.\nEDIT: It is just a mathematical formula, but it's non-trivial. There are thousands of different ways to represent the surface of the Earth. See here for a huge but incomplete list.\nThere are two parts to a geographic projection of the Earth-- a coordinate system and a datum. The latter is essentially a three-dimensional model of the planet. When you say you want to convert latitude/longitude points to UTM values, you're missing a couple of pieces of the puzzle.\nLet's assume that your lat/long points are based on the WGS84 datum, because that's a pretty common standard for lat/long points these days. You want to convert those points to a UTM coordinate system. But to which UTM coordinate system? There are 60 of them. \n", "I think I may have over-complicated things. All I wanted was the dms values (so 42.519540,\n-70.896716 becomes 42º31'10.34\" N 70º53'48.18\" W). You can get this by creating a geopy point object with your long and lat, then calling format(). However, as of this writing, format() is broken and requires the patch here.\n" ]
[ 2, 0 ]
[]
[]
[ "formats", "geocoding", "geopy", "python" ]
stackoverflow_0001647408_formats_geocoding_geopy_python.txt
Q: Embedding Python Design There are lots of tutorials/instructions on how to embed python in an application, but nothing (that I've seen) on overall design for how the embedded interpreter should be used and interact with the application. The only idea I could think of would be to simply give the user a method (menu option, etc) of executing scripts in the program. So certain classes, functions, objects, etc. would be exported to python, some script would do something, then said script could be run from the program. Would such a design be "safe?" Meaning is it feasible for a malicious/poorly-written script to "damage" the program and/or computer? I assume it's possible depending on the functions available to the script (e.g: it could try to overwrite some important files, etc.) How might one prevent such from happening? (e.g: script certification, program design, etc.) This is implementation specific, but is it possible/feasible to have the effects of the script stay after it's done running? Meaning if a script computes something, will the result be available to the program after execution of the script has finished? I think it is possible to do if the program were setup to interact with a specific script, but the program will be released before most scripts are written; and such a setup seems like a misuse of embedding a scripting language. Are there actually cases where you would want the result of a script's execution to be available, or is this a contrived situation that doesn't really occur? Are there any other designs for embedding python? What about using python in a way similar to a plugin architecture? Thanks, Matthew A. Todd A: The only idea I could think of would be to simply give the user a method (menu option, etc) of executing scripts in the program. Correct. So certain classes, functions, objects, etc. would be exported to python, some script would do something, then said script could be run from the program. Correct. Would such a design be "safe?" Yes. Unless your users are malicious, psychotic sociopaths. They want to make your program do useful things. They bought/downloaded the software in the first place. They think it has value. They trusted your software. Why not trust them? Meaning if a script computes something, will the result be available to the program after execution of the script has finished? Programs like Apache do this all the time. You screw up the configuration ("script"), it crashes. Lesson learned? Don't screw up the configuration.
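A: On the "are results available afterwards" point: if you run each user script in a namespace your program owns, anything the script leaves behind is trivially readable when it finishes; a minimal Python 2 sketch, where app_api stands for whatever objects you decide to export:
script_env = {'app': app_api}       # names visible to the user script
execfile('user_script.py', script_env)
result = script_env.get('result')   # whatever the script bound to 'result'
if result is not None:
    print 'script produced:', result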
Embedding Python Design
There are lots of tutorials/instructions on how to embed python in an application, but nothing (that I've seen) on overall design for how the embedded interpreter should be used and interact with the application. The only idea I could think of would be to simply give the user a method (menu option, etc) of executing scripts in the program. So certain classes, functions, objects, etc. would be exported to python, some script would do something, then said script could be run from the program. Would such a design be "safe?" Meaning is it feasible for a malicious/poorly-written script to "damage" the program and/or computer? I assume it's possible depending on the functions available to the script (e.g: it could try to overwrite some important files, etc.) How might one prevent such from happening? (e.g: script certification, program design, etc.) This is implementation specific, but is it possible/feasible to have the effects of the script stay after it's done running? Meaning if a script computes something, will the result be available to the program after execution of the script has finished? I think it is possible to do if the program were setup to interact with a specific script, but the program will be released before most scripts are written; and such a setup seems like a misuse of embedding a scripting language. Are there actually cases where you would want the result of a script's execution to be available, or is this a contrived situation that doesn't really occur? Are there any other designs for embedding python? What about using python in a way similar to a plugin architecture? Thanks, Matthew A. Todd
[ "The only idea I could think of would be to simply give the user a method (menu option, etc) of executing scripts in the program.\nCorrect.\nSo certain classes, functions, objects, etc. would be exported to python, some script would do something, then said script could be run from the program.\nCorrect.\nWould such a design be \"safe?\"\nYes. Unless your users are malicious, psychotic sociopaths. They want to make your program do useful things. They bought/downloaded the software in the first place. They think it has value.\nThey trusted your software. Why not trust them?\nMeaning if a script computes something, will the result be available to the program after execution of the script has finished?\nPrograms like Apache do this all the time. You screw up the configuration (\"script\"), it crashes. Lesson learned? Don't screw up the configuration.\n" ]
[ 1 ]
[]
[]
[ "embedding", "python" ]
stackoverflow_0001682831_embedding_python.txt
Q: Django: Blank Choices in Many To Many Fields When making forms in Django, the IntegerField comes with a blank choice (a bunch of dashes "------") if called with blank=True and null=True. Is there any way to get ManyToManyField to include such an explicit blank choice? I've tried subclassing ManyToManyField with no success: class ManyFieldWithBlank(ManyToManyField): """ A Many-to-Many Field with a blank choice """ def get_choices_default(self): return Field.get_choices(self, include_blank=True) A: That is not really an improvement on the interface, IMO. Why not have a button in your template saying "none of these" or "reset choices"? Better yet - if your field is called "Blah" make the button say "Unselect all Blah". The button would just have some javascript to clear out any selection in the select box. This is a much clearer UI for the user and easy to implement. Disclaimer: IANADesigner.
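A: On the form side, a ManyToManyField is rendered through a ModelForm as a ModelMultipleChoiceField, and simply marking it non-required lets the user submit an empty selection, which is often all the "blank choice" that is really needed; a sketch with a hypothetical model Thing whose M2M field is named tags:
from django import forms

class ThingForm(forms.ModelForm):
    class Meta:
        model = Thing

    def __init__(self, *args, **kwargs):
        super(ThingForm, self).__init__(*args, **kwargs)
        self.fields['tags'].required = False  # allow an empty selection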
Django: Blank Choices in Many To Many Fields
When making forms in Django, the IntegerField comes with a blank choice (a bunch of dashes "------") if called with blank=True and null=True. Is there any way to get ManyToManyField to include such an explicit blank choice? I've tried subclassing ManyToManyField with no success: class ManyFieldWithBlank(ManyToManyField): """ A Many-to-Many Field with a blank choice """ def get_choices_default(self): return Field.get_choices(self, include_blank=True)
[ "That is not really an improvement on the interface, IMO.\nWhy not have a button in your template saying \"none of these\" or \"reset choices\"? Better yet - if your field is called \"Blah\" make the button say \"Unselect all Blah\".\nThe button would just have some javascript to clear out any selection in the select box.\nThis is a much clearer UI for the user and easy to implement.\nDisclaimer: IANADesigner.\n" ]
[ 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001682537_django_python.txt
Q: Sorting Lists of List of Dictionaries I've just read in a file that is something like: name: john, jane car: db9, m5 food: pizza, lasagne Each of these rows (names, car, food) are in order of who owns what. Therefore John owns the car 'DB9' and his favourite food is 'Pizza'. Likewise with Jane, her car is an 'M5' and her favourite food is 'Lasagne'. I effectively have: >>> names['Name']="John" >>> namesL.append(name) >>> names['Name']="Jane" >>> namesL.append(name) >>> car['Car']="DB9" >>> cars.append(car) >>> car['Car']="M5" >>> cars.append(car) >>> food['Food']="Pizza" >>> foodL.append(food) >>> food['Food']="Lasagne" >>> foodL.append(food) >>>ultimateList.append(foodL) ... However I want it so that each of these things are in their own dictionary. So something like this: >>>PersonalDict {'Name': 'John', 'Car': 'DB9', 'Food': 'Pizza'} I've been staring at it for a while and can't work out how I should approach this. Can anyone offer some ideas or shall I just do this some other way? A: Looks like you want something like: import collections data = '''name: john, jane car: db9, m5 food: pizza, lasagne ''' personal_list = collections.defaultdict(dict) for line in data.splitlines(): key, _, info = line.partition(':') infos = info.split(',') key = key.strip().title() for i, item in enumerate(infos): item = item.strip().title() personal_list[i][key] = item for i in personal_list: print personal_list[i] That doesn't do exactly what you specify (the capitalization of the B in DB9 seems totally weird for example -- how would the code know to capitalize that particular second letter and not any other second letter?!) but it seems pretty close. A: Try: f = open('filename.txt') result = [] for line in f: key, values = line.split(':') values = values.rstrip().split(', ') for i, value in enumerate(values): try: result[i][key] = value except IndexError: result.append({ key: value}) print result A: Split the initial data into index/key/value triples and go from there. def parse_data(lines): for line in lines: key, _, data = line.partition(':') for i, datum in enumerate(x.strip() for x in data.split(',')): yield i, key, datum From there you can aggregate the data using Alex's defaultdict approach (probably best) or sort and a bunch of extra code to build individual dictionaries on demand. 
A: An homage to generators: #!/usr/bin/env python data=(zip(*([elt.strip().title() for elt in line.replace(':',',',1).split(',')] for line in open('filename.txt','r')))) personal_list=[dict(zip(data[0],datum)) for datum in data[1:]] print(personal_list) # [{'Food': 'Pizza', 'Car': 'Db9', 'Name': 'John'}, {'Food': 'Lasagne', 'Car': 'M5', 'Name': 'Jane'}] To understand how the script works, we break it apart: First we load filename.txt into a list of lines: In [41]: [line for line in open('filename.txt','r')] Out[41]: ['name: john, jane\n', 'car: db9, m5\n', 'food: pizza, lasagne\n'] Next we replace the first colon (:) with a comma (,) In [42]: [line.replace(':',',',1) for line in open('filename.txt','r')] Out[42]: ['name, john, jane\n', 'car, db9, m5\n', 'food, pizza, lasagne\n'] Then we split each line on commas: In [43]: [line.replace(':',',',1).split(',') for line in open('filename.txt','r')] Out[43]: [['name', ' john', ' jane\n'], ['car', ' db9', ' m5\n'], ['food', ' pizza', ' lasagne\n']] For each element in each line, we strip off beginning/ending whitespace and capitalize the string like a title: In [45]: [[elt.strip().title() for elt in line.replace(':',',',1).split(',')] for line in open('filename.txt','r')] Out[45]: [['Name', 'John', 'Jane'], ['Car', 'Db9', 'M5'], ['Food', 'Pizza', 'Lasagne']] Now we collect the first element of each list, then the second, and so forth: In [47]: data=(zip(*([elt.strip().title() for elt in line.replace(':',',',1).split(',')] for line in open('filename.txt','r')))) In [48]: data Out[48]: [('Name', 'Car', 'Food'), ('John', 'Db9', 'Pizza'), ('Jane', 'M5', 'Lasagne')] data[0] now holds the keys for a dict. In [49]: data[0] Out[49]: ('Name', 'Car', 'Food') Each tuple in data[1:] are the values for a dict. In [50]: data[1:] Out[50]: [('John', 'Db9', 'Pizza'), ('Jane', 'M5', 'Lasagne')] Here we zip up the keys with the values: In [52]: [ zip(data[0],datum) for datum in data[1:]] Out[52]: [[('Name', 'John'), ('Car', 'Db9'), ('Food', 'Pizza')], [('Name', 'Jane'), ('Car', 'M5'), ('Food', 'Lasagne')]] Finally, we turn it into a list of dicts: In [54]: [dict(zip(data[0],datum)) for datum in data[1:]] Out[54]: [{'Car': 'Db9', 'Food': 'Pizza', 'Name': 'John'}, {'Car': 'M5', 'Food': 'Lasagne', 'Name': 'Jane'}]
Sorting Lists of List of Dictionaries
I've just read in a file that is something like: name: john, jane car: db9, m5 food: pizza, lasagne Each of these rows (names, car, food) are in order of who owns what. Therefore John owns the car 'DB9' and his favourite food is 'Pizza'. Likewise with Jane, her car is an 'M5' and her favourite food is 'Lasagne'. I effectively have: >>> names['Name']="John" >>> namesL.append(name) >>> names['Name']="Jane" >>> namesL.append(name) >>> car['Car']="DB9" >>> cars.append(car) >>> car['Car']="M5" >>> cars.append(car) >>> food['Food']="Pizza" >>> foodL.append(food) >>> food['Food']="Lasagne" >>> foodL.append(food) >>>ultimateList.append(foodL) ... However I want it so that each of these things are in their own dictionary. So something like this: >>>PersonalDict {'Name': 'John', 'Car': 'DB9', 'Food': 'Pizza'} I've been staring at it for a while and can't work out how I should approach this. Can anyone offer some ideas or shall I just do this some other way?
[ "Looks like you want something like:\nimport collections\n\ndata = '''name: john, jane\ncar: db9, m5\nfood: pizza, lasagne\n'''\n\npersonal_list = collections.defaultdict(dict)\n\nfor line in data.splitlines():\n key, _, info = line.partition(':')\n infos = info.split(',')\n key = key.strip().title()\n for i, item in enumerate(infos):\n item = item.strip().title()\n personal_list[i][key] = item\n\nfor i in personal_list:\n print personal_list[i]\n\nThat doesn't do exactly what you specify (the capitalization of the B in DB9 seems totally weird for example -- how would the code know to capitalize that particular second letter and not any other second letter?!) but it seems pretty close.\n", "Try:\nf = open('filename.txt')\n\nresult = []\nfor line in f:\n key, values = line.split(':')\n values = values.rstrip().split(', ')\n for i, value in enumerate(values):\n try:\n result[i][key] = value\n except IndexError:\n result.append({ key: value})\n\nprint result\n\n", "Split the initial data into index/key/value triples go from there.\ndef parse_data(lines):\n for line in lines:\n key, _, data = line.partition(':')\n for i, datum in enumerate(x.strip() for x in data.split(',')):\n yield i, key, datum\n\nFrom there you can aggregate the data useing Alex's defaultdict approach (probably best) or sort and a bunch of extra code to build individual dictionaries on demand.\n", "An homage to generators:\n#!/usr/bin/env python\ndata=(zip(*([elt.strip().title() for elt in line.replace(':',',',1).split(',')]\n for line in open('filename.txt','r'))))\npersonal_list=[dict(zip(data[0],datum)) for datum in data[1:]]\nprint(personal_list)\n\n# [{'Food': 'Pizza', 'Car': 'Db9', 'Name': 'John'}, {'Food': 'Lasagne', 'Car': 'M5', 'Name': 'Jane'}]\n\nTo understand how the script works, we break it apart:\nFirst we load filename.txt into a list of lines:\nIn [41]: [line for line in open('filename.txt','r')]\nOut[41]: ['name: john, jane\\n', 'car: db9, m5\\n', 'food: pizza, lasagne\\n']\n\nNext we replace the first colon (:) with a comma (,)\nIn [42]: [line.replace(':',',',1) for line in open('filename.txt','r')]\nOut[42]: ['name, john, jane\\n', 'car, db9, m5\\n', 'food, pizza, lasagne\\n']\n\nThen we split each line on commas:\nIn [43]: [line.replace(':',',',1).split(',') for line in open('filename.txt','r')]\nOut[43]: \n[['name', ' john', ' jane\\n'],\n ['car', ' db9', ' m5\\n'],\n ['food', ' pizza', ' lasagne\\n']]\n\nFor each element in each line, we strip off beginning/ending whitespace and capitalize the string like a title:\nIn [45]: [[elt.strip().title() for elt in line.replace(':',',',1).split(',')] for line in open('filename.txt','r')]\nOut[45]: [['Name', 'John', 'Jane'], ['Car', 'Db9', 'M5'], ['Food', 'Pizza', 'Lasagne']]\n\nNow we collect the first element of each list, then the second, and so forth:\nIn [47]: data=(zip(*([elt.strip().title() for elt in line.replace(':',',',1).split(',')] for line in open('filename.txt','r'))))\n\nIn [48]: data\nOut[48]: [('Name', 'Car', 'Food'), ('John', 'Db9', 'Pizza'), ('Jane', 'M5', 'Lasagne')]\n\ndata[0] now holds the keys for a dict.\nIn [49]: data[0]\nOut[49]: ('Name', 'Car', 'Food')\n\nEach tuple in data[1:] are the values for a dict.\nIn [50]: data[1:]\nOut[50]: [('John', 'Db9', 'Pizza'), ('Jane', 'M5', 'Lasagne')]\n\nHere we zip up the keys with the values:\nIn [52]: [ zip(data[0],datum) for datum in data[1:]]\nOut[52]: \n[[('Name', 'John'), ('Car', 'Db9'), ('Food', 'Pizza')],\n [('Name', 'Jane'), ('Car', 'M5'), ('Food', 'Lasagne')]]\n\nFinally, we turn it into 
a list of dicts:\nIn [54]: [dict(zip(data[0],datum)) for datum in data[1:]]\nOut[54]: \n[{'Car': 'Db9', 'Food': 'Pizza', 'Name': 'John'},\n {'Car': 'M5', 'Food': 'Lasagne', 'Name': 'Jane'}]\n\n" ]
[ 3, 1, 1, 1 ]
[]
[]
[ "dictionary", "list", "python", "sorting" ]
stackoverflow_0001682506_dictionary_list_python_sorting.txt
Q: Google Apps Engine mail fetch How can I fetch mails from a Gmail account in a Google App Engine Django application? A: Configure your app to receive email, then, if it really has to be a gmail address, set up the gmail address to forward everything to your appspot address. A: You could cronjob rss feed to gmail messages? gmail rss python cron A: I would use libgmail -- seems to be the most popular pure-Python way to do it. However for app engine use I believe it would have to be ported to use urlfetch and I don't think anybody's done that yet (I'd happily receive news to the contrary!-).
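A: To make "configure your app to receive email" concrete, a sketch for the Python App Engine SDK of that era; the handler class name is made up:
# app.yaml must enable the service:
#   inbound_services:
#   - mail

import logging
from google.appengine.ext import webapp
from google.appengine.ext.webapp.mail_handlers import InboundMailHandler

class MailReceiver(InboundMailHandler):
    def receive(self, message):
        logging.info('Mail from %s: %s', message.sender, message.subject)

application = webapp.WSGIApplication([MailReceiver.mapping()])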
Google Apps Engine mail fetch
How can I fetch mails from a Gmail account in a Google App Engine Django application?
[ "Configure your app to receive email, then, if it really has to be a gmail address, set up the gmail address to forward everything to your appspot address.\n", "You could cronjob rss feed to gmail messages?\ngmail rss\npython cron\n", "I would use libgmail -- seems to be the most popular pure-Python way to do it. However for app engine use I believe it would have to be ported to use urlfetch and I don't think anybody's done that yet (I'd happily receive news to the contrary!-).\n" ]
[ 3, 0, 0 ]
[ "Just use the Python's standard POP or IMAP client. Google does not provide a GMail API.\n" ]
[ -1 ]
[ "gmail", "google_app_engine", "python" ]
stackoverflow_0001680856_gmail_google_app_engine_python.txt
Q: Can I use Django's Generic Views with google-app-engine-django? Put simply, is there a way to get generic views to work? If I try the following in urls.py: publisher_info = { 'queryset': Publisher.objects.all(), } urlpatterns = patterns('', (r'^publishers/$', list_detail.object_list, publisher_info) ) I get the following error: AttributeError at /publishers 'Query' object has no attribute '_clone' Is this due to the fact that Django models aren't supported on App Engine and google-app-engine-django hasn't been able to port over all associated code? If so, would it be easy to fix myself? A: It looks like this project should provide that functionality as a core feature. http://code.google.com/p/app-engine-patch/ A: The answer seems to be No.
Can I use Django's Generic Views with google-app-engine-django?
Put simply, is there a way to get generic views to work? If I try the following in urls.py: publisher_info = { 'queryset': Publisher.objects.all(), } urlpatterns = patterns('', (r'^publishers/$', list_detail.object_list, publisher_info) ) I get the following error: AttributeError at /publishers 'Query' object has no attribute '_clone' Is this due to the fact that Django models aren't supported on App Engine and google-app-engine-django hasn't been able to port over all associated code? If so, would it be easy to fix myself?
[ "It looks like this project should provide that functionality as a core feature.\nhttp://code.google.com/p/app-engine-patch/\n", "The answer seems to be No.\n" ]
[ 1, 1 ]
[]
[]
[ "django", "django_generic_views", "django_models", "google_app_engine", "python" ]
stackoverflow_0001572255_django_django_generic_views_django_models_google_app_engine_python.txt
Q: website load testing Python script I am after a Python script to help me load test my Google App Engine website. I want to give it a set of URLs and a request rate (would need to use threads) and then measure the response times of my website. I have had a look at a few solutions but they don't let you set an upper limit for the request rate. Any ideas? Thanks A: You don't need a python script for this, you want to use the apache tool ab. http://httpd.apache.org/docs/2.0/programs/ab.html It is the canonical load testing solution, and will get you great metrics for performance. You can set the request rate, but should really look at the concurrency level which is a far more meaningful statistic. A: Pylot is a versatile load testing tool written in Python. I haven't used it personally, but it seems good.
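A: If you do want a small Python harness with an explicit cap on the request rate, something along these lines works; URLS and RATE are placeholders to adapt:
import time
import urllib2
import threading

URLS = ['http://example.appspot.com/']  # the URL set to exercise
RATE = 5.0                              # maximum requests per second

def fetch(url):
    start = time.time()
    try:
        urllib2.urlopen(url).read()
        print url, '%.3fs' % (time.time() - start)
    except Exception, e:
        print url, 'failed:', e

interval = 1.0 / RATE
for url in URLS * 20:                   # 20 passes over the URL set
    threading.Thread(target=fetch, args=(url,)).start()
    time.sleep(interval)                # caps the launch rate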
website load testing Python script
I am after a Python script to help me load test my Google App Engine website. I want to give it a set of URLs and a request rate (would need to use threads) and then measure the response times of my website. I have had a look at a few solutions but they don't let you set an upper limit for the request rate. Any ideas? Thanks
[ "You don't need a python script for this, you want to use the apache tool ab.\nhttp://httpd.apache.org/docs/2.0/programs/ab.html\nIt is the canonical load testing solution, and will get you great metrics for performance. You can set the request rate, but should really look at the concurrency level which is a far more meaningful statistic.\n", "Pylot is a versatile load testing tool written in Python. I haven't used it personally, but it seems good.\n" ]
[ 3, 3 ]
[]
[]
[ "google_app_engine", "load_testing", "python", "web" ]
stackoverflow_0001683342_google_app_engine_load_testing_python_web.txt
Q: 'getattr(): attribute name must be string' error in admin panel for a model with an ImageField I have the following model set up: class UserProfile(models.Model): "Additional attributes for users." url = models.URLField() location = models.CharField(max_length=100) user = models.ForeignKey(User, unique=True) avatar = models.ImageField(upload_to='/home/something/www/avatars', height_field=80, width_field=80) def __unicode__(self): return "Profile of " + self.user.username It's supposed to store additional information about a user, for example an avatar. Unfortunately, when I try to upload an image via the admin panel, it gives me an error, something like: getattr(): attribute name must be string Which is not produced when I remove that field from the model, do a db reset and reload the server. I'd imagine the cause is this particular field, just not sure how. This is my traceback: File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py" in get_response 92. response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/pymodules/python2.6/django/contrib/admin/options.py" in wrapper 226. return self.admin_site.admin_view(view)(*args, **kwargs) File "/usr/lib/pymodules/python2.6/django/views/decorators/cache.py" in _wrapped_view_func 44. response = view_func(request, *args, **kwargs) File "/usr/lib/pymodules/python2.6/django/contrib/admin/sites.py" in inner 186. return view(request, *args, **kwargs) File "/usr/lib/pymodules/python2.6/django/db/transaction.py" in _commit_on_success 240. res = func(*args, **kw) File "/usr/lib/pymodules/python2.6/django/contrib/admin/options.py" in add_view 718. new_object = self.save_form(request, form, change=False) File "/usr/lib/pymodules/python2.6/django/contrib/admin/options.py" in save_form 551. return form.save(commit=False) File "/usr/lib/pymodules/python2.6/django/forms/models.py" in save 407. fail_message, commit, exclude=self._meta.exclude) File "/usr/lib/pymodules/python2.6/django/forms/models.py" in save_instance 65. f.save_form_data(instance, cleaned_data[f.name]) File "/usr/lib/pymodules/python2.6/django/db/models/fields/files.py" in save_form_data 283. setattr(instance, self.name, data) File "/usr/lib/pymodules/python2.6/django/db/models/fields/files.py" in __set__ 316. self.field.update_dimension_fields(instance, force=True) File "/usr/lib/pymodules/python2.6/django/db/models/fields/files.py" in update_dimension_fields 368. (self.width_field and not getattr(instance, self.width_field)) Exception Type: TypeError at /admin/proj/userprofile/add/ Exception Value: getattr(): attribute name must be string A: Your problem is with height_field=80 and width_field=80: these should not contain the height and width you require, but rather the names of fields in your model that can have the values for height and width saved in them. As explained in the Django documentation for the ImageField, these are attributes on your model which will be populated for you when the model is saved. If you want this information populated for you, then create model attributes where it can be stored; otherwise just remove these attributes, as they are optional. A: The problem is probably this: height_field=80, width_field=80 height_field and width_field, if you use them, are supposed to be names of fields on the model which contain the height and width information. Fix this and it should work.
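A: Putting the two answers together, a corrected version of the question's model would look like this; the two integer fields can be named anything, as long as the strings passed to height_field/width_field match, and note that upload_to is interpreted relative to MEDIA_ROOT:
class UserProfile(models.Model):
    "Additional attributes for users."
    url = models.URLField()
    location = models.CharField(max_length=100)
    user = models.ForeignKey(User, unique=True)
    avatar_height = models.IntegerField(blank=True, null=True)
    avatar_width = models.IntegerField(blank=True, null=True)
    avatar = models.ImageField(upload_to='avatars',
                               height_field='avatar_height',  # field names, not sizes
                               width_field='avatar_width')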
'getattr(): attribute name must be string' error in admin panel for a model with an ImageField
I have the following model set up: class UserProfile(models.Model): "Additional attributes for users." url = models.URLField() location = models.CharField(max_length=100) user = models.ForeignKey(User, unique=True) avatar = models.ImageField(upload_to='/home/something/www/avatars', height_field=80, width_field=80) def __unicode__(self): return "Profile of " + self.user.username It's supposed to store additional information about a user, for example an avatar. Unfortunately, when I try to upload an image via the admin panel, it gives me an error, something like: getattr(): attribute name must be string Which is not produced when I remove that field from the model, do a db reset and reload the server. I'd imagine the cause is this particular field, just not sure how. This is my traceback: File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py" in get_response 92. response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/pymodules/python2.6/django/contrib/admin/options.py" in wrapper 226. return self.admin_site.admin_view(view)(*args, **kwargs) File "/usr/lib/pymodules/python2.6/django/views/decorators/cache.py" in _wrapped_view_func 44. response = view_func(request, *args, **kwargs) File "/usr/lib/pymodules/python2.6/django/contrib/admin/sites.py" in inner 186. return view(request, *args, **kwargs) File "/usr/lib/pymodules/python2.6/django/db/transaction.py" in _commit_on_success 240. res = func(*args, **kw) File "/usr/lib/pymodules/python2.6/django/contrib/admin/options.py" in add_view 718. new_object = self.save_form(request, form, change=False) File "/usr/lib/pymodules/python2.6/django/contrib/admin/options.py" in save_form 551. return form.save(commit=False) File "/usr/lib/pymodules/python2.6/django/forms/models.py" in save 407. fail_message, commit, exclude=self._meta.exclude) File "/usr/lib/pymodules/python2.6/django/forms/models.py" in save_instance 65. f.save_form_data(instance, cleaned_data[f.name]) File "/usr/lib/pymodules/python2.6/django/db/models/fields/files.py" in save_form_data 283. setattr(instance, self.name, data) File "/usr/lib/pymodules/python2.6/django/db/models/fields/files.py" in __set__ 316. self.field.update_dimension_fields(instance, force=True) File "/usr/lib/pymodules/python2.6/django/db/models/fields/files.py" in update_dimension_fields 368. (self.width_field and not getattr(instance, self.width_field)) Exception Type: TypeError at /admin/proj/userprofile/add/ Exception Value: getattr(): attribute name must be string
[ "Your problem is with height_field=80 and width_field=80 these should not contain the height and width you require but rather the names of fields in your model that can have the values for height and width save in them.\nAs explained in the Django documentation for the ImagedField these are attributes on your model which will be populated for you when the model is saved. If you want this information populated for you the create model attribute where this information can be stored otherwise just remove these attributes they are optional.\n", "The problem is probably this: \nheight_field=80, width_field=80\n\nheight_field and width_field, if you use them, are supposed to be names of fields on the model which contain the height and width information. Fix this it should work.\n" ]
[ 23, 9 ]
[]
[]
[ "django", "django_admin", "django_models", "python" ]
stackoverflow_0001683362_django_django_admin_django_models_python.txt
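A minimal corrected sketch of the model above, per the accepted answer (the avatar_height/avatar_width field names are illustrative assumptions, not from the original post):
from django.contrib.auth.models import User
from django.db import models

class UserProfile(models.Model):
    url = models.URLField()
    location = models.CharField(max_length=100)
    user = models.ForeignKey(User, unique=True)
    # height_field/width_field must name model fields; Django fills them in on save.
    avatar_height = models.PositiveIntegerField(editable=False, null=True)
    avatar_width = models.PositiveIntegerField(editable=False, null=True)
    avatar = models.ImageField(upload_to='avatars',  # a path relative to MEDIA_ROOT
                               height_field='avatar_height',
                               width_field='avatar_width')

    def __unicode__(self):
        return "Profile of " + self.user.username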
Q: How to defer a Django DB operation from within Twisted? I have a normal Django site running. In addition, there is another twisted process, which listens for Jabber presence notifications and updates the Django DB using Django's ORM. So far it works as I just call the corresponding Django models (after having set up the settings environment correctly). This, however, blocks the Twisted app, which is not what I want. As I'm new to twisted I don't know, what the best way would be to access the Django DB (via its ORM) in a non-blocking way using deferreds. deferredGenerator ? twisted.enterprise.adbapi ? (circumvent the ORM?) ??? If the presence message is parsed I want to save in the Django DB that the user with jid_str is online/offline (using the Django model UserProfile). I do it with that function: def django_useravailable(jid_str, user_available): try: userhost = jid.JID(jid_str).userhost() user = UserProfile.objects.get(im_jabber_name=userhost) user.im_jabber_online = user_available user.save() return jid_str, user_available except Exception, e: print e raise jid_str, user_available,e Currently, I invoke it with: d = threads.deferToThread(django_useravailable, from_attr, user_available) d.addCallback(self.success) d.addErrback(self.failure) A: "I have a normal Django site running." Presumably under Apache using mod_wsgi or similar. If you're using mod_wsgi embedded in Apache, note that Apache is multi-threaded and your Python threads are mashed into Apache's threading. Analysis of what's blocking could get icky. If you're using mod_wsgi in daemon mode (which you should be) then your Django is a separate process. Why not continue this design pattern and make your "jabber listener" a separate process? If you'd like this process to be run on any of a number of servers, then have it be started from init.rc or cron. Because it's a separate process it will not compete for attention. Your Django process runs quickly and your Jabber listener runs independently. A: I have been successful using the method you described as your current method. You'll find by reading the docs that the twisted DB api uses threads under the hood because most SQL libraries have a blocking API. I have a twisted server that saves data from power monitors in the field, and it does it by starting up a subthread every now and again and calling my Django save code. You can read more about my live data collection pipeline (that's a blog link). Are you saying that you are starting up a sub thread and that is still blocking? A: I have a running Twisted app where I use Django ORM. I'm not deferring it. I know it's wrong, but had no problems yet.
How to defer a Django DB operation from within Twisted?
I have a normal Django site running. In addition, there is another twisted process, which listens for Jabber presence notifications and updates the Django DB using Django's ORM. So far it works as I just call the corresponding Django models (after having set up the settings environment correctly). This, however, blocks the Twisted app, which is not what I want. As I'm new to twisted I don't know, what the best way would be to access the Django DB (via its ORM) in a non-blocking way using deferreds. deferredGenerator ? twisted.enterprise.adbapi ? (circumvent the ORM?) ??? If the presence message is parsed I want to save in the Django DB that the user with jid_str is online/offline (using the Django model UserProfile). I do it with that function: def django_useravailable(jid_str, user_available): try: userhost = jid.JID(jid_str).userhost() user = UserProfile.objects.get(im_jabber_name=userhost) user.im_jabber_online = user_available user.save() return jid_str, user_available except Exception, e: print e raise jid_str, user_available,e Currently, I invoke it with: d = threads.deferToThread(django_useravailable, from_attr, user_available) d.addCallback(self.success) d.addErrback(self.failure)
[ "\"I have a normal Django site running.\"\nPresumably under Apache using mod_wsgi or similar.\nIf you're using mod_wsgi embedded in Apache, note that Apache is multi-threaded and your Python threads are mashed into Apache's threading. Analysis of what's blocking could get icky.\nIf you're using mod_wsgi in daemon mode (which you should be) then your Django is a separate process. \nWhy not continue this design pattern and make your \"jabber listener\" a separate process. \nIf you'd like this process to be run any any of a number of servers, then have it be started from init.rc or cron. \nBecause it's a separate process it will not compete for attention. Your Django process runs quickly and your Jabber listener runs independently.\n", "I have been successful using the method you described as your current method. You'll find by reading the docs that the twisted DB api uses threads under the hood because most SQL libraries have a blocking API.\nI have a twisted server that saves data from power monitors in the field, and it does it by starting up a subthread every now and again and calling my Django save code. You can read more about my live data collection pipeline (that's a blog link).\nAre you saying that you are starting up a sub thread and that is still blocking?\n", "I have a running Twisted app where I use Django ORM. I'm not deferring it. I know it's wrong, but hadd no problems yet.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "deferred_execution", "django", "python", "twisted" ]
stackoverflow_0001642392_deferred_execution_django_python_twisted.txt
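A hedged sketch of how the standalone listener process from the accepted answer could bootstrap Django before touching the ORM; the mysite.settings module name is an assumption, and the blocking ORM call still goes to a thread exactly as in the question:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'  # assumed project module

from twisted.internet import threads

def on_presence(jid_str, user_available):
    # django_useravailable is the function from the question
    d = threads.deferToThread(django_useravailable, jid_str, user_available)
    d.addErrback(lambda failure: failure.printTraceback())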
Q: What does the ** maths operator do in Python? What does this mean in Python: sock.recvfrom(2**16) I know what sock is, and I get the gist of the recvfrom function, but what the heck is 2**16? Specifically, the two asterisk/double asterisk operator? (english keywords, because it's hard to search for this: times-times star-star asterisk-asterisk double-times double-star double-asterisk operator) A: It is the power operator. From the Python 3 docs: The power operator has the same semantics as the built-in pow() function, when called with two arguments: it yields its left argument raised to the power of its right argument. The numeric arguments are first converted to a common type, and the result is of that type. It is equivalent to 2**16 = 65536, or pow(2, 16) A: http://docs.python.org/library/operator.html#mapping-operators-to-functions a ** b = pow(a,b) A: 2 raised to the 16th power A: I believe that's the power operator, such that 2**5 = 32. A: It is the awesome power operator which, like complex numbers, is another thing you wonder why more programming languages don't have.
What does the ** maths operator do in Python?
What does this mean in Python: sock.recvfrom(2**16) I know what sock is, and I get the gist of the recvfrom function, but what the heck is 2**16? Specifically, the two asterisk/double asterisk operator? (english keywords, because it's hard to search for this: times-times star-star asterisk-asterisk double-times double-star double-asterisk operator)
[ "It is the power operator.\nFrom the Python 3 docs: \n\nThe power operator has the same semantics as the built-in pow() function, when called with two arguments: it yields its left argument raised to the power of its right argument. The numeric arguments are first converted to a common type, and the result is of that type.\n\nIt is equivalent to 216 = 65536, or pow(2, 16)\n", "http://docs.python.org/library/operator.html#mapping-operators-to-functions\na ** b = pow(a,b)\n\n", "2 raised to the 16th power\n", "I believe that's the power operator, such that 2**5 = 32.\n", "It is the awesome power operator which like complex numbers is another thing you wonder why more programming languages don't have.\n" ]
[ 55, 14, 6, 4, 1 ]
[]
[]
[ "operators", "python", "syntax" ]
stackoverflow_0001683008_operators_python_syntax.txt
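A quick interpreter session confirming the equivalence (and that non-integer exponents work too):
>>> 2 ** 16
65536
>>> pow(2, 16)
65536
>>> 2 ** 0.5
1.4142135623730951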
Q: Sort a multidimensional list by a variable number of keys I've read this post and it hasn't ended up working for me. Edit: the functionality I'm describing is just like the sorting function in Excel... if that makes it any clearer Here's my situation, I have a tab-delimited text document. There are about 125,000 lines and 6 columns per line (columns are separated by a tab character). I've split the document into a two-dimensional list. I am trying to write a generic function to sort two-dimensional lists. Basically I would like to have a function where I can pass the big list, and the key of one or more columns I would like to sort the big list by. Obviously, I would like the first key passed to be the primary sorting point, then the second key, etc. Still confuzzled? Here's an example of what I would like to do. Joel 18 Orange 1 Anna 17 Blue 2 Ryan 18 Green 3 Luke 16 Blue 1 Katy 13 Pink 5 Tyler 22 Blue 6 Bob 22 Blue 10 Garrett 24 Red 7 Ryan 18 Green 8 Leland 18 Yellow 9 Say I passed this list to my magical function, like so: sortByColumn(bigList, 0) Anna 17 Blue 2 Bob 22 Blue 10 Garrett 24 Red 7 Joel 18 Orange 1 Katy 13 Pink 5 Leland 18 Yellow 9 Luke 16 Blue 1 Ryan 18 Green 3 Ryan 18 Green 8 Tyler 22 Blue 6 and... sortByColumn(bigList, 2, 3) Luke 16 Blue 1 Anna 17 Blue 2 Tyler 22 Blue 6 Bob 22 Blue 10 Ryan 18 Green 3 Ryan 18 Green 8 Joel 18 Orange 1 Katy 13 Pink 5 Garrett 24 Red 7 Leland 18 Yellow 9 Any clues? A: import operator: def sortByColumn(bigList, *args): bigList.sort(key=operator.itemgetter(*args)) # sorts the list in place A: This will sort by columns 2 and 3: a.sort(key=operator.itemgetter(2,3)) A: The key idea here (pun intended) is to use a key function that returns a tuple. Below, the key function is lambda x: tuple(x[idx] for idx in args) x is set to equal an element of aList -- that is, a row of data. It returns a tuple of values, not just one value. The sort() method sorts according to the first element of the list, then breaks ties with the second, and so on. See http://wiki.python.org/moin/HowTo/Sorting#Sortingbykeys #!/usr/bin/env python import csv def sortByColumn(aList,*args): aList.sort(key=lambda x: tuple(x[idx] for idx in args)) return aList filename='file.txt' def convert_ints(astr): try: return int(astr) except ValueError: return astr biglist=[[convert_ints(elt) for elt in line] for line in csv.reader(open(filename,'r'),delimiter='\t')] for row in sortByColumn(biglist,0): print row for row in sortByColumn(biglist,2,3): print row A: Make sure you have converted the numbers to ints, otherwise they will sort alphabetically rather than numerically # Sort the list in place def sortByColumn(A,*args): import operator A.sort(key=operator.itemgetter(*args)) return A or # Leave the original list alone and return a new sorted one def sortByColumn(A,*args): import operator return sorted(A,key=operator.itemgetter(*args))
Sort a multidimensional list by a variable number of keys
I've read this post and it hasn't ended up working for me. Edit: the functionality I'm describing is just like the sorting function in Excel... if that makes it any clearer Here's my situation, I have a tab-delimited text document. There are about 125,000 lines and 6 columns per line (columns are separated by a tab character). I've split the document into a two-dimensional list. I am trying to write a generic function to sort two-dimensional lists. Basically I would like to have a function where I can pass the big list, and the key of one or more columns I would like to sort the big list by. Obviously, I would like the first key passed to be the primary sorting point, then the second key, etc. Still confuzzled? Here's an example of what I would like to do. Joel 18 Orange 1 Anna 17 Blue 2 Ryan 18 Green 3 Luke 16 Blue 1 Katy 13 Pink 5 Tyler 22 Blue 6 Bob 22 Blue 10 Garrett 24 Red 7 Ryan 18 Green 8 Leland 18 Yellow 9 Say I passed this list to my magical function, like so: sortByColumn(bigList, 0) Anna 17 Blue 2 Bob 22 Blue 10 Garrett 24 Red 7 Joel 18 Orange 1 Katy 13 Pink 5 Leland 18 Yellow 9 Luke 16 Blue 1 Ryan 18 Green 3 Ryan 18 Green 8 Tyler 22 Blue 6 and... sortByColumn(bigList, 2, 3) Luke 16 Blue 1 Anna 17 Blue 2 Tyler 22 Blue 6 Bob 22 Blue 10 Ryan 18 Green 3 Ryan 18 Green 8 Joel 18 Orange 1 Katy 13 Pink 5 Garrett 24 Red 7 Leland 18 Yellow 9 Any clues?
[ "import operator:\ndef sortByColumn(bigList, *args)\n bigList.sort(key=operator.itemgetter(*args)) # sorts the list in place\n\n", "This will sort by columns 2 and 3:\na.sort(key=operator.itemgetter(2,3))\n\n", "The key idea here (pun intended) is to use a key function that returns a tuple.\nBelow, the key function is lambda x: (x[idx] for idx in args)\nx is set to equal an element of aList -- that is, a row of data. It returns a tuple of values, not just one value. The sort() method sorts according to the first element of the list, then breaks ties with the second, and so on. See http://wiki.python.org/moin/HowTo/Sorting#Sortingbykeys\n#!/usr/bin/env python\nimport csv\ndef sortByColumn(aList,*args):\n aList.sort(key=lambda x: (x[idx] for idx in args))\n return aList\n\nfilename='file.txt'\ndef convert_ints(astr):\n try:\n return int(astr)\n except ValueError:\n return astr \nbiglist=[[convert_ints(elt) for elt in line]\n for line in csv.reader(open(filename,'r'),delimiter='\\t')]\n\nfor row in sortByColumn(biglist,0):\n print row\n\nfor row in sortByColumn(biglist,2,3):\n print row\n\n", "Make sure you have converted the numbers to ints, otherwise they will sort alphabetically rather than numerically\n# Sort the list in place\ndef sortByColumn(A,*args):\n import operator\n A.sort(key=operator.itemgetter(*args))\n return A\n\nor \n# Leave the original list alone and return a new sorted one\ndef sortByColumn(A,*args):\n import opertator\n return sorted(A,key=operator.itemgetter(*args))\n\n" ]
[ 11, 8, 2, 1 ]
[]
[]
[ "python", "sorting" ]
stackoverflow_0001683775_python_sorting.txt
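A small self-contained demonstration of the itemgetter approach on a slice of the sample data (remember to convert numeric columns to int first, as the last answer notes):
import operator

rows = [['Joel', 18, 'Orange', 1],
        ['Anna', 17, 'Blue', 2],
        ['Luke', 16, 'Blue', 1]]
rows.sort(key=operator.itemgetter(2, 3))  # colour first, then the number column
print rows
# [['Luke', 16, 'Blue', 1], ['Anna', 17, 'Blue', 2], ['Joel', 18, 'Orange', 1]]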
Q: Limitations of TEMP directory in Windows? I have an application written in Python that's writing large amounts of data to the %TEMP% folder. Oddly, every once in a while, it dies, returning IOError: [Errno 28] No space left on device. The drive has plenty of free space, %TEMP% is not its own partition, I'm an administrator, and the system has no quotas. Does Windows artificially put some types of limits on the data in %TEMP%? If not, any ideas on what could be causing this issue? EDIT: Following discussions below, I clarified the question to better explain what's going on. A: What is the exact error you encounter? Are you creating too many temp files? The GetTempFileName method will raise an IOException if it is used to create more than 65535 files without deleting previous temporary files. The GetTempFileName method will raise an IOException if no unique temporary file name is available. To resolve this error, delete all unneeded temporary files. One thing to note is that if you're indirectly using the Win32 API, and you're only using it to get temp file names, note that while (indirectly) calling it: Creates a uniquely named, zero-byte temporary file on disk and returns the full path of that file. If you're using that path but also changing the value returned, be aware you might actually be creating a 0byte file and an additional file on top of that (e.g. My_App_tmpXXXX.tmp and tmpXXXX.tmp). As Nestor suggested below, consider deleting your temp files after you're done using them. A: Using a FAT32 filesystem I can imagine this happening when: Writing a lot of data to one file, and you reach the 4GB file size cap. Or when you are creating a lot of small files and reaching the 2^16-2 files per directory cap. Apart from this, I don't know of any limitations the system can impose on the temp folder, apart from the physical partition actually being full. Another limitation is as Mike Atlas has suggested the GetTempFileName() function which creates files of type tmpXXXX.tmp. Although you might not be using it directly, verify that the %TEMP% folder does not contain too many of them (2^16). And maybe the obvious, have you tried emptying the %TEMP% folder before running the utility? A: There shouldn't be such space limitation in Temp. If you wrote the app, I would recommend creating your files in ProgramData... A: There should be no trouble whatsoever with regard to your %TEMP% directory. What is your disk quota set to for %TEMP%'s hosting volume? Depending in part on what the apps themselves are doing, one of them may be throwing an error due to the disk quota being reached, which is a pain if this quota is set unreasonably high. If the quota is very high, try lowering it, which you can do as Administrator.
Limitations of TEMP directory in Windows?
I have an application written in Python that's writing large amounts of data to the %TEMP% folder. Oddly, every once in a while, it dies, returning IOError: [Errno 28] No space left on device. The drive has plenty of free space, %TEMP% is not its own partition, I'm an administrator, and the system has no quotas. Does Windows artificially put some types of limits on the data in %TEMP%? If not, any ideas on what could be causing this issue? EDIT: Following discussions below, I clarified the question to better explain what's going on.
[ "What is the exact error you encounter?\nAre you creating too many temp files?\n\nThe GetTempFileName method will raise\n an IOException if it is used to\n create more than 65535 files without \n deleting previous temporary files.\nThe GetTempFileName method will raise\n an IOException if no unique temporary\n file name is available. To resolve\n this error, delete all unneeded\n temporary files.\n\nOne thing to note is that if you're indirectly using the Win32 API, and you're only using it to get temp file names, note that while (indirectly) calling it:\n\nCreates a uniquely named, zero-byte\n temporary file on disk and returns the\n full path of that file.\n\nIf you're using that path but also changing the value returned, be aware you might actually be creating a 0byte file and an additional file on top of that (e.g. My_App_tmpXXXX.tmp and tmpXXXX.tmp).\nAs Nestor suggested below, consider deleting your temp files after you're done using them.\n", "Using a FAT32 filesystem I can imagine this happening when:\n\nWriting a lot of data to one file, and you reach the 4GB file size cap.\nOr when you are creating a lot of small files and reaching the 2^16-2 files per directory cap.\n\nApart from this, I don't know of any limitations the system can impose on the temp folder, apart from the phyiscal partition actually being full.\nAnother limitation is as Mike Atlas has suggested the GetTempFileName() function which creates files of type tmpXXXX.tmp. Although you might not be using it directly, verify that the %TEMP% folder does not contain too many of them (2^16).\nAnd maybe the obvious, have you tried emptying the %TEMP% folder before running the utility?\n", "There shouldn't be such space limitation in Temp. If you wrote the app, I would recommend creating your files in ProgramData...\n", "There should be no trouble whatsoever with regard to your %TEMP% directory. \nWhat is your disk quota set to for %TEMP%'s hosting volume? Depending in part on what the apps themselves are doing, one of them may be throwing an error due to the disk quota being reached, which is a pain if this quota is set unreasonably high. If the quota is very high, try lowering it, which you can do as Administrator.\n" ]
[ 11, 2, 1, 0 ]
[]
[]
[ "python", "temporary_files", "windows" ]
stackoverflow_0001683831_python_temporary_files_windows.txt
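If the 65535-temp-file cap is the culprit, one hedged way to keep %TEMP% clean is to let Python delete the files for you (note that on Windows a NamedTemporaryFile cannot be opened a second time while it is still open):
import tempfile

f = tempfile.NamedTemporaryFile(suffix='.tmp')  # removed automatically on close
f.write('scratch data')
f.flush()
# ... work with f.name ...
f.close()  # the file disappears here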
Q: Django: Streaming dynamically generated XML output through an HttpResponse Recently I wanted to return through a Django view a dynamically generated XML tree. The module I use for XML manipulation is the usual cElementTree. I think I tackled what I wanted by doing the following: def view1(request): resp = HttpResponse(g()) return resp def g(): root = Element("ist") list_stamp = SubElement(root, "list_timestamp") list_creation = str(datetime.now()) for i in range(1,1000000): root.text = str(i) yield cET.tostring(root) Is something like this a good idea? Am I missing something? A: About middlewares "breaking" streaming: CommonMiddleware will try to consume the whole iterator if you set USE_ETAGS = True in settings. But in modern Django (1.1) there's a better way to do conditional get than CommonMiddleware + ConditionalGetMiddleware -- condition decorator. Use that and your streaming will stream okay :-) Another thing that will try to consume the iterator is GzipMiddleware. If you want to use it you can avoid gzipping your streaming responses by turning it into a decorator and applying to individual views instead of globally. A: Does it work? If it doesn't work, what error does it throw? If you're building a full-blown API for a django site, take a look at django-piston. It takes care of a lot of the busywork related to that. http://bitbucket.org/jespern/django-piston/wiki/Home A: Yes, it's perfectly legitimate to return an iterator in an HttpResponse. As you've discovered, that allows you to stream content to the client. A: Yes. That's THE WAY you do it on Django.
Django: Streaming dynamically generated XML output through an HttpResponse
Recently I wanted to return through a Django view a dynamically generated XML tree. The module I use for XML manipulation is the usual cElementTree. I think I tackled what I wanted by doing the following: def view1(request): resp = HttpResponse(g()) return resp def g(): root = Element("ist") list_stamp = SubElement(root, "list_timestamp") list_creation = str(datetime.now()) for i in range(1,1000000): root.text = str(i) yield cET.tostring(root) Is something like this a good idea? Am I missing something?
[ "About middlewares \"breaking\" streaming:\nCommonMiddleware will try to consume the whole iterator if you set USE_ETAGS = True in settings. But in modern Django (1.1) there's a better way to do conditional get than CommonMiddleware + ConditionalGetMiddleware -- condition decorator. Use that and your streaming will stream okay :-)\nAnother thing that will try to consume the iterator is GzipMiddleware. If you want to use it you can avoid gzipping your streaming responses by turning it into a decorator and applying to individual views instead of globally.\n", "Does it work? If it doesn't work, what error does it throw? \nIf you're building a full-blown API for a django site, take a look at django-piston. It takes care of a lot of the busywork related to that.\nhttp://bitbucket.org/jespern/django-piston/wiki/Home\n", "Yes, it's perfectly legitimate to return an iterator in an HttpResponse. As you've discovered, that allows you to stream content to the client.\n", "Yes. That's THE WAY you do it on Django.\n" ]
[ 11, 2, 2, 2 ]
[]
[]
[ "django", "python", "xml" ]
stackoverflow_0001683144_django_python_xml.txt
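A hedged sketch of the first answer's advice for Django 1.1: wrap the view with the condition decorator so no middleware consumes the generator before it streams:
from django.http import HttpResponse
from django.views.decorators.http import condition

@condition(etag_func=None)  # skip ETag generation, which would exhaust g()
def view1(request):
    return HttpResponse(g(), mimetype='text/xml')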
Q: Accessing other py file's class I have two files: a.py b.py How can I access my ABC123 class defined in a.py from b.py?
Accessing other py file's class
I have two files: a.py b.py How can I access my ABC123 class defined in a.py from b.py?
[ "import a\nx = a.ABC123()\n\nor\nfrom a import ABC123\nx = ABC123()\n\nwill do the job, as long as a.py and b.py are in the same directory, or if a.py is in a directory in sys.path or in a directory in your environment's $PYTHONPATH. If neither of those is the case, you might want to read up on relative imports in PEP328. \nIn spite of being several years old, Importing Python Modules might be worth reading for a more thorough overview of importing from other modules. It does seem beginner-friendly, too.\n", "You need to import the objects from the other file:\nfrom a import ABC123\n\nFor a good discussion on this topic please see Importing Python Modules:\n\nThe import and from-import statements\n are a constant cause of serious\n confusion for newcomers to Python.\n Luckily, once you’ve figured out what\n they really do, you’ll never have\n problems with them again.\nThis note tries to sort out some of\n the more common issues related to\n import and from-import and everything.\n\n" ]
[ 10, 2 ]
[]
[]
[ "python" ]
stackoverflow_0001684274_python.txt
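If a.py is not importable from the current directory, one hedged option is to extend sys.path first (the directory path here is an assumption):
import sys
sys.path.append('/path/to/dir/containing/a')  # wherever a.py actually lives

from a import ABC123
x = ABC123()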
Q: org.apache.commons.lang.StringEscapeUtils in python Is there any Python module or code that implements the org.apache.commons.lang.StringEscapeUtils.escapeHtml? Exactly the same as in http://commons.apache.org/lang/api/org/apache/commons/lang/StringEscapeUtils.html#escapeHtml(java.lang.String) I googled around but could only find the cgi.escape function that doesn't do the same thing. Thanks in advance, sorry for the English :D A: This is for XML and not for HTML, but it might fit your needs: Escaping XML >>> from xml.sax.saxutils import escape >>> >>> escape("< & >") '&lt; &amp; &gt;'
org.apache.commons.lang.StringEscapeUtils in python
Is there any Python module or code that implements the org.apache.commons.lang.StringEscapeUtils.escapeHtml? Exactly the same as in http://commons.apache.org/lang/api/org/apache/commons/lang/StringEscapeUtils.html#escapeHtml(java.lang.String) I googled around but could only find the cgi.escape function that doesn't do the same thing. Thanks in advance, sorry for the English :D
[ "This is for XML and not for HTML, but it might fit your needs: Escaping XML\n>>> from xml.sax.saxutils import escape\n>>>\n>>> escape(\"< & >\")\n'&lt; &amp; &gt;'\n\n" ]
[ 1 ]
[]
[]
[ "apache", "apache_commons", "java", "python" ]
stackoverflow_0001683965_apache_apache_commons_java_python.txt
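For basic entities, cgi.escape with quote=True gets close, though escapeHtml also knows the full set of named HTML entities (accented characters and so on):
import cgi

print cgi.escape('<b>"fish" & chips</b>', quote=True)
# &lt;b&gt;&quot;fish&quot; &amp; chips&lt;/b&gt;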
Q: Uploading to the cheeseshop different versions of a package for different versions of Python I have an open-source Python project (called GarlicSim), and I maintain 4 different versions of it for Python versions 2.4, 2.5, 2.6 and 3.1. Yes, maybe it's unusual, but I like using as much features as possible. I keep them in 4 different forks of the repository. Now I want to upload my project to the cheeseshop. What's the way to do this? I expect that a user will automatically get the version of GarlicSim appropriate for his Python version. How do I do that? A: python2.4 setup.py bdist_egg upload python2.5 setup.py bdist_egg upload python2.6 setup.py bdist_egg upload python3.1 setup.py bdist_egg upload
Uploading to the cheeseshop different versions of a package for different versions of Python
I have an open-source Python project (called GarlicSim), and I maintain 4 different versions of it for Python versions 2.4, 2.5, 2.6 and 3.1. Yes, maybe it's unusual, but I like using as much features as possible. I keep them in 4 different forks of the repository. Now I want to upload my project to the cheeseshop. What's the way to do this? I expect that a user will automatically get the version of GarlicSim appropriate for his Python version. How do I do that?
[ "python2.4 setup.py bdist_egg upload\npython2.5 setup.py bdist_egg upload\npython2.6 setup.py bdist_egg upload\npython3.1 setup.py bdist_egg upload\n\n" ]
[ 3 ]
[]
[]
[ "distribution", "distutils", "pypi", "python", "python_3.x" ]
stackoverflow_0001684173_distribution_distutils_pypi_python_python_3.x.txt
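The eggs produced by the commands in the answer carry a pyX.Y tag in their names, which is what lets easy_install pick the right one for the running interpreter; the resulting files look roughly like this (the 0.9 version number is an assumption):
dist/GarlicSim-0.9-py2.4.egg
dist/GarlicSim-0.9-py2.5.egg
dist/GarlicSim-0.9-py2.6.egg
dist/GarlicSim-0.9-py3.1.egg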
Q: Python .pyc files removal in django app i have a following solution structure in python: main_app main_app/template_processor/ main_app/template_processor/models main_app/template_processor/views everything works just fine on my local machine. as soon as code gets to server (it stopped working after i removed all .pyc files from svn), it doesn't see the assembly (if it could be called like that in terms of python) with models. during syncdb command, it creates no tables except for admin ones. but in runtime it finds models themselves, but does not find tables (since they were not created by syncdb) i've added application to settings.py as installed app, everything seems to be fine. the only difference that i can see for now is that on my local machine i have main_app/template_processor/models/models.pyc file, but it doesn't precompile it on server for some reason (might be a hint??) __init__.py files are existing in every folder/subfolder. have anyone faced an issue like that? A: Sounds like Django isn't seeing that module (folder, in this case) for some reason. Make sure all the folders have a file called __init__.py (notice the two underscores before and after). Once that's done, make sure it's listed in your installed apps. Maybe you made some change to it that's causing it to stop loading. You can also try moving the .pyc files out of the directory on your local machine and see whether they're regenerated or not when you runserver. Maybe the most helpful thing: ./manage.py shell to bring up an interactive shell and then 'import app_name' (without quotes) to see whether django is having trouble finding the module. A: Hm, looking at your directory structure, I think that you need to import all your models in template_processor/models/__init__.py, because Django looks only in <app_name>.models module when loading models automatically (i.e. for syncdb). A: My guess would be that you renamed a file, and didn't delete the oldname.pyc file. So if you try to import oldname then you rename oldname to newname, but don't update your import statement, the code will work on systems where oldname.pyc exists, however python won't be able to recreate oldname.pyc if oldname.py doesn't exist. try find . | grep py | xargs grep import | grep -v django | sort -u that should give you a list of all imports in your project, see if one of those imports is pointing at a module for which you have a pyc file but not a .py file. In general, python quickly compiles .py files into .pyc files, and it does this once. I wouldn't worry about the time it takes to generate new .pyc files. A: Ok, if anyone's interested in what really was happening, i've got a story to tell ya: http://code.djangoproject.com/ticket/4470 that's basically what i was going to implement. in order to really get this thing work i still should have a file models.py, which will have a proper list of classess, with Pass inside of it. Then i should have taken all the files (my models) and changed their meta for syncdb to understand they are from the certain "assembly". source code is available (pls see url above). thx for helping out!
Python .pyc files removal in django app
i have a following solution structure in python: main_app main_app/template_processor/ main_app/template_processor/models main_app/template_processor/views everything works just fine on my local machine. as soon as code gets to server (it stopped working after i removed all .pyc files from svn), it doesn't see the assembly (if it could be called like that in terms of python) with models. during syncdb command, it creates no tables except for admin ones. but in runtime it finds models themselves, but does not find tables (since they were not created by syncdb) i've added application to settings.py as installed app, everything seems to be fine. the only difference that i can see for now is that on my local machine i have main_app/template_processor/models/models.pyc file, but it doesn't precompile it on server for some reason (might be a hint??) __init__.py files are existing in every folder/subfolder. have anyone faced an issue like that?
[ "Sounds like Django isn't seeing that module (folder, in this case) for some reason. Make sure all the folders have a file called __init__.py (notice the two underscores before and after). Once that's done, make sure it's listed in your installed apps.\nMaybe you made some change to it that's causing it to stop loading. You can also try moving the .pyc files out of the directory on your local machine and see whether they're regenerated or not when you runserver.\nMaybe the most helpful thing: ./manage.py shell to bring up an interactive shell and then 'import app_name' (without quotes) to see whether django is having trouble finding the module.\n", "Hm, looking at your directory structure, I think that you need to import all your models in template_processor/models/__init__.py, because Django looks only in <app_name>.models module when loading models automatically (i.e. for syncdb).\n", "My guess would be that you renamed a file, and didn't delete the oldname.pyc file.\nSo if you try to \nimport oldname\nthen you rename oldname to rename, but don't update your import statement, the code will work on systems where oldname.pyc exists, however python won't be able to recreate oldname.pyc if oldname.py doesn't exist.\ntry \nfind . | grep py | xargs grep import | grep -v django | sort -u\nthat should give you a list of all imports in your project, see if one of those imports is pointing at a module for which you have a pyc file but not a .py file.\nIn general, python quickly compiles .py files into .pyc files, and it does this once. I wouldn't worry about the time it takes to generate new .pyc files.\n", "Ok, if anyone's interested in what really was happening, i've got a story to tell ya:\nhttp://code.djangoproject.com/ticket/4470\nthat's basically what i was going to implement.\nin order to really get this thing work i still should have a file models.py, which will have a proper list of classess, with Pass inside of it. Then i should have taken all the files (my models) and changed their meta for syncdb to understand they are from the certain \"assembly\".\nsource code is available (pls see url above).\nthx for helping out!\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001683695_django_django_models_python.txt
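A hedged sketch of the fix the second answer and the django ticket point at, with assumed module names: re-export the models from the package's __init__.py and pin each model's app_label so syncdb picks it up:
# main_app/template_processor/models/__init__.py
from main_app.template_processor.models.models import *

# and inside models.py, each model pins its app label:
from django.db import models

class SomeModel(models.Model):
    class Meta:
        app_label = 'template_processor'  # ties the model to the installed app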
Q: How to set attributes using property decorators? This code returns an error: AttributeError: can't set attribute This is really a pity because I would like to use properties instead of calling the methods. Does anyone know why this simple example is not working? #!/usr/bin/python2.6 class Bar( object ): """ ... """ @property def value(): """ ... """ def fget( self ): return self._value def fset(self, value ): self._value = value class Foo( object ): def __init__( self ): self.bar = Bar() self.bar.value = "yyy" if __name__ == '__main__': foo = Foo() A: Is this what you want? class C(object): def __init__(self): self._x = None @property def x(self): """I'm the 'x' property.""" return self._x @x.setter def x(self, value): self._x = value Taken from http://docs.python.org/library/functions.html#property.
How to set attributes using property decorators?
This code returns an error: AttributeError: can't set attribute This is really a pity because I would like to use properties instead of calling the methods. Does anyone know why this simple example is not working? #!/usr/bin/python2.6 class Bar( object ): """ ... """ @property def value(): """ ... """ def fget( self ): return self._value def fset(self, value ): self._value = value class Foo( object ): def __init__( self ): self.bar = Bar() self.bar.value = "yyy" if __name__ == '__main__': foo = Foo()
[ "Is this what you want?\nclass C(object):\n def __init__(self):\n self._x = None\n\n @property\n def x(self):\n \"\"\"I'm the 'x' property.\"\"\"\n return self._x\n\n @x.setter\n def x(self, value):\n self._x = value\n\nTaken from http://docs.python.org/library/functions.html#property.\n" ]
[ 155 ]
[]
[]
[ "python" ]
stackoverflow_0001684828_python.txt
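Usage of the property from the answer is then transparent:
c = C()
c.x = 42   # goes through the x.setter
print c.x  # 42, fetched via the getter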
Q: Call Ruby or Python API in C# .NET I have a lot of APIs/Classes that I have developed in Ruby and Python that I would like to use in my .NET apps. Is it possible to instantiate a Ruby or Python Object in C# and call its methods? It seems that libraries like IronPython do the opposite of this. Meaning, they allow Python to utilize .NET objects, but not the reciprocal of this which is what I am looking for... Am I missing something here? Any ideas? A: This is one of the two things that the Dynamic Language Runtime is supposed to do: everybody thinks that the DLR is only for language implementors to make it easier to implement dynamic languages on the CLI. But, it is also for application writers, to make it easier to host dynamic languages in their applications. Before the DLR, every language had their own hosting API. Now, the DLR has a standardized hosting specification that works the same for every language, and with support for dynamically typed objects in C# 4 and VB.NET 10, it gets easier than ever: // MethodMissingDemo.cs using System; using IronRuby; class Program { static void Main(string[] args) { var rubyEngine = Ruby.CreateEngine(); rubyEngine.ExecuteFile("method_missing_demo.rb"); dynamic globals = rubyEngine.Runtime.Globals; dynamic methodMissingDemo = globals.MethodMissingDemo.@new(); Console.WriteLine(methodMissingDemo.HelloDynamicWorld()); methodMissingDemo.print_all(args); } } # method_missing_demo.rb class MethodMissingDemo def print_all(args) args.map {|arg| puts arg} end def method_missing(name, *args) name.to_s.gsub(/([[:lower:]\d])([[:upper:]])/,'\1 \2') end end Here you see stuff getting passed around in every possible direction. The C# code is calling a method on the Ruby object which doesn't even exist and the Ruby code is iterating over a .NET array and printing its contents to the console. A: If you can wait for C# 4.0 (you can use the beta right now), it will come with the "dynamic" keyword, and you can call IronRuby or IronPython code as described here. A: Both IronRuby and IronPython allow you to call Ruby and Python native modules, functions and classes. Both are supported as more or less first class languages in .NET, specifically under the DLR (Dynamic Language Runtime).
Call Ruby or Python API in C# .NET
I have a lot of APIs/Classes that I have developed in Ruby and Python that I would like to use in my .NET apps. Is it possible to instantiate a Ruby or Python Object in C# and call its methods? It seems that libraries like IronPython do the opposite of this. Meaning, they allow Python to utilize .NET objects, but not the reciprocal of this which is what I am looking for... Am I missing something here? Any ideas?
[ "This is one of the two things that the Dynamic Language Runtime is supposed to do: everybody thinks that the DLR is only for language implementors to make it easier to implement dynamic languages on the CLI. But, it is also for application writers, to make it easier to host dynamic languages in their applications.\nBefore the DLR, every language had their own hosting API. Now, the DLR has a standardized hosting specification that works the same for every language, and with support for dynamically typed objects in C# 4 and VB.NET 10, it gets easier than ever:\n// MethodMissingDemo.cs\nusing System;\nusing IronRuby;\n\nclass Program\n{\n static void Main()\n {\n var rubyEngine = Ruby.CreateEngine();\n rubyEngine.ExecuteFile(\"method_missing_demo.rb\");\n dynamic globals = rubyEngine.Runtime.Globals;\n\n dynamic methodMissingDemo = globals.MethodMissingDemo.@new();\n\n Console.WriteLine(methodMissingDemo.HelloDynamicWorld());\n\n methodMissingDemo.print_all(args);\n }\n}\n\n# method_missing_demo.rb\nclass MethodMissingDemo\n def print_all(args)\n args.map {|arg| puts arg}\n end\n\n def method_missing(name, *args)\n name.to_s.gsub(/([[:lower:]\\d])([[:upper:]])/,'\\1 \\2')\n end\nend\n\nHere you see stuff getting passed around in every possible direction. The C# code is calling a method on the Ruby object which doesn't even exist and the Ruby code is iterating over a .NET array and printing its contents to the console.\n", "If you can wait for C# 4.0 (you can use the beta right now), it will come with the \"dynamic\" keyword, and you can call IronRuby or IronPython code as described here.\n", "Both IronRuby and IronPython allow you to call Ruby and Python native modules, functions and classes. Both are supported as more or less first class languages in .NET, specifically under the DLR (Dynamic Language Runtime).\n" ]
[ 9, 3, 1 ]
[ "I have seen ways to call into Ruby / Python from c#. But it's easier the other way around.\n" ]
[ -1 ]
[ ".net", "c#", "python", "ruby" ]
stackoverflow_0001684145_.net_c#_python_ruby.txt
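For the IronPython route the hosted side is plain Python; a rough counterpart of the Ruby file in the answer could look like this (Python's __getattr__ plays the role of method_missing; the file name is an assumption):
# method_missing_demo.py
import re

class MethodMissingDemo(object):
    def print_all(self, args):
        for arg in args:
            print arg

    def __getattr__(self, name):
        # de-camel-case unknown method names, like the Ruby version does
        return lambda: re.sub(r'([a-z0-9])([A-Z])', r'\1 \2', name)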
Q: Transactions behaviour when request times out. [google app engine] Google app engine has this useful little function in its db class, db.run_in_transaction() Which is supposed to guarantee that your method will be rolled back if an exception is raised. "If the function raises an exception, the transaction is rolled back." What happens if my request times out in the middle of its execution? Will it roll back? A: Yes, the timeout raises an exception, so that also will mean a rollback.
Transactions behaviour when request times out. [google app engine]
Google app engine has this useful little function in its db class, db.run_in_transaction() Which is supposed to guarantee that your method will be rolled back if an exception is raised. "If the function raises an exception, the transaction is rolled back." What happens if my request times out in the middle of its execution? Will it roll back?
[ "Yes, the timeout raises an exception, so that also will mean a rollback.\n" ]
[ 2 ]
[]
[]
[ "google_app_engine", "python" ]
stackoverflow_0001685329_google_app_engine_python.txt
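A hedged sketch of the rollback behaviour (model and variable names are assumptions):
from google.appengine.ext import db

def decrement(counter_key):
    counter = db.get(counter_key)
    if counter.count <= 0:
        raise ValueError('nothing left')  # raising rolls the transaction back
    counter.count -= 1
    counter.put()

db.run_in_transaction(decrement, counter_key)  # counter_key obtained elsewhere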
Q: how to write regex for below format using python I want to validate the data below using regex and python. Below is the dump of the data which can be stored in a string variable Start 0 .......... group=..... name=...... number=.... end=(digits) Start 1 .......... group=..... name=...... number=.... end=(digits) Start 2 .......... group=..... name=...... number=.... end=(digits) Start 3 .......... group=..... name=...... number=.... end=(digits) Where ...... is some random data that need not be validated ... .. Start 100 .......... group=..... name=...... number=.... end=(digits) Thanks in advance A: You could use r'(Start \d+.*?group=.*?name=.*?number=.*?end=\d+)*'.
how to write regex for below format using python
I want to validate the data below using regex and python. Below is the dump of the data which can be stored in a string variable Start 0 .......... group=..... name=...... number=.... end=(digits) Start 1 .......... group=..... name=...... number=.... end=(digits) Start 2 .......... group=..... name=...... number=.... end=(digits) Start 3 .......... group=..... name=...... number=.... end=(digits) Where ...... is some random data that need not be validated ... .. Start 100 .......... group=..... name=...... number=.... end=(digits) Thanks in advance
[ "You could use r'(Start \\d+.*?group=.*?name=.*?number=.*?end=\\d+)*'.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0001685558_python.txt
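To apply the suggested pattern to the multi-line dump you would likely compile it with re.DOTALL so .*? can cross line breaks, and anchor it for strict validation; a sketch, with the dump assumed to be held in data:
import re

pattern = re.compile(r'^(Start \d+.*?group=.*?name=.*?number=.*?end=\d+\s*)+$',
                     re.DOTALL)
print bool(pattern.match(data))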
Q: Python mechanize - two buttons of type 'submit' I have a mechanize script written in python that fills out a web form and is supposed to click on the 'create' button. But there's a problem, the form has two buttons. One for 'add attached file' and one for 'create'. Both are of type 'submit', and the attach button is the first one listed. So when I select the form and do br.submit(), it clicks on the 'attach' button instead of 'create'. Extensive Googling has yielded nothing useful for selecting a specific button in a form. Does anyone know of any methods for skipping over the first 'submit' button and clicking the second? A: I tried using the nr parameter, without any luck. I was able to get it to work with a combination of the name and label parameters, where "label" seems to correspond to the "value" in the HTML: Here are my two submit buttons: <input type="submit" name="Preview" value="Preview" /> <input type="submit" name="Create" value="Create New Page" /> ... and here's the code that clicks the first one, goes back, and then clicks the second: from mechanize import Browser self.br = Browser() self.br.open('http://foo.com/path/to/page.html') self.br.select_form(name='my_form') self.br['somefieldname'] = 'Foo' submit_response = self.br.submit(name='Preview', label='Preview') self.br.back() self.br.select_form(name='my_form') self.br['somefieldname'] = 'Bar' submit_response = self.br.submit(name='Create', label='Create New Page') There's a variant that also worked for me, where the "name" of the submit button is the same, such as: <input type="submit" name="action" value="Preview" /> <input type="submit" name="action" value="Save" /> <input type="submit" name="action" value="Cancel" /> and self.br.select_form(name='my_form') submit_response = self.br.submit(name='action', label='Preview') self.br.back() submit_response = self.br.submit(name='action', label='Save') IMPORTANT NOTE - I was only able to get any of this multiple-submit-button code to work after cleaning up some HTML in the rest of the page. Specifically, I could not have <br/> - instead I had to have <br /> ... and, making even less sense, I could not have anything between the two submit buttons. It frustrated me to no end that the mechanize/ClientForm bug I hunted for over two hours boiled down to this: <tr><td colspan="2"><br/><input type="submit" name="Preview" value="Preview" />&nbsp;<input type="submit" name="Create" value="Create New Page" /></td></tr> (all on one line) did not work, but <tr><td colspan="2"><br /> <input type="submit" name="Preview" value="Preview" /> <input type="submit" name="Create" value="Create New Page" /></td></tr> worked fine (on multiple lines, which also shouldn't have mattered). I like mechanize because it was easy to install (just copy the files into my include directory) and because it's pretty simple to use, but unless I'm missing something major, I think that bugs like this are kind of awful - I can't think of a good reason at all why the first example there should fail and the second should work. And, incidentally, I also found another mechanize bug where a <textarea> which is contained within a <p> is not recognized as a valid control, but once you take it out of the <p> container it's recognized just fine. And I checked, textarea is allowed to be included in other block-level elements like <p>. A: I would suggest you to use Twill which uses mechanize (mostly monkeypatched). 
So say you have form with some fields and two submit buttons with names "submit_to_preview" and "real_submit". Following code should work. BTW remember this is not threadsafe so you might want to use locks in case if you want to use the code in a threaded env. import twill.commands b = twill.get_browser() url = "http://site/myform" twill.commands.go(url) twill.commands.fv("2", "name", "Me") twill.commands.fv("2", "age", "32") twill.commands.fv("2", "comment", "useful article") twill.commands.browser.submit("real_submit") Hope that helps. Cheers. A: Use the 'click' method. E.g. mybrowser.select_form(nr=0) req = mybrowser.click(type="submit", nr=1) mybrowser.open(req) Should work. A: I can talk from experience using HTTP, rather than mechanize, but I think this is probably what you want. When there are two submit buttons in a form, a server can determine which one was pressed, because the client should have added an argument for the submit button. So: <form action="blah" method="get"> <p> <input type="submit" name="button_1" value="One" /> <input type="submit" name="button_2" value="Two" /> </p> </form> Will take you either the URL: blah?button_1=One or: blah?button_2=Two Depending on which button was pressed. If you're programatically determining what arguments are going to be sent, you need to add an argument with the name of the submit button that was pressed, and it's value.
Python mechanize - two buttons of type 'submit'
I have a mechanize script written in python that fills out a web form and is supposed to click on the 'create' button. But there's a problem, the form has two buttons. One for 'add attached file' and one for 'create'. Both are of type 'submit', and the attach button is the first one listed. So when I select the form and do br.submit(), it clicks on the 'attach' button instead of 'create'. Extensive Googling has yielded nothing useful for selecting a specific button in a form. Does anyone know of any methods for skipping over the first 'submit' button and clicking the second?
[ "I tried using the nr parameter, without any luck.\nI was able to get it to work with a combination of the name and label parameters, where \"label\" seems to correspond to the \"value\" in the HTML:\nHere are my two submit buttons:\n<input type=\"submit\" name=\"Preview\" value=\"Preview\" />\n<input type=\"submit\" name=\"Create\" value=\"Create New Page\" />\n\n... and here's the code that clicks the first one, goes back, and then clicks the second:\nfrom mechanize import Browser\nself.br = Browser()\nself.br.open('http://foo.com/path/to/page.html')\nself.br.select_form(name='my_form')\nself.br['somefieldname'] = 'Foo'\nsubmit_response = self.br.submit(name='Preview', label='Preview')\nself.br.back()\nself.br.select_form(name='my_form')\nself.br['somefieldname'] = 'Bar'\nsubmit_response = self.br.submit(name='Create', label='Create New Page')\n\nThere's a variant that also worked for me, where the \"name\" of the submit button is the same, such as:\n<input type=\"submit\" name=\"action\" value=\"Preview\" />\n<input type=\"submit\" name=\"action\" value=\"Save\" />\n<input type=\"submit\" name=\"action\" value=\"Cancel\" />\n\nand\nself.br.select_form(name='my_form')\nsubmit_response = self.br.submit(name='action', label='Preview')\nself.br.back()\nsubmit_response = self.br.submit(name='action', label='Save')\n\nIMPORTANT NOTE - I was only able to get any of this multiple-submit-button code to work after cleaning up some HTML in the rest of the page.\nSpecifically, I could not have <br/> - instead I had to have <br /> ... and, making even less sense, I could not have anything between the two submit buttons.\nIt frustrated me to no end that the mechanize/ClientForm bug I hunted for over two hours boiled down to this:\n<tr><td colspan=\"2\"><br/><input type=\"submit\" name=\"Preview\" value=\"Preview\" />&nbsp;<input type=\"submit\" name=\"Create\" value=\"Create New Page\" /></td></tr>\n\n(all on one line) did not work, but\n<tr><td colspan=\"2\"><br />\n<input type=\"submit\" name=\"Preview\" value=\"Preview\" />\n<input type=\"submit\" name=\"Create\" value=\"Create New Page\" /></td></tr>\n\nworked fine (on multiple lines, which also shouldn't have mattered).\nI like mechanize because it was easy to install (just copy the files into my include directory) and because it's pretty simple to use, but unless I'm missing something major, I think that bugs like this are kind of awful - I can't think of a good reason at all why the first example there should fail and the second should work.\nAnd, incidentally, I also found another mechanize bug where a <textarea> which is contained within a <p> is not recognized as a valid control, but once you take it out of the <p> container it's recognized just fine. And I checked, textarea is allowed to be included in other block-level elements like <p>.\n", "I would suggest you to use Twill which uses mechanize (mostly monkeypatched). \nSo say you have form with some fields and two submit buttons with names \"submit_to_preview\" and \"real_submit\". Following code should work.\nBTW remember this is not threadsafe so you might want to use locks in case if you want to use the code in a threaded env.\nimport twill.commands\nb = twill.get_browser()\nurl = \"http://site/myform\"\ntwill.commands.go(url)\ntwill.commands.fv(\"2\", \"name\", \"Me\")\ntwill.commands.fv(\"2\", \"age\", \"32\")\ntwill.commands.fv(\"2\", \"comment\", \"useful article\")\ntwill.commands.browser.submit(\"real_submit\")\n\nHope that helps. Cheers.\n", "Use the 'click' method. 
E.g.\nmybrowser.select_form(nr=0)\nreq = mybrowser.click(type=\"submit\", nr=1)\nmybrowser.open(req)\n\nShould work.\n", "I can talk from experience using HTTP, rather than mechanize, but I think this is probably what you want.\nWhen there are two submit buttons in a form, a server can determine which one was pressed, because the client should have added an argument for the submit button. So:\n<form action=\"blah\" method=\"get\">\n <p>\n <input type=\"submit\" name=\"button_1\" value=\"One\" />\n <input type=\"submit\" name=\"button_2\" value=\"Two\" />\n </p>\n</form>\n\nWill take you either the URL:\nblah?button_1=One\n\nor:\nblah?button_2=Two\n\nDepending on which button was pressed.\nIf you're programatically determining what arguments are going to be sent, you need to add an argument with the name of the submit button that was pressed, and it's value.\n" ]
[ 22, 7, 5, 2 ]
[]
[]
[ "mechanize", "python" ]
stackoverflow_0000734893_mechanize_python.txt
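When in doubt about which button is which, a handy hedged debugging step is to list the selected form's controls first, then target the right one explicitly as the first answer does:
br.select_form(name='my_form')
for control in br.form.controls:
    print control.type, control.name
br.submit(name='Create', label='Create New Page')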
Q: Debugging the reading of output of a Windows console app using Python This question is very similar to this one. I want to read output from a console app of mine. The app does not terminate, nor does it take input from stdin. When I modify rix0rrr's solution to execute my app and then run his solution, Python hangs because read(1) does not return. The initial output of the app is "Starting the server.\n". Can you guess what property my app may have that is preventing his solution from working? The extent of my changes is that I changed this: p = Popen( ["cmd.exe"], stdin=PIPE, stdout=PIPE ) prompt = re.compile(r"^C:\\.*>", re.M) to this: p = Popen( ["c:\\path\\to\\my\\app\\app.exe"], stdin=PIPE, stdout=PIPE ) prompt = re.compile(r"Starting", re.M) import pdb;pdb.set_trace() I also created a test version of my app that returns immediately and verified that the output from the app is returned by read() in that case. His original, unmodified example, as expected, also does not hang. I also tried out the ActiveState code that Piotr linked to in his answer. No output is returned from the process in that case, either. This is Python 2.4.4 on Vista. A: The very first thing I would check is the buffering in app.exe. If "Starting the server.\n" is being buffered and doesn't make it to the pipe, there is nothing you can do on the reader's side. So, try adding fflush(stdout) after printf("Starting the server.\n").
Debugging the reading of output of a Windows console app using Python
This question is very similar to this one. I want to read output from a console app of mine. The app does not terminate, nor does it take input from stdin. When I modify rix0rrr's solution to execute my app and then run his solution, Python hangs because read(1) does not return. The initial output of the app is "Starting the server.\n". Can you guess what property my app may have that is preventing his solution from working? The extent of my changes is that I changed this: p = Popen( ["cmd.exe"], stdin=PIPE, stdout=PIPE ) prompt = re.compile(r"^C:\\.*>", re.M) to this: p = Popen( ["c:\\path\\to\\my\\app\\app.exe"], stdin=PIPE, stdout=PIPE ) prompt = re.compile(r"Starting", re.M) import pdb;pdb.set_trace() I also created a test version of my app that returns immediately and verified that the output from the app is returned by read() in that case. His original, unmodified example, as expected, also does not hang. I also tried out the ActiveState code that Piotr linked to in his answer. No output is returned from the process in that case, either. This is Python 2.4.4 on Vista.
[ "The very first thing I would check is the buffering in app.exe. If \"Starting the server.\\n\" is being buffered and doesn't make it to the pipe, there is nothing you can do on the reader's side.\nSo, try adding fflush(stdout) after printf(\"Starting the server.\\n\").\n" ]
[ 1 ]
[]
[]
[ "popen", "process", "python", "windows" ]
stackoverflow_0001684995_popen_process_python_windows.txt
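If the child program were Python, the flush suggested in the answer would look like this; on the parent side an unbuffered pipe can also help:
# child side
import sys
print "Starting the server."
sys.stdout.flush()  # push the greeting into the pipe immediately

# parent side
from subprocess import Popen, PIPE
p = Popen([r"c:\path\to\my\app\app.exe"], stdin=PIPE, stdout=PIPE, bufsize=0)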
Q: How to execute os.* methods as root? Is it possible to ask for a root pw without storing it in my script memory and to run some of os.* commands as root? My script (1) scans some folders and files to check if it can do the job, (2) makes some changes in /etc/..., (3) creates a folder and files that should be owned by the user who ran the script. (1) can be done as a normal user. I can do (2) by sudoing the script, but then the folder and files in (3) will be root's. The issue is that I use a lot of os.makedirs, os.symlink, etc, which stops me from making it runnable by a normal user. Thanks to all for suggestions The solution so far is: # do all in sudo os.chown(folder, int(os.getenv('SUDO_UID')), int(os.getenv('SUDO_GID'))) thanks to gnibbler for hint. A: Maybe you can put (2) in a separate script, say script2.py, and in the main script you call sudo script2.py with a popen? This way only (2) will be executed as root. A: yourscript.py: run_part_1() subprocess.call(['sudo', sys.executable, 'part2.py']) run_part_3() part2.py: run_part_2() A: Would you consider using Linux PAM? You might want to take a look at the Linux-PAM Application Developers' Guide and Python API for PAM A: You should execute your script as root and do the proper changes to permissions for (3) using os.chmod and os.chown. It would be possible for you to execute another script with root rights through sudo, but that would also require storing your user's password in the script to pass in to sudo, which is a terrible idea from a security standpoint. Thus, your issue is about getting the correct permissions on some files/folders. First, pass in or hard code the UID/username of your regular user. Then use os.chown to change the owner, and os.chmod to change the permissions. There are also alternate chown/chmod methods in the os package you should look at: http://docs.python.org/library/os.html One final note: You don't have to worry about the permissions of symlinks. They have the permissions of what they point to. A: gnibbler gave a hint at os.chown. The problem was then to know the ID of the user behind sudo. That information is stored in environment variables SUDO_*: os.chown(some_path, int(os.getenv('SUDO_UID')), int(os.getenv('SUDO_GID'))) Splitting the code in 3 files could be a solution, but the code is already mixed, so that's not suitable.
How to execute os.* methods as root?
Is it possible to ask for a root pw without storing it in my script's memory, and to run some of the os.* commands as root? My script (1) scans some folders and files to check if it can do the job, (2) makes some changes in /etc/..., and (3) creates a folder and files that should be owned by the user who ran the script. Step (1) can be done as a normal user. I can do (2) by sudoing the script, but then the folder and files in (3) will be root's. The issue is that I use a lot of os.makedirs, os.symlink, etc, which stops me from making it runnable by a normal user. Thanks to all for the suggestions. The solution so far is: # do all in sudo os.chown(folder, int(os.getenv('SUDO_UID')), int(os.getenv('SUDO_GID'))) thanks to gnibbler for the hint.
[ "Maybe you can put (2) in a separate script, say script2.py, and in the main script you call sudo script2.py with a popen ? \nThis way only (2) will be executed as root.\n", "yourscript.py:\nrun_part_1()\nsubprocess.call(['sudo', sys.executable, 'part2.py'])\nrun_part_3()\n\npart2.py:\nrun_part_2()\n\n", "Would you consider using Linux PAM? You might want to take a look at the Linux-PAM Application Developers' Guide and Python API for PAM\n", "You should execute your script as root and do the proper changes to permissions for (3) using os.chmod and os.chown.\nIt would be possible for you to execute another script with root rights through sudo, but that would also require storing your user's password in the script to pass in to sudo, which is a terrible idea from a security standpoint.\nThus, your issue is about getting the correct permissions on some files/folders. First, pass in or hard code the UID/username of your regular user. Then use os.chown to change the owner, and os.chmod to change the permissions. There are also alternate chown/chmod methods in the os package you should look at: http://docs.python.org/library/os.html\nOne final note: You don't have to worry about the permissions of symlinks. They have the permissions of what they point to.\n", "gnibbler gave a hint at os.chown. The problem was then to know the ID of the user behind sudo. That information is stored in environment variables SUDO_*:\nos.chown, (some_path, int(os.getenv('SUDO_UID')), int(os.getenv('SUDO_GID')))\n\nSplitting the code in 3 files could be a solution, but the code is already mixed, so that's not suitable.\n" ]
[ 2, 2, 1, 1, 1 ]
[]
[]
[ "python", "root", "sudo" ]
stackoverflow_0001636136_python_root_sudo.txt
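A self-contained sketch of the approach the asker settled on: run the whole script under sudo so steps (2) and (3) are possible, then hand ownership of anything created back to the invoking user via the SUDO_UID/SUDO_GID variables that sudo sets. The path below is illustrative.

    import os

    def makedirs_for_invoking_user(path):
        os.makedirs(path)  # we are root here, so this works even under /etc
        uid = int(os.environ.get('SUDO_UID', os.getuid()))
        gid = int(os.environ.get('SUDO_GID', os.getgid()))
        os.chown(path, uid, gid)  # hand it back to whoever ran `sudo script.py`

    if __name__ == '__main__':
        makedirs_for_invoking_user('/tmp/owned-by-the-real-user')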
Q: Auto run unit test cases in Python We have a Python-based web application along with its unit test cases. Our need is to automate the process of running the unit test cases. They should run either after every check-in OR after every fixed time interval. With minimal effort and time, what is the best tool that we can use to automate this process? We are using Linux as the OS and git as source control. A: You're basically looking for continuous integration tools and processes (I mention the term of art because it helps you research the subject in more depth). buildbot is the most popular Python system for the purpose and I would recommend it -- see here for more. A: Hudson is a good choice for this - I've used it with success in the past. It will monitor the git repository for changes, run the tests and report on failures. It will maintain a history of tests in your project. It has a large number of plugins to support Python projects. The Cobertura plugin provides code coverage reports and the violations plugin integrates with pylint to give you an idea of your code quality. There is a good article on getting it set up, Setting up a Python CI Server, on Rhonabwy.com. A: Your goal is to identify which check-ins caused tests to fail. Great intuition! You're using Git, so you may want to start with githooks, which would allow you to create a post-commit script which runs the tests after a commit. If you're feeling gutsy you could even reject users' commits if tests fail — check out this chapter in Pro Git for more info. Alex is right about continuous integration and Buildbot: "By automatically rebuilding and testing the tree each time something has changed, build problems are pinpointed quickly, before other developers are inconvenienced by the failure. The guilty developer can be identified and harassed without human intervention. By running the builds on a variety of platforms, developers who do not have the facilities to test their changes everywhere before check-in will at least know shortly afterwards whether they have broken the build or not. Warning counts, lint checks, image size, compile time, and other build parameters can be tracked over time, are more visible, and are therefore easier to improve."
Auto run unit test cases in Python
We have a Python-based web application along with its unit test cases. Our need is to automate the process of running the unit test cases. They should run either after every check-in OR after every fixed time interval. With minimal effort and time, what is the best tool that we can use to automate this process? We are using Linux as the OS and git as source control.
[ "You're basically looking for continuous integration tools and processes (I mention the term of art because it helps you research the subject in more depth). buildbot is the most popular Python system for the purpose and I would recommend it -- see here for more.\n", "Hudson is a good choice for this - I've used it with success in the past. It will monitor the git repository for changes; run the tests and report on failures. It will maintain a history of tests in your project. It has a large number of plugins to support python projects. The Cobertura plugin provides code coverage reports and the violations plugin integrated with pylint to give you an idea of your code quality.\nThere is a good article on getting it setup on Setting up a Python CI Server on the Rhonabwy.com.\n", "Your goal is to identify which checkins caused tests to fail. Great intuition!\nYou're using Git, so you may want to start with githooks, which would allow you to create a post-commit script which runs the tests after a commit. If you're feeling gutsy you could even reject users' commits if tests fail — check out this chapter in Pro Git for more info.\nAlex is right about continuous integration and Builtbot:\n\nBy automatically rebuilding and\n testing the tree each time something\n has changed, build problems are\n pinpointed quickly, before other\n developers are inconvenienced by the\n failure. The guilty developer can be\n identified and harassed without human\n intervention. By running the builds on\n a variety of platforms, developers who\n do not have the facilities to test\n their changes everywhere before\n checkin will at least know shortly\n afterwards whether they have broken\n the build or not. Warning counts, lint\n checks, image size, compile time, and\n other build parameters can be tracked\n over time, are more visible, and are\n therefore easier to improve.\"\n\n" ]
[ 3, 2, 1 ]
[]
[]
[ "automation", "python", "unit_testing" ]
stackoverflow_0001685885_automation_python_unit_testing.txt
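The githooks suggestion can be tried in a few lines before committing to Buildbot or Hudson. A hypothetical .git/hooks/post-commit (made executable) might look like the sketch below; run_tests.py stands in for whatever script actually runs the suite.

    #!/usr/bin/env python
    # .git/hooks/post-commit -- runs the test suite after every commit.
    import subprocess
    import sys

    result = subprocess.call([sys.executable, 'run_tests.py'])
    if result != 0:
        print >> sys.stderr, 'post-commit: tests FAILED for this commit'
    # A post-commit hook cannot reject the commit; it can only report.
    sys.exit(0)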
Q: Is this possible? I've got a class Foo that runs as a thread. What I'd like to do is limit how much of Foo class Bar can access while still having access to Foo's internals. Is that possible? A: Python is a strongly, dynamically typed language. What this means is: Objects are strongly typed, which means an integer is an integer and can't be treated as anything else unless you say so. Objects have a specific type and stay that way. You can use a name (a variable) to refer to an object, but the name doesn't have any particular type. It all depends on what the name refers to, and this can change as other things are assigned to the same name. Python strongly makes use of the so-called "duck typing" technique where objects do not have (and do not need) specifically typed interfaces. If an object supports a certain set of methods (the canonical example is a file-like object), then it can be used in a context that expects file-like objects. A: According to your question, it looks like you want an instance of IFoo that may act like Foo. The following code does that, but it's not recommended to do it that way in Python. class IFoo(object): pass class Foo(IFoo): pass f = IFoo() Foo.__init__(f) A better way is to simply use (multi)inheritance: class IFoo(object): def __init__(self, *args, **kwargs): pass class Foo(IFoo): def __init__(self, *args, **kwargs): IFoo.__init__(self, *args, **kwargs) f = Foo()
Is this possible?
I've got a class Foo that runs as a thread. What I'd like to do is limit how much of Foo class Bar can access while still having access to Foo's internals. Is that possible?
[ "Python is a strongly, dynamically typed language. What this means is:\n\nObjects are strongly typed which means an integer is an integer and can't be treated as anything else unless you say so. Objects have a specific type and stay that way.\nYou can use a name (a variable) to refer to an object, but the name doesn't have any particular type. It all depends on what the name refers to, and this can change as other things are assigned to the same name.\n\nPython strongly makes use of the so-called \"duck typing\" technique where objects do not have (and do not need) specifically typed interfaces. If an object supports a certain set of methods (the canonical example is a file-like object), then it can be used in a context that expects file-like objects.\n", "According to your question, it looks that you want an instance of IFoo that may act like Foo. Following code does that, but its not recommended to do it that way in Python.\nclass IFoo(object): pass\nclass Foo(IFoo): pass\n\nf = IFoo()\nFoo.__init__(f)\n\nBetter way is to simply use (multi)inheritance:\nclass IFoo(object):\n def __init__(self, *args, **kwargs):\n pass\n\nclass Foo(IFoo):\n def __init__(self, *args, **kwargs):\n IFoo.__init__(self, *args, **kwargs)\n\nf = Foo()\n\n" ]
[ 2, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001686194_python.txt
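Neither answer directly addresses the stated goal (restricting what Bar may touch on a running Foo). One hedged way to do it is to hand Bar a narrow proxy instead of Foo itself, while the real Foo keeps full access to its own internals. Foo, Bar and the allowed attribute names below are illustrative.

    class RestrictedView(object):
        """Expose only a whitelisted subset of another object's attributes."""
        def __init__(self, target, allowed):
            self._target = target
            self._allowed = frozenset(allowed)

        def __getattr__(self, name):
            if name not in self._allowed:
                raise AttributeError('access to %r is not allowed' % name)
            return getattr(self._target, name)

    # foo = Foo(); foo.start()
    # bar = Bar(RestrictedView(foo, ['status', 'enqueue']))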
Q: Matrix data structure A simple 2 dimensional array allows swapping rows (or columns) in a matrix in O(1) time. Is there an efficient data structure that would allow swapping both rows and columns of a matrix in O(1) time? A: You have to store your matrix either as a list of rows or list of columns. Which gives either swapping of rows or swapping of columns in O(1). However, you can add another layer on top of it to handle column order so that you can reorder columns in O(1). So for every access you need to do: x = data[row][colorder[col]] Swap rows as: data[row1], data[row2] = data[row2], data[row1] And swap columns as: colorder[col1], colorder[col2] = colorder[c2], colorder[c1] A: Maybe numpy array can help you -- it allows to access both rows and columns and it's fairly efficient (it's the basic data type for scipy) >>> def f(x,y): ... return 10*x+y ... >>> b = fromfunction(f,(5,4),dtype=int) >>> b array([[ 0, 1, 2, 3], [10, 11, 12, 13], [20, 21, 22, 23], [30, 31, 32, 33], [40, 41, 42, 43]]) >>> b[:,1] # the second column of b array([ 1, 11, 21, 31, 41]) >>> b[1:3,:] # the second and third row of b array([[10, 11, 12, 13], [20, 21, 22, 23]])
Matrix data structure
A simple 2 dimensional array allows swapping rows (or columns) in a matrix in O(1) time. Is there an efficient data structure that would allow swapping both rows and columns of a matrix in O(1) time?
[ "You have to store your matrix either as a list of rows or list of columns. Which gives either swapping of rows or swapping of columns in O(1).\nHowever, you can add another layer on top of it to handle column order so that you can reorder columns in O(1).\nSo for every access you need to do:\nx = data[row][colorder[col]] \n\nSwap rows as:\ndata[row1], data[row2] = data[row2], data[row1]\n\nAnd swap columns as:\ncolorder[col1], colorder[col2] = colorder[c2], colorder[c1]\n\n", "Maybe numpy array can help you -- it allows to access both rows and columns and it's fairly efficient (it's the basic data type for scipy)\n>>> def f(x,y):\n... return 10*x+y\n...\n>>> b = fromfunction(f,(5,4),dtype=int)\n>>> b\narray([[ 0, 1, 2, 3],\n [10, 11, 12, 13],\n [20, 21, 22, 23],\n [30, 31, 32, 33],\n [40, 41, 42, 43]])\n>>> b[:,1] # the second column of b\narray([ 1, 11, 21, 31, 41])\n>>> b[1:3,:] # the second and third row of b\narray([[10, 11, 12, 13],\n [20, 21, 22, 23]])\n\n" ]
[ 4, 0 ]
[]
[]
[ "matrix", "python" ]
stackoverflow_0001686162_matrix_python.txt
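Fleshing out the first answer into a runnable sketch: store the data once and keep an indirection table for each axis, so swapping rows or columns is just an O(1) exchange of two indices. Reads pay one extra lookup per axis.

    class SwappableMatrix(object):
        def __init__(self, rows):
            self.data = rows
            self.roworder = range(len(rows))      # Python 2: plain lists
            self.colorder = range(len(rows[0]))

        def get(self, r, c):
            return self.data[self.roworder[r]][self.colorder[c]]

        def swap_rows(self, r1, r2):
            o = self.roworder
            o[r1], o[r2] = o[r2], o[r1]

        def swap_cols(self, c1, c2):
            o = self.colorder
            o[c1], o[c2] = o[c2], o[c1]

    m = SwappableMatrix([[1, 2], [3, 4]])
    m.swap_rows(0, 1)
    m.swap_cols(0, 1)
    print m.get(0, 0)   # 4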
Q: Python ClientForm Error import ClientForm from urllib2 import urlopen page = urlopen('http://garciainteractive.com/blog/topic_view/topics/content/') form = ClientForm.ParseResponse(page, backwards_compat=False) print form[0] The problem is that ClientForm parses the first html form the following way: <POST http://garciainteractive.com/blog/topic_view/topics/content/ application/x-www-form-urlencoded <HiddenControl(ACT=1) (readonly)> <HiddenControl(RET=http://garciainteractive.com/blog/topic_view/topics/content/) (readonly)> <HiddenControl(URI=/blog/topic_view/topics/content/) (readonly)> <HiddenControl(PRV=) (readonly)> <HiddenControl(XID=d840927d4eaf95cef7aeca789009fb3991f574da) (readonly)> <HiddenControl(entry_id=42) (readonly)> <HiddenControl(site_id=1) (readonly)> <CheckboxControl(save_info=[yes])> <CheckboxControl(notify_me=[yes])> <TextControl(captcha=)> <SubmitControl(submit=Submit) (readonly)>> Thus, not finding name, email and url inputs. How can I fix it? TIA Update: Actually, I'm not using ClientForm separately, but as a part of mechanize, thus would prefer a solution allowing to fix without rewriting mechanize code A: The problem is likely that the HTML itself is invalid - for example it re-uses the id="comment_form" over and over again, while there is only supposed to be one id of a given name per document. Your best solution would probably be to use BeautifulSoup to parse your urlopen page result first, then pretty-print it back into a string for ClientForm - this is likely to get rid of most of the rough edges and give ClientForm a better chance of doing its thing. If this doesn't work, get a pretty-print of the result out and work out what kind of transform you'll have to do on the HTML to make the form very simple for ClientForm - by removing extraneous tags and cruft. A: As Richard suggested use BeautifulSoup. from BeautifulSoup import BeautifulSoup, SoupStrainer from StringIO import StringIO from urllib2 import urlopen import ClientForm url='http://garciainteractive.com/blog/topic_view/topics/content/' html=urlopen(url).read() forms_filter=SoupStrainer('form',id="comment_form") soup = BeautifulSoup(html,parseOnlyThese=forms_filter) forms = ClientForm.ParseFile(StringIO(soup),"", backwards_compat=False) forms[0]['name']='Kalmi' forms[0]['email']='[email protected]'
Python ClientForm Error
import ClientForm from urllib2 import urlopen page = urlopen('http://garciainteractive.com/blog/topic_view/topics/content/') form = ClientForm.ParseResponse(page, backwards_compat=False) print form[0] The problem is that ClientForm parses the first html form the following way: <POST http://garciainteractive.com/blog/topic_view/topics/content/ application/x-www-form-urlencoded <HiddenControl(ACT=1) (readonly)> <HiddenControl(RET=http://garciainteractive.com/blog/topic_view/topics/content/) (readonly)> <HiddenControl(URI=/blog/topic_view/topics/content/) (readonly)> <HiddenControl(PRV=) (readonly)> <HiddenControl(XID=d840927d4eaf95cef7aeca789009fb3991f574da) (readonly)> <HiddenControl(entry_id=42) (readonly)> <HiddenControl(site_id=1) (readonly)> <CheckboxControl(save_info=[yes])> <CheckboxControl(notify_me=[yes])> <TextControl(captcha=)> <SubmitControl(submit=Submit) (readonly)>> Thus, not finding name, email and url inputs. How can I fix it? TIA Update: Actually, I'm not using ClientForm separately, but as a part of mechanize, thus would prefer a solution allowing to fix without rewriting mechanize code
[ "The problem is likely that the HTML itself is invalid - for example it re-uses the id=\"comment_form\" over and over again, while there is only supposed to be one id of a given name per document.\nYour best solution would probably be to use BeautifulSoup to parse your urlopen page result first, then pretty-print it back into a string for ClientForm - this is likely to get rid of most of the rough edges and give ClientForm a better chance of doing its thing.\nIf this doesn't work, get a pretty-print of the result out and work out what kind of transform you'll have to do on the HTML to make the form very simple for ClientForm - by removing extraneous tags and cruft.\n", "As Richard suggested use BeautifulSoup.\nfrom BeautifulSoup import BeautifulSoup, SoupStrainer\nfrom StringIO import StringIO\nfrom urllib2 import urlopen\nimport ClientForm\n\nurl='http://garciainteractive.com/blog/topic_view/topics/content/' \n\nhtml=urlopen(url).read()\nforms_filter=SoupStrainer('form',id=\"comment_form\")\nsoup = BeautifulSoup(html,parseOnlyThese=forms_filter)\nforms = ClientForm.ParseFile(StringIO(soup),\"\", backwards_compat=False)\nforms[0]['name']='Kalmi'\nforms[0]['email']='[email protected]'\n\n" ]
[ 1, 1 ]
[]
[]
[ "clientform", "python" ]
stackoverflow_0001681150_clientform_python.txt
Q: Django Forms not rendering ModelChoiceField's query set I have the following ModelForm: class AttendanceForm(forms.ModelForm): def __init__(self, *args, **kwargs): operation_id = kwargs['operation_id'] del kwargs['operation_id'] super(AttendanceForm, self).__init__(*args, **kwargs) self.fields['deployment'].query_set = \ Deployment.objects.filter(operation__id=operation_id) class Meta: model = Attendance When I manually create the form in the shell (using manage.py shell) form = AttendanceForm(operation_id=1) form.fields['deployment'].query_set it returns the correct query_set, but when I call form.as_p() I get extra entries that weren't in the query_set. Does Django cache the HTML output somehow? I looked through the source, but couldn't find any caching. What am I doing wrong? A: The parameter is queryset, not query_set. See the documentation.
Django Forms not rendering ModelChoiceField's query set
I have the following ModelForm: class AttendanceForm(forms.ModelForm): def __init__(self, *args, **kwargs): operation_id = kwargs['operation_id'] del kwargs['operation_id'] super(AttendanceForm, self).__init__(*args, **kwargs) self.fields['deployment'].query_set = \ Deployment.objects.filter(operation__id=operation_id) class Meta: model = Attendance When I manually create the form in the shell (using manage.py shell) form = AttendanceForm(operation_id=1) form.fields['deployment'].query_set it returns the correct query_set, but when I call form.as_p() I get extra entries that weren't in the query_set. Does Django cache the HTML output somehow? I looked through the source, but couldn't find any caching. What am I doing wrong?
[ "The parameter is queryset, not query_set. See the documentation.\n" ]
[ 4 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001686292_django_django_forms_python.txt
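For completeness, the form with the one-character-class fix applied. The misspelled query_set silently creates a brand-new attribute on the field (which is why the shell appeared to show the right values), while queryset is the attribute the widget actually renders from. The Deployment and Attendance models are as in the question.

    from django import forms

    class AttendanceForm(forms.ModelForm):
        def __init__(self, *args, **kwargs):
            operation_id = kwargs.pop('operation_id')
            super(AttendanceForm, self).__init__(*args, **kwargs)
            # note: queryset, not query_set
            self.fields['deployment'].queryset = \
                Deployment.objects.filter(operation__id=operation_id)

        class Meta:
            model = Attendance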
Q: How to remove extended ascii using python? In trying to fix up a PML (Palm Markup Language) file, it appears as if my test file has non-ASCII characters which is causing MakeBook to complain. The solution would be to strip out all the non-ASCII chars in the PML. So in attempting to fix this in python, I have import unicodedata, fileinput for line in fileinput.input(): print unicodedata.normalize('NFKD', line).encode('ascii','ignore') However, this results in an error that line must be "unicode, not str". Here's a file fragment. \B1a\B \tintense, disordered and often destructive rage†.†.†.\t Not quite sure how to properly pass line in to be processed at this point. A: Try print line.decode('iso-8859-1').encode('ascii', 'ignore') -- that should be much closer to what you want. A: You would like to treat line as ASCII-encoded data, so the answer is to decode it to text using the ascii codec: line.decode('ascii') This will raise errors for data that is not in fact ASCII-encoded. This is how to ignore those errors: line.decode('ascii', 'ignore'). This gives you text, in the form of a unicode instance. If you would rather work with (ascii-encoded) data rather than text, you may re-encode it to get back a str or bytes instance (depending on your version of Python): line.decode('ascii', 'ignore').encode('ascii') A: To drop non-ASCII characters use line.decode(your_file_encoding).encode('ascii', 'ignore'). But probably you'd better use PLM escape sequences for them: import re def escape_unicode(m): return '\\U%04x' % ord(m.group()) non_ascii = re.compile(u'[\x80-\uFFFF]', re.U) line = u'\\B1a\\B \\tintense, disordered and often destructive rage\u2020.\u2020.\u2020.\\t' print non_ascii.sub(escape_unicode, line) This outputs \B1a\B \tintense, disordered and often destructive rage\U2020.\U2020.\U2020.\t. Dropping non-ASCII and control characters with regular expression is easy too (this can be safely used after escaping): regexp = re.compile('[^\x09\x0A\x0D\x20-\x7F]') regexp.sub('', line) A: When reading from a file in Python you're getting byte strings, aka "str" in Python 2.x and earlier. You need to convert these to the "unicode" type using the decode method. eg: line = line.decode('latin1') Replace 'latin1' with the correct encoding.
How to remove extended ascii using python?
In trying to fix up a PML (Palm Markup Language) file, it appears as if my test file has non-ASCII characters which is causing MakeBook to complain. The solution would be to strip out all the non-ASCII chars in the PML. So in attempting to fix this in python, I have import unicodedata, fileinput for line in fileinput.input(): print unicodedata.normalize('NFKD', line).encode('ascii','ignore') However, this results in an error that line must be "unicode, not str". Here's a file fragment. \B1a\B \tintense, disordered and often destructive rage†.†.†.\t Not quite sure how to properly pass line in to be processed at this point.
[ "Try print line.decode('iso-8859-1').encode('ascii', 'ignore') -- that should be much closer to what you want.\n", "You would like to treat line as ASCII-encoded data, so the answer is to decode it to text using the ascii codec:\nline.decode('ascii')\nThis will raise errors for data that is not in fact ASCII-encoded. This is how to ignore those errors:\nline.decode('ascii', 'ignore').\nThis gives you text, in the form of a unicode instance. If you would rather work with (ascii-encoded) data rather than text, you may re-encode it to get back a str or bytes instance (depending on your version of Python):\nline.decode('ascii', 'ignore').encode('ascii')\n", "To drop non-ASCII characters use line.decode(your_file_encoding).encode('ascii', 'ignore'). But probably you'd better use PLM escape sequences for them:\nimport re\n\ndef escape_unicode(m):\n return '\\\\U%04x' % ord(m.group())\n\nnon_ascii = re.compile(u'[\\x80-\\uFFFF]', re.U)\n\nline = u'\\\\B1a\\\\B \\\\tintense, disordered and often destructive rage\\u2020.\\u2020.\\u2020.\\\\t'\nprint non_ascii.sub(escape_unicode, line)\n\nThis outputs \\B1a\\B \\tintense, disordered and often destructive rage\\U2020.\\U2020.\\U2020.\\t.\nDropping non-ASCII and control characters with regular expression is easy too (this can be safely used after escaping):\nregexp = re.compile('[^\\x09\\x0A\\x0D\\x20-\\x7F]')\nregexp.sub('', line)\n\n", "When reading from a file in Python you're getting byte strings, aka \"str\" in Python 2.x and earlier. You need to convert these to the \"unicode\" type using the decode method. eg:\nline = line.decode('latin1')\n\nReplace 'latin1' with the correct encoding.\n" ]
[ 5, 4, 2, 0 ]
[]
[]
[ "ascii", "extended_ascii", "python" ]
stackoverflow_0001685681_ascii_extended_ascii_python.txt
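Putting the accepted advice back into the asker's original loop: decode each byte-string line first, then encode to ASCII with errors ignored. Latin-1 is a guess at the PML file's encoding; substitute the real one if known.

    import fileinput

    for line in fileinput.input():
        # trailing comma: `line` already ends with a newline
        print line.decode('latin-1').encode('ascii', 'ignore'),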
Q: Weighted average of angles I want to calculate the weighted mean of a set of angles. In this Question, there's an answer on how to calculate the mean, as shown in this page. Now I'm trying to figure out how to calculate the weighted average. That is, for each angle there is a weight (the weights sum up to 1): 0.25, 0 degrees 0.5, 20 degrees 0.25, 90 degrees The weighted avg should (if I didn't make a mistake) be 32 degrees. A: OK, my attempt was to just multiply the values by the weights: def circular_mean(weights, angles): x = y = 0. for angle, weight in zip(angles, weights): x += math.cos(math.radians(angle)) * weight y += math.sin(math.radians(angle)) * weight mean = math.degrees(math.atan2(y, x)) return mean It SEEMS to work correctly. I have to think of good test data. A: Depending on your application the question has different answers. As mentioned above you may need to normalize your values and you may need to have signed angles, or you may not wish to. Unless you know what the angle-generating function is, there may not be a unique answer. This was enough of a problem for me (working in geometry) that I wrote my own Angle class.
Weighted average of angles
I want to calculate the weighted mean of a set of angles. In this Question, there's an answer on how to calculate the mean, as shown in this page. Now I'm trying to figure out how to calculate the weighted average. That is, for each angle there is a weight (the weights sum up to 1): 0.25, 0 degrees 0.5, 20 degrees 0.25, 90 degrees The weighted avg should (if I didn't make a mistake) be 32 degrees.
[ "OK, my attemp was to just multiply the values with the weights:\ndef circular_mean(weights, angles):\n x = y = 0.\n for angle, weight in zip(angles, weights):\n x += math.cos(math.radians(angle)) * weight\n y += math.sin(math.radians(angle)) * weight\n\n mean = math.degrees(math.atan2(y, x))\n return mean\n\nIt SEEMS to work correct. I have to think of good test data.\n", "Depending on your application the question has different answers. As mentioned above you may need to normalize your values and you may need to have signed angles, or you may not wish to. Unless you know what the angle generating function is there may not be a unique answer.\nThis was a sufficient problem for me (working in geometry) I wrote my own Angle class.\n" ]
[ 6, 1 ]
[]
[]
[ "algorithm", "mean", "python" ]
stackoverflow_0001686994_algorithm_mean_python.txt
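A quick numeric check of circular_mean against the example in the question. Note that it prints roughly 30.3, not the 32.5 a naive linear weighted average of the degree values would give; once angles are averaged on the circle, the two quantities legitimately differ.

    import math

    def circular_mean(weights, angles):
        x = y = 0.
        for angle, weight in zip(angles, weights):
            x += math.cos(math.radians(angle)) * weight
            y += math.sin(math.radians(angle)) * weight
        return math.degrees(math.atan2(y, x))

    print circular_mean([0.25, 0.5, 0.25], [0, 20, 90])   # ~30.3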
Q: How can I tell if a given method is a classmethod or instancemethod in Python? Checking to see if m.im_self is the class works some of the time but doesn't seem to be 100% reliable (ex. if you use multiple decorators on a method.) A: If it's a bound method on the class then it's a classmethod. from inspect import ismethod, isclass def isclassmethod( m ): return ismethod(m) and isclass(m.__self__)
How can I tell if a given method is a classmethod or instancemethod in Python?
Checking to see if m.im_self is the class works some of the time but doesn't seem to be 100% reliable (ex. if you use multiple decorators on a method.)
[ "If it's a bound method on the class then it's a classmethod. \nfrom inspect import ismethod, isclass\ndef isclassmethod( m ):\n return ismethod(m) and isclass(m.__self__)\n\n" ]
[ 2 ]
[]
[]
[ "class", "python" ]
stackoverflow_0001687531_class_python.txt
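Exercising the helper from the answer: in CPython, a classmethod looked up through its class arrives as a bound method whose __self__ (im_self in older Python 2) is the class itself, which is exactly what the check tests.

    from inspect import isclass, ismethod

    def isclassmethod(m):
        return ismethod(m) and isclass(m.__self__)

    class Example(object):
        def plain(self):
            pass
        @classmethod
        def klass(cls):
            pass

    print isclassmethod(Example.klass)       # True
    print isclassmethod(Example().plain)     # False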
Q: Why does an assignment for double-sliced numpy arrays not work? why do the following lines not work as I expect? import numpy as np a = np.array([0,1,2,1,1]) a[a==1][1:] = 3 print a >>> [0 1 2 1 1] # I would expect [0 1 2 3 3] Is this a 'bug' or is there another recommended way to this? On the other hand, the following works: a[a==1] = 3 print a >>> [0 3 2 3 3] Cheers, Philipp A: It's related to how fancy indexing works. There is a thorough explanation here. It is done this way to allow inplace modification with fancy indexing (ie a[x>3] *= 2). A consequence of this is that you can't assign to a double index as you have found. Fancy indexing always returns a copy rather than a view. A: It appears you simply can't do an assignment through a double-slice like that. This works though: a[numpy.where(a==1)[0][1:]] = 3 A: Because the a[a==1] part isn't actually a slice. It creates a new array. It makes sense when you think about it-- you're only taking the elements that satisfy the boolean condition (like a filter operation). A: This does what you want a[2:][a[2:]==1]=3
Why does an assignment for double-sliced numpy arrays not work?
why do the following lines not work as I expect? import numpy as np a = np.array([0,1,2,1,1]) a[a==1][1:] = 3 print a >>> [0 1 2 1 1] # I would expect [0 1 2 3 3] Is this a 'bug' or is there another recommended way to this? On the other hand, the following works: a[a==1] = 3 print a >>> [0 3 2 3 3] Cheers, Philipp
[ "It's related to how fancy indexing works. There is a thorough explanation here. It is done this way to allow inplace modification with fancy indexing (ie a[x>3] *= 2). A consequence of this is that you can't assign to a double index as you have found. Fancy indexing always returns a copy rather than a view.\n", "It appears you simply can't do an assignment through a double-slice like that.\nThis works though:\na[numpy.where(a==1)[0][1:]] = 3\n\n", "Because the a[a==1] part isn't actually a slice. It creates a new array. It makes sense when you think about it-- you're only taking the elements that satisfy the boolean condition (like a filter operation).\n", "This does what you want\na[2:][a[2:]==1]=3\n\n" ]
[ 10, 9, 3, 0 ]
[]
[]
[ "numpy", "python", "slice", "variable_assignment" ]
stackoverflow_0001687566_numpy_python_slice_variable_assignment.txt
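The copy-versus-view distinction in the accepted answer, made explicit:

    import numpy as np

    a = np.array([0, 1, 2, 1, 1])
    tmp = a[a == 1]     # boolean (fancy) indexing returns a copy
    tmp[1:] = 3         # this is all that a[a==1][1:] = 3 ever modified
    print a             # [0 1 2 1 1] -- the original is untouched

    # A single level of fancy indexing on the left-hand side is different:
    # it is translated into one __setitem__ call on `a` itself.
    a[np.where(a == 1)[0][1:]] = 3
    print a             # [0 1 2 3 3]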
Q: Django Formsets - form.is_valid() is False preventing formset validation I'm utilizing a formset to enable users to subscribe to multiple feeds. I require that a) users choose a subscription by selecting a boolean field, and are also required to tag the subscription, and b) a user must subscribe to a specified number of subscriptions. Currently the below code is capable of a), ensuring the user tags a subscription; however some of my forms' is_valid() are False, thus preventing my validation of the full formset. [edit] Also, the relevant formset error message fails to display. Below is the code: from django import forms from django.forms.formsets import BaseFormSet from tagging.forms import TagField from rss.feeder.models import Feed class FeedForm(forms.Form): subscribe = forms.BooleanField(required=False, initial=False) tags = TagField(required=False, initial='') def __init__(self, *args, **kwargs): feed = kwargs.pop("feed") super(FeedForm, self).__init__(*args, **kwargs) self.title = feed.title self.description = feed.description def clean(self): """apply our custom validation rules""" data = self.cleaned_data feed = data.get("subscribe") tags = data.get("tags") tag_len = len(tags.split()) self._errors = {} if feed == True and tag_len < 1: raise forms.ValidationError("No tags specified for feed") return data class FeedFormSet(BaseFormSet): def __init__(self, *args, **kwargs): self.feeds = list(kwargs.pop("feeds")) self.req_subs = 3 # TODO: convert to kwargs argument self.extra = len(self.feeds) super(FeedFormSet, self).__init__(*args, **kwargs) # WARNING! Using undocumented. see for details... def _construct_form(self, i, **kwargs): kwargs["feed"] = self.feeds[i] return super(FeedFormSet, self)._construct_form(i, **kwargs) def clean(self): """Checks that only a required number of Feed subscriptions are present""" if any(self.errors): # Do nothing, don't bother doing anything unless all the FeedForms are valid return total_subs = 0 for i in range(0, self.extra): form = self.forms[i] feed = form.cleaned_data subs = feed.get("subscribe") if subs == True: total_subs += 1 if total_subs != self.req_subs: raise forms.ValidationError("More subscriptions...") # TODO more informative return form.cleaned_data As requested, the view code: from django.forms import formsets from django.http import Http404 from django.http import HttpResponseRedirect from django.shortcuts import render_to_response from rss.feeder.forms import FeedForm from rss.feeder.forms import FeedFormSet from rss.feeder.models import Feed FeedSet = formsets.formset_factory(FeedForm, FeedFormSet) def feeds(request): if request.method == "POST": formset = create_feed_formset(request.POST) if formset.is_valid(): # submit the results return HttpResponseRedirect('/feeder/thanks/') else: formset = create_feed_formset() return render_to_response('feeder/register_step_two.html', {'formset': formset}) def create_feed_formset(data=None): """Create and populate a feed formset""" feeds = Feed.objects.order_by('id') if not feeds: # No feeds found, we should have created them raise Http404('Invalid Step') return FeedSet(data, feeds=feeds) # return the instance of the formset Any help would be appreciated. Ps. For full disclosure, this code is based on http://google.com/search?q=cache:rVtlfQ3QAjwJ:https://www.pointy-stick.com/blog/2009/01/23/advanced-formset-usage-django/+django+formset [Solved] See solution below. A: Solved. Below is a quick run through of the solution.
Reporting the error required manipulating and formatting a special error message. In the source code for formsets I found that the errors that apply to a whole form are known as non_form_errors, and I produced a custom error based on this. [note: I couldn't find any authoritative documentation on this, so someone might know a better way]. The code is below: def append_non_form_error(self, message): errors = super(FeedFormSet, self).non_form_errors() errors.append(message) raise forms.ValidationError(errors) The formset's clean method also needed a few tweaks. Basically it checks if the form is bound (empty ones aren't, hence is_valid is false in the question) and if so checks its subscribe value. def clean(self): """Checks that only a required number of Feed subscriptions are present""" count = 0 for form in self.forms: if form.is_bound: if form['subscribe'].data: count += 1 if count > 0 and count != self.required: self.append_non_form_error("not enough subs") Some might wonder why I chose to access the value using the form['field_name'].data format. This allows us to retrieve the raw value and always get a count of subscriptions, allowing me to return all relevant messages for the entire formset, i.e. specific problems with individual forms and higher-level problems (like the number of subscriptions), meaning that the user won't have to resubmit the form over and over to work through the list of errors. Finally, I was missing one crucial aspect of my template, the {{ formset.non_form_errors }} tag. Below is the updated template: {% extends "base.html" %} {% load i18n %} {% block content %} <form action="." method="post"> {{ formset.management_form }} {{ formset.non_form_errors }} <ol> {% for form in formset.forms %} <li><p>{{ form.title }}</p> <p>{{ form.description }}</p> {{ form.as_p }} </li> {% endfor %} </ol> <input type="submit"> </form> {% endblock %} A: I made an attempt to circumvent my problem... it is not a good solution, it's very much a hack. It allows people to proceed if they subscribe to the required number of feeds (in the case below, more than 1); however, if there are fewer than the required number of feeds, it fails to show the error message raised. def clean(self): count = 0 for i in range(0, self.extra): form = self.forms[i] try: if form.cleaned_data: count += 1 except AttributeError: pass if count > 1: raise forms.ValidationError('not enough subscriptions') return form.cleaned_data I do use {{ formset.management_form }} in my template, so as far as I know the error should display. Below is my template in case I'm misguided. {% extends "base.html" %} {% load i18n %} {% block content %} <form action="." method="post"> {{ formset.management_form }} <ol> {% for form in formset.forms %} {{ form.as_p }} </li> {% endfor %} </ol> <input type="submit"> </form> {% endblock %}
Django Formsets - form.is_valid() is False preventing formset validation
I'm utilizing a formset to enable users to subscribe to multiple feeds. I require that a) users choose a subscription by selecting a boolean field, and are also required to tag the subscription, and b) a user must subscribe to a specified number of subscriptions. Currently the below code is capable of a), ensuring the user tags a subscription; however some of my forms' is_valid() are False, thus preventing my validation of the full formset. [edit] Also, the relevant formset error message fails to display. Below is the code: from django import forms from django.forms.formsets import BaseFormSet from tagging.forms import TagField from rss.feeder.models import Feed class FeedForm(forms.Form): subscribe = forms.BooleanField(required=False, initial=False) tags = TagField(required=False, initial='') def __init__(self, *args, **kwargs): feed = kwargs.pop("feed") super(FeedForm, self).__init__(*args, **kwargs) self.title = feed.title self.description = feed.description def clean(self): """apply our custom validation rules""" data = self.cleaned_data feed = data.get("subscribe") tags = data.get("tags") tag_len = len(tags.split()) self._errors = {} if feed == True and tag_len < 1: raise forms.ValidationError("No tags specified for feed") return data class FeedFormSet(BaseFormSet): def __init__(self, *args, **kwargs): self.feeds = list(kwargs.pop("feeds")) self.req_subs = 3 # TODO: convert to kwargs argument self.extra = len(self.feeds) super(FeedFormSet, self).__init__(*args, **kwargs) # WARNING! Using undocumented. see for details... def _construct_form(self, i, **kwargs): kwargs["feed"] = self.feeds[i] return super(FeedFormSet, self)._construct_form(i, **kwargs) def clean(self): """Checks that only a required number of Feed subscriptions are present""" if any(self.errors): # Do nothing, don't bother doing anything unless all the FeedForms are valid return total_subs = 0 for i in range(0, self.extra): form = self.forms[i] feed = form.cleaned_data subs = feed.get("subscribe") if subs == True: total_subs += 1 if total_subs != self.req_subs: raise forms.ValidationError("More subscriptions...") # TODO more informative return form.cleaned_data As requested, the view code: from django.forms import formsets from django.http import Http404 from django.http import HttpResponseRedirect from django.shortcuts import render_to_response from rss.feeder.forms import FeedForm from rss.feeder.forms import FeedFormSet from rss.feeder.models import Feed FeedSet = formsets.formset_factory(FeedForm, FeedFormSet) def feeds(request): if request.method == "POST": formset = create_feed_formset(request.POST) if formset.is_valid(): # submit the results return HttpResponseRedirect('/feeder/thanks/') else: formset = create_feed_formset() return render_to_response('feeder/register_step_two.html', {'formset': formset}) def create_feed_formset(data=None): """Create and populate a feed formset""" feeds = Feed.objects.order_by('id') if not feeds: # No feeds found, we should have created them raise Http404('Invalid Step') return FeedSet(data, feeds=feeds) # return the instance of the formset Any help would be appreciated. Ps. For full disclosure, this code is based on http://google.com/search?q=cache:rVtlfQ3QAjwJ:https://www.pointy-stick.com/blog/2009/01/23/advanced-formset-usage-django/+django+formset [Solved] See solution below.
[ "Solved. Below is a quick run through of the solution.\nReporting the error required manipulating and formating a special error message. In the source code for formsets I found the errors that apply to a whole form are known as non_form_errors and produced a custom error based on this. [note: I couldn't find any authoritive documentation on this, so someone might know a better way]. The code is below:\ndef append_non_form_error(self, message):\n errors = super(FeedFormSet, self).non_form_errors()\n errors.append(message)\n raise forms.ValidationError(errors)\n\nThe formsets clean method also needed a few tweaks. Basically it checks the if the forms is bound (empty ones aren't, hence is_valid is false in the question) and if so accesses checks there subscribe value.\ndef clean(self):\n \"\"\"Checks that only a required number of Feed subscriptions are present\"\"\"\n count = 0\n for form in self.forms:\n if form.is_bound:\n if form['subscribe'].data:\n count += 1\n if count > 0 and count != self.required:\n self.append_non_form_error(\"not enough subs\")\n\nSome might wonder why I choose to access the value using the form['field_name'].data format. This allows us to retrieve the raw value and always get a count on subscriptions, allowing me to return all relevant messages for the entire formset, i.e. specific problems with individual forms and higher level problems (like number of subscriptions), meaning that the user won't have to resubmit the form over and over to work through the list of errors.\nFinally, I was missing one crucial aspect of my template, the {{ formset.non_form_errors }} tag. Below is the updated template:\n{% extends \"base.html\" %}\n{% load i18n %}\n\n{% block content %}\n<form action=\".\" method=\"post\">\n {{ formset.management_form }}\n {{ formset.non_form_errors }}\n <ol> \n {% for form in formset.forms %}\n <li><p>{{ form.title }}</p>\n <p>{{ form.description }}</p>\n {{ form.as_p }}\n </li>\n {% endfor %}\n </ol>\n <input type=\"submit\">\n</form>\n\n{% endblock %}\n\n", "I made attempt to circumvent my problem...it is not a good solution, it's very much so a hack. It allows people to proceed if they subscribe to the required number of feeds (in the case below more than 1), however if less than the required number of feeds, it fails to show the error message raised.\ndef clean(self):\n count = 0\n for i in range(0, self.extra):\n form = self.forms[i]\n try:\n if form.cleaned_data:\n count += 1\n except AttributeError:\n pass\n if count > 1:\n raise forms.ValidationError('not enough subscriptions')\n return form.cleaned_data\n\nI do use {{ formset.management_form }} in my template so as far as I know the error should display. Below my template in case I'm misguided.\n{% extends \"base.html\" %}\n{% load i18n %}\n\n{% block content %}\n<form action=\".\" method=\"post\">\n {{ formset.management_form }}\n <ol> \n {% for form in formset.forms %}\n {{ form.as_p }}\n </li>\n {% endfor %}\n </ol>\n <input type=\"submit\">\n</form>\n\n{% endblock %}\n\n" ]
[ 3, 0 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001682069_django_django_forms_python.txt
Q: Python app distribution cross-platform I want to distribute my app on OSX (using py2app) and as a Debian package. The structure of my app is like: app/ debian/ <lots of debian related stuff> scripts/ app app/ __init__.py app.py mod1/ __init__.py a.py mod2/ __init__.py b.py My setup.py looks something like: from setuptools import setup import os import os.path osname = os.uname()[0] if osname == 'Darwin': APP = ['app/app.py'] DATA_FILES = [] OPTIONS = {'argv_emulation': True} setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) elif osname == 'Linux': setup( name = "app", version = "0.0.1", description = "foo bar", packages = ["app", "app.mod1", "app.mod2"], scripts = ["scripts/app"], data_files = [ ("/usr/bin", ["scripts/app"]), ] ) Then, in b.py (this is on OSX): from app.mod2.b import * I get: ImportError: No module named mod2.b So basically, mod2 can't access mod1. On Linux there's no problem, because the python module 'app' is installed globally in /usr/shared/pyshared. But on OSX the app will obviously be a self-contained .app thing built by py2app. I wonder if I approached this totally wrong. Are there any best practices when distributing Python apps on OSX? Edit: I also tried a hack like this in b.py: from ..mod2.b import * ValueError: Attempted relative import beyond toplevel package Edit2: Seems to be related to this: How to do relative imports in Python? A: I'm not sure if this is the 'best practice' or not (I've not put much python software into proper distribution), but I would just make sure that the top-level app package was in sys.path. Something like putting the following into the top-level __init__.py: try: import myapp except ImportError: import sys from os.path import abspath, dirname, split parent_dir = split(dirname(abspath(__file__)))[0] sys.path.append(parent_dir) I think that should do the right thing in a cross-platform way. EDIT: As kaizer.se points out this might not work in the __init__.py file, depending on how the code you're invoking is getting executed. It would only work if that file is evaluated. The key is to make sure that the top-level package is in sys.path from the code that is actually running. Often, so that I can execute individual files inside of a package directly (for testing with the if __name__ == '__main__' idiom), I'll do something like place a statement: import _setup at the top of the individual file in question, and then create a file _setup.py which does the path munging as necessary. So, something like: package/ __init__.py _setup.py mod1/ __init__.py _setup.py somemodule.py If you import _setup from somemodule.py, that setup file can ensure that the top-level package is in sys.path before the rest of the code in somemodule.py is evaluated.
Python app distribution cross-platform
I want to distribute my app on OSX (using py2app) and as a Debian package. The structure of my app is like: app/ debian/ <lots of debian related stuff> scripts/ app app/ __init__.py app.py mod1/ __init__.py a.py mod2/ __init__.py b.py My setup.py looks something like: from setuptools import setup import os import os.path osname = os.uname()[0] if osname == 'Darwin': APP = ['app/app.py'] DATA_FILES = [] OPTIONS = {'argv_emulation': True} setup( app=APP, data_files=DATA_FILES, options={'py2app': OPTIONS}, setup_requires=['py2app'], ) elif osname == 'Linux': setup( name = "app", version = "0.0.1", description = "foo bar", packages = ["app", "app.mod1", "app.mod2"], scripts = ["scripts/app"], data_files = [ ("/usr/bin", ["scripts/app"]), ] ) Then, in b.py (this is on OSX): from app.mod2.b import * I get: ImportError: No module named mod2.b So basically, mod2 can't access mod1. On Linux there's no problem, because the python module 'app' is installed globally in /usr/shared/pyshared. But on OSX the app will obviously be a self-contained .app thing built by py2app. I wonder if I approached this totally wrong. Are there any best practices when distributing Python apps on OSX? Edit: I also tried a hack like this in b.py: from ..mod2.b import * ValueError: Attempted relative import beyond toplevel package Edit2: Seems to be related to this: How to do relative imports in Python?
[ "I'm not sure if this is the 'best practice' or not (I've not put much python software into proper distribution), but I would just make sure that the top-level app package was in sys.path. Something like putting the following into the top-level __init__.py:\ntry:\n import myapp\nexcept ImportError:\n import sys\n from os.path import abspath, dirname, split\n parent_dir = split(dirname(abspath(__file__)))[0]\n sys.path.append(parent_dir)\n\nI think that should do the right thing in a cross platform way.\nEDIT: As kaizer.se points out this might not work in the __init__.py file, depending on how the code you're invoking is getting executed. It would only work if that file is evaluated. The key is to make sure that the top-level package is in sys.path from some the code that actually is running.\nOften times, so that I an execute individual files inside of a package directly (for testing with the if __name__ eq '__main__' idiom), I'll do something like place a statement:\nimport _setup\n\nAt the top of the individual file in question, and then create a file _setup.py which does the path munging as necessary. So, something like:\npackage/\n __init__.py\n _setup.py\n mod1/\n __init__.py\n _setup.py\n somemodule.py\n\nIf you import _setup from somemodule.py, that setup file can ensure that the top level package is in sys.path before the rest of the code in somemodule.py is evaluated.\n" ]
[ 5 ]
[]
[]
[ "cross_platform", "py2app", "python", "setuptools" ]
stackoverflow_0001688105_cross_platform_py2app_python_setuptools.txt
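A guess at what the top-level _setup.py sketched in the answer might contain; the nested copies under mod1/ etc. would climb one more directory level.

    # package/_setup.py -- imported first by modules that may be run directly.
    import sys
    from os.path import abspath, dirname

    # This file lives inside the package, so the package's parent directory
    # is two levels up from here.
    parent_dir = dirname(dirname(abspath(__file__)))
    if parent_dir not in sys.path:
        sys.path.insert(0, parent_dir)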
Q: Why am I getting a little-endian error when importing .so file in python I'm attempting to use a C++ extension for Python called PySndObj, and I am getting an error I have never seen and cannot find anything about on the web :( ImportError: /home/nhnifong/SndObj-2.6.6/python/_sndobj.so: ELF file data encoding not little-endian I know that probably means the byte order is backwards, so I tried writing a little script that read the file 2 bytes at a time and switched their order before writing them back out. It didn't work. Anyone know what to do? A: You have to build the extension from source yourself. It was valiant of you to try and "reverse the bytes", but only certain sections of the ELF file have word-oriented (as opposed to byte-oriented) data. Furthermore, it's unlikely that the shared library in question was compiled for your system's CPU architecture.
Why am I getting a little-endian error when importing .so file in python
I'm attempting to use a C++ extension for Python called PySndObj, and I am getting an error I have never seen and cannot find anything about on the web :( ImportError: /home/nhnifong/SndObj-2.6.6/python/_sndobj.so: ELF file data encoding not little-endian I know that probably means the byte order is backwards, so I tried writing a little script that read the file 2 bytes at a time and switched their order before writing them back out. It didn't work. Anyone know what to do?
[ "You have to build the extension from source yourself.\nIt was valiant of you to try and \"reverse the bytes\", but only certain sections of the ELF file have word-oriented (as opposed to byte-oriented) data.\nFurthermore, it's unlikely that the dll in question was compiled for your system's CPU architecture.\n" ]
[ 4 ]
[]
[]
[ "import", "python" ]
stackoverflow_0001688845_import_python.txt
Q: How do you call PyObjC code from Objective-C? Possible Duplicate: Calling Python from Objective-C I'm a long-time Python programmer and short-time Cocoa programmer. I'm just getting started with PyObjC and it's really amazing how easy it is to get stuff done. That said, I wanted to try using pure ObjC for my controller with PyObjC models. I might enjoy letting Python be Python and Objective-C be Objective-C. I figured it was worth a try, anyways. Except I can't figure out or find anything about how to call Python from Objective-C, only the other way around. Can someone point me to any resources on this? (Maybe it's on the PyObjC site but I just don't know what I'm looking for?) Edit: I'm most interested, at the basic level, in being able to call a Python module and get some native ObjC data types back. A: There are several possible approaches. The most tempting is to use py2app to compile a loadable bundle from your Python code, from which you can access the principal class using NSBundle. Unfortunately, this use case hasn't gotten much love from the py2app developers, and I've found several bugs in 10.5 and 10.6, including a rather nasty memory leak when passing data from Python back into Objective-C. I wouldn't recommend using py2app at this point. The second approach is to invert the embedding. Write a Python Cocoa app and load your Objective-C code from a bundle at startup (even in main()). If you already have a large Objective-C app, this may take a bit of work. The only downside that I'm aware of is that you won't be able to use GC in your Objective-C code, but this is really a universal limitation in working with PyObjC. Finally, you can instantiate a Python interpreter in your Objective-C code to load your Python code. This is obviously more involved, but may be the best option if you already have a large Objective-C codebase into which you want to inject your Python code. The main.m file from the Python-Cocoa application template in Xcode is a good place to start to see this in action. A: Whoops, guess I should've searched a bit more: Calling Python from Objective-C
How do you call PyObjC code from Objective-C?
Possible Duplicate: Calling Python from Objective-C I'm a long-time Python programmer and short-time Cocoa programmer. I'm just getting started with PyObjC and it's really amazing how easy it is to get stuff done. That said, I wanted to try using pure ObjC for my controller with PyObjC models. I might enjoy letting Python be Python and Objective-C be Objective-C. I figured it was worth a try, anyways. Except I can't figure out or find anything about how to call Python from Objective-C, only the other way around. Can someone point me to any resources on this? (Maybe it's on the PyObjC site but I just don't know what I'm looking for?) Edit: I'm most interested, at the basic level, in being able to call a Python module and get some native ObjC data types back.
[ "There are several possible approaches. The most tempting is to use py2app to compile a loadable bundle from your python code from which you can access the principal class using NSBundle. Unfortunately, this use case hasn't gotten much love from the py2app developers, and I've found several bugs in 10.5 and 10.6, including a rather nasty memory leak when passing data from python back in to Objective-C. I wouldn't recommend using py2app at thist point.\nThe second approach is invert the embedding. Write a Python cocoa app and load your Objective-C code from a bundle at startup (even in main()). If you already have a large Objective-C app, this may take a bit of work. The only downside, that I'm ware of, is that you won't be able to use GC in your Objective-C code, but this is really a universal limitation in working with PyObjC.\nFinally, you can instantiate a python interpreter in your Objective-C code to load your python code. This is obviously more involved, but may the best option if you already have a large Objective-C codebase into which you want to inject your python code. The main.m file from the Python-Cococa application template in Xcode is a good place to start to see this in action.\n", "Whoops, guess I should've searched a bit more:\nCalling Python from Objective-C\n" ]
[ 3, 0 ]
[]
[]
[ "cocoa", "objective_c", "pyobjc", "python" ]
stackoverflow_0001689012_cocoa_objective_c_pyobjc_python.txt
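The "invert the embedding" approach from the first answer, sketched from the Python side with PyObjC. The bundle path and class name are placeholders, and while objc.loadBundle and objc.lookUpClass are the PyObjC calls involved, the exact arguments may differ between PyObjC versions.

    import objc

    # Pull a compiled Objective-C bundle into the running Python process.
    objc.loadBundle('MyObjCCode', globals(),
                    bundle_path='/path/to/MyObjCCode.bundle')

    # Look up an Objective-C class defined in that bundle and use it.
    MyController = objc.lookUpClass('MyController')
    controller = MyController.alloc().init()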
Q: python image recognition What I want to do is image recognition for a simple app: given a (500 x 500) px image (1-color background), the image will have only 1 geometric figure (triangle or square or smiley face :) ) of (50x50) px. Python will do the recognition of the figure and display what geometric figure it is. Any links? Any hints? Any API? Thanks :) A: A typical Python tool chain would be: read your images with PIL, transform them into Numpy arrays, and use Scipy's image filters (linear and rank, morphological) to implement your solution. As far as differentiating the shapes, I would obtain a figure's silhouette by looking at the shape of the background. I would then detect the number of corners using a corner detection algorithm (e.g. Harris). A triangle has 3 corners, a square 4, and a smiley none. Here's a Python implementation of the Harris corner detection with Scipy. Edit: As you mention in the comments, the blog post didn't present the function that produces a gaussian kernel needed in the algorithm. Here's an example of such a function from the Scipy Cookbook (great resource btw): def gauss_kern(size, sizey=None): """ Returns a normalized 2D gauss kernel array for convolutions """ size = int(size) if not sizey: sizey = size else: sizey = int(sizey) x, y = mgrid[-size:size+1, -sizey:sizey+1] g = exp(-(x**2/float(size)+y**2/float(sizey))) return g / g.sum() A: OpenCV has blob analysis tools; it will give you metrics about the shape which you can feed to your favourite pattern recognition algorithm :) E.g. a rectangle has a 1.0 ratio for area / (height * width), while a circle's ratio is about 0.78. A: You point out that the geometric figure is 50x50 px. If the size and orientation of the geometric figures are fixed, you have a classic template matching problem, suited to the correlation method. You can apply the template matching on the original image or on a border detection output. Otherwise, if size (scale) and/or orientation are arbitrary, Fourier descriptors can be applied. These descriptors are rotation and scale invariant. All these methods can be coded using OpenCV, NumPy or SciPy. A: If you know the state space of your data, you can use Principal Component Analysis. With PCA all of the objects must be posed (in the center of the screen). PCA will not do detection, but it will separate objects into unique layers which you can identify as being a triangle, etc. Also note: this is not scale or rotation invariant. [I can't remember what this technique is called, but it's similar to how the post office does handwriting recognition] If you can handle only non-curved surfaces, you could do edge detection, and then do sampling at intersections to get an approximation of similarity.
python image recognition
what I want to do is an image recognition for a simple app: given image (500 x 500) pxs ( 1 color background ) the image will have only 1 geometric figure (triangle or square or smileyface :) ) of (50x50) pxs. python will do the recognition of the figure and display what the geometric figure is. any links? any hints? any API? thxs :)
[ "A typical python tool chain would be:\n\nread your images with with PIL \ntransform them into Numpy arrays\nuse Scipy's image filters (linear and rank, morphological) to implement your solution\n\nAs far differentiating the shapes, I would obtain its silhouette by looking at the shape of the background. I would then detect the number of corners using a corner detection algorithm (e.g. Harris). A triangle has 3 corners, a square 4, and a smiley none.\nHere's a python implementation of the Harris corner detection with Scipy.\nEdit:\nAs you mention in the comments, the blog post didn't present the function that produces a gaussian kernel needed in the algorithm. Here's an example of a such a function from the Scipy Cookbook (great resource btw):\ndef gauss_kern(size, sizey=None):\n \"\"\" Returns a normalized 2D gauss kernel array for convolutions \"\"\"\n size = int(size)\n if not sizey:\n sizey = size\n else:\n sizey = int(sizey)\n x, y = mgrid[-size:size+1, -sizey:sizey+1]\n g = exp(-(x**2/float(size)+y**2/float(sizey)))\n return g / g.sum()\n\n", "OpenCV has blob analysis tools, it will give you metrics about the shape which you can feed for your favourite pattern recognition algorithm :) Eg. rectangle has 1.0 ratio for area / (height * width), when circle's ratio is about 0.78.\n", "You point the geometric figure is 50x50 px. If size and orientation of the geometric figures are fixed, you have a classic template matching problem, suitable to the correlation method. You can apply the template matching on the original image or on a border detection output. \nOtherwise, if size (scale) and/or orientation are arbitrary, Fourier descriptors can be applied. These descriptors are rotation and scale invariants.\nAll these methods can be coded using OpenCV, NumPy or SciPy.\n", "If you know the statespace of your data, you can use Principal Component Analysis. With PCA all of the objects must be posed (in the center of the screen). PCA will not do detection, but it will seperate objects into unique layers in which you can identify as being a triangle, etc. Also note: this is not scale or rotation invariant. \n[I can't remember what this technique is called, but its similar to how the postoffice does handwritting rec]\nIf you can handle only non-curved curvfaces, you could do edge detection, and then do sampling at intersections to get an approximation of similarity. \n" ]
[ 32, 10, 3, 2 ]
[]
[]
[ "algorithm", "image", "image_processing", "python", "python_imaging_library" ]
stackoverflow_0001603688_algorithm_image_image_processing_python_python_imaging_library.txt
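The fill-ratio heuristic from the OpenCV answer can be sketched with just PIL and NumPy, per the toolchain the first answer recommends. This is a rough sketch, assuming a solid filled figure on a uniform background whose gray level you know; the thresholds are illustrative, not tuned:

    from PIL import Image
    import numpy as np

    def classify(path, background):
        # Ratio of figure area to its bounding box: ~1.0 for a square,
        # ~0.78 for a disc, ~0.5 for a triangle.
        img = np.asarray(Image.open(path).convert('L'))
        mask = img != background                 # pixels that are not background
        ys, xs = np.nonzero(mask)
        box = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
        ratio = float(mask.sum()) / box
        if ratio > 0.95:
            return 'square'
        elif ratio > 0.65:
            return 'circle'
        return 'triangle'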
Q: What can cause select to block in Python? Here's a snippet of code I'm using in a loop: while True: print 'loop' rlist, wlist, xlist = select.select(readers, [], [], TIMEOUT) print 'selected' # do stuff At a certain point, select will block and "selected" is never getting printed. What can cause this behavior? Is it possible there's some kind of deadlock? UPDATE: I'm running on Ubuntu linux and the reader objects are sockets. A: Yes, depending on the OS in question, it is indeed possible for a certain file descriptor to block at OS level in a non-interruptible way even though you've explicitly demanded for it to be non-blocking. Depending on your OS, there may be workarounds to these OS-level bugs (or "misfeatures"), but to offer any further help we need to know exactly what OS is in play and exactly what kinds of objects are in the readers list. A: Some longshots... If TIMEOUT is getting set to None, then select will never timeout. Also, if readers becomes an empty list, select will always wait for the full timeout value (or hang if TIMEOUT is None)
What can cause select to block in Python?
Here's a snippet of code I'm using in a loop: while True: print 'loop' rlist, wlist, xlist = select.select(readers, [], [], TIMEOUT) print 'selected' # do stuff At a certain point, select will block and "selected" is never getting printed. What can cause this behavior? Is it possible there's some kind of deadlock? UPDATE: I'm running on Ubuntu linux and the reader objects are sockets.
[ "Yes, depending on the OS in question, it is indeed possible for a certain file descriptor to block at OS level in a non-interruptible way even though you've explicitly demanded for it to be non-blocking. Depending on your OS, there may be workarounds to these OS-level bugs (or \"misfeatures\"), but to offer any further help we need to know exactly what OS is in play and exactly what kinds of objects are in the readers list.\n", "Some longshots...\nIf TIMEOUT is getting set to None, then select will never timeout.\nAlso, if readers becomes an empty list, select will always wait for the full timeout value (or hang if TIMEOUT is None)\n" ]
[ 2, 1 ]
[]
[]
[ "asynchronous", "python", "select", "sockets" ]
stackoverflow_0001689182_asynchronous_python_select_sockets.txt
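Both longshots from the second answer are easy to guard against. A minimal sketch of a defensive version of the question's loop, where readers stands in for the question's list of sockets:

    import select

    TIMEOUT = 5.0   # never None, so select() cannot wait forever
    readers = []    # the question's list of sockets

    while True:
        if not readers:
            break   # select([], [], [], t) would just sleep for t each pass
        rlist, wlist, xlist = select.select(readers, [], [], TIMEOUT)
        if not rlist:
            print 'timed out with no sockets ready'
            continue
        # handle the readable sockets here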
Q: What's the easiest way of finding a child instance from a parent instance in Django? My application uses class inheritance to minimize repetition across my models. My models.py looks kind of like this: class BaseModel(models.Model): title = models.CharField(max_length=100) pub_date = models.DateField() class Child(BaseModel): foo = models.CharField(max_length=20) class SecondChild(BaseModel): bar = models.CharField(max_length=20) Now most of the time, my views and templates only deal with instances of Child or SecondChild. Once in a while, however, I have a situation where I have an instance of BaseModel, and need to figure out which class is inheriting from that instance. Given an instance of BaseModel, let's call it base, Django's ORM offers base.child and base.secondchild. Currently, I have a method that loops through all of them to figure it out. It would look something like this: class BaseModel(models.Model): ... def get_absolute_url(self): url = None try: self.child url = self.child.get_absolute_url() except Child.DoesNotExist: pass if not url: try: self.secondchild url = self.secondchild.get_absolute_url() except SecondChild.DoesNotExist: pass if not url: url = '/base/%i' % self.id return url That is hopelessly ugly, and gets uglier with every additional child class I have. Does anybody have any ideas on a better, more pythonic way to go about this? A: I haven't tested this, but it might be worth tinkering with: def get_absolute_url(self): subclasses = ('child', 'secondchild', ) for subclass in subclasses: if hasattr(self, subclass): return getattr(self, subclass).get_absolute_url() return '/base/%i' % self.id A: I haven't messed with Django inheritance much, so I suppose you can't override get_absolute_url() in the model classes? Perhaps the visitor pattern could help if there are a lot of functions that need this in many different places. A: Various forms of this question pop up here regularly. This answer demonstrates a generic way to "cast" a parent type to its proper subtype without having to query every subtype table. That way you wouldn't need to define a monster get_absolute_url on the parent which covers all the cases, you'd just convert to the child type and call get_absolute_url normally.
What's the easiest way of finding a child instance from a parent instance in Django?
My application uses class inheritance to minimize repetition across my models. My models.py looks kind of like this: class BaseModel(models.Model): title = models.CharField(max_length=100) pub_date = models.DateField() class Child(BaseModel): foo = models.CharField(max_length=20) class SecondChild(BaseModel): bar = models.CharField(max_length=20) Now most of the time, my views and templates only deal with instances of Child or SecondChild. Once in a while, however, I have a situation where I have an instance of BaseModel, and need to figure out which class is inheriting from that instance. Given an instance of BaseModel, let's call it base, Django's ORM offers base.child and base.secondchild. Currently, I have a method that loops through all of them to figure it out. It would look something like this: class BaseModel(models.Model): ... def get_absolute_url(self): url = None try: self.child url = self.child.get_absolute_url() except Child.DoesNotExist: pass if not url: try: self.secondchild url = self.secondchild.get_absolute_url() except SecondChild.DoesNotExist: pass if not url: url = '/base/%i' % self.id return url That is hopelessly ugly, and gets uglier with every additional child class I have. Does anybody have any ideas on a better, more pythonic way to go about this?
[ "I haven't tested this, but it might be worth tinkering with:\ndef get_absolute_url(self):\n subclasses = ('child', 'secondchild', )\n\n for subclass in subclasses:\n if hasattr(self, subclass):\n return getattr(self, subclass).get_absolute_url()\n\n return '/base/%i' % self.id\n\n", "I haven't messed with Django inheitance much, so I suppose you can't override get_absolute_url() in the model classes? \nPerhaps the visitor pattern could help if there are lot of functions that need this in many different places.\n", "Various forms of this question pop up here regularly. This answer demonstrates a generic way to \"cast\" a parent type to its proper subtype without having to query every subtype table. That way you wouldn't need to define a monster get_absolute_url on the parent which covers all the cases, you'd just convert to the child type and call get_absolute_url normally.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001683711_django_python.txt
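The hasattr loop from the first answer generalizes into a small helper that works for any number of subclasses. A sketch, assuming the question's Child/SecondChild models; note that in Python 2 hasattr returns False when the reverse one-to-one lookup raises DoesNotExist, which is what makes this work:

    def cast_to_child(obj, subclass_names=('child', 'secondchild')):
        # The attribute names are the lowercased subclass names Django
        # generates for the reverse one-to-one relations.
        for name in subclass_names:
            if hasattr(obj, name):
                return getattr(obj, name)
        return obj   # no subclass row: fall back to the parent instance

    url = cast_to_child(base).get_absolute_url()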
Q: Is there a more succinct / pythonic way to do this? (counting longest seq of heads, tails in coin flips) Count the longest sequence of heads and tails in 200 coin flips. I did this - is there a niftier way to do it in python? (without being too obfuscated) import random def toss(n): count = [0,0] longest = [0,0] for i in xrange(n): coinface = random.randrange(2) count[coinface] += 1 count[not coinface] = 0 if count[coinface] > longest[coinface]: longest[coinface] = count[coinface] #print coinface, count, longest print "longest sequence heads %d, tails %d" %tuple(longest) if __name__ == '__main__': toss(200) see this for what prompted my playing A: def coins(num): lst = [random.randrange(2) for i in range(num)] lst = [(i, len(list(j))) for i, j in itertools.groupby(lst)] tails = max(j for i, j in lst if i) heads = max(j for i, j in lst if not i) return {1: tails, 0: heads} A: import collections, itertools, random def makesequence(choices=2, length=200): return [random.randrange(choices) for _ in itertools.repeat(None, length)] def runlengths(sequence): runlength_by_item = collections.defaultdict(set) for key, group in itertools.groupby(sequence): runlength_by_item[key].add(sum(1 for _ in group)) return dict((k, max(v)) for k, v in runlength_by_item.items()) As you'll notice, this is much more "decoupled" -- runlengths is a completely general way to determine the maximal run-lengths of different hashable items in any iterable (highly reusable if you need such run-lengths in a variety of different contexts), just as makesequence is a completely general way to make a list of random numbers given list length and number of choices for each random number. Putting these two together may not offer an optimal point-solution to a given, highly specific problem, but it will come close, and building up your little library of reusable "building blocks" will have much higher longer-term returns than just solving each specific problem by entirely dedicated code. A: You can use itertools, which is a much more Pythonic way to do this: def toss(n): rolls = [random.randrange(2) for i in xrange(n)] maximums = [0, 0] for which, grp in itertools.groupby(rolls): maximums[which] = max(len(list(grp)), maximums[which]) print "Longest sequence of heads %d, tails %d" % tuple(maximums) A: Another inefficient solution :-) import random, re s = ''.join(str(random.randrange(2)) for c in range(10)) print s print max(re.findall(r'0+', s)) print max(re.findall(r'1+', s)) >>> 0011100100 00 111 >>> A: >>> def toss(count): result = [] for i in range(count): result.append("HT"[random.randrange(0, 2)]) return ''.join(result) >>> s = toss(200) >>> h_max = max(len(x) for x in s.split("T")) >>> t_max = max(len(x) for x in s.split("H")) >>> print h_max, t_max 4 6 A: This isn't really pythonic so much as tortured, but here's a short version (with meaningless 1-character variable names, no less!) import random x = ''.join([chr(random.randrange(2)) for i in range(200)]) print max([len(s) for s in x.split(chr(0)) + x.split(chr(1))]) A: It is probably an axiom that any code can be made more succinct. Yours looks perfectly pythonic, though. Actually, on reflection perhaps there is no succinctness axiom like that. If succinct means "marked by compact precise expression without wasted words," and if by "words" we mean words of code and not of memory, then a single word program cannot be made more succinct (unless, perhaps, it is the "exit" program). If pythonic means "of extraordinary size and power", then it seems antagonistic to succinctness unless we restrict our definition to power only. I'm not convinced your program resembles a prophetic oracle at all, although you might implement it as an ascii portrait of a particular prophetic oracle. It doesn't look like a snake, so there's room for improvement there too. import random def toss(n): ''' ___ ____________ <<<((__O\ (__<>___<>__ \ ____ \ \_(__<>___<>__)\O\_/O___>-< hiss \O__<>___<>___<>)\___/ ''' count = [0,0] longest = [0,0] for i in xrange(n): coinface = random.randrange(2) count[coinface] += 1 count[not coinface] = 0 if count[coinface] > longest[coinface]: longest[coinface] = count[coinface] #print coinface, count, longest print "longest sequence heads %d, tails %d" %tuple(longest) if __name__ == '__main__': toss(200) Nifty, huh? A: import random, itertools def toss(n): faces = (random.randrange(2) for i in range(n)) longest = [0, 0] for face, seq in itertools.groupby(faces): longest[face] = max(longest[face], len(list(seq))) print "longest sequence heads %d, tails %d" % tuple(longest) A: String scanning algorithm If you are looking for a fast algorithm, then you can use the algorithm I developed recently for an interview question that asked for the longest string of consecutive letters in a string. See blog entry here. def search_longest_substring(s): """ >>> search_longest_substring('AABBBBCBBBBACCDDDDDDAAABBBBCBBBBACCDDDDDDDAAABBBBCBBBBACCDDDDDDA') (7, 'D') """ def find_left(s, midc, mid, left): for j in range(mid-1, left-1, -1): if s[j] != midc: return j + 1 return left def find_right(s, midc, mid, right): for k in range(mid+1, right): if s[k] != midc: return k return right i, longest = 0, (0, '') while i < len(s): c = s[i] j = find_left(s, c, i, i-longest[0]) k = find_right(s, c, i, len(s)) if k-j > longest[0]: longest = (k-j, c) i = k + longest[0] return longest if __name__ == '__main__': import random heads_or_tails = "".join(["HT"[random.randrange(0, 2)] for _ in range(20)]) print search_longest_substring(heads_or_tails) print heads_or_tails This algorithm is O(n) in worst case (all coin flips are identical) or O(n/m) in average case (where m is the length of the longest match). Feel free to correct me on this. The code is not especially pythonic (i.e. it does not use list comprehensions or itertools or other stuff). It's in python and it's a good algorithm. Micro-optimizations For the micro-optimization crowd, here are changes that make this really scream in python 2.6 on a Windows Vista laptop: def find_left(s, midc, mid, left): j = mid - 1 while j >= 0: if s[j] != midc: return j + 1 j -= 1 return left def find_right(s, midc, mid, right): k = mid+1 while k < right: if s[k] != midc: return k k += 1 return right Timing results for 1000 iterations with timeit: range: 2.670 xrange: 0.3268 while-loop: 0.255 Adding psyco import to the file: try: import psyco psyco.full() except ImportError: pass 0.011 on 1000 iterations with psyco and while-loop. So with judicious micro-optimizations and importing psyco, the code runs 250-ish times faster.
Is there a more succinct / pythonic way to do this? (counting longest seq of heads, tails in coin flips)
Count the longest sequence of heads and tails in 200 coin flips. I did this - is there a niftier way to do it in python? (without being too obfuscated) import random def toss(n): count = [0,0] longest = [0,0] for i in xrange(n): coinface = random.randrange(2) count[coinface] += 1 count[not coinface] = 0 if count[coinface] > longest[coinface]: longest[coinface] = count[coinface] #print coinface, count, longest print "longest sequence heads %d, tails %d" %tuple(longest) if __name__ == '__main__': toss(200) see this for what prompted my playing
[ "def coins(num):\n lst = [random.randrange(2) for i in range(num)]\n lst = [(i, len(list(j))) for i, j in itertools.groupby(lst)]\n tails = max(j for i, j in lst if i)\n heads = max(j for i, j in lst if not i)\n return {1: tails, 0: heads}\n\n", "import collections, itertools, random\n\ndef makesequence(choices=2, length=200):\n return [random.randrange(choices) for _ in itertools.repeat(None, length)]\n\ndef runlengths(sequence):\n runlength_by_item = collections.defaultdict(set)\n for key, group in itertools.groupby(sequence):\n runlength_by_item[key].add(sum(1 for _ in group))\n return dict((k, max(v)) for k, v in runlength_by_item.items())\n\nAs you'll notice, this is much more \"decoupled\" -- runlengths is a completely general way to determine the maximal run-lengths of different hashable items in any iterable (highly reusable if you need such run-lengths in a variety of different contexts), just as makesequence is a completely general way to make a list of random numbers given list length and number of choices for each random number. Putting these two together may not offer an optimal point-solution to a given, highly specific problem, but it will come close, and building up your little library of reusable \"building blocks\" will have much higher longer-term returns than just solving each specific problem by entirely dedicated code.\n", "You can use itertools, which is a much more Pythonic way to do this:\ndef toss(n):\n rolls = [random.randrange(2) for i in xrange(n)]\n maximums = [0, 0]\n for which, grp in itertools.groupby(rolls):\n maximums[which] = max(len(list(grp)), maximums[which])\n\n print \"Longest sequence of heads %d, tails %d\" % tuple(maximums)\n\n", "Another inefficient solution :-)\nimport random, re\ns = ''.join(str(random.randrange(2)) for c in range(10))\nprint s\nprint max(re.findall(r'0+', s))\nprint max(re.findall(r'1+', s))\n\n>>> \n0011100100\n00\n111\n>>> \n\n", ">>> def toss(count):\n result = []\n for i in range(count):\n result.append(\"HT\"[random.randrange(0, 2)])\n return ''.join(result)\n\n>>> s = toss(200)\n>>> h_max = max(len(x) for x in s.split(\"T\"))\n>>> t_max = max(len(x) for x in s.split(\"H\"))\n>>> print h_max, t_max\n4 6\n\n", "This isn't really pythonic so much as tortured, but here's a short version (with meaningless 1-character variable names, no less!)\nimport random\nx = ''.join([chr(random.randrange(2)) for i in range(200)])\nprint max([len(s) for s in x.split(chr(0)) + x.split(chr(1))])\n\n", "It is probably an axiom that any code can be made more succinct. Yours looks perfectly pythonic, though.\nActually, on reflection perhaps there is no succinctness axiom like that. If succinct means \"marked by compact precise expression without wasted words,\" and if by \"words\" we mean words of code and not of memory, then a single word program cannot be made more succinct (unless, perhaps, it is the \"exit\" program).\nIf pythonic means \"of extraordinary size and power\", then it seems antagonistic to succinctness unless we restrict our definition to power only. I'm not convinced your program resembles a prophetic oracle at all, although you might implement it as an ascii portrait of a particular prophetic oracle. 
It doesn't look like a snake, so there's room for improvement there too.\nimport random\n\ndef toss(n):\n '''\n ___ ____________\n<<<((__O\\ (__<>___<>__ \\ ____\n \\ \\_(__<>___<>__)\\O\\_/O___>-< hiss\n \\O__<>___<>___<>)\\___/\n\n '''\n count = [0,0]\n longest = [0,0]\n for i in xrange(n):\n coinface = random.randrange(2)\n count[coinface] += 1\n count[not coinface] = 0\n\n if count[coinface] > longest[coinface]:\n longest[coinface] = count[coinface]\n #print coinface, count, longest\n\n print \"longest sequence heads %d, tails %d\" %tuple(longest)\n\nif __name__ == '__main__':\n toss(200)\n\nNifty, huh?\n", "import random, itertools\n\ndef toss(n):\n faces = (random.randrange(2) for i in range(n))\n longest = [0, 0]\n for face, seq in itertools.groupby(faces):\n longest[face] = max(longest[face], len(list(seq)))\n print \"longest sequence heads %d, tails %d\" % tuple(longest)\n\n", "String scanning algorithm\nIf you are looking for a fast algorithm, then you can use the algorithm I developed recently for an interview question that asked for the longest string of consecutive letters in a string. See blog entry here.\ndef search_longest_substring(s):\n \"\"\"\n >>> search_longest_substring('AABBBBCBBBBACCDDDDDDAAABBBBCBBBBACCDDDDDDDAAABBBBCBBBBACCDDDDDDA')\n (7, 'D')\n \"\"\"\n def find_left(s, midc, mid, left):\n for j in range(mid-1, left-1, -1):\n if s[j] != midc:\n return j + 1\n return left\n def find_right(s, midc, mid, right):\n for k in range(mid+1, right):\n if s[k] != midc:\n return k\n return right\n i, longest = 0, (0, '')\n while i < len(s):\n c = s[i]\n j = find_left(s, c, i, i-longest[0])\n k = find_right(s, c, i, len(s))\n if k-j > longest[0]:\n longest = (k-j, c)\n i = k + longest[0]\n return longest\n\nif __name__ == '__main__':\n import random\n heads_or_tails = \"\".join([\"HT\"[random.randrange(0, 2)] for _ in range(20)])\n print search_longest_substring(heads_or_tails)\n print heads_or_tails\n\nThis algorithm is O(n) in worst case (all coin flips are identical) or O(n/m) in average case (where m is the length of the longest match). Feel free to correct me on this.\nThe code is not especially pythonic (i.e. it does not use list comprehensions or itertools or other stuff). It's in python and it's a good algorithm.\nMicro-optimizations\nFor the micro-optimization crowd, here are changes that make this really scream in python 2.6 on a Windows Vista laptop:\ndef find_left(s, midc, mid, left):\n j = mid - 1\n while j >= 0:\n if s[j] != midc:\n return j + 1\n j -= 1\n return left\ndef find_right(s, midc, mid, right):\n k = mid+1\n while k < right:\n if s[k] != midc:\n return k\n k += 1\n return right\n\nTiming results for 1000 iterations with timeit:\nrange: 2.670\nxrange: 0.3268\nwhile-loop: 0.255\n\nAdding psyco import to the file:\ntry:\n import psyco\n psyco.full()\nexcept ImportError:\n pass\n\n0.011 on 1000 iterations with psyco and while-loop. So with judicious micros-optimizations and importing psyco, the code runs 250-ish times faster.\n" ]
[ 11, 11, 7, 3, 2, 2, 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001689032_python.txt
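The itertools.groupby answers above consolidate into one single-pass helper that avoids building intermediate lists and is easy to test:

    import random
    from itertools import groupby

    def longest_runs(flips):
        # Longest run of 0s and 1s in one pass over the sequence.
        longest = {0: 0, 1: 0}
        for face, group in groupby(flips):
            longest[face] = max(longest[face], sum(1 for _ in group))
        return longest

    assert longest_runs([1, 1, 0, 1, 1, 1, 0, 0]) == {0: 2, 1: 3}
    print longest_runs([random.randrange(2) for _ in xrange(200)])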
Q: Embedding Gnash into PyGame? Is there a way to display flash applications using Gnash renderer (I'm not averse to Adobe's renderer but would prefer not to use it) in a PyGame application? A: Does Gnash allow drawing to an SDL_Surface? If so, Pygame has a C API that would make gluing these together easy. If not, your best bet will be Pygame's frombuffer command. This will interpret a raw block of data as an image. You'll still need some way of getting that pointer from Gnash to your Python code.
Embedding Gnash into PyGame?
Is there a way to display flash applications using Gnash renderer (I'm not averse to Adobe's renderer but would prefer not to use it) in a PyGame application?
[ "Does Gnash allow drawing to an SDL_Surface? If so, Pygame has a C API that would make gluing these together easy. If not your best bet will be Pygame's frombuffer command. This will interepret a raw block of data as an image. You'll still need some way of getting that pointer from Gnash to your Python code. \n" ]
[ 0 ]
[]
[]
[ "flash", "gnash", "pygame", "python" ]
stackoverflow_0001685969_flash_gnash_pygame_python.txt
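The frombuffer route the answer mentions looks roughly like this; the raw string stands in for an RGB framebuffer you would somehow obtain from Gnash (a black frame is used so the sketch runs on its own):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    raw = '\x00' * (640 * 480 * 3)   # placeholder for a Gnash RGB frame
    frame = pygame.image.frombuffer(raw, (640, 480), 'RGB')
    screen.blit(frame, (0, 0))
    pygame.display.flip()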
Q: Overhead of a Round-trip to MySql? So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL. The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc) So I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages. In hindsight it's not all together surprising, but it seems that this can't be good. ...even if only a dozen or so of the queries take 1ms+ So I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead. A: Just because you are using an ORM doesn't mean that you shouldn't do performance tuning. I had - like you - a home page of one of my applications that had low performance. I saw that I was doing hundreds of queries to display that page. I went looking at my code and realized that with some careful use of select_related() my queries would bring more of the data I needed - I went from hundreds of queries to tens. You can also run a SQL profiler and see if there aren't indices that would help your most common queries - you know, standard database stuff. Caching is also your friend, I would think. If a lot of a page is not changing, do you need to query the database every single time? If all else fails, remember: the ORM is great, and yes - you should try to use it because it is the Django philosophy; but you are not married to it. If you really have a use case where studying and tuning the ORM navigation didn't help, if you are sure that you could do it much better with a standard query: use raw sql for that case. A: The overhead of each query is only part of the picture. The actual round trip time between your Django and Mysql servers is probably very small since most of your queries are coming back in less than one millisecond. The bigger problem is that the number of queries issued to your database can quickly overwhelm it. 500 queries for a page is way too much, even 50 seems like a lot to me. If ten users view complicated pages you're now up to 5000 queries. The round trip time to the database server is more of a factor when the caller is accessing the database from a Wide Area Network, where roundtrips can easily be between 20ms and 100ms. I would definitely look into using some kind of caching. A: There are some ways to reduce the query volume. Use .filter() and .all() to get a bunch of things; pick and choose in the view function (or template via {%if%}). Python can process a batch of rows faster than MySQL. "But I could send too much to the template". True, but you'll execute fewer SQL requests. Measure to see which is better. This is what you used to do when you wrote SQL. It's not wrong -- it doesn't break the ORM -- but it optimizes the underlying DB work and puts the processing into the view function and the template. Avoid query navigation in the template. When you do {{foo.bar.baz.quux}}, SQL is used to get the bar associated with foo, then the baz associated with the bar, then the quux associated with baz. You may be able to reduce this query business with some careful .filter() and Python processing to assemble a useful tuple in the view function. Again, this was something you used to do when you hand-crafted SQL. In this case, you gather larger batches of ORM-managed objects in the view function and do your filtering in Python instead of via a lot of individual ORM requests. This doesn't break the ORM. It changes the usage profile from lots of little queries to a few bigger queries. A: There is always overhead in database calls, in your case the overhead is not that bad because the application and database are on the same machine so there is no network latency but there is still a significant cost. When you make a request to the database it has to prepare to service that request by doing a number of things including: Allocating resources (memory buffers, temp tables etc) to the database server connection/thread that will handle the request, De-serializing the sql and parameters (this is necessary even on one machine as this is an inter-process request unless you are using an embedded database) Checking whether the query exists in the query cache; if not, optimise it and put it in the cache. Note also that if your queries are not parametrised (that is the values are not separated from the SQL) this may result in cache misses for statements that should be the same meaning that each request results in the query being analysed and optimized each time. Process the query. Prepare and return the results to the client. This is just an overview of the kinds of things most database management systems do to process an SQL request. You incur this overhead 500 times even if the query itself runs relatively quickly. Bottom line: database interactions, even to a local database, are not as cheap as you might expect.
Overhead of a Round-trip to MySql?
So I've been building django applications for a while now, and drinking the cool-aid and all: only using the ORM and never writing custom SQL. The main page of the site (the primary interface where users will spend 80% - 90% of their time) was getting slow once you have a large amount of user specific content (ie photos, friends, other data, etc) So I popped in the sql logger (was pre-installed with pinax, I just enabled it in the settings) and imagine my surprise when it reported over 500 database queries!! With hand coded sql I hardly ever ran more than 50 on the most complex pages. In hindsight it's not all together surprising, but it seems that this can't be good. ...even if only a dozen or so of the queries take 1ms+ So I'm wondering, how much overhead is there on a round trip to mysql? django and mysql are running on the same server so there shouldn't be any networking related overhead.
[ "Just because you are using an ORM doesn't mean that you shouldn't do performance tuning. \nI had - like you - a home page of one of my applications that had low performance. I saw that I was doing hundreds of queries to display that page. I went looking at my code and realized that with some careful use of select_related() my queries would bring more of the data I needed - I went from hundreds of queries to tens.\nYou can also run a SQL profiler and see if there aren't indices that would help your most common queries - you know, standard database stuff.\nCaching is also your friend, I would think. If a lot of a page is not changing, do you need to query the database every single time?\nIf all else fails, remember: the ORM is great, and yes - you should try to use it because it is the Django philosophy; but you are not married to it.\nIf you really have a usecase where studying and tuning the ORM navigation didn't help, if you are sure that you could do it much better with a standard query: use raw sql for that case.\n", "The overhead of each queries is only part of the picture. The actual round trip time between your Django and Mysql servers is probably very small since most of your queries are coming back in less than a one millisecond. The bigger problem is that the number of queries issued to your database can quickly overwhelm it. 500 queries for a page is way to much, even 50 seems like a lot to me. If ten users view complicated pages you're now up to 5000 queries.\nThe round trip time to the database server is more of a factor when the caller is accessing the database from a Wide Area Network, where roundtrips can easily be between 20ms and 100ms.\nI would definitely look into using some kind of caching.\n", "There are some ways to reduce the query volume.\n\nUse .filter() and .all() to get a bunch of things; pick and choose in the view function (or template via {%if%}). Python can process a batch of rows faster than MySQL. \n\"But I could send too much to the template\". True, but you'll execute fewer SQL requests. Measure to see which is better.\nThis is what you used to do when you wrote SQL. It's not wrong -- it doesn't break the ORM -- but it optimizes the underlying DB work and puts the processing into the view function and the template.\nAvoid query navigation in the template. When you do {{foo.bar.baz.quux}}, SQL is used to get the bar associated with foo, then the baz associated with the bar, then the quux associated with baz. You may be able to reduce this query business with some careful .filter() and Python processing to assemble a useful tuple in the view function.\nAgain, this was something you used to do when you hand-crafted SQL. In this case, you gather larger batches of ORM-managed objects in the view function and do your filtering in Python instead of via a lot of individual ORM requests. \nThis doesn't break the ORM. 
It changes the usage profile from lots of little queries to a few bigger queries.\n\n", "There is always overhead in database calls, in your case the overhead is not that bad because the application and database are on the same machine so there is no network latency but there is still a significant cost.\nWhen you make a request to the database it has to prepare to service that request by doing a number of things including: \n\nAllocating resources (memory buffers, temp tables etc) to the database server connection/thread that will handle the request, \nDe-serializing the sql and parameters (this is necessary even on one machine as this is an inter-process request unless you are using an embeded database)\nChecking whether the query exists in the query cache if not optimise it and put it in the cache.\n\n\nNote also that if your queries are not parametrised (that is the values are not separated from the SQL) this may result in cache misses for statements that should be the same meaning that each request results in the query being analysed and optimized each time.\n\nProcess the query.\nPrepare and return the results to the client.\n\nThis is just an overview of the kinds of things the most database management systems do to process an SQL request. You incur this overhead 500 times even if the the query itself runs relatively quickly. Bottom line database interactions even to local database are not as cheap as you might expect.\n" ]
[ 4, 3, 2, 1 ]
[]
[]
[ "django", "mysql", "overhead", "python" ]
stackoverflow_0001689031_django_mysql_overhead_python.txt
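The select_related() tactic from the first answer is the usual first fix for a 500-query page. A sketch, assuming a hypothetical Photo model with an owner ForeignKey (this only runs inside a Django project):

    # One query for the photos, plus one more per photo for .owner:
    for photo in Photo.objects.all():
        print photo.owner.username

    # One joined query fetches everything up front:
    for photo in Photo.objects.select_related('owner'):
        print photo.owner.username   # no extra query here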
Q: handling multiple returned objects I have a contact/address app that allows users to search the database for contact entries. The current view will return an object (Entry()) and display its fields. The code is as follows: def search_page(request): form = SearchForm() entrylinks = [] show_results = True if request.GET.has_key('query'): show_results = True query = request.GET['query'].strip() if query: form = SearchForm({'query' : query}) entrylinks = \ Entry.objects.filter (name__icontains=query)[:10] variables = RequestContext(request, { 'form': form, 'entrylinks': entrylinks, 'show_results': show_results }) return render_to_response('search.html', variables) I'd like to add an "if" statement to the view that would recognize when there are multiple objects returned (people with the same name in the database) and in such a case, divert the returned objects to a different template (which would simply list the returned objects so the user could choose which he/she'd prefer). Can anyone show what such a statement would look like? Thanks. A: The object returned by Entry.objects.filter (a QuerySet) has a length, meaning you can call len(entrylinks) to get the number of records returned. Thus, you can do something like this: if len(entrylinks) == 1: tpl = "search.html" else: tpl = "select.html" variables = RequestContext(request, { "form": form, "entrylinks": entrylinks, "show_results": show_results, }) return render_to_response(tpl, variables) A: Rather than using len(entrylinks), you should use entrylinks.count(). The built-in count method is much faster and can save you a database query.
handling multiple returned objects
I have a contact/address app that allows users to search the database for contact entries. The current view will return an object (Entry()) and display its fields. The code is as follows: def search_page(request): form = SearchForm() entrylinks = [] show_results = True if request.GET.has_key('query'): show_results = True query = request.GET['query'].strip() if query: form = SearchForm({'query' : query}) entrylinks = \ Entry.objects.filter (name__icontains=query)[:10] variables = RequestContext(request, { 'form': form, 'entrylinks': entrylinks, 'show_results': show_results }) return render_to_response('search.html', variables) I'd like to add an "if" statement to the view that would recognize when there are multiple objects returned (people with the same name in the database) and in such a case, divert the returned objects to a different template (which would simply list the returned objects so the user could choose which he/she'd prefer). Can anyone show what such a statement would look like? Thanks.
[ "The object returned by Entry.objects.filter (a QuerySet) has a length, meaning you can call len(entrylinks) to get the number of records returned. Thus, you can do something like this:\nif len(entrylinks) == 1:\n tpl = \"search.html\"\nelse:\n tpl = \"select.html\"\nvariables = RequestContext(request, {\n \"form\": form,\n \"entrylinks\": entrylinks,\n \"show_results\": show_results,\n})\nreturn render_to_response(tpl, variables)\n\n", "Rather than using len(entrylinks), you should use entrylinks.count(). The built-in count method is much faster and can save you a database query.\n" ]
[ 2, 1 ]
[]
[]
[ "django", "django_views", "python" ]
stackoverflow_0001689122_django_django_views_python.txt
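Combining the two answers, the branch the question asks for might look like the sketch below (the template names are illustrative); count() issues a COUNT query instead of fetching the rows that len() would force:

    entrylinks = Entry.objects.filter(name__icontains=query)[:10]
    if entrylinks.count() > 1:
        tpl = 'select.html'   # several matches: let the user pick one
    else:
        tpl = 'search.html'   # zero or one match: show fields as before
    return render_to_response(tpl, variables)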
Q: In Python 2.6.4, why do I get a syntax error for a function call, of which the function is defined and works perfectly on its own? This happens in IDLE and Windows 7 RC1 (if that helps). Here is the module: from math import * from TurtleWorld import * world = TurtleWorld() bob = Turtle() bob.delay = 0.1 def polyline(turtle, length, n, angle): for i in range(n): fd(turtle, length) rt(turtle, angle) def polygon(turtle, length, n): """ polygon uses a turtle to draw a polygon with n sides of the given length. """ angle = 360.0/n polyline(turtle, length, n, angle) def spokes(turtle, length_of_spoke, number_of_spokes): angle = 360.0/number_of_spokes for i in range(number_of_spokes): turtle.fd(length_of_spoke) turtle.pd turtle.bk(length_of_spoke) turtle.rt(angle) turtle.pu def pie(turtle, length_of_side, number_of_sides): """pie uses a turtle to draw a polygon with sides of the given length and with the given number of sides. """ angle = 360.0/number_of_sides length_of_spoke = length_of_side/(2*sin(pi/180*angle/2) spokes(turtle, length_of_spoke, number_of_sides) turtle.pd turtle.fd(length_of_spoke) turtle.lt(270-angle/2) polygon(turtle, length_of_side, number_of_sides) spokes(bob, 30, 11) wait_for_user() When I run the program, I get the error: There is an error in your program: invalid syntax. IDLE then highlights the "spokes" word within the pie-function. If I comment-out the whole pie-function, the program runs perfectly. A: At quick glance are you missing a closing parenthesis at the end of the line before you call spokes()? length_of_spoke = length_of_side/(2*sin(pi/180*angle/2)) instead of length_of_spoke = length_of_side/(2*sin(pi/180*angle/2) A: The previous line is missing a closing parenthesis. It should read like this instead: length_of_spoke = length_of_side/(2*sin(pi/180*angle/2)) A: There is a closing parenthesis missing in the previous line: length_of_spoke = length_of_side/(2*sin(pi/180*angle/2) A: Others have already pointed out the actual syntax error, so I won't say any more about that. One thing I will add though is that if I get a syntax error, the first place I look is on the lines before the function. Usually it's something like a paren missing or a comma in the wrong place.
In Python 2.6.4, why do I get a syntax error for a function call, of which the function is defined and works perfectly on its own?
This happens in IDLE and Windows 7 RC1 (if that helps). Here is the module: from math import * from TurtleWorld import * world = TurtleWorld() bob = Turtle() bob.delay = 0.1 def polyline(turtle, length, n, angle): for i in range(n): fd(turtle, length) rt(turtle, angle) def polygon(turtle, length, n): """ polygon uses a turtle to draw a polygon with n sides of the given length. """ angle = 360.0/n polyline(turtle, length, n, angle) def spokes(turtle, length_of_spoke, number_of_spokes): angle = 360.0/number_of_spokes for i in range(number_of_spokes): turtle.fd(length_of_spoke) turtle.pd turtle.bk(length_of_spoke) turtle.rt(angle) turtle.pu def pie(turtle, length_of_side, number_of_sides): """pie uses a turtle to draw a polygon with sides of the given length and with the given number of sides. """ angle = 360.0/number_of_sides length_of_spoke = length_of_side/(2*sin(pi/180*angle/2) spokes(turtle, length_of_spoke, number_of_sides) turtle.pd turtle.fd(length_of_spoke) turtle.lt(270-angle/2) polygon(turtle, length_of_side, number_of_sides) spokes(bob, 30, 11) wait_for_user() When I run the program, I get the error: There is an error in your program: invalid syntax. IDLE then highlights the "spokes" word within the pie-function. If I comment-out the whole pie-function, the program runs perfectly.
[ "At quick glance are you missing a closing parenthesis at the end of the line before you call spokes()?\nlength_of_spoke = length_of_side/(2*sin(pi/180*angle/2))\n\ninstead of\nlength_of_spoke = length_of_side/(2*sin(pi/180*angle/2)\n\n", "The previous line is missing a closing parenthesis. It should read like this instead:\nlength_of_spoke = length_of_side/(2*sin(pi/180*angle/2))\n\n", "There is a closing parenthesis missing in the previous line:\nlength_of_spoke = length_of_side/(2*sin(pi/180*angle/2)\n\n", "Others have already pointed out the actual syntax error, so I won't say any more about that. One thing I will add though is that if I get a syntax error, the first place I look is on the lines before the function. Usually it's something like a paren missing or a comma in the wrong place.\n" ]
[ 3, 2, 2, 2 ]
[]
[]
[ "python", "python_idle", "syntax" ]
stackoverflow_0001689594_python_python_idle_syntax.txt
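The corrected line, plus a standalone demonstration of the fix: because Python keeps scanning past the end of a line while a parenthesis is still open, the error is reported at the next statement rather than on the faulty line itself:

    from math import sin, pi

    angle = 360.0 / 5
    length_of_side = 30.0
    # The line from pie(), now with its closing parenthesis:
    length_of_spoke = length_of_side / (2 * sin(pi / 180 * angle / 2))
    print length_of_spoke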
Q: Tool like 2to3, except for merges I maintain a fork of my project for Python 3.1. When I initially made the port from 2.6, I used 2to3, but now I constantly have to merge new code from the 2.6 fork into the 3.1 fork. How can I perform the 2to3 operation on these merges automatically? (I use git, if it matters.) A: Hmmm, you are in a tough position. Perhaps you could run 2to3 on the 2.6 fork, then merge the results of that into your 3.1 branch? Alternatively, perhaps this pain will make you reconsider your strategy of maintaining two distinct branches for the two Python versions? I've had good luck using a single codebase for both versions: Running the same code on Python 2.x and 3.x
Tool like 2to3, except for merges
I maintain a fork of my project for Python 3.1. When I initially made the port from 2.6, I used 2to3, but now I constantly have to merge new code from the 2.6 fork into the 3.1 fork. How can I perform the 2to3 operation on these merges automatically? (I use git, if it matters.)
[ "Hmmm, you are in a tough position. Perhaps you could run 2to3 on the 2.6 fork, then merge the results of that into your 3.1 branch?\nAlternatively, perhaps this pain will make you reconsider your strategy of maintaining two distinct branches for the two Python versions? I've had good luck using a single codebase for both versions: Running the same code on Python 2.x and 3.x\n" ]
[ 7 ]
[]
[]
[ "merge", "python", "python_2to3", "python_3.x" ]
stackoverflow_0001689548_merge_python_python_2to3_python_3.x.txt
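The single-codebase strategy the answer links to usually starts with a small compatibility shim rather than any merging at all; a minimal sketch that runs unchanged on 2.6 and 3.1:

    import sys

    if sys.version_info[0] >= 3:
        text_type = str
        def iteritems(d):
            return iter(d.items())
    else:
        text_type = unicode
        def iteritems(d):
            return d.iteritems()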
Q: Generating Separate Output files in Hadoop Streaming Using only a mapper (a Python script) and no reducer, how can I output a separate file with the key as the filename, for each line of output, rather than having long files of output? A: The input and outputformat classes can be replaced by use of the -inputformat and -outputformat commandline parameters. One example of how to do this can be found in the dumbo project, which is a python framework for writing streaming jobs. It has a feature for writing to multiple files, and internally it replaces the output format with a class from its sister project, feathers - fm.last.feathers.output.MultipleTextFiles. The reducer then needs to emit a tuple as key, with the first component of the tuple being the path to the directory where the files with the key/value pairs should be written. There might still be multiple files, that depends on the number of reducers and the application. I recommend looking into dumbo, it has many features that makes it easier to write Map/Reduce programs on Hadoop in python. A: Is it possible to replace the outputFormatClass, when using streaming? In a native Java implementation you would extend the MultipleTextOutputFormat class and modify the method that names the output file. Then define your implementation as new outputformat with JobConf's setOutputFormat method you should verify, if this is possible in streaming too. I donno :-/ A: You can either write to a text file on the local filesystem using python file functions or if you want to use HDFS use the Thrift API.
Generating Separate Output files in Hadoop Streaming
Using only a mapper (a Python script) and no reducer, how can I output a separate file with the key as the filename, for each line of output, rather than having long files of output?
[ "The input and outputformat classes can be replaced by use of the -inputformat and -outputformat commandline parameters.\nOne example of how to do this can be found in the dumbo project, which is a python framework for writing streaming jobs. It has a feature for writing to multiple files, and internally it replaces the output format with a class from its sister project, feathers - fm.last.feathers.output.MultipleTextFiles.\nThe reducer then needs to emit a tuple as key, with the first component of the tuple being the path to the directory where the files with the key/value pairs should be written. There might still be multiple files, that depends on the number of reducers and the application.\nI recommend looking into dumbo, it has many features that makes it easier to write Map/Reduce programs on Hadoop in python.\n", "Is it possible to replace the outputFormatClass, when using streaming?\nIn a native Java implementation you would extend the MultipleTextOutputFormat class and modify the method that names the output file. Then define your implementation as new outputformat with JobConf's setOutputFormat method\nyou should verify, if this is possible in streaming too. I donno :-/\n", "You can either write to a text file on the local filesystem using python file functions or if you want to use HDFS use the Thrift API.\n" ]
[ 7, 1, 1 ]
[]
[]
[ "hadoop", "mapreduce", "python", "streaming" ]
stackoverflow_0001626786_hadoop_mapreduce_python_streaming.txt
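A sketch of the local-filesystem option from the last answer: a streaming mapper that appends each value to a file named after its key. It assumes tab-separated key/value lines and keys that are safe as filenames, and note the files land on each task node's local disk, not in HDFS:

    #!/usr/bin/env python
    import sys

    handles = {}
    for line in sys.stdin:
        key, sep, value = line.rstrip('\n').partition('\t')
        if key not in handles:
            handles[key] = open(key + '.out', 'a')
        handles[key].write(value + '\n')
    for f in handles.values():
        f.close()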
Q: What technologies are good for sending encapsulated data, and later converting it, between Python and Objective-C? I'm attempting to create a client/server web-app. The client software is written in Objective-C (Mac), and the server software is written in Python (Linux). I'd like to encapsulate object data on either side, and send it across the internet to the other side. This will include standard types such as strings, doubles, and data-structures (arrays, dictionaries), along with binary files. My question is, how would you recommend me going about doing this? What technologies are good for sending encapsulated data, and later converting it, between two different programming languages? Specifically Objective-C and Python? Python has pickle/cPickle which will allow you to take Python objects and encapsulate them into a file, but un-pickling them leaves you with Python objects and not Obj-C objects. I've also seen XML and JSON, though I'd still be stuck with the issue of converting objects, such as a Python Dictionary into an Obj-C NSDictionary, or vice-versa. From what I've gathered, XML/JSON may be difficult to use with binary data, requiring converting it into text first before encapsulating it. I may be sending large amounts of binary data (50-100MB per request), so a text-conversion would significantly increase the size of the file, something I'd like to avoid. Thanks for any help! A: If you're talking about a traditional browser based web app, then I'd probably stick with JSON or XML serialization. For anything else I'd suggest drum roll please... Google Protocol Buffers Small, fast, and has a decent set of providers for different languages. A: Might check out a piece of technology called Thrift. It is an apache based service built specifically for communicating between different languages. EDIT - link A: You could use PyObjC on the client side and some kind of python based serialization. There are facilities in PyObjC for embedding in an existing Cocoa application and for communicating with the Python runtime.
What technologies are good for sending encapsulated data, and later converting it, between Python and Objective-C?
I'm attempting to create a client/server web-app. The client software is written in Objective-C (Mac), and the server software is written in Python (Linux). I'd like to encapsulate object data on either side, and send it across the internet to the other side. This will include standard types such as strings, doubles, and data-structures (arrays, dictionaries), along with binary files. My question is, how would you recommend me going about doing this? What technologies are good for sending encapsulated data, and later converting it, between two different programming languages? Specifically Objective-C and Python? Python has pickle/cPickle which will allow you to take Python objects and encapsulate them into a file, but un-pickling them leaves you with Python objects and not Obj-C objects. I've also seen XML and JSON, though I'd still be stuck with the issue of converting objects, such as a Python Dictionary into an Obj-C NSDictionary, or vice-versa. From what I've gathered, XML/JSON may be difficult to use with binary data, requiring converting it into text first before encapsulating it. I may be sending large amounts of binary data (50-100MB per request), so a text-conversion would significantly increase the size of the file, something I'd like to avoid. Thanks for any help!
[ "If you're talking about a traditional browser based web app, then I'd probably stick with JSON or XML serialization.\nFor anything else I'd suggest drum roll please...\nGoogle Protocol Buffers\nSmall, fast, and has a decent set of providers for different languages.\n", "Might check out a piece of technology called Thrift. It is an apache based service built specifically for communicating between different languages.\nEDIT - link\n", "You could use PyObjC on the client side and some kind of python based serialization. There are facilities in PyObjC for embedding in an existing Cocoa application and for communicating with the Python runtime.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "encapsulation", "objective_c", "python" ]
stackoverflow_0001690080_encapsulation_objective_c_python.txt
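For the JSON route, binary payloads are normally base64-encoded, which costs roughly 33% size inflation -- exactly the overhead the question worries about, and one reason Protocol Buffers or Thrift handle large blobs better. A sketch of the Python side ('photo.jpg' is a hypothetical file; the Mac side would decode with any Objective-C JSON library):

    import json, base64

    payload = {
        'title': 'vacation photo',
        'rating': 4.5,
        'tags': ['beach', 'sunset'],
        'data': base64.b64encode(open('photo.jpg', 'rb').read()),
    }
    wire = json.dumps(payload)   # ship over HTTP; decode the base64 on arrival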