content: string (length 85 to 101k)
title: string (length 0 to 150)
question: string (length 15 to 48k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 35 to 137)
Q: Passing pre_delete() or post_delete() signal arguments? I am using signals to perform an action after an object has been deleted; however, sometimes I want to perform a different action (not the default one) depending on an argument. Is there a way to pass an argument to my signal catcher? Or will I have to abandon the signal and instead hard-code what I want to do in the models? What I would like to do is something like this: >>> MyModelInstance.delete() # default pre_delete() signal is run, in this case, an email is sent >>> MyModelInstance.delete(send_email=False) # same signal is run, however, no email gets sent Any ideas on the best approach? A: I don't think you need to hardcode your actions in the model - you can still use signals. But you will need to override delete() to at the very least accept the send_email parameter and - since I don't think you can pass extra parameters into post_delete() - trigger your own custom signal. Something like this: (writing from memory, untested!!!) import django.dispatch your_signal = django.dispatch.Signal(providing_args=["send_email",]) def your_callback(sender, **kwargs): print kwargs['send_email'] your_signal.connect(your_callback) class YourModel(models.Model): ... def delete(self, send_email=True): super(YourModel, self).delete() your_signal.send(sender=self, send_email=send_email) ... Disclaimer: don't know if that is the best approach.
Passing pre_delete() or post_delete() signal arguments?
I am using signals to perform an action after an object has been deleted; however, sometimes I want to perform a different action (not the default one) depending on an argument. Is there a way to pass an argument to my signal catcher? Or will I have to abandon the signal and instead hard-code what I want to do in the models? What I would like to do is something like this: >>> MyModelInstance.delete() # default pre_delete() signal is run, in this case, an email is sent >>> MyModelInstance.delete(send_email=False) # same signal is run, however, no email gets sent Any ideas on the best approach?
[ "I don't think you need to hardcode your actions in the model - you can still use signals. But you will need to override delete() to at the very least accept the send_email parameter and - since I don't think you can pass extra parameters into post_delete() - trigger your own custom signal.\nSomething like this: (writing from memory, untested!!!)\nimport django.dispatch\nyour_signal = django.dispatch.Signal(providing_args=[\"send_email\",])\n\ndef your_callback(sender, **kwargs):\n print send_email\n\nyour_signal.connect(your_callback)\n\nclass YourModel(models.Model):\n ...\n def delete(self, send_email=True):\n super(YourModel, self).delete()\n your_signal.send(sender=self, send_email=send_email)\n ...\n\nDisclaimer: don't know if that is the best approach. \n" ]
[ 4 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001643332_django_python.txt
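A runnable sketch of the approach in the answer above; the model, field, and signal names are illustrative, and it assumes the old-style Django Signal API (providing_args) and Python 2 print syntax, matching the era of the question:
import django.dispatch
from django.db import models

# Custom signal carrying the extra flag (providing_args is documentary only).
post_delete_with_options = django.dispatch.Signal(providing_args=["send_email"])

def email_on_delete(sender, send_email=True, **kwargs):
    if send_email:
        print "sending deletion email for %r" % sender

post_delete_with_options.connect(email_on_delete)

class MyModel(models.Model):
    name = models.CharField(max_length=100)

    def delete(self, send_email=True):
        super(MyModel, self).delete()
        # Fire the custom signal only after the normal delete machinery has run.
        post_delete_with_options.send(sender=self, send_email=send_email)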
Q: Why do I get this error when I try to print something in PuTTY? UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 38: ordinal not in range(128) I am downloading a website and then printing its contents...simple. Do I have to encode it somehow? A: Try utf-8 for a start. The website you download might have a different charset than ANSI, and those extra characters cannot be printed on the console. So where you do print text, do print text.encode('utf-8') instead. A: Make sure you have PuTTY configured to accept UTF-8 encoded data. A: Printing stuff to standard output can be problematic, because Python often doesn't know what character encoding the system is using. In the face of this Python 2 assumes the most conservative choice, US ASCII. So when you try to print a string that contains characters that aren't in ASCII, like the U+2019 smart quote ’, it gives you this error. In Python 3 the default charset guess for sys.stdout.encoding is UTF-8. If you are sure that your standard output (i.e. PuTTY in your case) should accept UTF-8, then yes you can encode it explicitly: print content.encode('UTF-8')
Why do I get this error when I try to print something in PuTTY?
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 38: ordinal not in range(128) I am downloading a website and then printing its contents...simple. Do I have to encode it somehow?
[ "Try utf-8 for start. Website you download might have different charset than ANSI and those extra characters can not be printed on console.\nSo in place where you do print text do print text.encode('utf-8') instead.\n", "Make sure you have Putty configured to accept UTF-8 encoded data.\n", "printing stuff to standard output can be problematic, because Python often doesn't know what character encoding the system is using. In the face of this Python 2 assumes the most conservative choice, US ASCII. So when you try to print a string that contains characters that aren't in ASCII, like the U+2019 smart quote ’, it gives you this error.\nIn Python 3 the default charset guess for sys.stdout.encoding is UTF-8. If you are sure that your standard output (ie. PuTTY in your case) should accept UTF-8, then yes you can encode it explicitly:\nprint content.encode('UTF-8')\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "ascii", "encoding", "python", "unicode" ]
stackoverflow_0001643023_ascii_encoding_python_unicode.txt
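A minimal Python 2 sketch of the advice above; it assumes the terminal (PuTTY) is set to UTF-8 translation, and falls back to UTF-8 when stdout's encoding is unknown (e.g. when output is piped):
# -*- coding: utf-8 -*-
import sys

text = u"a smart quote: \u2019"

# sys.stdout.encoding is None when output is redirected; pick a fallback.
encoding = sys.stdout.encoding or "utf-8"
print text.encode(encoding, "replace")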
Q: Killing Python webservers I am looking for a simple Python webserver that is easy to kill from within code. Right now, I'm playing with Bottle, but I can't find any way at all to kill it in code. If you know how to kill Bottle (in code, no Ctrl+C) that would be super, but I'll take anything that's Python, simple, and killable. A: We use this. import os os._exit(3) To crash in a 'controlled' way. A: If you want to kill a process from Python, on a Unix-like platform, you can send signals equivalent to Ctrl-C at the console using Python's os module, e.g. import os import signal # Get this process's PID pid_of_process = os.getpid() # Send the interrupt signal to this process os.kill(pid_of_process, signal.SIGINT) A: Raise an exception and handle it in main, or use sys.exit. A: Try putting import sys at the top and the command sys.exit(0) in the code that handles the "kill request".
Killing Python webservers
I am looking for a simple Python webserver that is easy to kill from within code. Right now, I'm playing with Bottle, but I can't find any way at all to kill it in code. If you know how to kill Bottle (in code, no Ctrl+C) that would be super, but I'll take anything that's Python, simple, and killable.
[ "We use this.\nimport os\nos._exit(3)\n\nTo crash in a 'controlled' way.\n", "If you want to kill a process from Python, on a Unix-like platform, you can send signals equivalent to Ctrl-C at the console using Pythons os module e.g. \n# Get this processes PID\npid_of_process = os.getpid()\n# Send the interrupt signal to this process\nos.kill(pid_of_process, signal.SIGINT)\n\n", "Raise exeption and handle it in main or use sys.exit\n", "Try putting \nimport sys\n\nat the top and the command\nsys.exit(0)\n\nIn the code that handles the \"kill request\". \n" ]
[ 2, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001643362_python.txt
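One concrete way to get a killable pure-Python web server, as a sketch (standard library only; shutdown() requires Python 2.6+): serve_forever() runs in a worker thread, and shutdown() stops it cleanly from code, no Ctrl+C needed.
import threading
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return ["hello\n"]

httpd = make_server("localhost", 8000, app)
server_thread = threading.Thread(target=httpd.serve_forever)
server_thread.start()
# ... later, from anywhere in the program:
httpd.shutdown()
server_thread.join()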
Q: Forward slash in a Python regex I'm trying to use a Python regex to find a mathematical expression in a string. The problem is that the forward slash seems to do something unexpected. I'd have thought that [\w\d\s+-/*]* would work for finding math expressions, but it finds commas too for some reason. A bit of experimenting reveals that forward slashes are the culprit. For example: >>> import re >>> re.sub(r'[/]*', 'a', 'bcd') 'abacada' Apparently forward slashes match between characters (even when it is in a character class, though only when the asterisk is present). Back slashes do not escape them. I've hunted for a while and not found any documentation on it. Any pointers? A: Look here for documentation on Python's re module. I think it is not the /, but rather the - in your first character class: [+-/] matches +, / and any ASCII value between, which happens to include the comma. Maybe this hint from the docs helps: If you want to include a ']' or a '-' inside a set, precede it with a backslash, or place it as the first character. A: You are saying it to replace zero or more slashes with 'a'. So it does replace each "no character" with 'a'. :) You probably meant [/]+, i.e. one or more slashes. EDIT: Read Ber's answer for a solution to the original problem. I didn't read the whole question carefully enough. A: r'[/]*' means "Match 0 or more forward-slashes". There are exactly 0 forward-slashes between 'b' & 'c' and between 'c' & 'd'. Hence, those matches are replaced with 'a'. A: The * matches its argument zero or more times, and thus matches the empty string. The empty string is (logically) between any two consecutive characters. Hence >>> import re >>> re.sub(r'x*', 'a', 'bcd') 'abacada' As for the forward slash, it receives no special treatment: >>> re.sub(r'/', 'a', 'b/c/d') 'bacad' The documentation describes the syntax of regular expressions in Python. As you can see, the forward slash has no special function. The reason that [\w\d\s+-/*]* also finds commas is because inside square brackets the dash - denotes a range. In this case you don't want all characters between + and /, but the literal characters +, - and /. So write the dash as the last character: [\w\d\s+/*-]*. That should fix it.
Forward slash in a Python regex
I'm trying to use a Python regex to find a mathematical expression in a string. The problem is that the forward slash seems to do something unexpected. I'd have thought that [\w\d\s+-/*]* would work for finding math expressions, but it finds commas too for some reason. A bit of experimenting reveals that forward slashes are the culprit. For example: >>> import re >>> re.sub(r'[/]*', 'a', 'bcd') 'abacada' Apparently forward slashes match between characters (even when it is in a character class, though only when the asterisk is present). Back slashes do not escape them. I've hunted for a while and not found any documentation on it. Any pointers?
[ "Look here for documentation on Python's re module.\nI think it is not the /, but rather the - in your first character class: [+-/] matches +, / and any ASCII value between, which happen to include the comma.\nMaybe this hint from the docs help:\n\nIf you want to include a ']' or a '-' inside a set, precede it with a backslash, or place it as the first character.\n\n", "You are saying it to replace zero or more slashes with 'a'. So it does replace each \"no character\" with 'a'. :)\nYou probably meant [/]+, i.e. one or more slashes.\nEDIT: Read Ber's answer for a solution to the original problem. I didn't read the whole question carefully enough.\n", "r'[/]*' means \"Match 0 or more forward-slashes\". There are exactly 0 forward-slashes between 'b' & 'c' and between 'c' & 'd'. Hence, those matches are replaced with 'a'.\n", "The * matches its argument zero or more times, and thus matches the empty string. The empty string is (logically) between any two consecutive characters. Hence\n>>> import re\n>>> re.sub(r'x*', 'a', 'bcd')\n'abacada'\n\nAs for the forward slash, it receives no special treatment:\n>>> re.sub(r'/', 'a', 'b/c/d')\n'bacad'\n\nThe documentation describes the syntax of regular expressions in Python. As you can see, the forward slash has no special function.\nThe reason that [\\w\\d\\s+-/*]* also finds comma's, is because inside square brackets the dash - denotes a range. In this case you don't want all characters between + and /, but a the literal characters +, - and /. So write the dash as the last character: [\\w\\d\\s+/*-]*. That should fix it.\n" ]
[ 31, 9, 4, 2 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001643772_python_regex.txt
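A short demonstration of the range problem described above: '+-/' inside a character class spans ASCII 0x2B through 0x2F, which includes the comma (0x2C); moving '-' to the end of the class makes it literal.
import re

print re.findall(r'[+-/]', '1,2')    # [','] -- the comma falls inside the +..-../ range
print re.findall(r'[+/*-]', '1,2')   # [] -- the trailing '-' is literal, so the comma no longer matches
print re.sub(r'[\w\d\s+/*-]+', 'EXPR', 'a + b/c, end')   # 'EXPR,EXPR'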
Q: creating a .mat file from python I have a variable exon = [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10]]]. I would like to create a mat file like the following >> exon : [3*2 double] [2*2 double] When I use the Python code below to do the same, it shows an error message. Here is my Python code: import scipy.io exon = [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10]]] scipy.io.savemat('/tmp/out.mat', mdict={'exon': (exon[0], exon[1])}) It will be great if anyone can give a suggestion for the same. Thanks in advance Vipin T S A: You seem to want two different arrays linked to the same variable name in Matlab. That is not possible. In MATLAB you can have cell arrays, or structs, which contain other arrays, but you cannot have just a tuple of arrays assigned to a single variable (which is what you have in mdict={'exon': (exon[0], exon[1])}) - there is no concept of a tuple in Matlab. You will also need to make your objects numpy arrays: import numpy as np exon = [ np.array([[1, 2], [3, 4], [5, 6]]), np.array([[7, 8], [9, 10]]) ] There is scipy documentation here with details of how to save different Matlab types, but assuming you want a cell array: obj_arr = np.zeros((2,), dtype=np.object) obj_arr[0] = exon[0] obj_arr[1] = exon[1] scipy.io.savemat('/tmp/out.mat', mdict={'exon': obj_arr}) this will result in a 1x2 cell array in MATLAB; or possibly (untested): obj_arr = np.array(exon, dtype=np.object) A: Sage is an open source mathematics software which aims at bundling together the python syntax and the python interpreter with other tools like Matlab, Octave, Mathematica, etc... Maybe you want to have a look at it: http://www.sagemath.org/doc/tutorial/index.html http://www.sagemath.org/
creating a .mat file from python
I have a variable exon = [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10]]]. I would like to create a mat file like the following >> exon : [3*2 double] [2*2 double] When I use the Python code below to do the same, it shows an error message. Here is my Python code: import scipy.io exon = [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10]]] scipy.io.savemat('/tmp/out.mat', mdict={'exon': (exon[0], exon[1])}) It will be great if anyone can give a suggestion for the same. Thanks in advance Vipin T S
[ "You seem to want two different arrays linked to same variable name in Matlab. That is not possible. In MATLAB you can have cell arrays, or structs, which contain other arrays, but you cannot have just a tuple of arrays assigned to a single variable (which is what you have in mdict={'exon': (exon[0], exon1)) - there is no concept of a tuple in Matlab.\nYou will also need to make your objects numpy arrays:\nimport numpy as np\nexon = [ np.array([[1, 2], [3, 4], [5, 6]]), np.array([[7, 8], [9, 10]]) ]\n\nThere is scipy documentation here with details of how to save different Matlab types, but assuming you want cell array:\nobj_arr = np.zeros((2,), dtype=np.object)\nobj_arr[0] = exon[0]\nobj_arr[1] = exon[1]\nscipy.io.savemat('/tmp/out.mat', mdict={'exon': obj_arr})\n\nthis will result in the following at matlab:\n\nor possibly (untested):\nobj_arr = np.array(exon, dtype=np.object)\n\n", "Sage is an open source mathematics software which aims at bundling together the python syntax and the python interpreter with other tools like Matlab, Octave, Mathematica, etc...\nMaybe you want to have a look at it:\n\nhttp://www.sagemath.org/doc/tutorial/index.html\nhttp://www.sagemath.org/\n\n" ]
[ 10, 1 ]
[]
[]
[ "mat_file", "python", "scipy" ]
stackoverflow_0001526002_mat_file_python_scipy.txt
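A runnable sketch of the accepted answer's cell-array approach; scipy saves a numpy object array as a MATLAB cell array, which yields exactly the [3x2 double] [2x2 double] layout the question asks for.
import numpy as np
import scipy.io

exon = [[[1, 2], [3, 4], [5, 6]], [[7, 8], [9, 10]]]

# An object array holding unequal-shaped arrays becomes a cell array in MATLAB.
cells = np.zeros((len(exon),), dtype=object)
for i, block in enumerate(exon):
    cells[i] = np.array(block)

scipy.io.savemat('/tmp/out.mat', mdict={'exon': cells})
# In MATLAB: exon is a 1x2 cell, exon{1} is 3x2 double, exon{2} is 2x2 double.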
Q: Simple AtomPub server library What simple AtomPub server libraries with file- or DB-based backends can you recommend? Unix-style servers that "do one thing, do it well" are especially welcome. Maybe even libraries in Python? A: Maybe this? http://atomserver.codehaus.org/ If someone is looking for a library to use in building Atompub into an existing service, they should definitely use Abdera directly. AtomServer, by contrast, is a full java web application that can be up and running in a few minutes by configuring a database and a few XML configuration files. It addresses all of the metadata and content management pieces that Abdera doesn't, and it's undergone a lot of battle-hardening to make it rock-solid and performant. Our goal moving forward is to make AtomServer easily interoperable with any spec-compliant Atom Client, while making the deployment of a server as easy as possible, with as little coding as possible. http://www.infoq.com/articles/atomserver http://www.infoq.com/presentations/Atom-Server-Berry-Jacob A: amplee is an AtomPub library and server in Python. It hasn't been actively developed since 2009. I am not aware of projects that use it. The link was found via Dan Diephouse. A: AtomServer is certainly a mature Atom server framework: http://atomserver.codehaus.org/ AtomServer is a fleshed-out Abdera server (Abdera is an abstract library that can be used to create a server - but in itself is not a server). AtomServer is covered by a series of infoQ articles starting at: http://www.infoq.com/articles/atomserver Careful with the articles, however. Tim Bray highlights shortcomings in them whereby AtomServer implementation details may be misinterpreted as Atom standards. Thus, if you're happy to adopt an existing set of Atom extensions, and are keen to get up and running quickly, AtomServer is for you. If you're keen to adhere to pure Atom, then either create your own Abdera instance or have a look into eXist at http://exist.sourceforge.net/atompub.html [You may get more targeted advice if you give a few more details on your requirement: language, data throughput, existing standards, ...]
Simple AtomPub server library
What simple AtomPub server libraries with file- or DB-based backends can you recommend? Unix-style servers that "do one thing, do it well" are especially welcome. Maybe even libraries in Python?
[ "Maybe this?\nhttp://atomserver.codehaus.org/\nIf someone is looking for a library to use in building Atompub into an existing service, they should definitely use Abdera directly. AtomServer, by contrast, is a full java web application that can be up and running in a few minutes by configuring a database and a few XML configuration files. It addresses all of the metadata and content management pieces that Abdera doesn't, and it's undergone a lot of battle-hardening to make it rock-solid and performant. Our goal moving forward is to make AtomServer easily interoperable with any spec-compliant Atom Client, while making the deployment of a server as easy as possible, with as little coding as possible.\nhttp://www.infoq.com/articles/atomserver\nhttp://www.infoq.com/presentations/Atom-Server-Berry-Jacob\n", "amplee is an AtomPub library and server in Python. It haven't been actively developed since 2009. I am not aware of projects that use it.\nThe link was found via Dan Diephouse.\n", "AtomServer is certainly a mature Atom server framework:\nhttp://atomserver.codehaus.org/\nAtomServer is in a fleshed out Abdera server (Abdera is an abstract library that can be used to create a server - but in itself is not a server).\nAtomServer is covered by a series of infoQ articles starting at:\nhttp://www.infoq.com/articles/atomserver\nCareful with the articles, however. The Tim Bray highlights shortcomings in them whereby AtomServer implementation details may be misinterpreted as Atom standards. Thus, if you're happy to adopt an existing set of Atom extensions, and are keen to get up and running quickly, AtomServer is for you. If you're keen to adhere to pure Atom, then either create your own Adbdera instance or have a look into eXist at http://exist.sourceforge.net/atompub.html\n[You may get more targeted advice if you give a few more details on your requirement: languange, data throughput, existing standads, ...]\n" ]
[ 2, 1, 1 ]
[]
[]
[ "atom_feed", "atompub", "http", "python" ]
stackoverflow_0001544196_atom_feed_atompub_http_python.txt
Q: Rose diagrams in Google Chart I searched around for ways to make rose diagrams (circular histograms) in Google Chart. The API has only radar diagrams, so it seems not technically possible (am I correct?). This wind rose example was the closest I came to a solution. Because I needed them, I figured out a way to fake them quickly using the Radar plots, Python and the Google-Chartwrapper library. There's a (non-technical) write-up available and the code is on Github. Before I take this further (i.e. clean code, abstract, waste more time, etc.), has anyone else seen examples of Rose diagrams in Google Chart that might be useful? (By the way, I know about matplotlib, etc. I'm using Python 3.x out of necessity and, as yet, the graphing libraries haven't caught up enough to use as I need them. See also SO question 418835.) A: Well, it looks like no-one has any other examples. I DID decide to play around with this to see what else is possible and created a quick proof of concept for time-based rose diagrams. It's just a silly thing to show your relative Twitter posting amount by time of day, but shows how Google Chart can be used for rose diagrams. The problem here, and a problem in general that this has illustrated with Google Chart, is that you cannot have embedded data contained in the chart, with drill-down capabilities, popups, etc. (as you could with a Javascript/Flex/other solution). It'd be nice if there were a Google Chart-style API that was javascript based to allow embedded data.
Rose diagrams in Google Chart
I searched around for ways to make rose diagrams (circular histograms) in Google Chart. The API has only radar diagrams, so it seems not technically possible (am I correct?). This wind rose example was the closest I came to a solution. Because I needed them, I figured out a way to fake them quickly using the Radar plots, Python and the Google-Chartwrapper library. There's a (non-technical) write-up available and the code is on Github. Before I take this further (i.e. clean code, abstract, waste more time, etc.), has anyone else seen examples of Rose diagrams in Google Chart that might be useful? (By the way, I know about matplotlib, etc. I'm using Python 3.x out of necessity and, as yet, the graphing libraries haven't caught up enough to use as I need them. See also SO question 418835.)
[ "Well, it looks like no-one has any other examples. I DID decide to play around with this to see what else is possible and created a quick proof of concept for time-based rose diagrams. It's just a silly thing to show your relative Twitter posting amount by time of day, but shows how Google Chart can be used for rose diagrams. The problem here, and a problem in general that this has illustrated with Google Chart, is that you cannot have embedded data contained in the chart, with drill-down capabilities, popups, etc. (as you could with a Javascript/Flex/other solution).\nIt'd be nice if there were a Google Chart-style API that was javascript based to allow embedded data.\n" ]
[ 1 ]
[]
[]
[ "google_visualization", "histogram", "python" ]
stackoverflow_0001368822_google_visualization_histogram_python.txt
Q: Django Custom Managers for User model How would I go about extending the default User model with custom managers? My app has many user types that will be defined using the built-in Groups model. So a User might be a client, a staff member, and so on. It would be ideal to be able to do something like: User.clients.filter(name='Test') To get all clients with a name of Test. I know how to do this using custom managers for user-defined models, but I'm not sure how to go about doing it to the User model while still keeping all the baked-in goodies, at least short of modifying the django source code itself which is a no no.... A: Yes, you can add a custom manager directly to the User class. This is monkeypatching, and it does make your code less maintainable (someone trying to figure out your code may have no idea where the User class acquired that custom manager, or where they could look to find it). In this case it's relatively harmless, as you aren't actually overriding any existing behavior of the User class, just adding something new. from django.contrib.auth.models import User User.add_to_class('clients', ClientsManager()) If you're using Django 1.1+, you could also subclass User with a proxy model; doesn't affect the database but would allow you to attach extra managers without the monkeypatch.
Django Custom Managers for User model
How would I go about extending the default User model with custom managers? My app has many user types that will be defined using the built-in Groups model. So a User might be a client, a staff member, and so on. It would be ideal to be able to do something like: User.clients.filter(name='Test') To get all clients with a name of Test. I know how to do this using custom managers for user-defined models, but I'm not sure how to go about doing it to the User model while still keeping all the baked-in goodies, at least short of modifying the django source code itself which is a no no....
[ "Yes, you can add a custom manager directly to the User class. This is monkeypatching, and it does make your code less maintainable (someone trying to figure out your code may have no idea where the User class acquired that custom manager, or where they could look to find it). In this case it's relatively harmless, as you aren't actually overriding any existing behavior of the User class, just adding something new.\nfrom django.contrib.auth.models import User\nUser.add_to_class('clients', ClientsManager())\n\nIf you're using Django 1.1+, you could also subclass User with a proxy model; doesn't affect the database but would allow you to attach extra managers without the monkeypatch.\n" ]
[ 18 ]
[ "You can user Profile for this\n\nAUTH_PROFILE_MODULE =\n 'accounts.UserProfile'\nWhen a user profile model has been\n defined and specified in this manner,\n each User object will have a method --\n get_profile() -- which returns the\n instance of the user profile model\n associated with that User.\n\nOr you can write your own authenticate backend. Added it in settings\n" ]
[ -2 ]
[ "django", "python" ]
stackoverflow_0001642779_django_python.txt
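A sketch combining both suggestions from the answer; the manager filters on group membership, the group name and lookup field are illustrative (the built-in User model has first_name/last_name rather than name), and get_query_set is the Django 1.x spelling:
from django.contrib.auth.models import User
from django.db import models

class ClientsManager(models.Manager):
    def get_query_set(self):
        return super(ClientsManager, self).get_query_set().filter(
            groups__name='client')

# Option 1: monkeypatch the manager onto User, as in the answer.
User.add_to_class('clients', ClientsManager())
# Usage: User.clients.filter(first_name='Test')

# Option 2: a proxy model (Django 1.1+), no monkeypatching needed.
class Client(User):
    objects = ClientsManager()
    class Meta:
        proxy = True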
Q: Is there any way to use a strftime-like function for dates before 1900 in Python? I didn't realize this, but apparently Python's strftime function doesn't support dates before 1900: >>> from datetime import datetime >>> d = datetime(1899, 1, 1) >>> d.strftime('%Y-%m-%d') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: year=1899 is before 1900; the datetime strftime() methods require year >= 1900 I'm sure I could hack together something myself to do this, but I figure the strftime function is there for a reason (and there also is a reason why it can't support pre-1900 dates). I need to be able to support dates before 1900. I'd just use str, but there's too much variation. In other words, it may or may not have microseconds or it may or may not have a timezone. Is there any solution to this? If it makes a difference, I'm doing this so that I can write the data to a text file and load it into a database using Oracle SQL*Loader. I essentially ended up doing Alex Martelli's answer. Here's a more complete implementation: >>> from datetime import datetime >>> d = datetime.now() >>> d = d.replace(microsecond=0, tzinfo=None) >>> str(d) '2009-10-29 11:27:27' The only difference is that str(d) is equivalent to d.isoformat(' '). A: isoformat works on datetime instances w/o limitation of range: >>> import datetime >>> x=datetime.datetime(1865, 7, 2, 9, 30, 21) >>> x.isoformat() '1865-07-02T09:30:21' If you need a different-format string it's not too hard to slice, dice and remix pieces of the string you get from isoformat, which is very consistent (YYYY-MM-DDTHH:MM:SS.mmmmmm, with the dot and following microseconds omitted if microseconds are zero). A: The documentation seems pretty clear about this: The exact range of years for which strftime() works also varies across platforms. Regardless of platform, years before 1900 cannot be used. So there isn't going to be a solution that uses strftime(). Luckily, it's pretty straightforward to do this "by hand": >>> "%02d-%02d-%02d %02d:%02d" % (d.year,d.month,d.day,d.hour,d.minute) '1899-01-01 00:00' A: mxDateTime can handle arbitrary dates. Python's time and datetime modules use UNIX timestamps internally, that's why they have limited range. In [5]: mx.DateTime.DateTime(1899) Out[5]: <mx.DateTime.DateTime object for '1899-01-01 00:00:00.00' at 154a960> In [6]: DateTime.DateTime(1899).Format('%Y-%m-%d') Out[6]: 1899-01-01 A: This is from the matplotlib source. Could provide a good starting point for rolling your own. def strftime(self, dt, fmt): fmt = self.illegal_s.sub(r"\1", fmt) fmt = fmt.replace("%s", "s") if dt.year > 1900: return cbook.unicode_safe(dt.strftime(fmt)) year = dt.year # For every non-leap year century, advance by # 6 years to get into the 28-year repeat cycle delta = 2000 - year off = 6*(delta // 100 + delta // 400) year = year + off # Move to around the year 2000 year = year + ((2000 - year)//28)*28 timetuple = dt.timetuple() s1 = time.strftime(fmt, (year,) + timetuple[1:]) sites1 = self._findall(s1, str(year)) s2 = time.strftime(fmt, (year+28,) + timetuple[1:]) sites2 = self._findall(s2, str(year+28)) sites = [] for site in sites1: if site in sites2: sites.append(site) s = s1 syear = "%4d" % (dt.year,) for site in sites: s = s[:site] + syear + s[site+4:] return cbook.unicode_safe(s) A: This is a "feature" of the underlying C time library. You may also have problems above 2038.
Is there any way to use a strftime-like function for dates before 1900 in Python?
I didn't realize this, but apparently Python's strftime function doesn't support dates before 1900: >>> from datetime import datetime >>> d = datetime(1899, 1, 1) >>> d.strftime('%Y-%m-%d') Traceback (most recent call last): File "<stdin>", line 1, in <module> ValueError: year=1899 is before 1900; the datetime strftime() methods require year >= 1900 I'm sure I could hack together something myself to do this, but I figure the strftime function is there for a reason (and there also is a reason why it can't support pre-1900 dates). I need to be able to support dates before 1900. I'd just use str, but there's too much variation. In other words, it may or may not have microseconds or it may or may not have a timezone. Is there any solution to this? If it makes a difference, I'm doing this so that I can write the data to a text file and load it into a database using Oracle SQL*Loader. I essentially ended up doing Alex Martelli's answer. Here's a more complete implementation: >>> from datetime import datetime >>> d = datetime.now() >>> d = d.replace(microsecond=0, tzinfo=None) >>> str(d) '2009-10-29 11:27:27' The only difference is that str(d) is equivalent to d.isoformat(' ').
[ "isoformat works on datetime instances w/o limitation of range:\n>>> import datetime\n>>> x=datetime.datetime(1865, 7, 2, 9, 30, 21)\n>>> x.isoformat()\n'1865-07-02T09:30:21'\n\nIf you need a different-format string it's not too hard to slice, dice and remix pieces of the string you get from isoformat, which is very consistent (YYYY-MM-DDTHH:MM:SS.mmmmmm, with the dot and following microseconds omitted if microseconds are zero).\n", "The documentation seems pretty clear about this:\n\nThe exact range of years for which strftime() works also varies across platforms. Regardless of platform, years before 1900 cannot be used.\n\nSo there isn't going to be a solution that uses strftime(). Luckily, it's pretty straightforward to do this \"by hand\":\n>>> \"%02d-%02d-%02d %02d:%02d\" % (d.year,d.month,d.day,d.hour,d.minute)\n'1899-01-01 00:00'\n\n", "mxDateTime can handle arbitrary dates. Python's time and datetime modules use UNIX timestamps internally, that's why they have limited range.\nIn [5]: mx.DateTime.DateTime(1899)\nOut[5]: <mx.DateTime.DateTime object for '1899-01-01 00:00:00.00' at 154a960>\n\nIn [6]: DateTime.DateTime(1899).Format('%Y-%m-%d')\nOut[6]: 1899-01-01\n\n", "This is from the matplotlib source. Could provide a good starting point for rolling your own.\ndef strftime(self, dt, fmt):\n fmt = self.illegal_s.sub(r\"\\1\", fmt)\n fmt = fmt.replace(\"%s\", \"s\")\n if dt.year > 1900:\n return cbook.unicode_safe(dt.strftime(fmt))\n\n year = dt.year\n # For every non-leap year century, advance by\n # 6 years to get into the 28-year repeat cycle\n delta = 2000 - year\n off = 6*(delta // 100 + delta // 400)\n year = year + off\n\n # Move to around the year 2000\n year = year + ((2000 - year)//28)*28\n timetuple = dt.timetuple()\n s1 = time.strftime(fmt, (year,) + timetuple[1:])\n sites1 = self._findall(s1, str(year))\n\n s2 = time.strftime(fmt, (year+28,) + timetuple[1:])\n sites2 = self._findall(s2, str(year+28))\n\n sites = []\n for site in sites1:\n if site in sites2:\n sites.append(site)\n\n s = s1\n syear = \"%4d\" % (dt.year,)\n for site in sites:\n s = s[:site] + syear + s[site+4:]\n\n return cbook.unicode_safe(s)\n\n", "This is the \"feature\" of the ctime library (UTF).\nAlso You may have problem above 2038.\n" ]
[ 15, 11, 3, 3, 1 ]
[]
[]
[ "datetime", "oracle", "python", "sql_loader", "strftime" ]
stackoverflow_0001643967_datetime_oracle_python_sql_loader_strftime.txt
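A small sketch of the isoformat-slicing approach from the accepted answer, which has no year restriction:
from datetime import datetime

d = datetime(1865, 7, 2, 9, 30, 21)

iso = d.isoformat(' ')   # '1865-07-02 09:30:21'
date_only = iso[:10]     # '1865-07-02'

# Or build the format by hand from the components:
stamp = '%04d-%02d-%02d %02d:%02d:%02d' % (
    d.year, d.month, d.day, d.hour, d.minute, d.second)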
Q: Unable to find solution for a practice problem in codechef Here is the problem from code chef : A set of N dignitaries have arrived in close succession at Delhi and are awaiting transportation to Roorkee to participate in the inaugural ceremony of Cognizance. Being big sponsors of Cognizance, it has been deemed unsuitable by the organizing team to arrange for more than one dignitary to travel in a single vehicle. In addition, each dignitary has specified his preferred modes of transport to the organizing team well in advance. Consequently, the organizing team has drawn up a list specifying the set of acceptable transport mechanisms for each dignitary. Given such a list of preferences and the list of N available vehicles at Delhi, you need to specify if an allocation of vehicles can be so made that everyone’s preferences are satisfied. Each dignitary takes up at most 1 transportation unit. Input Format: Line 1: N - The number of dignitaries. 1 <= N <= 100 Line 2-N+1: The names of the dignitaries – 1 per line. Line N+2 to 2*N+1: The names of the vehicles, 1 per line Line N+2 onwards: K – The size of the preference list for each dignitary, followed by K space separated names of acceptable transport means. K <= N Note: None of the names will have a length > 100. Output Format: Line 1: Yes/No. Sample Input: 4 Divye Rohit Akshat Parth Scorpio BMW Ford Chevrolet 1 BMW 1 Ford 2 Scorpio Chevrolet 1 Ford Sample Output: No Link to the problem : http://www.codechef.com/problems/INSOMB6/ Here is the code i wrote : #!/usr/bin/python ceo = [] cars = [] error = True n = int(raw_input()) for i in range(n): ceo.append(raw_input().lower()) for i in range(n): cars.append(raw_input().lower()) for i in range(n): test = raw_input().lower().split() if int(test[0]) is not len(test[1:]): error = False continue if test[0] != '0': for i in test[1:]: try : cars.remove(i) except ValueError : error = False if error and len(cars) is 0 : print "Yes" else : print "No" It gives correct output for sample input. But it fails somewhere. It will be great if you guys can point out a situation where this code fails! A: One first error (I doubt it's blocking you): if int(test[0]) is not len(test[1:]): It's never correct to use is / is not to test immutables (such as numbers): use == or != instead. The code you've written may accidentally work as an implementation-based artifact in versions of Python that "cache" small integers, but it's still wrong;-). Similarly for the len(cars) is 0 later. Using a variable named error to indicate the lack of error is peculiar and confusing (though technically not wrong code;-). The algorithmic bug is: all you're checking is that each car is liked by exactly 1 dignitary. This is very different from "there exists a 1-1 assignment of cars to dignitaries satisfying the preferences". For example, if all dignitaries liked all cars, you'd say "No" (because you remove all cars on the first leg of the loop, then get a ValueError the second time and thus set error to False) while most obviously the answer must be "Yes"! So, rethink the algorithm from scratch. Consider using sets or dicts, they may make your life easier (they won't change the algorithm but may make it easier to see/conceptualize). A: Consider the test case: 2 A B 1 2 2 1 2 1 1 The answer is yes because person A can use car 2, and person B will use car 1. I believe your solution will put person A in car 1, then be unable to place person B.
If you want a hint, this problem reduces to whether there exists a perfect matching of a bipartite graph.
Unable to find solution for a practice problem in codechef
Here is the problem from code chef : A set of N dignitaries have arrived in close succession at Delhi and are awaiting transportation to Roorkee to participate in the inaugural ceremony of Cognizance. Being big sponsors of Cognizance, it has been deemed unsuitable by the organizing team to arrange for more than one dignitary to travel in a single vehicle. In addition, each dignitary has specified his preferred modes of transport to the organizing team well in advance. Consequently, the organizing team has drawn up a list specifying the set of acceptable transport mechanisms for each dignitary. Given such a list of preferences and the list of N available vehicles at Delhi, you need to specify if an allocation of vehicles can be so made that everyone’s preferences are satisfied. Each dignitary takes up at most 1 transportation unit. Input Format: Line 1: N - The number of dignitaries. 1 <= N <= 100 Line 2-N+1: The names of the dignitaries – 1 per line. Line N+2 to 2*N+1: The names of the vehicles, 1 per line Line N+2 onwards: K – The size of the preference list for each dignitary, followed by K space separated names of acceptable transport means. K <= N Note: None of the names will have a length > 100. Output Format: Line 1: Yes/No. Sample Input: 4 Divye Rohit Akshat Parth Scorpio BMW Ford Chevrolet 1 BMW 1 Ford 2 Scorpio Chevrolet 1 Ford Sample Output: No Link to the problem : http://www.codechef.com/problems/INSOMB6/ Here is the code i wrote : #!/usr/bin/python ceo = [] cars = [] error = True n = int(raw_input()) for i in range(n): ceo.append(raw_input().lower()) for i in range(n): cars.append(raw_input().lower()) for i in range(n): test = raw_input().lower().split() if int(test[0]) is not len(test[1:]): error = False continue if test[0] != '0': for i in test[1:]: try : cars.remove(i) except ValueError : error = False if error and len(cars) is 0 : print "Yes" else : print "No" It gives correct output for sample input. But it fails somewhere. It will be great if you guys can point out a situation where this code fails!
[ "One first error (I doubt it's blocking you):\nif int(test[0]) is not len(test[1:]):\n\nIt's never correct to use is / is not to test immutables (such as numbers): use = or != instead. The code you've written may accidentally work as an implementation-based artifact in versions of Python that \"cache\" small integers, but it's still wrong;-). Similarly for the len(cars) is 0 later.\nUsing a variable named error to indicate the lack of error is peculiar and confusing (though technically not wrong code;-).\nThe algorithmic bug is: all you're checking is that each car is liked by exactly 1 dignitary. This is very different from \"there exists a 1-1 assignment of cars to dignitaries satisfying the preferences\". For example, if all dignitaries liked all cars, you'd say \"No\" (because you remove all cars on the first leg of the loop, then get a ValueError the second time and thus set error to False) while most obviously the answer must be \"Yes\"! So, rethink the algorithm from scratch. Consider using sets or dicts, they may make your life easier (they won't change the algorithm but may make it easier to see/conceptualize).\n", "Consider the test case:\n2\nA\nB\n1\n2\n2 1 2\n1 1\n\nThe answer is yes because person A can use car 2, and person B will use car 1. \nI believe your solution will put person A in car 1, then be unable to place person 2.\nIf you want a hint, this problem reduces to whether there exists a perfect matching of a bipartite graph.\n" ]
[ 2, 1 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0001644456_algorithm_python.txt
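The hint in the second answer made concrete: a minimal augmenting-path (Kuhn's algorithm) bipartite-matching sketch, with the sample input from the question hand-encoded as index sets.
def can_assign(prefs, n):
    # prefs[i] is the set of vehicle indices dignitary i accepts.
    match = [-1] * n  # match[car] = dignitary currently holding it, or -1

    def augment(person, seen):
        for car in prefs[person]:
            if car in seen:
                continue
            seen.add(car)
            # Take a free car, or evict the holder if it can be re-seated.
            if match[car] == -1 or augment(match[car], seen):
                match[car] = person
                return True
        return False

    return all(augment(p, set()) for p in range(n))

# Sample input (Scorpio=0, BMW=1, Ford=2, Chevrolet=3):
prefs = [{1}, {2}, {0, 3}, {2}]
print "Yes" if can_assign(prefs, 4) else "No"   # prints No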
Q: Customising Install location for Django (or any Python module) I'd like to install Django into a custom location, I've read the distutils documentation and it suggests that I should be able to do something like the following to install under my home directory (when run from an unpacked django tarball). > python setup.py install --home=~/code/packages/install --install-purelib=modules --install-platlib=modules --install-scripts=scripts --install-data=data However, every time I run this, it doesn't seem to concatenate the home path with the separate element paths, and so I simply end up with modules/ scripts/ data/ In the unpacked tar ball directory. I.e. it seems to be treating modules, scripts etc as simply relative paths to local directory and not relative to the --home specified. I've tried setting the root with --prefix, and using a setup.cfg and nothing seems to work. --prefix and --home on their own with no other overrides work, but when used together with --install-xxx overrides it doesn't. I'm either probably doing something stupid, or the documentation is wrong, or there is a bug. Any help much obliged. A: I would strongly suggest that you look at Virtualenv and Pip for creating basically silos of python packages. The Pinax project uses this exclusively now for bundling requirements together for other people to use, and it's becoming more and more of a de facto standard in the reusable apps space. A: Ok, so I've been looking at the distutils source code to see what is going on - distutils.command.install does all of the pathname manipulation. It turns out that the documentation is actually incorrect. Whenever an --install-xxxx style option is provided it completely overrides any value that might be derived from --home or --prefix - the code does not do any straightforward concatenation of paths. However, what it does do is variable substitution of a set of special variables. The one of interest to me specifically is $base. Using it on the command line you can define the overrides, and distutils will replace all occurrences with what was specified for --home etc. But note you must quote your filenames so Bash does not try to expand it as an environment variable. So the command line that I had initially, becomes: python setup.py install --home=/home/andre/code/packages/install --install-purelib='$base/modules' \ --install-platlib='$base/modules' --install-scripts='$base/scripts' --install-data='$base/data' Hope someone other than me finds that useful! A: As a quick check, I'd suggest replacing ~/code/packages/install with /full_path_to_your_user/code/packages/install A: If you just want it in your home directory, there's no need to install it at all. Just make sure that the container directory is on your pythonpath somewhere, and move the scripts in django/bin into somewhere on your main PATH (or add that dir to your path).
Customising Install location for Django (or any Python module)
I'd like to install Django into a custom location, I've read the distutils documentation and it suggests that I should be able to do something like the following to install under my home directory (when run from an unpacked django tarball). > python setup.py install --home=~/code/packages/install --install-purelib=modules --install-platlib=modules --install-scripts=scripts --install-data=data However, every time I run this, it doesn't seem to concatenate the home path with the separate element paths, and so I simply end up with modules/ scripts/ data/ In the unpacked tar ball directory. I.e. it seems to be treating modules, scripts etc as simply relative paths to local directory and not relative to the --home specified. I've tried setting the root with --prefix, and using a setup.cfg and nothing seems to work. --prefix and --home on their own with no other overrides work, but when used together with --install-xxx overrides it doesn't. I'm either probably doing something stupid, or the documentation is wrong, or there is a bug. Any help much obliged.
[ "I would strongly suggest that you look at Virtualenv and Pip for creating basically silos of python packages.\nThe Pinax project uses this exclusively now for bundling requirements together for other people to use, and it's becoming more and more of a defacto standard in the reusable apps space.\n", "Ok, so I've been looking at the distutils source code to see what is going on - distutils.command.install does all of the pathname manipulation. \nIt turns out that the documentation is actually incorrect. Whenever an --install-xxxx style option is provided it completely overrides any value that might be derived from --home or --prefix - the code not does do any straightforward concatenation of paths.\nHowever, what it does do is variable substitution of a set of special variables. The one of interest to me specifically is $base. Using it on the command line you can define the overrides, and distutils will replace all occurrences with what was specified for --home etc. But note you must quote your filenames so BASH does not try expand it as a environment variable. \nSo the command line that I had initially, becomes: \npython setup.py install --home=/home/andre/code/packages/install --install-purelib='$base/modules' \\\n --install-platlib='$base/modules' --install-scripts='$base/scripts' --install-data='$base/data'\n\nHope someone other than me finds that useful! \n", "As a quick check, I'd suggest replacing\n~/code/packages/install\nwith\n/full_path_to_your_user/code/packages/install\n", "If you just want it in your home directory, there's no need to install it at all. Just make sure that the container directory is on your pythonpath somewhere, and move the scripts in django/bin into somewhere on your main PATH (or add that dir to your path).\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "distutils", "django", "python" ]
stackoverflow_0001643540_distutils_django_python.txt
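The $base substitution described above is performed by distutils.util.subst_vars, which you can poke at directly; the same overrides can also live in a setup.cfg [install] section, where command-line options are spelled with underscores instead of dashes. A quick sketch (paths illustrative):
from distutils.util import subst_vars

overrides = {
    'install_purelib': '$base/modules',
    'install_platlib': '$base/modules',
    'install_scripts': '$base/scripts',
    'install_data': '$base/data',
}
config = {'base': '/home/andre/code/packages/install'}
for name, template in sorted(overrides.items()):
    print name, '->', subst_vars(template, config)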
Q: How to handle Unicode (non-ASCII) characters in Python? I'm programming in Python and I'm obtaining information from a web page through the urllib2 library. The problem is that that page can provide me with non-ASCII characters, like 'ñ', 'á', etc. The moment urllib2 gets this character, it raises an exception like this: File "c:\Python25\lib\httplib.py", line 711, in send self.sock.sendall(str) File "<string>", line 1, in sendall: UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in position 74: ordinal not in range(128) I need to handle those characters. I mean, I don't want to handle the exception but to continue the program. Is there any way to, for example (I don't know if this is something stupid), use another codec rather than ASCII? Because I have to work with those characters, insert them in a database, etc.
How to handle Unicode (non-ASCII) characters in Python?
I'm programming in Python and I'm obtaining information from a web page through the urllib2 library. The problem is that that page can provide me with non-ASCII characters, like 'ñ', 'á', etc. The moment urllib2 gets this character, it raises an exception like this: File "c:\Python25\lib\httplib.py", line 711, in send self.sock.sendall(str) File "<string>", line 1, in sendall: UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in position 74: ordinal not in range(128) I need to handle those characters. I mean, I don't want to handle the exception but to continue the program. Is there any way to, for example (I don't know if this is something stupid), use another codec rather than ASCII? Because I have to work with those characters, insert them in a database, etc.
[ "You just read a set of bytes from the socket. If you want a string you have to decode it:\nyourstring = receivedbytes.decode(\"utf-8\") \n\n(substituting whatever encoding you're using for utf-8)\nThen you have to do the reverse to send it back out:\noutbytes = yourstring.encode(\"utf-8\")\n\n", "You want to use unicode for all your work if you can. \nYou probably will find this question/answer useful:\nurllib2 read to Unicode\n", "You might want to look into using an actual parsing library to find this information. lxml, for instance, already addresses Unicode encode/decode using the declared character set.\n" ]
[ 11, 6, 0 ]
[]
[]
[ "character_encoding", "python", "unicode" ]
stackoverflow_0001644640_character_encoding_python_unicode.txt
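A Python 2 sketch of the decode-then-encode round trip the answers describe, preferring the charset declared in the HTTP headers (the URL is illustrative):
import urllib2

response = urllib2.urlopen('http://example.com/')
raw = response.read()   # bytes

# Use the declared charset if present; fall back to UTF-8.
charset = response.info().getparam('charset') or 'utf-8'
text = raw.decode(charset, 'replace')   # unicode from here on

# Encode again before writing to a byte-oriented sink (file, DB driver):
data_for_db = text.encode('utf-8')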
Q: Django db reset without loading fixtures Is there an easy way to reset a django database (i.e. drop all data/tables, create new tables and create indexes) without loading fixture data afterwards? What I want to have is just an empty database because all data is loaded from another source (a kind of a post-processed backup). I know that this could be achieved by piping the output of the manage sql... commands to manage dbshell, but this relies on manage dbshell and is kind of hacky... Are there any other ways to do this? Edit: manage reset will do it, but is there a command like reset that doesn't need the application names as parameters? A: shouldn't you be able to do this with manage.py's reset option? A: As far as I know, the fixtures (in initial_data file) are automatically loaded after manage.py syncdb and not after reset. So, if you do a manage.py reset yourapp it should not load the fixtures. Hmm?
Django db reset without loading fixtures
Is there an easy way to reset a django database (i.e. drop all data/tables, create new tables and create indexes) without loading fixture data afterwards? What I want to have is just an empty database because all data is loaded from another source (a kind of a post-processed backup). I know that this could be achieved by piping the output of the manage sql... commands to manage dbshell, but this relies on manage dbshell and is kind of hacky... Are there any other ways to do this? Edit: manage reset will do it, but is there a command like reset that doesn't need the application names as parameters?
[ "shouldn't you be able do do this with manage.py's reset option?\n", "As far as I know, the fixtures (in initial_data file) are automatically loaded after manage.py syndcb and not after reset. So, if you do a manage.py reset yourapp it should not load the fixtures. Hmm?\n" ]
[ 2, 2 ]
[]
[]
[ "database", "django", "fixtures", "python" ]
stackoverflow_0001645310_database_django_fixtures_python.txt
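There is no built-in reset-everything command in this era of Django, but a small hypothetical helper can loop over the installed apps so you don't have to name them yourself; the script name and app-label handling are illustrative.
# reset_all.py -- run inside the project, e.g. via manage.py shell
from django.conf import settings
from django.core.management import call_command

for app in settings.INSTALLED_APPS:
    label = app.split('.')[-1]   # 'django.contrib.auth' -> 'auth'
    call_command('reset', label, interactive=False)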
Q: How is the this keyword provided for object instances in C#? When you have an object instance in C#, you can use the this keyword inside the instance scope. How does the compiler handle it? Is there any assistance for this at runtime? I am mainly wondering how C# does it, vs. Python, where you have to provide self for every function manually. A: This is supported at the CLR level. The argument variable at slot 0 represents the "this" pointer. C# essentially generates calls to this as ldarg.0 A: The compiler always creates IL that sets the field using the class name, in any case - whether you specify this or not. The this. is optional unless there is another variable in your scope with the same name as an instance variable, in which case the compiler knows to set the instance variable. For example, if you have a class in the TestProject namespace named TestClass, and it contains a field named testOne, the following: TestClass(string value) // Constructor { this.testOne = value; } Gets compiled into IL like so: L_0000: ldarg.0 // ... other initialization stuff L_0004: ldarg.1 L_0005: stfld string TestProject.TestClass::testOne The instance variable is always set using the full class information, whether or not "this" is specified. Edit for comments: In C#, you can always use this in a method as a keyword because the first argument in the argument list is "this", even if its not specified. For example, say we make a method like so: class Test { void TestMethod(Test instance) { // Do something } void CallTestMethod() { TestMethod(this); } When you look at the IL for CallTestMethod, it will look like this: .method public hidebysig instance void CallTestMethod() cil managed { .maxstack 8 L_0000: nop L_0001: ldarg.0 L_0002: ldarg.0 L_0003: call instance void CSharpConsoleApplication.Test::TestMethod(class CSharpConsoleApplication.Test) L_0008: nop L_0009: ret } In this case, the compiler loads ldarg.0 onto the stack twice, which becomes the argument passed into TestMethod (it will become its ldarg.1). Basically, there is always a "this" internally, in any class method.
How is the this keyword provided for object instances in C#?
When you have an object instance in C#, you can use the this keyword inside the instance scope. How does the compiler handle it? Is there any assistance for this at runtime? I am mainly wondering how C# does it, vs. Python, where you have to provide self for every function manually.
[ "This is supported at the CLR level. The argument variable at slot 0 represents the \"this\" pointer. C# essentially generates calls to this as ldarg.0\n", "The compiler always creates IL that sets the field using the class name, in any case - whether you specify this or not. The this. is optional unless there is another variable in your scope with the same name as an instance variable, in which case the compiler knows to set the instance variable.\nFor example, if you have a class in the TestProject namespace named TestClass, and it contains a field named testOne, the following:\nTestClass(string value) // Constructor\n{\n this.testOne = value;\n}\n\nGets compiled into IL like so:\nL_0000: ldarg.0\n// ... other initialization stuff \nL_0004: ldarg.1\nL_0005: stfld string TestProject.TestClass::testOne\n\nThe instance variable is always set using the full class information, whether or not \"this\" is specified.\n\nEdit for comments:\nIn C#, you can always use this in a method as a keyword because the first argument in the argument list is \"this\", even if its not specified. For example, say we make a method like so:\nclass Test\n{\n void TestMethod(Test instance) { \n // Do something\n }\n void CallTestMethod() {\n TestMethod(this);\n }\n\nWhen you look at the IL for CallTestMethod, it will look like this:\n.method public hidebysig instance void CallTestMethod() cil managed\n{\n .maxstack 8\n L_0000: nop \n L_0001: ldarg.0 \n L_0002: ldarg.0 \n L_0003: call instance void CSharpConsoleApplication.Test::TestMethod(class CSharpConsoleApplication.Test)\n L_0008: nop \n L_0009: ret \n}\n\nIn this case, the compiler loads ldarg.0 onto the stack twice, which becomes the argument passed into TestMethod (it will become its ldarg.1). Basically, there is always a \"this\" internally, in any class method.\n" ]
[ 3, 2 ]
[]
[]
[ ".net", "c#", "clr", "python" ]
stackoverflow_0001645550_.net_c#_clr_python.txt
Q: Threading in a PyQt application: Use Qt threads or Python threads? I'm writing a GUI application that regularly retrieves data through a web connection. Since this retrieval takes a while, this causes the UI to be unresponsive during the retrieval process (it cannot be split into smaller parts). This is why I'd like to outsource the web connection to a separate worker thread. [Yes, I know, now I have two problems.] Anyway, the application uses PyQt4, so I'd like to know what the better choice is: Use Qt's threads or use the Python threading module? What are advantages / disadvantages of each? Or do you have a totally different suggestion? Edit (re bounty): While the solution in my particular case will probably be using a non-blocking network request like Jeff Ober and Lukáš Lalinský suggested (so basically leaving the concurrency problems to the networking implementation), I'd still like a more in-depth answer to the general question: What are advantages and disadvantages of using PyQt4's (i.e. Qt's) threads over native Python threads (from the threading module)? Edit 2: Thanks all for your answers. Although there's no 100% agreement, there seems to be widespread consensus that the answer is "use Qt", since the advantage of that is integration with the rest of the library, while causing no real disadvantages. For anyone looking to choose between the two threading implementations, I highly recommend they read all the answers provided here, including the PyQt mailing list thread that abbot links to. There were several answers I considered for the bounty; in the end I chose abbot's for the very relevant external reference; it was, however, a close call. Thanks again. A: This was discussed not too long ago on the PyQt mailing list. Quoting Giovanni Bajo's comments on the subject: It's mostly the same. The main difference is that QThreads are better integrated with Qt (asynchronous signals/slots, event loop, etc.). Also, you can't use Qt from a Python thread (you can't for instance post an event to the main thread through QApplication.postEvent): you need a QThread for that to work. A general rule of thumb might be to use QThreads if you're going to interact somehow with Qt, and use Python threads otherwise. And some earlier comment on this subject from PyQt's author: "they are both wrappers around the same native thread implementations". And both implementations use the GIL in the same way. A: Python's threads will be simpler and safer, and since it is for an I/O-based application, they are able to bypass the GIL. That said, have you considered non-blocking I/O using Twisted or non-blocking sockets/select? EDIT: more on threads Python threads Python's threads are system threads. However, Python uses a global interpreter lock (GIL) to ensure that the interpreter is only ever executing a certain size block of byte-code instructions at a time. Luckily, Python releases the GIL during input/output operations, making threads useful for simulating non-blocking I/O. Important caveat: This can be misleading, since the number of byte-code instructions does not correspond to the number of lines in a program. Even a single assignment may not be atomic in Python, so a mutex lock is necessary for any block of code that must be executed atomically, even with the GIL. QT threads When Python hands off control to a 3rd party compiled module, it releases the GIL. It becomes the responsibility of the module to ensure atomicity where required. When control is passed back, Python will use the GIL.
This can make using 3rd party libraries in conjunction with threads confusing. It is even more difficult to use an external threading library because it adds uncertainty as to where and when control is in the hands of the module vs the interpreter. QT threads operate with the GIL released. QT threads are able to execute QT library code (and other compiled module code that does not acquire the GIL) concurrently. However, the Python code executed within the context of a QT thread still acquires the GIL, and now you have to manage two sets of logic for locking your code. In the end, both QT threads and Python threads are wrappers around system threads. Python threads are marginally safer to use, since those parts that are not written in Python (implicitly using the GIL) use the GIL in any case (although the caveat above still applies.) Non-blocking I/O Threads add extraordinarily complexity to your application. Especially when dealing with the already complex interaction between the Python interpreter and compiled module code. While many find event-based programming difficult to follow, event-based, non-blocking I/O is often much less difficult to reason about than threads. With asynchronous I/O, you can always be sure that, for each open descriptor, the path of execution is consistent and orderly. There are, obviously, issues that must be addressed, such as what to do when code depending on one open channel further depends on the results of code to be called when another open channel returns data. One nice solution for event-based, non-blocking I/O is the new Diesel library. It is restricted to Linux at the moment, but it is extraordinarily fast and quite elegant. It is also worth your time to learn pyevent, a wrapper around the wonderful libevent library, which provides a basic framework for event-based programming using the fastest available method for your system (determined at compile time). A: The advantage of QThread is that it's integrated with the rest of the Qt library. That is, thread-aware methods in Qt will need to know in which thread they run, and to move objects between threads, you will need to use QThread. Another useful feature is running your own event loop in a thread. If you are accessing a HTTP server, you should consider QNetworkAccessManager. A: I asked myself the same question when I was working to PyTalk. If you are using Qt, you need to use QThread to be able to use the Qt framework and expecially the signal/slot system. With the signal/slot engine, you will be able to talk from a thread to another and with every part of your project. Moreover, there is not very performance question about this choice since both are a C++ bindings. Here is my experience of PyQt and thread. I encourage you to use QThread. A: Jeff has some good points. Only one main thread can do any GUI updates. If you do need to update the GUI from within the thread, Qt-4's queued connection signals make it easy to send data across threads and will automatically be invoked if you're using QThread; I'm not sure if they will be if you're using Python threads, although it's easy to add a parameter to connect(). A: I can't really recommend either, but I can try describing differences between CPython and Qt threads. First of all, CPython threads do not run concurrently, at least not Python code. 
Yes, they do create system threads for each Python thread, however only the thread currently holding Global Interpreter Lock is allowed to run (C extensions and FFI code might bypass it, but Python bytecode is not executed while thread doesn't hold GIL). On the other hand, we have Qt threads, which are basically common layer over system threads, don't have Global Interpreter Lock, and thus are capable of running concurrently. I'm not sure how PyQt deals with it, however unless your Qt threads call Python code, they should be able to run concurrently (bar various extra locks that might be implemented in various structures). For extra fine-tuning, you can modify the amount of bytecode instructions that are interpreted before switching ownership of GIL - lower values mean more context switching (and possibly higher responsiveness) but lower performance per individual thread (context switches have their cost - if you try switching every few instructions it doesn't help speed.) Hope it helps with your problems :) A: I can't comment on the exact differences between Python and PyQt threads, but I've been doing what you're attempting to do using QThread, QNetworkAcessManager and making sure to call QApplication.processEvents() while the thread is alive. If GUI responsiveness is really the issue you're trying to solve, the later will help.
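A minimal, runnable sketch of the QThread route the answers above recommend — the URL, the signal name, and the update_view slot are all placeholders, and new-style signals assume PyQt 4.5 or later:

from PyQt4 import QtCore
import urllib2

class Fetcher(QtCore.QThread):
    got_data = QtCore.pyqtSignal(str)            # new-style signal (PyQt >= 4.5)

    def __init__(self, url, parent=None):
        QtCore.QThread.__init__(self, parent)
        self.url = url

    def run(self):
        data = urllib2.urlopen(self.url).read()  # blocking I/O happens off the GUI thread
        self.got_data.emit(data)

# In the widget: a connection from a worker thread defaults to queued
# delivery, so update_view runs safely back in the GUI thread.
#   self.fetcher = Fetcher('http://example.com/feed')
#   self.fetcher.got_data.connect(self.update_view)
#   self.fetcher.start()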
Threading in a PyQt application: Use Qt threads or Python threads?
I'm writing a GUI application that regularly retrieves data through a web connection. Since this retrieval takes a while, this causes the UI to be unresponsive during the retrieval process (it cannot be split into smaller parts). This is why I'd like to outsource the web connection to a separate worker thread. [Yes, I know, now I have two problems.] Anyway, the application uses PyQt4, so I'd like to know what the better choice is: Use Qt's threads or use the Python threading module? What are advantages / disadvantages of each? Or do you have a totally different suggestion? Edit (re bounty): While the solution in my particular case will probably be using a non-blocking network request like Jeff Ober and Lukáš Lalinský suggested (so basically leaving the concurrency problems to the networking implementation), I'd still like a more in-depth answer to the general question: What are advantages and disadvantages of using PyQt4's (i.e. Qt's) threads over native Python threads (from the threading module)? Edit 2: Thanks all for your answers. Although there's no 100% agreement, there seems to be widespread consensus that the answer is "use Qt", since the advantage of that is integration with the rest of the library, while causing no real disadvantages. For anyone looking to choose between the two threading implementations, I highly recommend they read all the answers provided here, including the PyQt mailing list thread that abbot links to. There were several answers I considered for the bounty; in the end I chose abbot's for the very relevant external reference; it was, however, a close call. Thanks again.
[ "This was discussed not too long ago in PyQt mailing list. Quoting Giovanni Bajo's comments on the subject:\n\nIt's mostly the same. The main difference is that QThreads are better\n integrated with Qt (asynchrnous signals/slots, event loop, etc.).\n Also, you can't use Qt from a Python thread (you can't for instance\n post event to the main thread through QApplication.postEvent): you\n need a QThread for that to work.\nA general rule of thumb might be to use QThreads if you're going to interact somehow with Qt, and use Python threads otherwise.\n\nAnd some earlier comment on this subject from PyQt's author: \"they are both wrappers around the same native thread implementations\". And both implementations use GIL in the same way.\n", "Python's threads will be simpler and safer, and since it is for an I/O-based application, they are able to bypass the GIL. That said, have you considered non-blocking I/O using Twisted or non-blocking sockets/select?\nEDIT: more on threads\nPython threads\nPython's threads are system threads. However, Python uses a global interpreter lock (GIL) to ensure that the interpreter is only ever executing a certain size block of byte-code instructions at a time. Luckily, Python releases the GIL during input/output operations, making threads useful for simulating non-blocking I/O.\nImportant caveat: This can be misleading, since the number of byte-code instructions does not correspond to the number of lines in a program. Even a single assignment may not be atomic in Python, so a mutex lock is necessary for any block of code that must be executed atomically, even with the GIL.\nQT threads\nWhen Python hands off control to a 3rd party compiled module, it releases the GIL. It becomes the responsibility of the module to ensure atomicity where required. When control is passed back, Python will use the GIL. This can make using 3rd party libraries in conjunction with threads confusing. It is even more difficult to use an external threading library because it adds uncertainty as to where and when control is in the hands of the module vs the interpreter.\nQT threads operate with the GIL released. QT threads are able to execute QT library code (and other compiled module code that does not acquire the GIL) concurrently. However, the Python code executed within the context of a QT thread still acquires the GIL, and now you have to manage two sets of logic for locking your code.\nIn the end, both QT threads and Python threads are wrappers around system threads. Python threads are marginally safer to use, since those parts that are not written in Python (implicitly using the GIL) use the GIL in any case (although the caveat above still applies.)\nNon-blocking I/O\nThreads add extraordinarily complexity to your application. Especially when dealing with the already complex interaction between the Python interpreter and compiled module code. While many find event-based programming difficult to follow, event-based, non-blocking I/O is often much less difficult to reason about than threads.\nWith asynchronous I/O, you can always be sure that, for each open descriptor, the path of execution is consistent and orderly. There are, obviously, issues that must be addressed, such as what to do when code depending on one open channel further depends on the results of code to be called when another open channel returns data.\nOne nice solution for event-based, non-blocking I/O is the new Diesel library. 
It is restricted to Linux at the moment, but it is extraordinarily fast and quite elegant.\nIt is also worth your time to learn pyevent, a wrapper around the wonderful libevent library, which provides a basic framework for event-based programming using the fastest available method for your system (determined at compile time).\n", "The advantage of QThread is that it's integrated with the rest of the Qt library. That is, thread-aware methods in Qt will need to know in which thread they run, and to move objects between threads, you will need to use QThread. Another useful feature is running your own event loop in a thread.\nIf you are accessing a HTTP server, you should consider QNetworkAccessManager.\n", "I asked myself the same question when I was working to PyTalk.\nIf you are using Qt, you need to use QThread to be able to use the Qt framework and expecially the signal/slot system.\nWith the signal/slot engine, you will be able to talk from a thread to another and with every part of your project.\nMoreover, there is not very performance question about this choice since both are a C++ bindings.\nHere is my experience of PyQt and thread.\nI encourage you to use QThread.\n", "Jeff has some good points. Only one main thread can do any GUI updates. If you do need to update the GUI from within the thread, Qt-4's queued connection signals make it easy to send data across threads and will automatically be invoked if you're using QThread; I'm not sure if they will be if you're using Python threads, although it's easy to add a parameter to connect().\n", "I can't really recommend either, but I can try describing differences between CPython and Qt threads.\nFirst of all, CPython threads do not run concurrently, at least not Python code. Yes, they do create system threads for each Python thread, however only the thread currently holding Global Interpreter Lock is allowed to run (C extensions and FFI code might bypass it, but Python bytecode is not executed while thread doesn't hold GIL).\nOn the other hand, we have Qt threads, which are basically common layer over system threads, don't have Global Interpreter Lock, and thus are capable of running concurrently. I'm not sure how PyQt deals with it, however unless your Qt threads call Python code, they should be able to run concurrently (bar various extra locks that might be implemented in various structures).\nFor extra fine-tuning, you can modify the amount of bytecode instructions that are interpreted before switching ownership of GIL - lower values mean more context switching (and possibly higher responsiveness) but lower performance per individual thread (context switches have their cost - if you try switching every few instructions it doesn't help speed.)\nHope it helps with your problems :)\n", "I can't comment on the exact differences between Python and PyQt threads, but I've been doing what you're attempting to do using QThread, QNetworkAcessManager and making sure to call QApplication.processEvents() while the thread is alive. If GUI responsiveness is really the issue you're trying to solve, the later will help.\n" ]
[ 120, 38, 22, 14, 9, 5, 0 ]
[]
[]
[ "multithreading", "pyqt", "python" ]
stackoverflow_0001595649_multithreading_pyqt_python.txt
Q: insert string in the middle of a file given a file object I am working on a problem and got stuck at a wall. I have a (potentially large) set of text files, and I need to apply a sequence of filters and transformations to them and export them to some other places. So I roughly have def apply_filter_transformer(basepath = None, newpath = None, fts= None): #because all the raw studies in basepath should not be modified, so I first cp all to newpath for i in listdir(basepath): file(path.join(newpath, i), "wb").writelines(file(path.join(basepath, i)).readlines()) for i in listdir(newpath): fileobj = open(path.join(newpath, i), "r+") for fcn in fts: fileobj = fcn(fileobj) if fileobj is not None: fileobj.writelines(fileobj.readlines()) try: fileobj.close() except: print i, "at", fcn pass def main(): apply_filter_transformer(path.join(pardir, pardir, "studies"), path.abspath(path.join(pardir, pardir, "filtered_studies")), [ #transformer_addMemo, filter_executable, transformer_identity, filter_identity, ]) and fts in apply_filter_transformer is a list of functions that each take a Python file object and return a Python file object. The problem I ran into is that when I want to insert strings into a text object, I get an uninformative error; I was stuck on it all morning. def transformer_addMemo(fileobj): STYLUSMEMO =r"""hellow world""" study = fileobj.read() location = re.search(r"</BasicOptions>", study) print fileobj.name print fileobj.mode fileobj.seek(0) fileobj.write(study[:location.end()] + STYLUSMEMO + study[location.end():]) return fileobj and this gives me Traceback (most recent call last): File "E:\mypy\reg_test\src\preprocessor\preprocessor.py", line 292, in <module> main() File "E:\mypy\reg_test\src\preprocessor\preprocessor.py", line 288, in main filter_identity, File "E:\mypy\reg_test\src\preprocessor\preprocessor.py", line 276, in apply_filter_transformer fileobj.writelines(fileobj.readlines()) IOError: [Errno 0] Error If anyone can give me more info on the error, I would appreciate it very much. A: There is a handy Python module for modifying or reading a group of files: fileinput I'm not sure what is causing this error. But you are reading the whole file into memory which is a bad idea in your case because the files are potentially large. Using fileinput you can replace the files easily. For example: import fileinput import sys for line in fileinput.input(list_of_files, inplace=True): sys.stdout.write(line) if keyword in line: sys.stdout.write(my_text) A: It's not really possible to tell what's causing the error from the code you posted. The problem may be in the protocol you've adopted for your transformation functions. I'll simplify the code a bit: fileobj = open(path, mode) fileobj = fcn(fileobj) fileobj.writelines(fileobj.readlines()) What assurance do I have that fcn returns a file that's open in the mode that my original file was? That it returns a file that's open at all? That it returns a file? Well, I don't. It doesn't seem like there's any reason for you to even be using file objects in your process. Since you're reading the entire file into memory, why not just make your transformation functions take and return strings? So your code would look like this: with open(filename, "r") as f: s = f.read() for transform_function in transforms: s = transform_function(s) with open(filename, "w") as f: f.write(s) Among other things, this totally decouples the file I/O part of your program from the data-transformation part, so that problems in one don't affect the other.
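To make that concrete, here is the asker's transformer reworked to the string-in/string-out protocol the last answer suggests — a sketch only, with the memo text abbreviated:

import re

def transformer_addMemo(study):
    memo = r"""...memo text here..."""          # stand-in for STYLUSMEMO
    match = re.search(r"</BasicOptions>", study)
    if match is None:
        return study                             # tag absent: leave the text unchanged
    return study[:match.end()] + memo + study[match.end():]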
insert string in the middle of a file given a file object
I am working on a problem and got stuck at a wall. I have a (potentially large) set of text files, and I need to apply a sequence of filters and transformations to them and export them to some other places. So I roughly have def apply_filter_transformer(basepath = None, newpath = None, fts= None): #because all the raw studies in basepath should not be modified, so I first cp all to newpath for i in listdir(basepath): file(path.join(newpath, i), "wb").writelines(file(path.join(basepath, i)).readlines()) for i in listdir(newpath): fileobj = open(path.join(newpath, i), "r+") for fcn in fts: fileobj = fcn(fileobj) if fileobj is not None: fileobj.writelines(fileobj.readlines()) try: fileobj.close() except: print i, "at", fcn pass def main(): apply_filter_transformer(path.join(pardir, pardir, "studies"), path.abspath(path.join(pardir, pardir, "filtered_studies")), [ #transformer_addMemo, filter_executable, transformer_identity, filter_identity, ]) and fts in apply_filter_transformer is a list of functions that each take a Python file object and return a Python file object. The problem I ran into is that when I want to insert strings into a text object, I get an uninformative error; I was stuck on it all morning. def transformer_addMemo(fileobj): STYLUSMEMO =r"""hellow world""" study = fileobj.read() location = re.search(r"</BasicOptions>", study) print fileobj.name print fileobj.mode fileobj.seek(0) fileobj.write(study[:location.end()] + STYLUSMEMO + study[location.end():]) return fileobj and this gives me Traceback (most recent call last): File "E:\mypy\reg_test\src\preprocessor\preprocessor.py", line 292, in <module> main() File "E:\mypy\reg_test\src\preprocessor\preprocessor.py", line 288, in main filter_identity, File "E:\mypy\reg_test\src\preprocessor\preprocessor.py", line 276, in apply_filter_transformer fileobj.writelines(fileobj.readlines()) IOError: [Errno 0] Error If anyone can give me more info on the error, I would appreciate it very much.
[ "There is handy python module for modifing or reading a group of files: fileinput\nI'm not sure what is causing this error. But you are reading the whole file into memory which is a bad idea in your case because the files are potentially large. Using fileinput you can replace the files easily. For example:\nimport fileinput\nimport sys\n\nfor line in fileinput.input(list_of_files, inplace=True):\n sys.stdout.write(line)\n if keyword in line:\n sys.stdout.write(my_text)\n\n", "It's not really possible to tell what's causing the error from the code you posted. The problem may be in the protocol you've adopted for your transformation functions. \nI'll simplify the code a bit:\nfileobj = file.open(path, mode)\nfileobj = fcn(fileobj)\nfileobj.writelines(fileobj.readlines())\n\nWhat assurance do I have that fcn returns a file that's open in the mode that my original file was? That it returns a file that's open at all? That it returns a file? Well, I don't.\nIt doesn't seem like there's any reason for you to even be using file objects in your process. Since you're reading the entire file into memory, why not just make your transformation functions take and return strings? So your code would look like this:\nwith open(filename, \"r\") as f:\n s = f.read()\nfor transform_function in transforms:\n s = transform_function(s)\nwith open(filename, \"w\") as f:\n f.write(s)\n\nAmong other things, this totally decouples the file I/O part of your program from the data-transformation part, so that problems in one don't affect the other.\n" ]
[ 1, 1 ]
[]
[]
[ "file", "python", "string" ]
stackoverflow_0001645384_file_python_string.txt
Q: Django: Extending Querysets / Connect multiple filters with OR I have to work with a queryset that is already filtered, e.g. qs = queryset.filter(language='de') but in some further operation I need to undo some of the already applied filtering, e.g. not to take only the rows with language='de' but entries in all languages. Is there a way to apply filter again and have the new parameters connected to the already existing ones using OR, not AND? E.g. if the queryset is already filtered for language='de' and I could connect an OR language='en' to that, it would give me what I'm looking for! Thanks! A: I don't believe it is possible to do what you are asking. The way you do ORs in django is like this: Model.objects.filter(Q(question__startswith='Who') | Q(question__startswith='What')) so if you actually wanted to do this: Model.objects.filter(Q(language='de') | Q(language='en')) you would need to put them both in the same filter() call so you wouldn't be able to add the other OR clause in a later filter() call. I think the reason you may be trying to do this would be that you are concerned about hitting the database again, but the only way to get accurate results would be to hit the database again. If you are simply concerned about producing clean, DRY code, you can put all the filters that are common to both queries at the top and then "fork" that query set later, like this: shared_qs = Model.objects.filter(active=True) german_entries = shared_qs.filter(language='de') german_and_english = shared_qs.filter(Q(language='de') | Q(language='en'))
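If the language list is only known at run time, the same OR query can be assembled dynamically — a sketch, with Model standing in for the real model class (Python 2, where reduce is a builtin):

import operator
from django.db.models import Q

languages = ['de', 'en']                  # decided at run time
q = reduce(operator.or_, (Q(language=code) for code in languages))
results = Model.objects.filter(q)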
Django: Extending Querysets / Connect multiple filters with OR
I have to work with a queryset that is already filtered, e.g. qs = queryset.filter(language='de') but in some further operation I need to undo some of the already applied filtering, e.g. not to take only the rows with language='de' but entries in all languages. Is there a way to apply filter again and have the new parameters connected to the already existing ones using OR, not AND? E.g. if the queryset is already filtered for language='de' and I could connect an OR language='en' to that, it would give me what I'm looking for! Thanks!
[ "I don't believe it is possible to do what you are asking.\nThe way you do ORs in django is like this:\nModel.objects.filter(Q(question__startswith='Who') | Q(question__startswith='What'))\n\nso if you actually wanted to do this:\nModel.objects.filter(Q(language='de') | Q(language='en'))\n\nyou would need to put them both in the same filter() call so you wouldn't be able to add the other or clause in a later filter() call.\nI think the reason you may be trying to do this would be that you are concerned about hitting the database again but the only way to get accurate results would be to hit the database again.\nIf you are simply concerned about producing clean, DRY code, you can put all the filters that are common to both queries at the top and then \"fork\" that query set later, like this:\nshared_qs = Model.objects.filter(active=True)\ngerman_entries = shared_qs.filter(language='de')\ngerman_and_english = shared_qs.filter(Q(language='de') | Q(language='en'))\n\n" ]
[ 3 ]
[]
[]
[ "django", "django_queryset", "filter", "python" ]
stackoverflow_0001645778_django_django_queryset_filter_python.txt
Q: how to convert list of int to list of tuples I want to convert a list like this l1 = [1,2,3,4,5,6,7,8] to l2 = [(1,2),(3,4),(5,6),(7,8)] because I want to loop for x,y in l2: draw_thing(x,y) A: One good way is: from itertools import izip it = iter([1, 2, 3, 4]) for x, y in izip(it, it): print x, y Output: 1 2 3 4 >>> A: Building on Nick D's answer: >>> from itertools import izip >>> t = [1,2,3,4,5,6,7,8,9,10,11,12] >>> for a, b in izip(*[iter(t)]*2): ... print a, b ... 1 2 3 4 5 6 7 8 9 10 11 12 >>> for a, b, c in izip(*[iter(t)]*3): ... print a, b, c ... 1 2 3 4 5 6 7 8 9 10 11 12 >>> for a, b, c, d in izip(*[iter(t)]*4): ... print a, b, c, d ... 1 2 3 4 5 6 7 8 9 10 11 12 >>> for a, b, c, d, e, f in izip(*[iter(t)]*6): ... print a, b, c, d, e, f ... 1 2 3 4 5 6 7 8 9 10 11 12 >>> Not quite as readable, but it shows a compact way to get any size tuple you want. A: Kind of easy with python's slicing operator: l2 = zip(l1[0::2], l1[1::2]) A: Take a look at grouper function from itertools docs. from itertools import izip_longest def grouper(n, iterable, fillvalue=None): "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx" args = [iter(iterable)] * n return izip_longest(fillvalue=fillvalue, *args) In your case use it like this: l1 = [1,2,3,4,5,6,7,8] for (x, y) in grouper(2, l1): draw_thing(x, y) A: You can do: l2 = [] for y in range(0, len(l1), 2): l2.append((l1[y], l1[y+1])) I'm not doing any checks to make sure l1 has an even number of entries and such-like. A: Not the most elegant solution l2 = [(l1[i], l1[i+1]) for i in xrange(0,len(l1),2)] A: No need to construct a new list. You can just iterate over the list by steps of 2 instead of 1. I use len(L) - 1 as the upper-bound so you ensure that you don't try to access past the end of the list. for i in range(0, len(L) - 1, 2): draw_thing(L[i], L[i + 1]) A: list = [1,2,3,4,5,6] it = iter(list) newlist = [(x, y) for x, y in zip(it, it)]
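Tying the slicing answer back to the original loop — no intermediate l2 is actually needed (draw_thing as in the question):

l1 = [1, 2, 3, 4, 5, 6, 7, 8]
for x, y in zip(l1[0::2], l1[1::2]):   # pairs (1, 2), (3, 4), ...
    draw_thing(x, y)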
how to convert list of int to list of tuples
I want to convert a list like this l1 = [1,2,3,4,5,6,7,8] to l2 = [(1,2),(3,4),(5,6),(7,8)] because I want to loop for x,y in l2: draw_thing(x,y)
[ "One good way is:\nfrom itertools import izip\nit = iter([1, 2, 3, 4])\nfor x, y in izip(it, it):\n print x, y\n\nOutput:\n1 2\n3 4\n>>> \n\n", "Building on Nick D's answer:\n>>> from itertools import izip\n>>> t = [1,2,3,4,5,6,7,8,9,10,11,12]\n>>> for a, b in izip(*[iter(t)]*2):\n... print a, b\n...\n1 2\n3 4\n5 6\n7 8\n9 10\n11 12\n>>> for a, b, c in izip(*[iter(t)]*3):\n... print a, b, c\n...\n1 2 3\n4 5 6\n7 8 9\n10 11 12\n>>> for a, b, c, d in izip(*[iter(t)]*4):\n... print a, b, c, d\n...\n1 2 3 4\n5 6 7 8\n9 10 11 12\n>>> for a, b, c, d, e, f in izip(*[iter(t)]*6):\n... print a, b, c, d, e, f\n...\n1 2 3 4 5 6\n7 8 9 10 11 12\n>>>\n\nNot quite as readable, but it shows a compact way to get any size tuple you want.\n", "Kind of easy with python's slicing operator:\nl2 = zip(l1[0::2], l1[1::2])\n\n", "Take a look at grouper function from itertools docs.\nfrom itertools import izip_longest\ndef grouper(n, iterable, fillvalue=None):\n \"grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue=fillvalue, *args)\n\nIn your case use it like this:\nl1 = [1,2,3,4,5,6,7,8]\nfor (x, y) in grouper(2, l1):\n draw_thing(x, y)\n\n", "You can do:\nl2 = []\nfor y in range(0, len(l1), 2):\n l2.append((l1[y], l1[y+1]))\n\nI'm not doing any checks to make sure l1 has an even number of entries and such-like.\n", "Not the most elegant solution\nl2 = [(l1[i], l1[i+1]) for i in xrange(0,len(l1),2)]\n\n", "No need to construct a new list. You can just iterate over the list by steps of 2 instead of 1. I use len(L) - 1 as the upper-bound so you ensure that you don't try to access past the end of the list.\nfor i in range(0, len(L) - 1, 2):\n draw_thing(L[i], L[i + 1])\n\n", " list = [1,2,3,4,5,6]\n it = iter(list)\n newlist = [(x, y) for x, y in zip(it, it)] \n\n" ]
[ 10, 7, 5, 2, 0, 0, 0, 0 ]
[ "What's wrong with just accessing the correct index and incrementing?\nfor (int i=0;i<myList.Length;i++)\n{\n draw_thing(myList[i],myList[++i]);\n}\nOops - sorry, in C# mode. I'm sure you get the idea.\n" ]
[ -3 ]
[ "python" ]
stackoverflow_0001645673_python.txt
Q: Python, subprocess, devenv, why no output? I build a Visual Studio solution from a Python script. Everything works nicely, except that I am unable to capture the build output. p = subprocess.Popen(['devenv', 'solution.sln', '/build'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) (out, err) = p.communicate() ret = p.returncode Here, both out and err are always empty. This happens regardless of the build success as seen in p.returncode. A: Change it from 'devenv' to 'devenv.com'. Apparently Popen looks for .EXEs first but the shell looks for .COMs first. Switching to 'devenv.com' worked for me. devenv is significantly faster than msbuild for incremental builds. I just did a build with an up-to-date project, meaning nothing should happen. devenv 23 seconds msbuild 55 seconds. A: You should build the solution with msbuild.exe instead, which is designed to give feedback to stdout and stderr. msbuild.exe is located at C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\msbuild.exe (to build a VS2005 solution) or C:\WINDOWS\Microsoft.NET\Framework\v3.5\msbuild.exe (to build a VS2008 solution) Note that msbuild.exe does not take a /build switch like devenv.exe. A: That's probably because the software you're running doesn't write to stdout or stderr. Maybe it writes directly to the terminal/console. If that's the case you'll need some win32 api calls to capture the output.
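The accepted fix applied to the asker's snippet — only the executable name changes (a sketch; the solution file and switch are exactly as in the question):

import subprocess

p = subprocess.Popen(['devenv.com', 'solution.sln', '/build'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print p.returncode, out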
Python, subprocess, devenv, why no output?
I build a Visual Studio solution from a Python script. Everything works nicely, except that I am unable to capture the build output. p = subprocess.Popen(['devenv', 'solution.sln', '/build'], stdout=subprocess.PIPE, stderr=subprocess.PIPE) (out, err) = p.communicate() ret = p.returncode Here, both out and err are always empty. This happens regardless of the build success as seen in p.returncode.
[ "Change it from 'devenv' to 'devenv.com'. Apparenty Popen looks for .EXEs first but the shell looks for .COMs first. Switching to 'devenv.com' worked for me.\ndevenv is significantly faster then msbuild for incremental builds. I just did a build with an up to date project, meaning nothing should happen.\ndevenv 23 seconds\nmsbuild 55 seconds.\n", "You should build the solution with msbuild.exe instead, which is designed to give feedback to stdout and stderr. msbuild.exe is located at\nC:\\WINDOWS\\Microsoft.NET\\Framework\\v2.0.50727\\msbuild.exe (to build a VS2005 solution)\nor C:\\WINDOWS\\Microsoft.NET\\Framework\\v3.5\\msbuild.exe (to build a VS2008 solution)\nNote that msbuild.exe does not take a /build switch like devenv.exe.\n", "That's probably because the software you're running doesn't write to stdout or stderr. Maybe it writes directly to the terminal/console.\nIf that's the case you'll need some win32 api calls to capture the output.\n" ]
[ 26, 2, 0 ]
[ "Probably your problem is the same that the pipe's buffer fills up. Check this question for a good answer.\n" ]
[ -2 ]
[ "python", "subprocess" ]
stackoverflow_0001525696_python_subprocess.txt
Q: psyco complains about unsupported opcode 54, what is it? The Psyco log output looks like this: 21:08:47.56 Logging started, 10/29/09 %%%%%%%%%%%%%%%%%%%% 21:08:47.56 unsupported opcode 54 at create_l0:124 % % 21:08:47.56 unsupported opcode 54 at create_lx:228 % % the lines in question class File: def __init__(self, path, header): self.path = path self.header = header self.file = path + '/' + header.to_filename() self.pfile = None def add_entry(self, entry): # line 124 self.pfile.write(entry.to_binary()) def open(self): self.pfile = open(self.file, 'wb') self.pfile.write(self.header.to_binary()) def close(self): self.pfile.close() def write(self, data): self.pfile.write(data) next one: nat_file = File(target + '/' + name, nat_header) nat_file.open() # add first value nat_file.add_entry(DataBlock(t, q, 0.0, 1, v)) # add all others while True: try: t, v, q = f.next() except StopIteration: break nat_file.add_entry(DataBlock(t, q, 0.0, 1, v)) nat_file.close() # line 228 I'm a bit at a loss as to what the problem may be. Any ideas? A: Finding the name of the opcode using the number is actually pretty easy (below uses Python 2.6.2 on Ubuntu, you may get different results): >>> import dis >>> dis.opname[54] 'STORE_MAP' Of course, finding out what exactly this means is another question entirely. :-) A: Did you compile with another Psyco version than you are running the script with?
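A short diagnosis sketch. STORE_MAP is the opcode CPython 2.6 emits for dict displays ({...} with entries), so if the unshown parts of create_l0/create_lx build dict literals — an assumption, since the flagged lines aren't fully visible — spelling them as dict(...) calls is one way to sidestep the opcode; in any case, Psyco's "unsupported opcode" lines typically just mean that function runs uncompiled rather than that anything is broken:

import dis

print dis.opname[54]   # 'STORE_MAP' on CPython 2.6
dis.dis(create_l0)     # create_l0: the function named in the Psyco log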
psyco complains about unsupported opcode 54, what is it?
The Psyco log output looks like this: 21:08:47.56 Logging started, 10/29/09 %%%%%%%%%%%%%%%%%%%% 21:08:47.56 unsupported opcode 54 at create_l0:124 % % 21:08:47.56 unsupported opcode 54 at create_lx:228 % % the lines in question class File: def __init__(self, path, header): self.path = path self.header = header self.file = path + '/' + header.to_filename() self.pfile = None def add_entry(self, entry): # line 124 self.pfile.write(entry.to_binary()) def open(self): self.pfile = open(self.file, 'wb') self.pfile.write(self.header.to_binary()) def close(self): self.pfile.close() def write(self, data): self.pfile.write(data) next one: nat_file = File(target + '/' + name, nat_header) nat_file.open() # add first value nat_file.add_entry(DataBlock(t, q, 0.0, 1, v)) # add all others while True: try: t, v, q = f.next() except StopIteration: break nat_file.add_entry(DataBlock(t, q, 0.0, 1, v)) nat_file.close() # line 228 I'm a bit at a loss as to what the problem may be. Any ideas?
[ "Finding the name of the opcode using the number is actually pretty easy (below uses Python 2.6.2 on Ubuntu, you may get different results):\n>>> import dis\n>>> dis.opname[54]\n'STORE_MAP'\n\nOf course, finding out what exactly this means is another question entirely. :-)\n", "Did you compile with another Psyco version than you are running the script with?\n" ]
[ 2, 0 ]
[]
[]
[ "psyco", "python" ]
stackoverflow_0001646260_psyco_python.txt
Q: Setting mod_python's interpreter I have mod_python installed on a debian box with python 2.4 and 2.6 installed. I want mod_python to use 2.6 but it is finding 2.4. How can I set it to use the other version? A: The version of Python used is set when mod_python is compiled. If you need to use a version other than the default, you'll need to recompile it, or you may be able to find a different package from the repository.
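If a recompile is an option, mod_python's configure script takes the interpreter explicitly — the paths below are guesses for a Debian box, so adjust them to your layout:

$ tar xzf mod_python-3.3.1.tgz
$ cd mod_python-3.3.1
$ ./configure --with-apxs=/usr/bin/apxs2 --with-python=/usr/bin/python2.6
$ make && make install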
Setting mod_python's interpreter
I have mod_python installed on a debian box with python 2.4 and 2.6 installed. I want mod_python to use 2.6 but it is finding 2.4. How can I set it to use the other version?
[ "The version of Python used is set when mod_python is compiled. If you need to use a version other than the default, you'll need to recompile it, or you may be able to find a different package from the repository.\n" ]
[ 1 ]
[]
[]
[ "apache", "mod_python", "python" ]
stackoverflow_0001646017_apache_mod_python_python.txt
Q: Python on AIX: What are my options? I need to make some Python applications for a work project. The target platform is AIX 5.3. My question is: What version of Python should I be using? My requirements are: The Python version must be easy to install on the target machines. Others will do that according to instructions that I write, so no compiling from source or anything like that. The Python version must have ncurses or curses support (I'm making a form handler). I've found two different precompiled versions of Python for AIX, but one (2.1.something) didn't include the curses module, and the other (2.3.4, RPM format) had prerequisites that I failed to fulfill. Any help would be greatly appreciated. A: Use the AS Package of Python 2.6.3.7 from Activestate. They have a binary package for AIX on their download site. If you don't have an AIX machine to test it on, the install works the same way on Solaris or Linux, so you could write your documentation based on that. Basically, you ungzip the tarball file, use tar to unpack the archive, change directory to the unpacked folder, run a shell script to install it, tell the shell script what directory to place it in, and wait. Normally this would be used to install into a user directory, without superuser permissions, but you could install it anywhere that you like. You might also need to edit the system profile in order to make sure that all users can find the Python binary. I suggest the latest Python 2.6, because it has a lot of bugfixes, and there is now a critical mass of 3rd party libraries ported to it. Also, the standard library includes a lot of useful stuff that you used to have to collect separately. Curses is in the standard library of Python 2.6. Make sure to avoid Python 3.1 since it has not yet matured enough and provides few benefits for most business applications development. A: I'd compile it from source myself and tell them where to download it from in the instructions. A: We've used ActiveState's Python as well as Pware's compiled version. Both have worked well. For AS, we've used 2.5 and 2.6. For Pware, just 2.6. Both 2.5 and 2.6 from AS support curses on our machine. I've compiled from source but usually wind up having trouble with ctypes or SSL. Currently I have the Frankenstein option going: AS Python 2.6 installed, but I pulled out a couple of *.so files from Pware's. I'm using GCC since we've never ponied up for a compiler but depending on what you need from Python, it's definitely doable if I can do it. I will mention that AS Python claims to be 100% compatible with standard Python and it has been for everything we've done so far (mostly web applications).
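The install procedure described in prose above, as a shell transcript — the exact tarball name for the AIX build will differ, so treat this as a sketch:

$ gunzip ActivePython-2.6.3.7-aix5-powerpc.tar.gz
$ tar xf ActivePython-2.6.3.7-aix5-powerpc.tar
$ cd ActivePython-2.6.3.7-aix5-powerpc
$ ./install.sh        # prompts for the target directory; no root needed for a user install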
Python on AIX: What are my options?
I need to make some Python applications for a work project. The target platform is AIX 5.3. My question is: What version of Python should I be using? My requirements are: The Python version must be easy to install on the target machines. Others will do that according to instructions that I write, so no compiling from source or anything like that. The Python version must have ncurses or curses support (I'm making a form handler). I've found two different precompiled versions of Python for AIX, but one (2.1.something) didn't include the curses module, and the other (2.3.4, RPM format) had prerequisites that I failed to fulfill. Any help would be greatly appreciated.
[ "Use the AS Package of Python 2.6.3.7 from Activestate. They have a binary package for AIX on their download site.\nIf you don't have an AIX machine to test it on, the install works the same way on Solaris or Linux, so you could write your documentation based on that. Basically, you ungzip the tarball file, use tar to unpack the archive, change directory to the unpacked folder, run a shell script to install it, tell the shell script what directory to place it in, and wait.\nNormally this would be used to install into a user directory, without superuser permissions, but you could install it anywhere that you like. You might also need to edit the system profile in order to make sure that all users can find the Python binary.\nI suggest the latest Python 2.6, because it has a lot of bugfixes, and there is now a critical mass of 3rd party libraries ported to it. Also, the standard library includes a lot of useful stuff that you used to have to collect separately. Curses is in the standard library of Python 2.6.\nMake sure to avoid Python 3.1 since it has not yet matured enough and provides few benefits for most business applications development.\n", "I'd compile it from source myself and tell them where to download it from in the instructions\n", "We've used ActiveState's Python as well as Pware's compiled version. Both have worked well. For AS, we've used 2.5 and 2.6. For Pware, just 2.6. Both 2.5 and 2.6 from AS support curses on our machine.\nI've compiled from source but usually wind up having trouble with with ctypes or SSL. Currently I have the Frankenstein option going of AS Python2.6 installed but I pulled out a couple of *.so files from Pware's. I'm using GCC since we've never ponied up for a compiler but depending on what you need from Python, it's definitely doable if I can do it.\nI will mention that AS Python claims to be 100% compatible with standard Python and it has been for everything we've done so far (mostly web applications).\n" ]
[ 7, 4, 1 ]
[]
[]
[ "aix", "curses", "ncurses", "python" ]
stackoverflow_0001646293_aix_curses_ncurses_python.txt
Q: referencing a key/value in django-templates after applying a filter Say I have the following list that I provide to a django template stuff= [ { 'a':2 , 'b':4 } , { 'a':7 } ] I want to access the 'a' attribute of the first element. I can't quite get the syntax right. {{stuff|first}} gives me the first element, but {{stuff|first.a}} is a dead end ( and weird ) and I can't seem to find a attribute filter. Short of writing one myself, is there template language syntax for what I want to do ? stuff[0].a is no go as well A: This is off the top of my head, but I think it is {{ stuff.0.a }}
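Why the answer works: the template engine's variable lookup tries, roughly, dictionary key, then attribute, then list index — so stuff.0.a does a list-index lookup followed by a dict-key lookup. If you would rather keep the first filter, {% with %} (old Django 1.0-era "as" syntax) gets you the same result:

{% with stuff|first as first_item %}
    {{ first_item.a }}
{% endwith %}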
referencing a key/value in django-templates after applying a filter
Say I have the following list that I provide to a django template stuff= [ { 'a':2 , 'b':4 } , { 'a':7 } ] I want to access the 'a' attribute of the first element. I can't quite get the syntax right. {{stuff|first}} gives me the first element, but {{stuff|first.a}} is a dead end ( and weird ) and I can't seem to find a attribute filter. Short of writing one myself, is there template language syntax for what I want to do ? stuff[0].a is no go as well
[ "This is off the top of my head, but I think it is\n{{ stuff.0.a }}\n\n" ]
[ 4 ]
[]
[]
[ "django", "django_templates", "python" ]
stackoverflow_0001646757_django_django_templates_python.txt
Q: Is it OK to extend both old and new style classes? When I extended some tool generated classes, I didn't realize that they are old style classes until I tried to use super(). The super() doesn't work with old style classes, so I got this error: TypeError: super() argument 1 must be type, not classobj E.g., try this snippet: >>> class A: ... def greet(self): ... print "A says hi" ... >>> class B(A): ... def greet(self): ... print "B says hi" ... >>> super(B, B()).greet() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: super() argument 1 must be type, not classobj I was just curious what would happen if I extended B from object as well to make it a new style class, and it seemed to make super() work. >>> class B(A, object): ... def greet(self): ... print "B says hi" ... >>> super(B, B()).greet() A says hi Is this an appropriate workaround or will I have some unwanted consequences later? A: New-style classes have been recommended practice in Python since they were introduced in Python 2.2. In Python 3.x, only new-style classes are available. Therefore, I recommend you switch your classes to new-style. I am not aware of any real problems you might have from this. For the most part, new-style classes simply bring new features, such as super() actually working. If you have code that relies on the semantics of old-style classes in tricky ways, that code would of course break. (The only example that comes to mind is the Borg pattern by the famous Alex Martelli.) Here is a link to the Python reference manual for a discussion of new style classes vs. old-style ("classic") classes. And here is a link to the classic essay where Guido van Rossum explains why new-style classes were introduced. I use only new-style classes in my code and I encourage you to do the same. A: If you cannot change the generator to generate new-style classes, you could post-process the file to make them new-style: import re pat = re.compile("^(.*class\s+\w+)(:.*)$") out_file = open("edited_file.py", "w") for line in open("generated_file.py"): m = pat.match(line) if m: line = m.group(1) + "(object)" + m.group(2) + "\n" out_file.write(line) out_file.close() If you don't want to do that for some reason, then sure, go ahead and subclass both the original class and object. To the best of my knowledge, that should work just fine; your new class will be a new-style class. The book Python in a Nutshell (by Alex Martelli, published by O'Reilly) says that if you declare a class with base classes and at least one of the base classes is new-style, it will always be a new-style class. It does not list any special warnings or dangers related to this action. I think you are good to go. A: I see no problem mixing classic and new-style classes unless some methods rely on classic semantics or your classic base uses a metaclass (the metaclass of the offspring will always be a new-style class). On the other hand I see no reason to use classic classes anymore, so changing class A: to class A(object): is preferred.
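A quick interactive check that the mix-in really produces a new-style class, using the question's shapes (the True outputs are what CPython 2.x prints; comments are mine):

>>> class A:                 # classic class, as the tool generates it
...     pass
>>> class B(A, object):      # object mixed in
...     pass
>>> isinstance(B, type)      # only new-style classes are instances of type
True
>>> hasattr(B, '__mro__')    # a real MRO is what makes super() work
True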
Is it OK to extend both old and new style classes?
When I extended some tool generated classes, I didn't realize that they are old style classes until I tried to use super(). The super() doesn't work with old style classes, so I got this error: TypeError: super() argument 1 must be type, not classobj E.g., try this snippet: >>> class A: ... def greet(self): ... print "A says hi" ... >>> class B(A): ... def greet(self): ... print "B says hi" ... >>> super(B, B()).greet() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: super() argument 1 must be type, not classobj I was just curious what would happen if I extended B from object as well to make it a new style class, and it seemed to make super() work. >>> class B(A, object): ... def greet(self): ... print "B says hi" ... >>> super(B, B()).greet() A says hi Is this an appropriate workaround or will I have some unwanted consequences later?
[ "New-style classes have been recommended practice in Python since they were introduced in Python 2.2. In Python 3.x, only new-style classes are available. Therefore, I recommend you switch your classes to new-style.\nI am not aware of any real problems you might have from this. For the most part, new-style classes simply bring new features, such as super() actually working. If you have code that relies on the semantics of old-style classes in tricky ways, that code would of course break. (The only example that comes to mind is the famous Borg pattern by the famous Alex Martelli.)\nHere is a link to the Python reference manual for a discussion of new style classes vs. old-style (\"classic\") classes. And here is a link to the classic essay where Guido van Rossum explains why new-style classes were introduced.\nI use only new-style classes in my code and I encourage you to do the same.\n", "If you cannot change the generator to generate new-style classes, you could post-process the file to make them new-style:\nimport re\npat = re.compile(\"^(.*class\\s+\\w+)(:.*)$\")\nout_file = open(\"edited_file.py\", \"w\")\nfor line in open(\"generated_file.py\"):\n m = pat.match(line)\n if m:\n line = m.group(1) + \"(object)\" + m.group(2) + \"\\n\"\n out_file.write(line)\n\nout_file.close()\n\nIf you don't want to do that for some reason, then sure, go ahead and subclass both the original class and object. To the best of my knowledge, that should work just fine; your new class will be a new-style class.\nThe book Python in a Nutshell (by Alex Martelli, published by O'Reilly) says that if you declare a class with base classes and at least one of the base classes is new-style, it will always be a new-style class. It does not list any special warnings or dangers related to this action. I think you are good to go.\n", "I see no problem mixing classic and new-style classes unless some methods rely on classic semantic or your classic base uses metaclass (metaclass of offspring will be always a new-style class).\nOn the other hand I see no reason to use classic classes anymore, so changing class A: to class A(object): is preferred.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001641958_python.txt
Q: Adapt an existing database to a django app I have a Postgresql database with data. I want to create a django app with that database. How can I import the tables into django models and/or views? A: There is a utility called manage.py inspectdb to generate models from your existing database. It works pretty well. $ python manage.py inspectdb > models.py A: If your database is not very simple -- or very well designed -- you'll find it a poor fit with Django. While the reverse engineering works well, you may find that the original database design was flawed and you have lots of clumsy workarounds. The question is one of "legacy software" that works with the old data model. I'd suggest you do the following. Design the correct data model, using Django. Map the correct model to whatever it is you have. Write a conversion script that uses simple, direct SQL and the Django ORM to migrate data from non-Django-friendly to a better model. If you have legacy software, you'll have to work out an appropriate data movement schedule. If you don't have any legacy software, you'll run this conversion once.
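A sketch of step 3 from the second answer — one migration pass that reads the legacy table with raw SQL and writes through the new ORM; every name here (Customer, legacy_customer, the columns) is made up for illustration:

from django.db import connection
from myapp.models import Customer          # the redesigned model -- assumed name

def migrate_customers():
    cursor = connection.cursor()
    cursor.execute("SELECT id, name, email FROM legacy_customer")
    for legacy_id, name, email in cursor.fetchall():
        Customer.objects.create(legacy_id=legacy_id, name=name, email=email)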
Adapt an existing database to a django app
I have a Postgresql database with data. I want to create a django app with that database. How can I import the tables into django models and/or views?
[ "There is a utility called manage.py inspectdb to generate models from your existing database. It works pretty well.\n$ python manage.py inspectdb > models.py\n\n", "If your database is not very simple -- or very well designed -- you'll find it a poor fit with Django.\nWhile the reverse engineering works well, you may find that the original database design was flawed and you have lots of clumsy workarounds.\nThe question is one of \"legacy software\" that works with the old data model. \nI'd suggest you do the following.\n\nDesign the correct data model, using Django.\nMap the correct model to whatever it is you have.\nWrite a conversion script that uses simple, direct SQL and the Django ORM to migrate data from non-Django-friendly to a better model.\n\nIf you have legacy software, you'll have to work out an appropriate data movement schedule.\nIf you don't have any legacy software, you'll run this conversion once.\n\n\n" ]
[ 19, 3 ]
[]
[]
[ "database", "django_models", "django_views", "postgresql", "python" ]
stackoverflow_0001646786_database_django_models_django_views_postgresql_python.txt
Q: django @login_required decorator error I'm running django 1.1rc. All of my code works correctly using django's built in development server; however, when I move it into production using Apache's mod_python, I get the following error on all of my views: Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin What might I look for that's causing this error? Update: What's strange is that I can access the views account/login and also the admin site just fine. I tried removing the @login_required decorator on all of my views and it generates the same type of exception. Update2: So it seems like there is a problem with any view in my custom package: booster. The django.contrib works fine. I'm serving the app at http://server_name/booster. However, the built-in auth login view redirects to http://server_name/accounts/login. Does this give a clue to what may be wrong? Traceback: Environment: Request Method: GET Request URL: http://lghbb/booster/hospitalists/ Django Version: 1.1 rc 1 Python Version: 2.5.4 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.admin', 'booster.core', 'booster.hospitalists'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware') Template error: In template c:\booster\templates\hospitalists\my_patients.html, error at line 23 Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. 13 : <th scope="col">Name</th> 14 : <th scope="col">DOB</th> 15 : <th scope="col">IC</th> 16 : <th scope="col">Type</th> 17 : <th scope="col">LOS</th> 18 : <th scope="col">PCP</th> 19 : <th scope="col">Service</th> 20 : </tr> 21 : </thead> 22 : <tbody> 23 : {% for patient in patients %} 24 : <tr class="{{ patient.gender }} select"> 25 : <td>{{ patient.bed }}</td> 26 : <td>{{ patient.mr }}</td> 27 : <td>{{ patient.acct }}</td> 28 : <td><a href="{% url hospitalists.views.patient patient.id %}">{{ patient }}</a></td> 29 : <td>{{ patient.dob }}</td> 30 : <td class="{% if patient.infections.count %}infection{% endif %}"> 31 : {% for infection in patient.infections.all %} 32 : {{ infection.short_name }} &nbsp; 33 : {% endfor %} Traceback: File "C:\Python25\Lib\site-packages\django\core\handlers\base.py" in get_response 92. response = callback(request, *callback_args, **callback_kwargs) File "C:\Python25\Lib\site-packages\django\contrib\auth\decorators.py" in __call__ 78. return self.view_func(request, *args, **kwargs) File "c:/booster\hospitalists\views.py" in index 50. return render_to_response('hospitalists/my_patients.html', RequestContext(request, {'patients': patients, 'user' : request.user})) File "C:\Python25\Lib\site-packages\django\shortcuts\__init__.py" in render_to_response 20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs) File "C:\Python25\Lib\site-packages\django\template\loader.py" in render_to_string 108. return t.render(context_instance) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 178. return self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 71. 
result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\loader_tags.py" in render 97. return compiled_parent.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 178. return self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 71. result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\loader_tags.py" in render 24. result = self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 81. raise wrapped Exception Type: TemplateSyntaxError at /hospitalists/ Exception Value: Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. Original Traceback (most recent call last): File "C:\Python25\Lib\site-packages\django\template\debug.py", line 71, in render_node result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\defaulttags.py", line 155, in render nodelist.append(node.render(context)) File "C:\Python25\Lib\site-packages\django\template\defaulttags.py", line 382, in render raise e NoReverseMatch: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. Thanks for your help, Pete A: Having googled on this a bit, it sounds like you may need to delete any .pyc files on the server and let it recompile them the first time they're accessed. A: I had a problem with my apache configuration: I changed this: SetEnv DJANGO_SETTINGS_MODULE settings to this: SetEnv DJANGO_SETTINGS_MODULE booster.settings To solve the default auth login problem, I added the setting settings.LOGIN_URL. A: This is a pretty common 'phantom error' in Django. In other words, there's a bug in your code, but the debug page is spitting back a misleading exception. Usually when I see this error, it's because I've screwed something up in a url tag in one of my templates - most commonly a misspelled url or a url for a view that I haven't written yet. A lot of the times, the Django debug page will even highlight the url that the error is coming from.
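Two concrete fragments for the fixes in the answers above — the LOGIN_URL path is a guess based on the /booster mount point, and the .pyc sweep uses the c:\booster root visible in the traceback:

# settings.py
LOGIN_URL = '/booster/accounts/login/'   # guessed; match wherever the app is mounted

# one-off cleanup of stale bytecode
import os
for root, dirs, files in os.walk(r'c:\booster'):
    for name in files:
        if name.endswith('.pyc'):
            os.remove(os.path.join(root, name))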
django @login_required decorator error
I'm running django 1.1rc. All of my code works correctly using django's built in development server; however, when I move it into production using Apache's mod_python, I get the following error on all of my views: Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin What might I look for that's causing this error? Update: What's strange is that I can access the views account/login and also the admin site just fine. I tried removing the @login_required decorator on all of my views and it generates the same type of exception. Update2: So it seems like there is a problem with any view in my custom package: booster. The django.contrib works fine. I'm serving the app at http://server_name/booster. However, the built-in auth login view redirects to http://server_name/accounts/login. Does this give a clue to what may be wrong? Traceback: Environment: Request Method: GET Request URL: http://lghbb/booster/hospitalists/ Django Version: 1.1 rc 1 Python Version: 2.5.4 Installed Applications: ['django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.admin', 'booster.core', 'booster.hospitalists'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware') Template error: In template c:\booster\templates\hospitalists\my_patients.html, error at line 23 Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. 13 : <th scope="col">Name</th> 14 : <th scope="col">DOB</th> 15 : <th scope="col">IC</th> 16 : <th scope="col">Type</th> 17 : <th scope="col">LOS</th> 18 : <th scope="col">PCP</th> 19 : <th scope="col">Service</th> 20 : </tr> 21 : </thead> 22 : <tbody> 23 : {% for patient in patients %} 24 : <tr class="{{ patient.gender }} select"> 25 : <td>{{ patient.bed }}</td> 26 : <td>{{ patient.mr }}</td> 27 : <td>{{ patient.acct }}</td> 28 : <td><a href="{% url hospitalists.views.patient patient.id %}">{{ patient }}</a></td> 29 : <td>{{ patient.dob }}</td> 30 : <td class="{% if patient.infections.count %}infection{% endif %}"> 31 : {% for infection in patient.infections.all %} 32 : {{ infection.short_name }} &nbsp; 33 : {% endfor %} Traceback: File "C:\Python25\Lib\site-packages\django\core\handlers\base.py" in get_response 92. response = callback(request, *callback_args, **callback_kwargs) File "C:\Python25\Lib\site-packages\django\contrib\auth\decorators.py" in __call__ 78. return self.view_func(request, *args, **kwargs) File "c:/booster\hospitalists\views.py" in index 50. return render_to_response('hospitalists/my_patients.html', RequestContext(request, {'patients': patients, 'user' : request.user})) File "C:\Python25\Lib\site-packages\django\shortcuts\__init__.py" in render_to_response 20. return HttpResponse(loader.render_to_string(*args, **kwargs), **httpresponse_kwargs) File "C:\Python25\Lib\site-packages\django\template\loader.py" in render_to_string 108. return t.render(context_instance) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 178. return self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 71. 
result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\loader_tags.py" in render 97. return compiled_parent.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 178. return self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 71. result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\loader_tags.py" in render 24. result = self.nodelist.render(context) File "C:\Python25\Lib\site-packages\django\template\__init__.py" in render 779. bits.append(self.render_node(node, context)) File "C:\Python25\Lib\site-packages\django\template\debug.py" in render_node 81. raise wrapped Exception Type: TemplateSyntaxError at /hospitalists/ Exception Value: Caught an exception while rendering: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. Original Traceback (most recent call last): File "C:\Python25\Lib\site-packages\django\template\debug.py", line 71, in render_node result = node.render(context) File "C:\Python25\Lib\site-packages\django\template\defaulttags.py", line 155, in render nodelist.append(node.render(context)) File "C:\Python25\Lib\site-packages\django\template\defaulttags.py", line 382, in render raise e NoReverseMatch: Reverse for '<django.contrib.auth.decorators._CheckLogin object at 0x05016DD0>' with arguments '(7L,)' and keyword arguments '{}' not found. Thanks for your help, Pete
[ "Having googled on this a bit, it sounds like you may need to delete any .pyc files on the server and let it recompile them the first time they're accessed.\n", "I had a problem with my apache configuration:\nI changed this:\nSetEnv DJANGO_SETTINGS_MODULE settings\nto this:\nSetEnv DJANGO_SETTINGS_MODULE booster.settings\nTo solve the defualt auth login problem, I added the setting settings.LOGIN_URL.\n", "This is a pretty common 'phantom error' in Django. In other words, there's a bug in your code, but the debug page is spitting back a misleading exception. Usually when I see this error, it's because I've screwed something up in a url tag in one of my templates - most commonly a misspelled url or a url for a view that I haven't written yet. A lot of the times, the Django debug page will even highlight the url that the error is coming from. \n" ]
[ 0, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001195432_django_python.txt
Q: Django Year/Month based posts archive i'm new to Django and started an application, i did the models, views, templates, but i want to add some kind of archive to the bottom of the page, something like this http://www.flickr.com/photos/ionutgabriel/3990015411/. So i want to list all years and next to them all the months from that year. The months who have posts to be links and other no. Also i want to translate the months names cause i need them in romanian. What i've done so far is: in my view: def archive(request): arch = Post.objects.dates('date', 'month', order='DESC') archives = {} for i in arch: tp = i.timetuple() year = tp[0] month = tp[1] if year not in archives: archives[year] = [] archives[year].append(month) else: if month not in archives[year]: archives[year].append(month) return render_to_response('blog/arhiva.html', {'archives':archives}) and in my template: {% for years, months in archives.items %} {{ years }} {% for month in months %} <a href="{{ years }}/{{ month }}">{{ month }}</a> {% endfor %} <br /> {% endfor %} this returns something like: 2008 10 2009 10 9 2007 10 but i can't sort them at all...by year or by anything, and also i don't know how to add all months(the names), i want them like this: 2009 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2008 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2007 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec with link on the months who have entries. Thank you for your help! p.s. sorry for my English LE: Maybe i put the question in a wrong way, i know how to obtain dates, but i don't know how to format them to look like these: 2009 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2008 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2007 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec all i can get from arch = Post.objects.dates('date', 'month', order='DESC') with {{ archives }} in template is something like: [datetime.datetime(2009, 10, 1, 0, 0), datetime.datetime(2009, 9, 1, 0, 0), datetime.datetime(2008, 10, 1, 0, 0), datetime.datetime(2007, 10, 1, 0, 0)] then i've tried a loop: {% for archive in archives %} {{ archive }} <br /> {% endfor %} and got: 2009-10-01 00:00:00 2009-09-01 00:00:00 2008-10-01 00:00:00 2007-10-01 00:00:00 After that tried something like this: {% for archive in archives %} {{ archive|date:"Y: m" }} <br /> {% endfor %} and got: 2009: 10 2009: 09 2008: 10 2007: 10 Here i'm stuck and don't know how to format the data so i can get distinct years with all the months and only the months who have entries to be links... Any ideas? Thank you in advance! A: Firstly, the datetime format strings are given in the django docs. I think you want capital instead of lowercase 'M'. Since you want to display all 12 months of a year, even if only some have posts, we'll create an archives object to pass to the template. I've chosen to use a dictionary where the keys are the years the values are a list of 12 [datetime, bool] pairs, where datetime represents a month, and bool is True if there are posts for that month. Here's how we build the archives object in the view. 
from datetime import date def archive(request): arch = Post.objects.dates('date', 'month', order='DESC') archives = {} for i in arch: year = i.year month = i.month try: archives[year][month-1][1]=True except KeyError: # catch the KeyError, and set up list for that year archives[year]=[[date(y,m,1),False] for m in xrange(1,13)] archives[year][month-1][1]=True return render_to_response('blog/arhiva.html', {'archives':sorted(archives.items(),reverse=True)}) In the template, we loop through the months for each year, and display the link if appropriate. {% for year, month_list in archives %} {{ year }} archives: {% for month, has_link in month_list %} {% if has_link %}<a href="/{{ month.year }}/{{ month.month }}/">{% endif %} {{ month|date:"M" }} {% if has_link %}</a>{% endif %} {% endfor %} {% endfor %} I haven't checked all the code so there might be a couple of bugs. It would be better to use the url template tag for the link, instead of hardcoding the url format. I have a feeling my answer might be overly complicated, but I've spent a while typing it up, so I may as well share it with the world. Internationalization I haven't used the internationalization features of Django, so I can't really help with the translation. I recommend you have a look at the documentation, and ask another question if there's a particular bit you don't understand. Having said that, if you want to display the months is Romanian only, here's an ugly way to do it. First, add the following line to the top of your archive function in the view. rom_months = ['Ian', 'Feb', 'Mar', 'Apr', 'Mai', 'Iun', 'Iul', 'Aug', 'Sept', 'Oct', 'Noi', 'Dec'] Then substitute the following line into your view archives[year]=[[date(y,k+1,1),False,rom] for k, rom in enumerate(rom_months)] Finally substitute the following into the template ... {% for month, has_link, rom_month in month_list %} {% if has_link %}<a href="/{{ month.year }}/{{ month.month }}/">{% endif %} {{ rom_month }} ... A: You might want to consider starting with a generic view and building off that. A: Ok... so the final code that works for me is: in view: rom_months = ['Ian', 'Feb', 'Mar', 'Apr', 'Mai', 'Iun', 'Iul', 'Aug', 'Sept', 'Oct', 'Noi', 'Dec'] def arhiva(request): arch = Post.objects.dates('data', 'month', order='DESC') archives = {} for i in arch: year = i.year month = i.month try: archives[year][month-1][1] = True except KeyError: archives[year]=[[datetime.date(year,k+1,1),False,rom] for k, rom in enumerate(rom_months)] archives[year][month-1][1] = True return render_to_response('blog/arhiva.html', {'archives':sorted(archives.items(),reverse=True)}) and in template: {% for year, month_list in archives %} {{ year }} Arhive: {% for month, has_link, rom_month in month_list %} {% if has_link %}<a href="/{{ month.year }}/{{ month.month }}/">{% endif %} {{ rom_month }} {% if has_link %}</a>{% endif %} {% endfor %} <br /> {% endfor %} and the result: 2009 Arhive: Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2008 Arhive: Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2007 Arhive: Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2003 Arhive: Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec Thanks a lot again for help. You're the best! I'm the n00b! :)
Django Year/Month based posts archive
i'm new to Django and started an application, i did the models, views, templates, but i want to add some kind of archive to the bottom of the page, something like this http://www.flickr.com/photos/ionutgabriel/3990015411/. So i want to list all years and next to them all the months from that year. The months who have posts to be links and other no. Also i want to translate the months names cause i need them in romanian. What i've done so far is: in my view: def archive(request): arch = Post.objects.dates('date', 'month', order='DESC') archives = {} for i in arch: tp = i.timetuple() year = tp[0] month = tp[1] if year not in archives: archives[year] = [] archives[year].append(month) else: if month not in archives[year]: archives[year].append(month) return render_to_response('blog/arhiva.html', {'archives':archives}) and in my template: {% for years, months in archives.items %} {{ years }} {% for month in months %} <a href="{{ years }}/{{ month }}">{{ month }}</a> {% endfor %} <br /> {% endfor %} this returns something like: 2008 10 2009 10 9 2007 10 but i can't sort them at all...by year or by anything, and also i don't know how to add all months(the names), i want them like this: 2009 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2008 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2007 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec with link on the months who have entries. Thank you for your help! p.s. sorry for my English LE: Maybe i put the question in a wrong way, i know how to obtain dates, but i don't know how to format them to look like these: 2009 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2008 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec 2007 Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec all i can get from arch = Post.objects.dates('date', 'month', order='DESC') with {{ archives }} in template is something like: [datetime.datetime(2009, 10, 1, 0, 0), datetime.datetime(2009, 9, 1, 0, 0), datetime.datetime(2008, 10, 1, 0, 0), datetime.datetime(2007, 10, 1, 0, 0)] then i've tried a loop: {% for archive in archives %} {{ archive }} <br /> {% endfor %} and got: 2009-10-01 00:00:00 2009-09-01 00:00:00 2008-10-01 00:00:00 2007-10-01 00:00:00 After that tried something like this: {% for archive in archives %} {{ archive|date:"Y: m" }} <br /> {% endfor %} and got: 2009: 10 2009: 09 2008: 10 2007: 10 Here i'm stuck and don't know how to format the data so i can get distinct years with all the months and only the months who have entries to be links... Any ideas? Thank you in advance!
[ "Firstly, the datetime format strings are given in the django docs. I think you want capital instead of lowercase 'M'.\nSince you want to display all 12 months of a year, even if only some have posts, we'll create an archives object to pass to the template. I've chosen to use a dictionary where\n\nthe keys are the years\nthe values are a list of 12 [datetime, bool] pairs, where datetime represents a month, and bool is True if there are posts for that month.\n\nHere's how we build the archives object in the view.\nfrom datetime import date\n\ndef archive(request):\n arch = Post.objects.dates('date', 'month', order='DESC')\n\n archives = {}\n\n for i in arch:\n year = i.year\n month = i.month\n try:\n archives[year][month-1][1]=True\n except KeyError:\n # catch the KeyError, and set up list for that year\n archives[year]=[[date(y,m,1),False] for m in xrange(1,13)]\n archives[year][month-1][1]=True\n\n return render_to_response('blog/arhiva.html', \n {'archives':sorted(archives.items(),reverse=True)})\n\nIn the template, we loop through the months for each year, and display the link if appropriate.\n{% for year, month_list in archives %}\n {{ year }} archives: \n {% for month, has_link in month_list %}\n {% if has_link %}<a href=\"/{{ month.year }}/{{ month.month }}/\">{% endif %}\n {{ month|date:\"M\" }}\n {% if has_link %}</a>{% endif %}\n {% endfor %}\n{% endfor %}\n\nI haven't checked all the code so there might be a couple of bugs. It would be better to use the url template tag for the link, instead of hardcoding the url format. I have a feeling my answer might be overly complicated, but I've spent a while typing it up, so I may as well share it with the world.\n\nInternationalization\nI haven't used the internationalization features of Django, so I can't really help with the translation. I recommend you have a look at the documentation, and ask another question if there's a particular bit you don't understand.\nHaving said that, if you want to display the months is Romanian only, here's an ugly way to do it.\nFirst, add the following line to the top of your archive function in the view.\nrom_months = ['Ian', 'Feb', 'Mar', 'Apr', 'Mai', 'Iun', \n 'Iul', 'Aug', 'Sept', 'Oct', 'Noi', 'Dec']\n\nThen substitute the following line into your view\narchives[year]=[[date(y,k+1,1),False,rom] for k, rom in enumerate(rom_months)]\n\nFinally substitute the following into the template\n...\n{% for month, has_link, rom_month in month_list %}\n {% if has_link %}<a href=\"/{{ month.year }}/{{ month.month }}/\">{% endif %}\n {{ rom_month }}\n...\n\n", "You might want to consider starting with a generic view and building off that.\n", "Ok... 
so the final code that works for me is:\nin view:\n rom_months = ['Ian', 'Feb', 'Mar', 'Apr', 'Mai', 'Iun', \n'Iul', 'Aug', 'Sept', 'Oct', 'Noi', 'Dec']\n\ndef arhiva(request):\n arch = Post.objects.dates('data', 'month', order='DESC')\n\n archives = {}\n\n for i in arch:\n year = i.year\n month = i.month\n try:\n archives[year][month-1][1] = True\n except KeyError:\n\n archives[year]=[[datetime.date(year,k+1,1),False,rom] for k, rom in enumerate(rom_months)]\n archives[year][month-1][1] = True\n\n return render_to_response('blog/arhiva.html', {'archives':sorted(archives.items(),reverse=True)})\n\nand in template:\n{% for year, month_list in archives %}\n {{ year }} Arhive: \n {% for month, has_link, rom_month in month_list %}\n {% if has_link %}<a href=\"/{{ month.year }}/{{ month.month }}/\">{% endif %}\n {{ rom_month }}\n {% if has_link %}</a>{% endif %} \n {% endfor %}\n <br />\n{% endfor %}\n\nand the result: \n2009 Arhive: Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec \n2008 Arhive: Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec \n2007 Arhive: Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec \n2003 Arhive: Ian Feb Mar Apr Mai Iun Iul Aug Sept Oct Noi Dec \n\nThanks a lot again for help. You're the best! I'm the n00b! :)\n" ]
[ 12, 2, 2 ]
[]
[]
[ "django", "python" ]
stackoverflow_0001645962_django_python.txt
Q: Correct way to access related objects I have the following models class Person(models.Model): name = models.CharField(max_length=100) class Employee(Person): job = models.CharField(max_length=200) class PhoneNumber(models.Model): person = models.ForeignKey(Person) How do I access the PhoneNumbers associated with an employee if I have the employee id? Currently I am using phones = PhoneNumber.objects.filter(person__id=employee.id) and it works only because I know that the employee.id and person.id are the same value, but I am sure this is the incorrect way to do it. Thanks Andrew A: You can (and should) filter on the related object itself rather than on its id: PhoneNumber.objects.filter(person=your_employee) A: You could do: employees = Employee.objects.filter(id=your_id).select_related() if employees.count() == 1: phone_numbers = employees[0].phonenumber_set.all() That should get you all your phone numbers in one db query. By default you can access models related through a foreignkey on the "opposite" side by using "model name in all lower case" followed by "_set". You can change the name of that accessor by setting the related name property of the foreignkey.
Correct way to access related objects
I have the following models class Person(models.Model): name = models.CharField(max_length=100) class Employee(Person): job = models.CharField(max_length=200) class PhoneNumber(models.Model): person = models.ForeignKey(Person) How do I access the PhoneNumbers associated with an employee if I have the employee id? Currently I am using phones = PhoneNumber.objects.filter(person__id=employee.id) and it works only because I know that the employee.id and person.id are the same value, but I am sure this is the incorrect way to do it. Thanks Andrew
[ "You can (and should) filter without knowing the foreign key field:\nPhoneNumber.objects.filter(employee=your_employee).all()\n\n", "You could do:\nemployees = Employee.objects.filter(id=your_id).select_related()\nif employees.count() == 1:\n phone_numbers = employees[0].phonenumber_set.all()\n\nThat should get you all your phone numbers in one db query.\nBy default you can access models related through a foreignkey on the \"opposite\" side by using \"model name in all lower case\" followed by \"_set\". You can change the name of that accessor by setting the related name property of the foreignkey.\n" ]
[ 5, 4 ]
[]
[]
[ "django", "django_models", "python" ]
stackoverflow_0001647043_django_django_models_python.txt
Q: Multi Language Starter Templates I'm currently working through some code katas in multiple languages (Ruby, Perl, Python)/frameworks (Rails, Django, Mojo). It seems every time I start a new project from scratch I end up tweaking files to my liking, even after using things like newgem, module-starter, script/generate, startapp, etc. For those who program in many different languages, do you have some sort of toolset, scripts, etc that generate start code to your liking? I'm contemplating setting up git repo of all of my start code and some sort of script that pulls/renames/tweaks when starting new projects but I don't want to reinvent too many wheels. I've also considered making a personalized Textmate Bundle that does this and/or has custom snippets/template that have the same shortcut keys/commands across all the languages I use. It seems I'm also wasting brain time on trying to remember which command/snippet-tab combos are valid for the language/bundle I'm working in. What are other multi-programming language people doing to quickstart development in different languages/tools? A: Just use the templating capabilities of your editor. For vim, check out this example. Update: Which editor? The choice of editor is too deeply personal and reliant on individual preference for me to recommend any single editor. Pick a cross platform editor that is powerful enough (like Vim or Emacs), learn to really use it, and use it everywhere. This will improve your productivity beyond the gains templates will give you.
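For the "git repo of start code plus a script that pulls/renames/tweaks" idea floated in the question, a minimal sketch in Python could look like this (the layout and the {{name}} placeholder token are assumptions, not an established convention):

    # start_project.py - naive scaffolding: copy a starter tree, substitute a token.
    # Assumes the template holds only text files and no VCS metadata such as .git.
    import os, shutil, sys

    def new_project(template_dir, dest_dir, name):
        shutil.copytree(template_dir, dest_dir)   # dest_dir must not exist yet
        for root, dirs, files in os.walk(dest_dir):
            for fname in files:
                path = os.path.join(root, fname)
                text = open(path).read()
                open(path, 'w').write(text.replace('{{name}}', name))

    if __name__ == '__main__':
        new_project(sys.argv[1], sys.argv[2], sys.argv[3])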
Multi Language Starter Templates
I'm currently working through some code katas in multiple languages (Ruby, Perl, Python)/frameworks (Rails, Django, Mojo). It seems every time I start a new project from scratch I end up tweaking files to my liking, even after using things like newgem, module-starter, script/generate, startapp, etc. For those who program in many different languages, do you have some sort of toolset, scripts, etc that generate start code to your liking? I'm contemplating setting up git repo of all of my start code and some sort of script that pulls/renames/tweaks when starting new projects but I don't want to reinvent too many wheels. I've also considered making a personalized Textmate Bundle that does this and/or has custom snippets/template that have the same shortcut keys/commands across all the languages I use. It seems I'm also wasting brain time on trying to remember which command/snippet-tab combos are valid for the language/bundle I'm working in. What are other multi-programming language people doing to quickstart development in different languages/tools?
[ "Just use the templating capabilities of your editor.\nFor vim, check out this example.\nUpdate:\nWhich editor? The choice of editor is too deeply personal and reliant on individual preference for me to recommend any single editor. Pick a cross platform editor that is powerful enough (like Vim or Emacs), learn to really use it, and use it everywhere. This will improve your productivity beyond the gains templates will give you.\n" ]
[ 2 ]
[]
[]
[ "perl", "python", "ruby", "starter_kits", "templates" ]
stackoverflow_0001647614_perl_python_ruby_starter_kits_templates.txt
Q: How can I use Microsoft Word's spelling/grammar checker programmatically? I want to process a medium to large number of text snippets using a spelling/grammar checker to get a rough approximation and ranking of their "quality." Speed is not really of concern either, so I think the easiest way is to write a script that passes off the snippets to Microsoft Word (2007) and runs its spelling and grammar checker on them. Is there a way to do this from a script (specifically, Python)? What is a good resource for learning about controlling Word programmatically? If not, I suppose I can try something from Open Source Grammar Checker (SO). Update In response to Chris' answer, is there at least a way to a) open a file (containing the snippet(s)), b) run a VBA script from inside Word that calls the spelling and grammar checker, and c) return some indication of the "score" of the snippet(s)? Update 2 I've added an answer which seems to work, but if anyone has other suggestions I'll keep this question open for some time. A: It took some digging, but I think I found a useful solution. Following the advice at http://www.nabble.com/Edit-a-Word-document-programmatically-td19974320.html I'm using the win32com module (if the SourceForge link doesn't work, according to this Stack Overflow answer you can use pip to get the module), which allows access to Word's COM objects. The following code demonstrates this nicely: import win32com.client, os wdDoNotSaveChanges = 0 path = os.path.abspath('snippet.txt') snippet = 'Jon Skeet lieks ponies. I can haz reputashunz? ' snippet += 'This is a correct sentence.' file = open(path, 'w') file.write(snippet) file.close() app = win32com.client.gencache.EnsureDispatch('Word.Application') doc = app.Documents.Open(path) print "Grammar: %d" % (doc.GrammaticalErrors.Count,) print "Spelling: %d" % (doc.SpellingErrors.Count,) app.Quit(wdDoNotSaveChanges) which produces Grammar: 2 Spelling: 3 which match the results when invoking the check manually from Word.
How can I use Microsoft Word's spelling/grammar checker programmatically?
I want to process a medium to large number of text snippets using a spelling/grammar checker to get a rough approximation and ranking of their "quality." Speed is not really of concern either, so I think the easiest way is to write a script that passes off the snippets to Microsoft Word (2007) and runs its spelling and grammar checker on them. Is there a way to do this from a script (specifically, Python)? What is a good resource for learning about controlling Word programmatically? If not, I suppose I can try something from Open Source Grammar Checker (SO). Update In response to Chris' answer, is there at least a way to a) open a file (containing the snippet(s)), b) run a VBA script from inside Word that calls the spelling and grammar checker, and c) return some indication of the "score" of the snippet(s)? Update 2 I've added an answer which seems to work, but if anyone has other suggestions I'll keep this question open for some time.
[ "It took some digging, but I think I found a useful solution. Following the advice at http://www.nabble.com/Edit-a-Word-document-programmatically-td19974320.html I'm using the win32com module (if the SourceForge link doesn't work, according to this Stack Overflow answer you can use pip to get the module), which allows access to Word's COM objects. The following code demonstrates this nicely:\nimport win32com.client, os\n\nwdDoNotSaveChanges = 0\npath = os.path.abspath('snippet.txt')\n\nsnippet = 'Jon Skeet lieks ponies. I can haz reputashunz? '\nsnippet += 'This is a correct sentence.'\nfile = open(path, 'w')\nfile.write(snippet)\nfile.close()\n\napp = win32com.client.gencache.EnsureDispatch('Word.Application')\ndoc = app.Documents.Open(path)\nprint \"Grammar: %d\" % (doc.GrammaticalErrors.Count,)\nprint \"Spelling: %d\" % (doc.SpellingErrors.Count,)\n\napp.Quit(wdDoNotSaveChanges)\n\nwhich produces\n\nGrammar: 2\nSpelling: 3\n\nwhich match the results when invoking the check manually from Word.\n" ]
[ 9 ]
[]
[]
[ "com", "ms_word", "python", "win32com", "word_2007" ]
stackoverflow_0001646801_com_ms_word_python_win32com_word_2007.txt
Q: Python question on elementwise operation I have two lists of integers, a=[-1,0,-1,0,1] and b=[1], and I want to subtract b from a as an elementwise operation, but the answer should be a list containing the elements -1, 0 or 1 A: Maybe you mean this: def elementwise_subtraction_of_strings_of_integer(a, b): c = b * (len(a) // len(b)) return [aa - bb for aa, bb in zip(a, c)] if __name__ == '__main__': a=[-1,0,-1,0,1] b=[1] print elementwise_subtraction_of_strings_of_integer(a, b) It produces this: [-2, -1, -2, -1, 0] If this is not what you want, please rephrase the question as several commenters have suggested.
Python question on elementwise operation
I have two lists of integers, a=[-1,0,-1,0,1] and b=[1], and I want to subtract b from a as an elementwise operation, but the answer should be a list containing the elements -1, 0 or 1
[ "Maybe you mean this:\ndef elementwise_subtraction_of_strings_of_integer(a, b):\n c = b * (len(a) // len(b))\n return [aa - bb for aa, bb in zip(a, c)]\n\nif __name__ == '__main__':\n a=[-1,0,-1,0,1]\n b=[1]\n print elementwise_subtraction_of_strings_of_integer(a, b)\n\nIt produces this:\n[-2, -1, -2, -1, 0]\n\nIf this is not what you want, please rephrase the question as several commenters have suggested.\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0001647112_python.txt
Q: Python PyQT4 - Adding an unknown number of QComboBox widgets to QGridLayout I want to retrieve a list of people's names from a queue and, for each person, place a checkbox with their name to a QGridLayout using the addWidget() function. I can successfully place the items in a QListView, but they just write over the top of each other rather than creating a new row. Does anyone have any thoughts on how I could fix this? self.chk_People = QtGui.QListView() items = self.jobQueue.getPeopleOffQueue() for item in items: QtGui.QCheckBox('%s' % item, self.chk_People) self.jobQueue.getPeopleOffQueue() would return something like ['Bob', 'Sally', 'Jimmy'] if that helps. A: This line: QtGui.QCheckBox('%s' % item, self.chk_People) Doesn't add the check box to the list view, it only creates it with the list view as the parent, and there's a big difference. The simplest way to use a list view is the QListWidget convenience class. For that, create your checkboxes as instances of QListWidgetItem and then use addItem on the list widget to really add them to it. Did you have a problem adding them to a grid layout? Usually, if the amount of checkboxes you have is small a grid layout might be better - it all depends on how you want your app to look. But if you might have a lot of such objects, then a list widget/view is the best. A: I can't tell you the solution in PyQt but the structure you need to follow is the following. The QListView can do what you need, you don't need to create separate checkboxes. Create a subclass of QAbstractItemModel or QStandardItemModel (depending on how much coding you will want to do) override the flags() method to return the appropriate flags including Qt::ItemIsUserCheckable, in the columncount() method add an extra column to include the checkbox and in the data method for the column where you want your checkbox to appear return the checked state Qt::Checked, Qt::Unchecked for the Qt::CheckStateRole. This can also be accomplished using the QListWidget where you can use QListWidgetItem for adding data and do not need to create a model. On QListWidgetItem you can use setFlags() and setData(QVariant(bool, Qt::CheckStateRole) without having to subclass a model
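To make the QListWidget suggestion from the answers concrete, a rough, untested sketch (the helper name is illustrative):

    from PyQt4 import QtCore, QtGui

    def make_people_list(names, parent=None):
        # One checkable row per name, instead of free-floating QCheckBox widgets.
        lst = QtGui.QListWidget(parent)
        for name in names:
            item = QtGui.QListWidgetItem(name, lst)   # passing lst adds the item to it
            item.setFlags(QtCore.Qt.ItemIsUserCheckable | QtCore.Qt.ItemIsEnabled)
            item.setCheckState(QtCore.Qt.Unchecked)
        return lst

In the question's code this would be used as self.chk_People = make_people_list(self.jobQueue.getPeopleOffQueue()).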
Python PyQT4 - Adding an unknown number of QComboBox widgets to QGridLayout
I want to retrieve a list of people's names from a queue and, for each person, place a checkbox with their name to a QGridLayout using the addWidget() function. I can successfully place the items in a QListView, but they just write over the top of each other rather than creating a new row. Does anyone have any thoughts on how I could fix this? self.chk_People = QtGui.QListView() items = self.jobQueue.getPeopleOffQueue() for item in items: QtGui.QCheckBox('%s' % item, self.chk_People) self.jobQueue.getPeopleOffQueue() would return something like ['Bob', 'Sally', 'Jimmy'] if that helps.
[ "This line:\nQtGui.QCheckBox('%s' % item, self.chk_People)\n\nDoesn't add the check box to the list view, it only creates it with the list view as the parent, and there's a big difference. \nThe simplest way to use a list view is the QListWidget convenience class. For that, create your checkboxes as instances of QListWidgetItem and then use addItem on the list widget to really add them to it.\nDid you have a problem adding them to a grid layout? Usually, if the amount of checkboxes you have is small a grid layout might be better - it all depends on how you want your app to look. But if you might have a lot of such objects, then a list widget/view is the best.\n", "I can't tell you the solution in PyQt but the structure you need to follow is the following. The QListView can do what you need, you don't need to create separate checkboxes. Create a subclass of QAbstractItemModel or QStandardItemModel (depending on how much coding you will want to do) override the flags() method to return the appropriate flags including Qt::ItemIsUserCheckable, in the columncount() method add an extra column to include the checkbox and in the data method for the column where you want your checkbox to appear return the checked state Qt::Checked, Qt::Unchecked for the Qt::CheckStateRole. \nThis can also be accomplished using the QListWidget where you can use QListWidgetItem for adding data and do not need to create a model. On QListWidgetItem you can use setFlags() and setData(QVariant(bool, Qt::CheckStateRole) without having to subclass a model\n" ]
[ 1, 0 ]
[]
[]
[ "pyqt", "python", "qcombobox", "qlistview" ]
stackoverflow_0001647664_pyqt_python_qcombobox_qlistview.txt
Q: How to deploy Python to Windows users? I'm soon to launch a beta app that has the option to create custom integration scripts in Python. The app will target Mac OS X and Windows, and my problem is with Windows, where Python is normally not present. My current approach is to silently run the Python 2.6 installer. However, I face the problem that it is not activated by default and the path is not set when using the command-line options. And I fear that if Python was installed before and I upgrade it to a new version, this could break something else... So, I wonder how this can be done cleanly. Is it OK if I copy the whole Python 2.6 directory and put it in a sub-directory of my app, installing everything there? Or with virtualenv, is it possible to run different versions of Python (if Python is already installed on the machine)? I also played with embedding Python through a DLL before, and found it easy, but I lost the ability to debug, so I switched to command-line plug-ins: I execute the plug-ins from the command line and read the STDOUT and STDERR output. The app is made with Delphi/Lazarus. I install other modules like JSON and RPC clients, Win32com, ORM, etc. I create the installer with BitRock. UPDATE: The end-users are small business owners, and the Python scripts are made by developers. I want to avoid any additional step in the deployment, so I want a fully integrated setup. A: Copy a Portable Python folder out of your installer, into the same folder as your Delphi/Lazarus app. Set all paths appropriately for that. A: You might try using py2exe. It creates a .exe file with Python already included! A: Integrate the python interpreter into your Delphi app with P4D. These components actually work, and in both directions too (Delphi classes exposed to Python as binary extensions, and Python interpreter inside Delphi). I also saw a patch for Lazarus compatibility on the Google Code "issues" page, but it seems there might be some unresolved issues there. A: I think there's no problem combining .EXE packaging with a tool like PyInstaller or py2exe and Python-written plugins. The created .EXE can easily detect where it's installed and the code inside can then simply import files from some pre-determined plugin directory. Don't forget that once you package a Python script into an executable, it also packages the Python interpreter inside, so there you have it - a full Python environment customized with your own code.
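To make the py2exe route concrete, the classic minimal setup script looks roughly like this ('plugin_host.py' is a placeholder for whatever script drives the integration plug-ins):

    # setup.py - run on Windows with: python setup.py py2exe
    from distutils.core import setup
    import py2exe

    setup(console=['plugin_host.py'])

This produces a dist\ folder containing plugin_host.exe plus a private copy of the Python runtime, so nothing has to be installed system-wide on the user's machine.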
How to deploy Python to Windows users?
I'm soon to launch a beta app that has the option to create custom integration scripts in Python. The app will target Mac OS X and Windows, and my problem is with Windows, where Python is normally not present. My current approach is to silently run the Python 2.6 installer. However, I face the problem that it is not activated by default and the path is not set when using the command-line options. And I fear that if Python was installed before and I upgrade it to a new version, this could break something else... So, I wonder how this can be done cleanly. Is it OK if I copy the whole Python 2.6 directory and put it in a sub-directory of my app, installing everything there? Or with virtualenv, is it possible to run different versions of Python (if Python is already installed on the machine)? I also played with embedding Python through a DLL before, and found it easy, but I lost the ability to debug, so I switched to command-line plug-ins: I execute the plug-ins from the command line and read the STDOUT and STDERR output. The app is made with Delphi/Lazarus. I install other modules like JSON and RPC clients, Win32com, ORM, etc. I create the installer with BitRock. UPDATE: The end-users are small business owners, and the Python scripts are made by developers. I want to avoid any additional step in the deployment, so I want a fully integrated setup.
[ "Copy a Portable Python folder out of your installer, into the same folder as your Delphi/Lazarus app. Set all paths appropriately for that.\n", "You might try using py2exe. It creates a .exe file with Python already included!\n", "Integrate the python interpreter into your Delphi app with P4D. These components actually work, and in both directions too (Delphi classes exposed to Python as binary extensions, and Python interpreter inside Delphi). I also saw a patch for Lazarus compatibility on the Google Code \"issues\" page, but it seems there might be some unresolved issues there.\n", "I think there's no problem combining .EXE packaging with a tool like PyInstaller or py2exe and Python-written plugins. The created .EXE can easily detect where it's installed and the code inside can then simply import files from some pre-determined plugin directory. Don't forget that once you package a Python script into an executable, it also packages the Python interpreter inside, so there you have it - a full Python environment customized with your own code.\n" ]
[ 15, 13, 4, 1 ]
[]
[]
[ "delphi", "deployment", "lazarus", "python", "windows" ]
stackoverflow_0001646326_delphi_deployment_lazarus_python_windows.txt
Q: mod_python django logging problem I use logging settings as below in the settings.py file: logging.basicConfig(level=LOG_LEVEL, format=LOG_FORMAT); handler = logging.handlers.RotatingFileHandler( LOG_FILE_PATH, 'a', LOG_FILE_SIZE,LOG_FILE_NUM ); formatter = logging.Formatter ( LOG_FORMAT ); handler.setFormatter(formatter); logging.getLogger().addHandler(handler) and I use mod_python with apache2. The problem is: when the log rotates, I get many log files created at the same time. For example, I set 5 worker processes in Apache, and I get log.1, log.2 ... log.5 when it rotates. Any suggestions? A: RotatingFileHandler is not designed to work in a multiprocess system. Each of your processes notices that the file is too large and starts a new log, so you get up to 5 new logs. It's not easy to implement this properly: you would have to obtain an interprocess lock before creating the new file and inform each process to reopen it. You'd better use external rotation (provided with your OS) with a server restart, or set up a single-process logging server.
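A rough sketch of the single-process logging server idea from the answer: each Apache worker forwards records over TCP, and one receiving process owns (and rotates) the file. Only the client side is shown; the receiving end can be the socket-server example from the Python logging documentation.

    import logging, logging.handlers

    # In settings.py: send records to a log server instead of writing files locally.
    socket_handler = logging.handlers.SocketHandler(
        'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    logging.getLogger().addHandler(socket_handler)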
mod_python django logging problem
I use logging settings as below in the settings.py file: logging.basicConfig(level=LOG_LEVEL, format=LOG_FORMAT); handler = logging.handlers.RotatingFileHandler( LOG_FILE_PATH, 'a', LOG_FILE_SIZE,LOG_FILE_NUM ); formatter = logging.Formatter ( LOG_FORMAT ); handler.setFormatter(formatter); logging.getLogger().addHandler(handler) and I use mod_python with apache2. The problem is: when the log rotates, I get many log files created at the same time. For example, I set 5 worker processes in Apache, and I get log.1, log.2 ... log.5 when it rotates. Any suggestions?
[ "RotatingFileHandler is not designed to work in multiprocess system. Each process you have notice that file is too large and starts new log, so you get up to 5 new logs. It's not as easy to implement it properly: you have to obtain interprocess lock before creating new file and inform each process to reopen it. You'd better use external (provided with your OS) rotation with server restart or setup single-process logging server.\n" ]
[ 2 ]
[]
[]
[ "django", "logging", "python" ]
stackoverflow_0001647974_django_logging_python.txt
Q: How to encode an RSA key using PKCS12 in Python? I'm using Python (under Google App Engine), and I have some RSA private keys that I need to export in PKCS#12 format. Is there anything out there that will assist me with this? I'm using PyCrypto/KeyCzar, and I've figured out how to import/export RSA keys in PKCS8 format, but I really need it in PKCS12. Can anybody point me in the right direction? If it helps, the reason I need them in PKCS12 format is so that I can import them on the iPhone, which seems to only allow key-import in that format. A: If you can handle some ASN.1 generation, you can relatively easily convert a PKCS#8-file into a PKCS#12-file. A PKCS#12-file is basically a wrapper around a PKCS#8 and a certificate, so to make a PKCS#12-file, you just have to add some additional data around your PKCS#8-file and your certificate. Usually a PKCS#12-file will contain the certificate(s) in an encrypted structure, but all compliant parsers should be able to read it from an unencrypted structure. Also, PKCS#12-files will usually contain a MacData-structure for integrity-check, but this is optional and a compliant parser should work fine without it. A: The standard tool for the job is typically OpenSSL. See the openssl pkcs12 command. A: This mailing list posting tends to suggest that PKCS12 is not planned for a future feature of that package, and is not currently implemented. http://lists.dlitz.net/pipermail/pycrypto/2009q2/000104.html
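To make the OpenSSL suggestion concrete: the usual way to bundle a PEM private key and certificate into PKCS#12 is openssl pkcs12 -export. A sketch that shells out to it from Python follows; note this cannot run inside the App Engine sandbox (no subprocess there), so it would have to be an offline preprocessing step, and the file names and password are placeholders.

    import subprocess

    subprocess.check_call([
        'openssl', 'pkcs12', '-export',
        '-inkey', 'private_key.pem',   # the RSA key exported from PyCrypto/Keyczar
        '-in', 'certificate.pem',      # a certificate matching that key
        '-out', 'bundle.p12',
        '-passout', 'pass:changeit',   # import password the iPhone will prompt for
    ])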
How to encode an RSA key using PKCS12 in Python?
I'm using Python (under Google App Engine), and I have some RSA private keys that I need to export in PKCS#12 format. Is there anything out there that will assist me with this? I'm using PyCrypto/KeyCzar, and I've figured out how to import/export RSA keys in PKCS8 format, but I really need it in PKCS12. Can anybody point me in the right direction? If it helps, the reason I need them in PKCS12 format is so that I can import them on the iPhone, which seems to only allow key-import in that format.
[ "If you can handle some ASN.1 generation, you can relatively easily convert a PKCS#8-file into a PKCS#12-file. A PKCS#12-file is basically a wrapper around a PKCS#8 and a certificate, so to make a PKCS#12-file, you just have to add some additional data around your PKCS#8-file and your certificate.\nUsually a PKCS#12-file will contain the certificate(s) in an encrypted structure, but all compliant parsers should be able to read it from an unencrypted structure. Also, PKCS#12-files will usually contain a MacData-structure for integrity-check, but this is optional and a compliant parser should work fine without it.\n", "The standard tool for the job is typically OpenSSL.\nSee the openssl pkcs12 command.\n", "This mailing list posting tends to suggest that PKCS12 is not planned for a future feature of that package, and is not currently implemented.\nhttp://lists.dlitz.net/pipermail/pycrypto/2009q2/000104.html\n" ]
[ 2, 0, 0 ]
[]
[]
[ "cryptography", "google_app_engine", "pkcs#12", "python", "rsa" ]
stackoverflow_0001647568_cryptography_google_app_engine_pkcs#12_python_rsa.txt
Q: Google wave robot inline reply I've been working on my first robot for google wave recently, a vital part of what it does is to insert inline replies into a blip. I can't for the life of me figure out how to do this! The API docs have a function InsertInlineBlip which sounded promising, however calling that doesn't appear to do anything! EDIT:: It seems that this is a known bug. However, the question still stands what is the correct way to insert an inline blip? I'm assuming something like this: inline = blip.GetDocument().InsertInlineBlip(positionInText) inline.GetDocument().SetText("some text") A: If you look at the sourcecode for OpBasedDocument.InsertInlineBlip() you will see the following: 412 - def InsertInlineBlip(self, position): 413 """Inserts an inline blip into this blip at a specific position. 414 415 Args: 416 position: Position to insert the blip at. 417 418 Returns: 419 The JSON data of the blip that was created. 420 """ 421 blip_data = self.__context.builder.DocumentInlineBlipInsert( 422 self._blip.waveId, 423 self._blip.waveletId, 424 self._blip.blipId, 425 position) 426 # TODO(davidbyttow): Add local blip element. 427 return self.__context.AddBlip(blip_data) I think the TODO comment suggests this feature is not yet active. The method should be callable and return correctly, however I suspect that the document operation is not applied to the global document. The syntax you included in your post looks correct. As you can see above, InsertInlineBlip() returns the value of AddBlip(), which is ...dun, dun, dun... a blip. 543 - def AddBlip(self, blip_data): 544 """Adds a transient blip based on the data supplied. 545 546 Args: 547 blip_data: JSON data describing this blip. 548 549 Returns: 550 An OpBasedBlip that may have operations applied to it. 551 """ 552 blip = OpBasedBlip(blip_data, self) 553 self.blips[blip.GetId()] = blip 554 return blip EDIT: It is interesting to note that the method signature of the Insert method InsertInlineBlip(self, position) is significantly different from the Insert method InsertElement(self, position, element). InsertInlineBlip() doesn't take an element parameter to insert. It seems the current logic for InsertInlineBlip() is more like Blip.CreateChild(), which returns a new child blip with which to work. From this we can suspect that this API will change as the functionality is added. A: It could be a possible bug. A: This appears to have previously been a bug, however, an update today has hopefully fixed it: http://code.google.com/p/google-wave-resources/wiki/WaveAPIsChangeLog
Google wave robot inline reply
I've been working on my first robot for google wave recently, a vital part of what it does is to insert inline replies into a blip. I can't for the life of me figure out how to do this! The API docs have a function InsertInlineBlip which sounded promising, however calling that doesn't appear to do anything! EDIT:: It seems that this is a known bug. However, the question still stands what is the correct way to insert an inline blip? I'm assuming something like this: inline = blip.GetDocument().InsertInlineBlip(positionInText) inline.GetDocument().SetText("some text")
[ "If you look at the sourcecode for OpBasedDocument.InsertInlineBlip() you will see the following:\n 412 - def InsertInlineBlip(self, position): \n 413 \"\"\"Inserts an inline blip into this blip at a specific position. \n 414 \n 415 Args: \n 416 position: Position to insert the blip at. \n 417 \n 418 Returns: \n 419 The JSON data of the blip that was created. \n 420 \"\"\" \n 421 blip_data = self.__context.builder.DocumentInlineBlipInsert( \n 422 self._blip.waveId, \n 423 self._blip.waveletId, \n 424 self._blip.blipId, \n 425 position) \n 426 # TODO(davidbyttow): Add local blip element. \n 427 return self.__context.AddBlip(blip_data) \n\nI think the TODO comment suggests this feature is not yet active. The method should be callable and return correctly, however I suspect that the document operation is not applied to the global document.\nThe syntax you included in your post looks correct. As you can see above, InsertInlineBlip() returns the value of AddBlip(), which is ...dun, dun, dun... a blip.\n 543 - def AddBlip(self, blip_data): \n 544 \"\"\"Adds a transient blip based on the data supplied. \n 545 \n 546 Args: \n 547 blip_data: JSON data describing this blip. \n 548 \n 549 Returns: \n 550 An OpBasedBlip that may have operations applied to it. \n 551 \"\"\" \n 552 blip = OpBasedBlip(blip_data, self) \n 553 self.blips[blip.GetId()] = blip \n 554 return blip \n\nEDIT:\nIt is interesting to note that the method signature of the Insert method InsertInlineBlip(self, position) is significantly different from the Insert method InsertElement(self, position, element). InsertInlineBlip() doesn't take an element parameter to insert. It seems the current logic for InsertInlineBlip() is more like Blip.CreateChild(), which returns a new child blip with which to work. From this we can suspect that this API will change as the functionality is added.\n", "It could be a possible bug.\n", "This appears to have previously been a bug, however, an update today has hopefully fixed it:\nhttp://code.google.com/p/google-wave-resources/wiki/WaveAPIsChangeLog\n" ]
[ 4, 2, 1 ]
[]
[]
[ "google_wave", "python" ]
stackoverflow_0001561655_google_wave_python.txt
Q: How to configure format of Python 2.3 logging messages? In Python 2.4 and later, configuring the logging module to have a more basic formatting is easy: logging.basicConfig(level=opts.LOGLEVEL, format="%(message)s") but for applications which need to support Python 2.3 it seems more difficult, because the logging API was overhauled in Py2.4. In particular, basicConfig doesn't take any arguments. Trying a variation on the sole example in the Py2.3 documentation, I get this: try: logging.basicConfig(level=opts.LOGLEVEL, format="%(message)s") except: logging.getLogger().setLevel(opts.LOGLEVEL) h = logging.StreamHandler() h.setFormatter(logging.Formatter("%(message)s")) logging.getLogger().addHandler(h) but calling this root logger in Py2.3, e.g. logging.info("Foo") gives duplicated output: Foo INFO:root:Foo I can't find a way to modify the format of the existing handler on the root logger in Py2.3 (the "except" block above), hence the "addHandler" call that's producing the duplicated output. Is there a way to set the format of the root logger without this duplication? Thanks! A: A bare except: without exception class[es] is a good way to get into trouble. I believe the logging module in Python 2.3 has a basicConfig() function, but with fewer options. Since it accepts **kwargs it may fail at any moment after doing some of its job. I think it already installed a handler with the default format, then failed to configure something. After catching the exception you installed another handler. Having 2 handlers, you get 2 messages for each event. The simplest way in your case: avoid using basicConfig() at all and configure logging manually. And never use a bare except: unless you reraise or log the caught exception.
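Since the answer stops at "configure logging manually", here is a minimal sketch of what that looks like, reusing opts.LOGLEVEL and the format string from the question; it avoids basicConfig() completely, so it behaves the same on 2.3 and 2.4:

    import logging

    root = logging.getLogger()
    root.setLevel(opts.LOGLEVEL)        # opts comes from the surrounding program
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(message)s"))
    root.addHandler(handler)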
How to configure format of Python 2.3 logging messages?
In Python 2.4 and later, configuring the logging module to have a more basic formatting is easy: logging.basicConfig(level=opts.LOGLEVEL, format="%(message)s") but for applications which need to support Python 2.3 it seems more difficult, because the logging API was overhauled in Py2.4. In particular, basicConfig doesn't take any arguments. Trying a variation on the sole example in the Py2.3 documentation, I get this: try: logging.basicConfig(level=opts.LOGLEVEL, format="%(message)s") except: logging.getLogger().setLevel(opts.LOGLEVEL) h = logging.StreamHandler() h.setFormatter(logging.Formatter("%(message)s")) logging.getLogger().addHandler(h) but calling this root logger in Py2.3, e.g. logging.info("Foo") gives duplicated output: Foo INFO:root:Foo I can't find a way to modify the format of the existing handler on the root logger in Py2.3 (the "except" block above), hence the "addHandler" call that's producing the duplicated output. Is there a way to set the format of the root logger without this duplication? Thanks!
[ "except: without exception class[es] is a good way to get in trouble. I believe logging module in Python 2.3 has basicConfig() function, but with less options. Since it accepts **kwargs it may fail at any moment after doing some job. I think it already installed a handler with default format then failed to configure something. After catching exception you have installed another handler. Having 2 handlers you get 2 messages for each event. The simplest way in your case: avoid using basicConfig() at all and configure logging manually. And never use except: if you don't reraise or log caught exception.\n" ]
[ 4 ]
[]
[]
[ "formatting", "legacy", "logging", "python" ]
stackoverflow_0001646470_formatting_legacy_logging_python.txt
Q: Python library path I have a Python file "testHTTPAuth.py" which uses the module deliciousapi, which is kept in "deliciousapi.py". I have laid the files out like this: testHTTPAuth.py lib deliciousapi.py But when I run "python testHTTPAuth.py" it gives the error: import deliciousapi ImportError: No module named deliciousapi How can I handle these Python libraries? Later I have to put the code together with the libraries as a Google App Engine app, so I can't keep the library in the normal library path. A: You need to add the 'lib' directory to your path - otherwise, Python can't find your source. The following (included in a module such as testHTTPAuth.py) will do that: import os, sys sys.path.append(os.path.join(os.path.dirname(__file__), 'lib')) Ned's suggestion of changing your imports may work, but if anything in the lib directory imports submodules with absolute paths (most large modules do this), then it'll break. A: If you add an empty __init__.py to your lib directory, you can change your import statement to: from lib import deliciousapi
Python library path
I have a Python file "testHTTPAuth.py" which uses the module deliciousapi, which is kept in "deliciousapi.py". I have laid the files out like this: testHTTPAuth.py lib deliciousapi.py But when I run "python testHTTPAuth.py" it gives the error: import deliciousapi ImportError: No module named deliciousapi How can I handle these Python libraries? Later I have to put the code together with the libraries as a Google App Engine app, so I can't keep the library in the normal library path.
[ "You need to add the 'lib' directory to your path - otherwise, Python can't find your source. The following (included in a module such as testHTTPAuth.py) will do that:\nsys.path.append(os.path.join(os.path.dirname(__file__), 'lib')\n\nNed's suggestion of changing your imports may work, but if anything in the lib directory imports submodules with absolute paths (most large modules do this), then it'll break.\n", "If you add an empty __init__.py to your lib directory, you can change your import statement to:\nfrom lib import deliciousapi\n\n" ]
[ 9, 1 ]
[]
[]
[ "google_app_engine", "python", "shared_libraries" ]
stackoverflow_0001649186_google_app_engine_python_shared_libraries.txt
Q: Open Source Profiling Frameworks? Have you ever wanted to test and quantitatively show whether your application would perform better as a static build or shared build, stripped or non-stripped, upx or no upx, gcc -O2 or gcc -O3, hash or btree, etc etc. If so this is the thread for you. There are hundreds of ways to tune an application, but how do we collect, organize, process, visualize the consequences of each experiment. I have been looking for several months for an open source application performance engineering/profiling framework similar in concept to Mozilla's Perftastic where I can develop/build/test/profile hundreds of incarnations of different tuning experiments. Some requirements: Platform SUSE32 and SUSE64 Data Format Very flexible, compact, simple, hierarchical. There are several possibilities including Custom CSV RRD Protocol Buffers JSON No XML. There is lots of data and XML is tooo verbose Data Acquisition Flexible and Customizable plugins. There is lots of data to collect from the application including performance data from /proc, sys time, wall time, cpu utilization, memory profile, leaks, valgrind logs, arena fragmentation, I/O, localhost sockets, binary size, open fds, etc. And some from the host system. My language of choice for this is Python, and I would develop these plugins to monitor and/or parse data in all different formats and store them in the data format of the framework. Tagging All experiments would be tagged including data like GCC version and compile options, platform, host, app options, experiment, build tag, etc. Graphing History, Comparative, Hierarchical, Dynamic and Static. The application builds are done by a custom CI sever which releases a new app version several times per day the last 3 years straight. This is why we need a continuous trend analysis. When we add new features, make bug fixes, change build options, we want to automatically gather profiling data and see the trend. This is where generating various static builds is needed. For analysis Mozilla dynamic graphs are great for doing comparative graphing. It would be great to have comparative graphing between different tags. For example compare N build versions, compare platforms, compare build options, etc. We have a test suite of 3K tests, data will be gathered per test, and grouped from inter-test data, to per test, to per tagged group, to complete regression suite. Possibilities include RRDTool, Orca, Graphite Analysis on a grouping basis Min Max Median Avg Standard Deviation etc Presentation All of this would be presented and controlled through a app server, preferably Django or TG would be best. Inspiration Centreon Cacti A: There was a talk at PyCon this week discussing the various profiling methods on Python today. I don't think anything is as complete as what your looking for, but it may be worth a look. http://us.pycon.org/2009/conference/schedule/event/15/ You should be able to find the actual talk later this week on blip.tv http://blip.tv/search?q=pycon&x=0&y=0 A: I'm not sure what your question is precisely, but for profiling Java (web)applications you can use the netbeans profiler and profiler4j (available on sourceforge). I have used both and can recommend them over eclipse tptp. See How to set up Eclipse TPTP and http://profiler4j.sourceforge.net/ edit: Sorry, just noticed you tagged this as Python question, so this must not be a valid answer for you. 
A: You may have to build what you're looking for, but you might start from Valgrind Luke Stackwalker lots of other open-source projects Also, when the purpose is not so much to measure performance as to improve it, you might get some ideas from this, where this is an example of its use.
Open Source Profiling Frameworks?
Have you ever wanted to test and quantitatively show whether your application would perform better as a static build or shared build, stripped or non-stripped, upx or no upx, gcc -O2 or gcc -O3, hash or btree, etc etc. If so this is the thread for you. There are hundreds of ways to tune an application, but how do we collect, organize, process, visualize the consequences of each experiment. I have been looking for several months for an open source application performance engineering/profiling framework similar in concept to Mozilla's Perftastic where I can develop/build/test/profile hundreds of incarnations of different tuning experiments. Some requirements: Platform SUSE32 and SUSE64 Data Format Very flexible, compact, simple, hierarchical. There are several possibilities including Custom CSV RRD Protocol Buffers JSON No XML. There is lots of data and XML is tooo verbose Data Acquisition Flexible and Customizable plugins. There is lots of data to collect from the application including performance data from /proc, sys time, wall time, cpu utilization, memory profile, leaks, valgrind logs, arena fragmentation, I/O, localhost sockets, binary size, open fds, etc. And some from the host system. My language of choice for this is Python, and I would develop these plugins to monitor and/or parse data in all different formats and store them in the data format of the framework. Tagging All experiments would be tagged including data like GCC version and compile options, platform, host, app options, experiment, build tag, etc. Graphing History, Comparative, Hierarchical, Dynamic and Static. The application builds are done by a custom CI sever which releases a new app version several times per day the last 3 years straight. This is why we need a continuous trend analysis. When we add new features, make bug fixes, change build options, we want to automatically gather profiling data and see the trend. This is where generating various static builds is needed. For analysis Mozilla dynamic graphs are great for doing comparative graphing. It would be great to have comparative graphing between different tags. For example compare N build versions, compare platforms, compare build options, etc. We have a test suite of 3K tests, data will be gathered per test, and grouped from inter-test data, to per test, to per tagged group, to complete regression suite. Possibilities include RRDTool, Orca, Graphite Analysis on a grouping basis Min Max Median Avg Standard Deviation etc Presentation All of this would be presented and controlled through a app server, preferably Django or TG would be best. Inspiration Centreon Cacti
[ "There was a talk at PyCon this week discussing the various profiling methods on Python today. I don't think anything is as complete as what you're looking for, but it may be worth a look.\nhttp://us.pycon.org/2009/conference/schedule/event/15/\nYou should be able to find the actual talk later this week on blip.tv\nhttp://blip.tv/search?q=pycon&x=0&y=0\n", "I'm not sure what your question is precisely, but for profiling Java (web)applications you can use the netbeans profiler and profiler4j (available on sourceforge). I have used both and can recommend them over eclipse tptp.\nSee How to set up Eclipse TPTP\nand http://profiler4j.sourceforge.net/\nedit: Sorry, just noticed you tagged this as a Python question, so this must not be a valid answer for you.\n", "You may have to build what you're looking for, but you might start from\n\nValgrind\nLuke Stackwalker\nlots of other open-source projects\n\nAlso, when the purpose is not so much to measure performance as to improve it, you might get some ideas from this, where this is an example of its use.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "profiling", "python" ]
stackoverflow_0000224735_profiling_python.txt
Q: Overriding Django views with decorators I have a situation that requires redirecting users who are already logged in away from the login page to another page. I have seen mention that this can be accomplished with decorators which makes sense, but I am fairly new to using them. However, I am using the django login and a third party view (from django-registration). I do not want to change any of the code in django.contrib.auth or django-registration. How can I apply a decorator to a view that is not to be modified in order to get the desired behavior. Thanks in advance! UPDATE: I discovered that I mistakenly associated the login function with the registration module. django-registration has nothing to do with this issue. However, I still need to be able to override default login() behavior. Any thoughts? A: Three more ways to do it, though you'll need to use your own urlconf for these: Add the decorator to the view directly in the urlconf: ... (regexp, decorator(view)), ... You need to import the view and the decorator into the urlconf though, which is why I don't like this one. I prefer to have as few imports in my urls.py's as possible. Import the view into an <app>/views.py and add the decorator there: import view view = decorator(view) Pretty much like Vinay's method though more explicit since you need an urlconf for it. Wrap the view in a new view: import view @decorator def wrapperview(request, *args, **kwargs): ... other stuff ... return view(request, *args, **kwargs) The last one is very handy when you need to change generic views. This is what I often end up doing anyway. Whenever you use an urlconf, order of patterns matter, so you might need to shuffle around on which pattern gets called first. A: If you have the decorator function and you know which view in django-registration you want to decorate, you could just do registration.view_func = decorator_func(registration.view_func) where registration is the module in django-registration which contains the view function you want to decorate, view_func is the view function you want to decorate, and decorator_func is the decorator.
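Putting the urlconf-wrapping idea to the question's concrete case, a minimal sketch that sends already-authenticated users away from the stock login view; it assumes Django 1.x function-based views, and the redirect target '/' is arbitrary.

from django.contrib.auth.views import login
from django.http import HttpResponseRedirect

def anonymous_required(view_func, redirect_to='/'):
    # Redirect logged-in users; otherwise fall through to the wrapped view.
    def wrapper(request, *args, **kwargs):
        if request.user.is_authenticated():
            return HttpResponseRedirect(redirect_to)
        return view_func(request, *args, **kwargs)
    return wrapper

# In urls.py, point the pattern at the wrapped view instead of login itself:
# (r'^accounts/login/$', anonymous_required(login)),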
Overriding Django views with decorators
I have a situation that requires redirecting users who are already logged in away from the login page to another page. I have seen mention that this can be accomplished with decorators which makes sense, but I am fairly new to using them. However, I am using the django login and a third party view (from django-registration). I do not want to change any of the code in django.contrib.auth or django-registration. How can I apply a decorator to a view that is not to be modified in order to get the desired behavior. Thanks in advance! UPDATE: I discovered that I mistakenly associated the login function with the registration module. django-registration has nothing to do with this issue. However, I still need to be able to override default login() behavior. Any thoughts?
[ "Three more ways to do it, though you'll need to use your own urlconf for these:\n\nAdd the decorator to the view directly in the urlconf:\n...\n(regexp, decorator(view)),\n...\n\nYou need to import the view and the decorator into the urlconf though, which is why I don't like this one. I prefer to have as few imports in my urls.py's as possible.\nImport the view into an <app>/views.py and add the decorator there:\nimport view\n\nview = decorator(view)\n\nPretty much like Vinay's method though more explicit since you need an urlconf for it.\nWrap the view in a new view:\nimport view\n\n@decorator\ndef wrapperview(request, *args, **kwargs):\n ... other stuff ...\n return view(request, *args, **kwargs)\n\nThe last one is very handy when you need to change generic views. This is what I often end up doing anyway.\n\nWhenever you use an urlconf, order of patterns matter, so you might need to shuffle around on which pattern gets called first.\n", "If you have the decorator function and you know which view in django-registration you want to decorate, you could just do\nregistration.view_func = decorator_func(registration.view_func)\n\nwhere registration is the module in django-registration which contains the view function you want to decorate, view_func is the view function you want to decorate, and decorator_func is the decorator.\n" ]
[ 6, 2 ]
[]
[]
[ "django", "django_views", "python" ]
stackoverflow_0001649351_django_django_views_python.txt
Q: How to handle multiple Set-Cookie header in HTTP response I'm trying to write a simple proxy server for some purpose. In it I use httplib to access a remote web-server. But there's one problem: the web server returns TWO Set-Cookie headers in one response, and httplib mangles them together in httplib.HTTPResponse.getheaders(), effectively joining the cookies with a comma (which is strange, because getheaders returns a LIST, not a DICT, so I thought it was written with multiple headers of the same name in mind). So, when I send this joined header back to the client, it confuses the client. How can I obtain the full list of headers in httplib (without just splitting the Set-Cookie header on commas)? A: HTTPResponse.getheaders() returns a list of combined headers (actually by calling dict.items()). The only place where incoming headers are stored untouched is HTTPResponse.msg.headers.
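A short sketch of the workaround the answer points to, for Python 2's httplib; HTTPResponse.msg is an rfc822-style message object whose getheaders() returns one entry per raw header line rather than a comma-joined value (the host name is a placeholder).

import httplib

conn = httplib.HTTPConnection('example.com')
conn.request('GET', '/')
resp = conn.getresponse()
for cookie in resp.msg.getheaders('Set-Cookie'):
    print(cookie)  # each Set-Cookie line arrives separately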
How to handle multiple Set-Cookie header in HTTP response
I'm trying to write a simple proxy server for some purpose. In it I use httplib to access a remote web-server. But there's one problem: the web server returns TWO Set-Cookie headers in one response, and httplib mangles them together in httplib.HTTPResponse.getheaders(), effectively joining the cookies with a comma (which is strange, because getheaders returns a LIST, not a DICT, so I thought it was written with multiple headers of the same name in mind). So, when I send this joined header back to the client, it confuses the client. How can I obtain the full list of headers in httplib (without just splitting the Set-Cookie header on commas)?
[ "HTTPResponse.getheaders() returns a list of combined headers (actually by calling dict.items()). The only place where incoming headers are stored untouched is HTTPResponse.msg.headers.\n" ]
[ 4 ]
[]
[]
[ "httplib", "python" ]
stackoverflow_0001649401_httplib_python.txt
Q: How to test for multiple command line arguments (sys.argv) I want to test against multiple command line arguments in a loop > python Read_xls_files.py group1 group2 group3 Now this code tests only for the first one (group1). hlo = [] for i in range(len(sh.col_values(8))): if sh.cell(i, 1).value == sys.argv[1]: hlo.append(sh.cell(i, 8).value) How should I modify this so that I can test against one, two or all of these arguments? So, if there is group1 in one sh.cell(i, 1), the list is appended and if there is group1, group2 etc., the hlo is appended. A: You can iterate over sys.argv[1:], e.g. via something like: for grp in sys.argv[1:]: for i in range(len(sh.col_values(8))): if sh.cell(i, 1).value == grp: hlo.append(sh.cell(i, 8).value) A: outputList = [x for x in values if x in sys.argv[1:]] Substitute the bits that are relevant for your (spreadsheet?) situation. This is a list comprehension. You can also investigate the optparse module which has been in the standard library since 2.3. A: I would recommend taking a look at Python's optparse module. It's a nice helper to parse sys.argv. A: argparse is another powerful, easy to use module that parses sys.argv for you. Very useful for creating command line scripts. A: I believe this would work, and would avoid iterating over sys.argv: hlo = [] for i in range(len(sh.col_values(8))): if sh.cell(i, 1).value in sys.argv[1:]: hlo.append(sh.cell(i, 8).value) A: # First thing is to get a set of your query strings. queries = set(argv[1:]) # If using optparse or argparse, queries = set(something_else) hlo = [] for i in range(len(sh.col_values(8))): if sh.cell(i, 1).value in queries: hlo.append(sh.cell(i, 8).value) === end of answer to question === Aside: the OP is using xlrd ... here are a couple of performance hints. Doesn't matter too much with this simple example, but if you are going to do a lot of coordinate-based accessing of cell values, you can do better than that by using Sheet.cell_value(rowx, colx) instead of Sheet.cell(rowx, colx).value which builds a Cell object on the fly: queries = set(argv[1:]) hlo = [] for i in range(sh.nrows): # all columns have the same size if sh.cell_value(i, 1) in queries: hlo.append(sh.cell_value(i, 8)) or you could use a list comprehension along with the Sheet.col_values(colx) method: hlo = [ v8 for v1, v8 in zip(sh.col_values(1), sh.col_values(8)) if v1 in queries ]
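Two of the answers recommend argparse without showing it; a minimal sketch (argparse ships with Python 2.7 and 3.2 onwards, and the argument name 'groups' is arbitrary):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('groups', nargs='+', help='one or more group names to match')
args = parser.parse_args()
queries = set(args.groups)  # drop-in replacement for set(sys.argv[1:])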
How to test for multiple command line arguments (sys.argv)
I want to test against multiple command line arguments in a loop > python Read_xls_files.py group1 group2 group3 Now this code tests only for the first one (group1). hlo = [] for i in range(len(sh.col_values(8))): if sh.cell(i, 1).value == sys.argv[1]: hlo.append(sh.cell(i, 8).value) How should I modify this so that I can test against one, two or all of these arguments? So, if there is group1 in one sh.cell(i, 1), the list is appended and if there is group1, group2 etc., the hlo is appended.
[ "You can iterate over sys.argv[1:], e.g. via something like:\nfor grp in sys.argv[1:]:\n for i in range(len(sh.col_values(8))):\n if sh.cell(i, 1).value == grp:\n hlo.append(sh.cell(i, 8).value)\n\n", "outputList = [x for x in values if x in sys.argv[1:]]\n\nSubstitute the bits that are relevant for your (spreadsheet?) situation. This is a list comprehension. You can also investigate the optparse module which has been in the standard library since 2.3.\n", "I would recommend taking a look at Python's optparse module. It's a nice helper to parse sys.argv.\n", "argparse is another powerful, easy to use module that parses sys.argv for you. Very useful for creating command line scripts.\n", "I believe this would work, and would avoid iterating over sys.argv:\nhlo = []\nfor i in range(len(sh.col_values(8))):\n if sh.cell(i, 1).value in sys.argv[1:]:\n hlo.append(sh.cell(i, 8).value)\n\n", "# First thing is to get a set of your query strings.\nqueries = set(argv[1:])\n# If using optparse or argparse, queries = set(something_else)\nhlo = []\nfor i in range(len(sh.col_values(8))):\n if sh.cell(i, 1).value in queries:\n hlo.append(sh.cell(i, 8).value)\n\n=== end of answer to question ===\nAside: the OP is using xlrd ... here are a couple of performance hints.\nDoesn't matter too much with this simple example, but if you are going to do a lot of coordinate-based accessing of cell values, you can do better than that by using Sheet.cell_value(rowx, colx) instead of Sheet.cell(rowx, colx).value which builds a Cell object on the fly:\nqueries = set(argv[1:])\nhlo = []\nfor i in range(sh.nrows): # all columns have the same size\n if sh.cell_value(i, 1) in queries:\n hlo.append(sh.cell_value(i, 8))\n\nor you could use a list comprehension along with the Sheet.col_values(colx) method:\nhlo = [\n v8\n for v1, v8 in zip(sh.col_values(1), sh.col_values(8))\n if v1 in queries\n ]\n\n" ]
[ 7, 3, 2, 1, 0, 0 ]
[]
[]
[ "excel", "loops", "python" ]
stackoverflow_0001643643_excel_loops_python.txt
Q: How can I send an iCalendar email attachment with Django? I want to send an iCalendar http://en.wikipedia.org/wiki/ICalendar email attachment using Django. Is there an open source library to build an iCalendar file in Python and/or available for Django? A: As stated before, there is vobject, that is working fine (I have used it recently). You can find good information about ical, vobject and django in this blog post : http://blog.thescoop.org/archives/2007/07/31/django-ical-and-vobject/ A: I've used MaxM's icalendar module. It can build and parse iCalendar files. A: There's also vobject which was developed for the Chandler project and seems to be more actively maintained. It's also BSD-licensed which might be important for your use case.
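To make the vobject suggestion concrete, a minimal sketch that builds one event and attaches it to a Django email; the summary, date and addresses are made up for illustration.

import datetime
import vobject
from django.core.mail import EmailMessage

cal = vobject.iCalendar()
event = cal.add('vevent')
event.add('summary').value = 'Project kickoff'
event.add('dtstart').value = datetime.datetime(2009, 11, 2, 9, 0)

msg = EmailMessage('Invitation', 'See the attached event.',
                   'from@example.com', ['to@example.com'])
msg.attach('invite.ics', cal.serialize(), 'text/calendar')
msg.send()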
How can I send an iCalendar email attachment with Django?
I want to send an iCalendar http://en.wikipedia.org/wiki/ICalendar email attachment using Django. Is there an open source library to build an iCalendar file in Python and/or available for Django?
[ "As stated before, there is vobject, that is working fine (I have used it recently). \nYou can find good information about ical, vobject and django in this blog post : \nhttp://blog.thescoop.org/archives/2007/07/31/django-ical-and-vobject/\n", "I've used MaxM's icalendar module. It can build and parse iCalendar files. \n", "There's also vobject which was developed for the Chandler project and seems to be more actively maintained. It's also BSD-licensed which might be important for your use case.\n" ]
[ 7, 4, 3 ]
[]
[]
[ "django", "icalendar", "python" ]
stackoverflow_0001647597_django_icalendar_python.txt
Q: Generating a dynamic time delta: python Here's my situation: import foo, bar, etc frequency = ["hours","days","weeks"] class geoProcessClass(): def __init__(self,geoTaskHandler,startDate,frequency,frequencyMultiple=1,*args): self.interval = self.__determineTimeDelta(frequency,frequencyMultiple) def __determineTimeDelta(self,frequency,frequencyMultiple): if frequency in frequency: interval = datetime.timedelta(print eval(frequency + "=" + str(frequencyMultiple))) return interval else: interval = datetime.timedelta("days=1") return interval I want to dynamically define a time interval with timedelta, but this does not seem to work. Is there any specific way to make this work? I'm getting invalid syntax here. Are there any better ways to do it? A: You can call a function with dynamic arguments using syntax like func(**kwargs) where kwargs is dictionary of name/value mappings for the named arguments. I also renamed the global frequency list to frequencies since the line if frequency in frequency didn't make a whole lot of sense. class geoProcessClass(): def __init__(self, geoTaskHandler, startDate, frequency, frequencyMultiple=1, *args): self.interval = self.determineTimeDelta(frequency, frequencyMultiple) def determineTimeDelta(self, frequency, frequencyMultiple): frequencies = ["hours", "days", "weeks"] if frequency in frequencies: kwargs = {frequency: frequencyMultiple} else: kwargs = {"days": 1} return datetime.timedelta(**kwargs) For what it's worth, stylistically it's usually frowned upon to silently correct errors a caller makes. If the caller calls you with invalid arguments you should probably fail immediately and loudly rather than try to keep chugging. I'd recommend against that if statement. For more information on variable-length and keyword argument lists, see: The Official Python Tutorial PEP 3102: Keyword-Only Arguments A: Your use of print eval(...) looks a bit over-complicated (and wrong, as you mention). If you want to pass a keyword argument to a function, just do it: interval = datetime.timedelta(frequency = str(frequencyMultiple) I don't see a keyword argument called frequency though, so that might be a separate problem.
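The core trick in the accepted answer, shown in isolation: a keyword name chosen at run time is handed to timedelta by unpacking a one-entry dict.

import datetime

frequency, multiple = 'weeks', 2  # e.g. taken from user input
interval = datetime.timedelta(**{frequency: multiple})
assert interval == datetime.timedelta(weeks=2)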
Generating a dynamic time delta: python
Here's my situation: import foo, bar, etc frequency = ["hours","days","weeks"] class geoProcessClass(): def __init__(self,geoTaskHandler,startDate,frequency,frequencyMultiple=1,*args): self.interval = self.__determineTimeDelta(frequency,frequencyMultiple) def __determineTimeDelta(self,frequency,frequencyMultiple): if frequency in frequency: interval = datetime.timedelta(print eval(frequency + "=" + str(frequencyMultiple))) return interval else: interval = datetime.timedelta("days=1") return interval I want to dynamically define a time interval with timedelta, but this does not seem to work. Is there any specific way to make this work? I'm getting invalid syntax here. Are there any better ways to do it?
[ "You can call a function with dynamic arguments using syntax like func(**kwargs) where kwargs is dictionary of name/value mappings for the named arguments.\nI also renamed the global frequency list to frequencies since the line if frequency in frequency didn't make a whole lot of sense.\nclass geoProcessClass():\n def __init__(self, geoTaskHandler, startDate, frequency, frequencyMultiple=1, *args):\n self.interval = self.determineTimeDelta(frequency, frequencyMultiple)\n\n def determineTimeDelta(self, frequency, frequencyMultiple):\n frequencies = [\"hours\", \"days\", \"weeks\"]\n\n if frequency in frequencies:\n kwargs = {frequency: frequencyMultiple}\n else:\n kwargs = {\"days\": 1}\n\n return datetime.timedelta(**kwargs)\n\nFor what it's worth, stylistically it's usually frowned upon to silently correct errors a caller makes. If the caller calls you with invalid arguments you should probably fail immediately and loudly rather than try to keep chugging. I'd recommend against that if statement.\nFor more information on variable-length and keyword argument lists, see:\n\nThe Official Python Tutorial\nPEP 3102: Keyword-Only Arguments\n\n", "Your use of print eval(...) looks a bit over-complicated (and wrong, as you mention).\nIf you want to pass a keyword argument to a function, just do it:\ninterval = datetime.timedelta(frequency = str(frequencyMultiple)\n\nI don't see a keyword argument called frequency though, so that might be a separate problem.\n" ]
[ 7, 0 ]
[]
[]
[ "datetime", "eval", "python", "timedelta" ]
stackoverflow_0001649753_datetime_eval_python_timedelta.txt
Q: Django ignoring my DATABASE_ENGINE setting -- sometimes I've got several sites, each with a distinct settings file -- and with distinct names. There's a floral theme to all the variant settings. We have to keep the sites separate. C:\Proj-Carnation> echo %DJANGO_SETTINGS_MODULE% path.to.settings_carnation_win32 We have many test procedures which don't use the built-in django-admin.py test command because they're large batch jobs that are started by the Django front-end, and use the Django ORM. We need to use the django.db.connection.creation.create_test_db() method to create a new test database. We've been using this test procedure for quite a while. Currently, it's stopped working. We've made numerous code structure changes, upgraded to Django 1.1.1 and Python 2.6. All are possible culprits. When I run Python I see this. C:\Proj-Carnation> python Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from django.conf import settings >>> settings.DATABASE_ENGINE INSDIE django.db.__init__, settings.DATABASE_ENGINE='' 'sqlite3' >>> import django.db >>> django.db.connection <django.db.backends.dummy.base.DatabaseWrapper object at 0x00EE88B0> During the import of django.db, the settings are clearly not yet set. I added a print statement (with misspelled "INSIDE") in django.db. The settings are not set. Eventually, settings.DATABASE_ENGINE becomes 'sqlite3'. To an extent this "eventually" behavior is expected: the settings module uses a lazy loader technique. The issue is this: The connection -- built from incomplete settings -- is the dummy database backend. Yet, the final settings show the engine to be 'sqlite3'. In another project (the "Root" Project), there are no issues. Things work perfectly. The DB settings create the proper sqlite3 backend instance. So, what's different? I'm stumped. The environment settings and the physical directory trees are the top potential issues. In the non-working C:\Proj-Carnation, the PYTHONPATH is C:\Proj-Carnation;C:\Proj-Root;C:\This;C:\That. In the working root project C:\Proj-Root, the PYTHONPATH is C:\Proj-Root;C:\This;C:\That. Am I looking for something in the "Carnation" project that has concealed something in the root project? Sadly, the Carnation project only has a few files and they're in a package (local) which assures that their names are distinct from the root project. Is there some Django initialization in version 1.1.1 that's different? For instance, is there something in django.conf that's out of whack with Python 2.6 and Django 1.1.1? Is there some relative import issue that I've overlooked? A: Found it. When your settings module is inside a package, the top-level __init__.py member of that package cannot import any Django material of any kind. If the top-level __init__.py that contains your settings has a Django import, that Django import will (potentially) use the default settings before your settings are created. And since some things in Django (like the database connection) are singletons, the thing that got created while reading your settings is the only one that can ever exist. Do not put anything in the __init__.py in the package that contains settings modules.
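The failure mode is easy to spot from a shell, as the question's transcript shows: if the settings package's __init__.py drags Django in early, the connection singleton is already the dummy backend even though DATABASE_ENGINE reads correctly afterwards.

from django.conf import settings
import django.db

print(settings.DATABASE_ENGINE)  # may look right, e.g. 'sqlite3'
print(django.db.connection)      # a dummy.base.DatabaseWrapper betrays the early import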
Django ignoring my DATABASE_ENGINE setting -- sometimes
I've got several sites, each with a distinct settings file -- and with distinct names. There's a floral theme to all the variant settings. We have to keep the sites separate. C:\Proj-Carnation> echo %DJANGO_SETTINGS_MODULE% path.to.settings_carnation_win32 We have many test procedures which don't use the built-in django-admin.py test command because they're large batch jobs that are started by the Django front-end, and use the Django ORM. We need to use the django.db.connection.creation.create_test_db() method to create a new test database. We've been using this test procedure for quite a while. Currently, it's stopped working. We've made numerous code structure changes, upgraded to Django 1.1.1 and Python 2.6. All are possible culprits. When I run Python I see this. C:\Proj-Carnation> python Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from django.conf import settings >>> settings.DATABASE_ENGINE INSDIE django.db.__init__, settings.DATABASE_ENGINE='' 'sqlite3' >>> import django.db >>> django.db.connection <django.db.backends.dummy.base.DatabaseWrapper object at 0x00EE88B0> During the import of django.db, the settings are clearly not yet set. I added a print statement (with misspelled "INSIDE") in django.db. The settings are not set. Eventually, settings.DATABASE_ENGINE becomes 'sqlite3'. To an extent this "eventually" behavior is expected: the settings module uses a lazy loader technique. The issue is this: The connection -- built from incomplete settings -- is the dummy database backend. Yet, the final settings show the engine to be 'sqlite3'. In another project (the "Root" Project), there are no issues. Things work perfectly. The DB settings create the proper sqlite3 backend instance. So, what's different? I'm stumped. The environment settings and the physical directory trees are the top potential issues. In the non-working C:\Proj-Carnation, the PYTHONPATH is C:\Proj-Carnation;C:\Proj-Root;C:\This;C:\That. In the working root project C:\Proj-Root, the PYTHONPATH is C:\Proj-Root;C:\This;C:\That. Am I looking for something in the "Carnation" project that has concealed something in the root project? Sadly, the Carnation project only has a few files and they're in a package (local) which assures that their names are distinct from the root project. Is there some Django initialization in version 1.1.1 that's different? For instance, is there something in django.conf that's out of whack with Python 2.6 and Django 1.1.1? Is there some relative import issue that I've overlooked?
[ "Found it.\nWhen your settings module is inside a package, the top-level __init__.py member of that package cannot import any Django material of any kind.\nIf the top-level __init__.py that contains your settings has a Django import, that Django import will (potentially) use the default settings before your settings are created.\nAnd since some things in Django (like the database connection) are singletons, the thing that got created while reading your settings is the only one that can ever exist.\nDo not put anything in the __init__.py in the package that contains settings modules.\n" ]
[ 10 ]
[]
[]
[ "configuration", "django", "python" ]
stackoverflow_0001639451_configuration_django_python.txt
Q: Common errors when moving a django app from dev to prod? I am developing a django app on Windows, SQLite and the django dev server. I have deployed it to my host server which is running Linux, Apache, FastCgi, MySQL. Unfortunately, I have an error returned by the server on the prod while everything is ok on the dev machine. I've asked my provider for a pre-production solution in order to be able to debug and understand the problem. Anyway, what are, according to you, the most likely errors that can happen when moving a django app from dev to prod? Best Update: I think that a pre-prod is the best way to address this kind of problem. But I would like to build a check list of what must be done before putting it in production. Thanks for the very valuable answers that I received until now :) Update: FYI, I've implemented the preprod server and the email notification as suggested by shanyu and I can see that the error comes from the smart_if templatetag that I am using on this new version. Any trick with template tags? Update: I think I've fixed the problem, which I think was caused by the Filezilla FTP sending. I was using the "replace if newer" option which I guess was causing some unexpected results. Using the "replace all" option fixed the issue. However, it was an opportunity for me to learn more about deployment. Thanks for your answers. A: Problems I typically have include: Misconfigured production settings, whether in my production localsettings.py, wsgi/cgi, or apache site files in /etc/sites-available Database differences. I use South for migrations and have run into some subtle issues when performing my migration on PostgreSQL when it worked smoothly in sqlite. Static file hosting since I cheat and use the Django server in development Permissions, both on the file system and within the database Rare, but possible, network issues preventing me from getting my dependencies, whether on PyPi or some 3rd party site Ways that I have mitigated these issues: Use the same database in production and development (in your case, MySQL everywhere) I've found it is useful to have a "test" environment which mimics production in every way possible (it can be on lower end hardware, or even the same machine). This way, if there are any issues in this "production-like" environment, I can solve them without taking my production server offline. Script everything for repeatable deployments. I use fabric, but zc.buildout or Paver would also work. These tools help reduce typos while deploying and reduce the time to deploy my app. Use version control (mercurial, git, subversion) and a schema migration tool (like South), so if something does go wrong when you deploy to production, you have the possibility of backing out the changes and allowing production to run on the old code with the old database schema. I haven't set up an "egg proxy" yet, but I am considering it, to avoid issues when downloading dependencies. I've found pip's freezing dependencies to be useful, in case a new, incompatible change to a library occurred since I downloaded it initially Use a web testing framework like Windmill or Selenium to test my application in my "test" environment, so that I can get a lot of test coverage of my system very quickly. A: Regarding your case, I can think of 2 simple things that may help you: You can enable Django to send messages when exceptions occur giving details about them. Look here for details. 
You'll be better off if you set up a test environment on the prod server (say, test.example.com) so that you can check if things will go smoothly or not before you deploy the app. A: I believe these were the podcasts I listened to recently (from Pycon 2009): Locate Django in the Real World (PyCon 2009): http://advocacy.python.org/podcasts/pycon.rss Parts 1 to 3 Very good introduction to designing your apps for deployment, in particular for reuse and redeployment. Regs.
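For the exception e-mails the second answer mentions, the relevant production settings are roughly these (the addresses are placeholders):

DEBUG = False
TEMPLATE_DEBUG = False
ADMINS = (('Ops', 'ops@example.com'),)
SERVER_EMAIL = 'django@example.com'  # envelope sender for the error mails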
Common errors when moving a django app from dev to prod?
I am developing a django app on Windows, SQLite and the django dev server. I have deployed it to my host server which is running Linux, Apache, FastCgi, MySQL. Unfortunately, I have an error returned by the server on the prod while everything is ok on the dev machine. I've asked my provider for a pre-production solution in order to be able to debug and understand the problem. Anyway, what are, according to you, the most likely errors that can happen when moving a django app from dev to prod? Best Update: I think that a pre-prod is the best way to address this kind of problem. But I would like to build a check list of what must be done before putting it in production. Thanks for the very valuable answers that I received until now :) Update: FYI, I've implemented the preprod server and the email notification as suggested by shanyu and I can see that the error comes from the smart_if templatetag that I am using on this new version. Any trick with template tags? Update: I think I've fixed the problem, which I think was caused by the Filezilla FTP sending. I was using the "replace if newer" option which I guess was causing some unexpected results. Using the "replace all" option fixed the issue. However, it was an opportunity for me to learn more about deployment. Thanks for your answers.
[ "Problems I typically have include:\n\nMisconfigured production settings, whether in my production localsettings.py, wsgi/cgi, or apache site files in /etc/sites-available\nDatabase differences. I use South for migrations and have run into some subtle issues when performing my migration on PostgreSQL when it worked smoothly in sqlite.\nStatic file hosting since I cheat and use the Django server in development\nPermissions, both on the file system and within the database\nRare, but possible, network issues preventing me from getting my dependencies, whether on PyPi or some 3rd party site\n\nWays that I have mitigated these issues:\n\nUse the same database in production and development (in your case, MySQL everywhere)\nI've found it is useful to have a \"test\" environment which mimics production in every way possible (it can be on lower end hardware, or even the same machine). This way, if there are any issues in this \"production-like\" environment, I can solve them without taking my production server offline.\nScript everything for repeatable deployments. I use fabric, but zc.buildout or Paver would also work. These tools help reduce typos while deploying and reduce the time to deploy my app.\nUse version control (mercurial, git, subversion) and a schema migration tool (like South), so if something does go wrong when you deploy to production, you have the possibility of backing out the changes and allowing production to run on the old code with the old database schema.\nI haven't set up an \"egg proxy\" yet, but I am considering it, to avoid issues when downloading dependencies.\nI've found pip's freezing dependencies to be useful, in case a new, incompatible change to a library occurred since I downloaded it initially\nUse a web testing framework like Windmill or Selenium to test my application in my \"test\" environment, so that I can get a lot of test coverage of my system very quickly.\n", "Regarding your case, I can think of 2 simple things that may help you:\n\nYou can enable Django to send messages when exceptions occur giving details about them. Look here for details.\nYou'll be better off if you set up a test environment on the prod server (say, test.example.com) so that you can check if things will go smoothly or not before you deploy the app.\n\n", "I believe these were the podcasts I listened to recently (from Pycon 2009):\n\nLocate Django in the Real World (PyCon 2009):\nhttp://advocacy.python.org/podcasts/pycon.rss\nParts 1 to 3\n\nVery good introduction to designing your apps for deployment, in particular for reuse and redeployment.\nRegs.\n" ]
[ 7, 1, 0 ]
[]
[]
[ "django", "production_environment", "python" ]
stackoverflow_0001648349_django_production_environment_python.txt
Q: Is there a way to know if a list of elements is on a larger list without using 'in' keyword? I want to do this. I have two python lists, one larger than the other and I want to know is there is a way to check if the elements of the smaller list are in the big list in the exact same order for example: small_list = [4,2,5] big_list = [1,2,5,7,2,4,2,5,67,8,5,13,45] I tried using the in keyword but It did not worked :'( A: def in_list(small, big): l_sml = len(small) l_big = len(big) return any((big[i:i+l_sml]==small for i in xrange(l_big-l_sml+1))) print in_list([4,2,1], [1,2,3,4,2,1,0,5]) # True print in_list([1,2,3], [1,2,4]) # False A: Hmm, maybe it's overkill, but you can use the SequenceMatcher class from difflib: from difflib import SequenceMatcher small_list = [4,2,5] big_list = [1,2,5,7,2,4,2,5,67,8,5,13,45] print SequenceMatcher(None, small_list, big_list).get_matching_blocks() difflib documentation A: Rather non-optimized, demonstrates the general strategy simply: tuple(small_list) in zip(big_list[:], big_list[1:], big_list[2:]) The funky zip thing does this: >>> zip(big_list[:], big_list[1:], big_list[2:]) [(1, 2, 5), (2, 5, 7), (5, 7, 2), (7, 2, 4), (2, 4, 2), (4, 2, 5), (2, 5, 67), (5, 67, 8), (67, 8, 5), (8, 5, 13), (5, 13, 45)] A more optimized version: from itertools import izip, islice tuple(small_list) in izip(big_list, islice(big_list, 1, None), islice(big_list, 2, None)) To handle small_list length of any size: from itertools import izip, islice tuple(small_list) in izip(*(islice(big_list, i, None) for i in xrange(len(small_list)))) A: This problem is trickier than it seems. Unless I'm mistaken, it's a special case of the longest common substring problem. For the general case (arbitrarily large lists), I would use some kind of finite state automaton, akin to a regular expression. I believe the result could then be calculated in O(mn) time. A: That's because small_list in big_list checks whether an element in big_list is equal to small_list. What you want to do instead is see if a slice of big_list is the same as small_list. def isSubList(slice, L): n = len(slice) for i in range(0, len(L) - n): if slice == L[i:i+n]: return True return False isSubList(small_list, big_list) A: Edit: Leaving the answer here but I failed to note the requirement that they be in the same order. This does not meet that requirement Quick and dirty answer. Based it off of the answer for Python - Intersection of two lists small_list == filter( lambda x: x in big_list, small_list) A: If you know a reasonable bound of your numbers, you can convert them to a Python type whose 'in' operator does this automatically. The two I know are str and unicode. Then you ask the strings if the smaller is in the larger, this does a substring comparison: >>> small_list = [4,2,5] >>> big_list = [1,2,5,7,2,4,2,5,67,8,5,13,45] >>> >>> def encode(lst): return u"".join(unichr(c) for c in lst) >>> encode(small_list) in encode(big_list) True (You can "encode" to str if all numbers are in 0 <= x <= 255, you can "encode" to unicode if all are in 0 <= x <= sys.maxunicode ). A: There's no built in operator doing that particular comparison. I suggest a list comprehension or a quick for loop. 
A: you could use sets from sets import Set small_set = set(small_list) big_set = set(big_list) small_set <= big_set <= is the subset operator A: If you want to use the "in" keyword to do what you want, you can override contains using one of the solutions mentioned in the answers here: class mylist(list): def __contains__(self, lst): return ':'.join(map(str, lst)) in ':'.join(map(str, self)) small_list = mylist([4,2,5]) big_list = mylist([1,2,5,7,2,4,2,5,67,8,5,13,45]) print small_list in big_list Edit: Addresses Jeffrey's comment.
Is there a way to know if a list of elements is on a larger list without using 'in' keyword?
I want to do this. I have two python lists, one larger than the other and I want to know is there is a way to check if the elements of the smaller list are in the big list in the exact same order for example: small_list = [4,2,5] big_list = [1,2,5,7,2,4,2,5,67,8,5,13,45] I tried using the in keyword but It did not worked :'(
[ "def in_list(small, big):\n l_sml = len(small)\n l_big = len(big)\n return any((big[i:i+l_sml]==small for i in xrange(l_big-l_sml+1)))\n\nprint in_list([4,2,1], [1,2,3,4,2,1,0,5]) # True\nprint in_list([1,2,3], [1,2,4]) # False\n\n", "Hmm, maybe it's overkill, but you can use the SequenceMatcher class from difflib:\nfrom difflib import SequenceMatcher \nsmall_list = [4,2,5]\nbig_list = [1,2,5,7,2,4,2,5,67,8,5,13,45]\nprint SequenceMatcher(None, small_list, big_list).get_matching_blocks()\n\ndifflib documentation\n", "Rather non-optimized, demonstrates the general strategy simply:\ntuple(small_list) in zip(big_list[:], big_list[1:], big_list[2:])\n\nThe funky zip thing does this:\n>>> zip(big_list[:], big_list[1:], big_list[2:])\n[(1, 2, 5), (2, 5, 7), (5, 7, 2), (7, 2, 4), (2, 4, 2), (4, 2, 5), (2, 5, 67), (5, 67, 8), (67, 8, 5), (8, 5, 13), (5, 13, 45)]\n\nA more optimized version:\nfrom itertools import izip, islice\ntuple(small_list) in izip(big_list, islice(big_list, 1, None), islice(big_list, 2, None))\n\nTo handle small_list length of any size:\nfrom itertools import izip, islice\ntuple(small_list) in izip(*(islice(big_list, i, None) for i in xrange(len(small_list))))\n\n", "This problem is trickier than it seems. Unless I'm mistaken, it's a special case of the longest common substring problem.\nFor the general case (arbitrarily large lists), I would use some kind of finite state automaton, akin to a regular expression. I believe the result could then be calculated in O(mn) time.\n", "That's because small_list in big_list checks whether an element in big_list is equal to small_list. What you want to do instead is see if a slice of big_list is the same as small_list.\ndef isSubList(slice, L):\n n = len(slice)\n for i in range(0, len(L) - n):\n if slice == L[i:i+n]:\n return True\n return False\n\nisSubList(small_list, big_list)\n\n", "Edit: Leaving the answer here but I failed to note the requirement that they be in the same order. This does not meet that requirement\nQuick and dirty answer. Based it off of the answer for Python - Intersection of two lists\nsmall_list == filter( lambda x: x in big_list, small_list)\n\n", "If you know a reasonable bound of your numbers, you can convert them to a Python type whose 'in' operator does this automatically. The two I know are str and unicode.\nThen you ask the strings if the smaller is in the larger, this does a substring comparison:\n>>> small_list = [4,2,5]\n>>> big_list = [1,2,5,7,2,4,2,5,67,8,5,13,45]\n>>>\n>>> def encode(lst):\n return u\"\".join(unichr(c) for c in lst)\n\n>>> encode(small_list) in encode(big_list)\nTrue\n\n(You can \"encode\" to str if all numbers are in 0 <= x <= 255, you can \"encode\" to unicode if all are in 0 <= x <= sys.maxunicode ).\n", "There's no built in operator doing that particular comparison. I suggest a list comprehension or a quick for loop.\n", "you could use sets\nfrom sets import Set\nsmall_set = set(small_list)\nbig_set = set(big_list)\nsmall_set <= big_set\n\n<= is the subset operator\n", "If you want to use the \"in\" keyword to do what you want, you can override contains using one of the solutions mentioned in the answers here:\nclass mylist(list):\n def __contains__(self, lst):\n return ':'.join(map(str, lst)) in ':'.join(map(str, self))\n\nsmall_list = mylist([4,2,5])\nbig_list = mylist([1,2,5,7,2,4,2,5,67,8,5,13,45])\n\nprint small_list in big_list\n\nEdit: Addresses Jeffrey's comment.\n" ]
[ 7, 4, 3, 2, 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001646641_python.txt
Q: Convert & to &amp; in Python I'm working on a simple crawler in Python. The aim is to create a sitemap.xml. (you can find the very alpha version here: http://code.google.com/p/sitemappy/) I noticed that if I generate the xml with URLs containing non HTML entities (such as &), the xml doesn't validate and it isn't accepted by Google Webmaster Tools. Is there a quick way to encode the querystring part of the URLs? Thank you! Matteo A: cgi.escape to the rescue: cgi.escape(s[, quote]) Convert the characters '&', '<' and '>' in string s to HTML-safe sequences. Use this if you need to display text that might contain such characters in HTML. If the optional flag quote is true, the quotation mark character ('"') is also translated; this helps for inclusion in an HTML attribute value, as in . If the value to be quoted might include single- or double-quote characters, or both, consider using the quoteattr() function in the xml.sax.saxutils module instead. Quick interactive check: >>> import cgi >>> cgi.escape('<&>') '&lt;&amp;&gt;' >>> A: Saxutils has an escaping function for XML entities: >>> from xml.sax import saxutils >>> saxutils.escape("&") '&amp;'
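Applied to the sitemap case from the question, either escape function can wrap a URL before it goes into the XML (the URL is a placeholder):

from xml.sax.saxutils import escape

url = 'http://example.com/page?a=1&b=2'
print('<loc>%s</loc>' % escape(url))
# <loc>http://example.com/page?a=1&amp;b=2</loc>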
Convert & to &amp; in Python
I'm working on a simple crawler in Python. The aim is to create a sitemap.xml. (you can find the very alpha version here: http://code.google.com/p/sitemappy/) I noticed that if I generate the xml with URLs containing non HTML entities (such as &), the xml doesn't validate and it isn't accepted by Google Webmaster Tools. Is there a quick way to encode the querystring part of the URLs? Thank you! Matteo
[ "cgi.escape to the rescue:\n\ncgi.escape(s[, quote])\nConvert the characters '&', '<' and '>' in string s to HTML-safe sequences. Use this if you need to display text that might contain such characters in HTML. If the optional flag quote is true, the quotation mark character ('\"') is also translated; this helps for inclusion in an HTML attribute value, as in . If the value to be quoted might include single- or double-quote characters, or both, consider using the quoteattr() function in the xml.sax.saxutils module instead.\n\nQuick interactive check:\n>>> import cgi\n>>> cgi.escape('<&>')\n'&lt;&amp;&gt;'\n>>> \n\n", "Saxutils has an escaping function for XML entities:\n>>> from xml.sax import saxutils\n>>> saxutils.escape(\"&\")\n'&amp;'\n\n" ]
[ 8, 7 ]
[]
[]
[ "python", "sitemap", "urlencode", "xml" ]
stackoverflow_0001650160_python_sitemap_urlencode_xml.txt
Q: Regular expression to extract "Boolean" and "MessageString" from {'result':(Boolean, MessageString)} How to parse the string " {'result':(Boolean, MessageString)} " using Python regular expressions to get Boolean and the MessageString separated into variables? A: This works: >>> x = re.search('\((.*),\s*(.*)\)', " {'result':(Boolean, MessageString)} ") >>> x.group(1) 'Boolean' >>> x.group(2) 'MessageString' A: It looks like a dictionary+tuple in Python syntax, so eval() would also work (if you trust the source!!!) A: Depending on what you need there can be other solution than regular expression. For this particular example the following code will work: >>> compile("{'result':(Boolean, MessageString)}", '<expr>', 'eval').co_names ('Boolean', 'MessageString') There is also parser module which might be useful for such tasks.
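The same extraction with named groups, which can read a little clearer; the group names flag and msg are arbitrary.

import re

s = " {'result':(Boolean, MessageString)} "
m = re.search(r"\((?P<flag>[^,]+),\s*(?P<msg>[^)]+)\)", s)
print('%s %s' % (m.group('flag'), m.group('msg')))  # Boolean MessageString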
Regular expression to extract "Boolean" and "MessageString" from {'result':(Boolean, MessageString)}
How to parse the string " {'result':(Boolean, MessageString)} " using Python regular expressions to get Boolean and the MessageString separated into variables?
[ "This works:\n>>> x = re.search('\\((.*),\\s*(.*)\\)', \" {'result':(Boolean, MessageString)} \")\n>>> x.group(1)\n'Boolean'\n>>> x.group(2)\n'MessageString'\n\n", "It looks like a dictionary+tuple in Python syntax, so eval() would also work (if you trust the source!!!)\n", "Depending on what you need there can be other solution than regular expression. For this particular example the following code will work:\n>>> compile(\"{'result':(Boolean, MessageString)}\", '<expr>', 'eval').co_names\n('Boolean', 'MessageString')\n\nThere is also parser module which might be useful for such tasks.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001650244_python_regex.txt
Q: Doctest failing in spite of having correct output My function is def validate_latitude(lat): """Enforce latitude is in range >>> validate_latitude(65) 65 >>> validate_latitude(91) 90 >>> validate_latitude(-91) -90 """ lat = min(lat, 90) lat = max(lat, -90) return lat And the test fails with this output ********************************************************************** File "packages/utils.py", line 64, in __main__.validate_latitude Failed example: validate_latitude(-91) Expected: -90 Got: -90 ********************************************************************** Can't see why it fails in spite of having the desired output A: Whitespace? If I highlight your output, I can see additional whitespace following the "Expected" value. Not sure whether this is relevant or not. A: In these two lines: >>> validate_latitude(-91) -90 You have a Tab character before the - in -90, and four space characters after the 0. When doctest runs this code the extra whitespace is of course not produced, so the equality comparison fails. Good editors, e.g. vim, have ways to highlight trailing spaces, and stray tabs, so that you don't fall afoul of such accidents. Not sure what editor you're using or how you have set it up, so it's hard to give more specific advice (besides the obvious one of ensuring you use an editor WITH such capabilities, and enable the capabilities in question;-).
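If the stray whitespace cannot be cleaned up at the source, doctest can also be told to ignore it with a standard per-example directive inside the docstring:

>>> validate_latitude(-91)  # doctest: +NORMALIZE_WHITESPACE
-90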
Doctest failing in spite of having correct output
My function is def validate_latitude(lat): """Enforce latitude is in range >>> validate_latitude(65) 65 >>> validate_latitude(91) 90 >>> validate_latitude(-91) -90 """ lat = min(lat, 90) lat = max(lat, -90) return lat And the test fails with this output ********************************************************************** File "packages/utils.py", line 64, in __main__.validate_latitude Failed example: validate_latitude(-91) Expected: -90 Got: -90 ********************************************************************** Can't see why it fails in spite of having the desired output
[ "Whitespace?\nIf I highlight your output, I can see additional whitespace following the \"Expected\" value. Not sure whether this is relevant or not.\n", "In these two lines:\n>>> validate_latitude(-91)\n-90 \n\nYou have a Tab character before the - in -90, and four space characters after the 0. When doctest runs this code the extra whitespace is of course not produced, so the equality comparison fails.\nGood editors, e.g. vim, have ways to highlight trailing spaces, and stray tabs, so that you don't fall afoul of such accidents. Not sure what editor you're using or how you have set it up, so it's hard to give more specific advice (besides the obvious one of ensuring you use an editor WITH such capabilities, and enable the capabilities in question;-).\n" ]
[ 3, 3 ]
[]
[]
[ "doctest", "python" ]
stackoverflow_0001650184_doctest_python.txt
Q: Python: Object identity assertions thrown by differences in import statement notations When checking an object's identity, I am getting assertion errors because the object creation code imports the object-defining module under one notation (base.other_stuff.BarCode) and the identity-checking code imports that same module under a different notation (other_stuff.BarCode). (Please see below for gory details.) It seems that the isinstance() call is sticky about the references to the object definition module, and wants it imported under the exact same notation. (I'm using version 2.5.) I suppose I could fix this by changing the import notation in the code checking the identity, but I'm worried that I'll just propagate the same problem to other code that depends on it. And I'm sure there is some more elegant solution that I probably should be using in the first place. So how do I fix this? DETAILS PythonPath: '/', '/base/' Files: /__init__.py base/__init__.py base/other_stuff/__init__.py base/other_stuff/BarCode.py base/stuff/__init__.py camp/__init__.py Text of base/stuff/FooCode.py: import other_stuff.BarCode as bc class Foo: def __init__(self, barThing): assert isinstance(barThing, bc.Bar) Text of camp/new_code.py: import base.stuff.FooCode as fc import base.other_stuff.BarCode as bc thisBar = bc.Bar() assert isinstance(thisBar, bc.Bar) thisFoo = fc.Foo(barThing=thisBar) This fails. It survives its assertion test, but blows up on the assertion in the initial code. However, it works when I modify new_code to import BarCode.py with: import other_stuff.BarCode as bc . . . because both base/ and base/other_stuff are on the PythonPath. A: It looks like you have <root>/ and <root>/base in your sys.path, which is always bad. When you do import other_stuff.BarCode as bc from base/stuff/FooCode.py it imports other_stuff as a root package, not as a subpackage of base. So after doing import base.other_stuff.BarCode as bc you get the BarCode module imported twice: as other_stuff.BarCode and base.other_stuff.BarCode. The best solution would be: Remove <root>/base from sys.path (or $PYTHONPATH). Use relative import in base/stuff/FooCode.py: from ..other_stuff import BarCode as bc. A: Your code layout is seriously broken. You should not have package directories in sys.path. In your situation, Python will use two different search paths to find BarCode.py, therefore loading it twice as separate modules, base.other_stuff.BarCode and other_stuff.BarCode. This means that every object in this module exists twice, wasting memory, and naturally the object identity will fail: >>> from base.other_stuff import BarCode as bc1 >>> from other_stuff import BarCode as bc2 >>> bc1 <module 'base.other_stuff.BarCode' from '.../base/other_stuff/BarCode.pyc'> >>> bc2 <module 'other_stuff.BarCode' from '.../other_stuff/BarCode.pyc'> >>> bc1 == bc2 False >>> bc1 is bc2 False Although they originate from the same source file, Python treats bc1 and bc2 as different modules. Make sure that every module you are using can be identified uniquely by its fully-qualified name, in your case: base.other_stuff.BarCode. If a module is part of a package, never add the package directory to sys.path. A: "Notation" is the least of issues -- different notations that are defined to semantically refer to the same module are guaranteed to produce the same object. 
E.g.: >>> import sys as foobar >>> import sys as zapzip >>> foobar is zapzip True The problem is, rather, that it's surely possible to import the same file more than once, in ways that don't let the import mechanism fully know what you're doing, and thus end up with distinct module objects. Overlapping paths like you're using could easily produce that, for example. One approach (if you insist on writing code, and/or laying out your filesystem, in such a potentially confusing/misleading way;-) is to set __builtin__.__import__ to your own function that, after calling the previous/normal version, checks the __file__ attribute of the newly imported module against those already in sys.modules (worth maintaining your dict of those, mapping file to canonical module object for that file) using os.path.normpath (or even stronger ways to detect synonyms for a single file, e.g. symlinks and hard links, via functionality in standard library module os). With this hook, you can make sure that all imports of any single given file will always result in a single canonical module object, almost no matter what gyrations occur in the paths and filesystem in question (would still be possible for a clever attacker to foil the checks by installing a tricky filesystem of their own devising, but I don't think you're actually trying to guard against deliberate cunning attacks;-). A: You are having problems because you have both base and other_stuff in sys.path. To the Python interpreter there are multiple BarCode modules: base.other_stuff.BarCode and other_stuff.BarCode; the first is located in the top level package base.other_stuff and the other is a separate top level package other_stuff. When the python interpreter searches the sys.path it finds two completely different modules. When you try to use classes from these two separate modules interchangeably you get the errors you are seeing. You need to clean up your python path, probably putting only the parent folder of base on the path.
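A quick way to confirm the double import these answers describe is to look for one source file registered under two names in sys.modules; this sketch ignores the .py/.pyc suffix difference for brevity.

import sys

seen = {}
for name, mod in sys.modules.items():
    path = getattr(mod, '__file__', None)
    if path:
        seen.setdefault(path, []).append(name)
for path, names in seen.items():
    if len(names) > 1:
        print('%s imported as %r' % (path, names))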
Python: Object identity assertions thrown by differences in import statement notations
When checking an object's identity, I am getting assertion errors because the object creation code imports the object-defining module under one notation (base.other_stuff.BarCode) and the identity-checking code imports that same module under a different notation (other_stuff.BarCode). (Please see below for gory details.) It seems that the isinstance() call is sticky about the references to the object definition module, and wants it imported under the exact same notation. (I'm using version 2.5.) I suppose I could fix this by changing the import notation in the code checking the identity, but I'm worried that I'll just propagate the same problem to other code that depends on it. And I'm sure there is some more elegant solution that I probably should be using in the first place. So how do I fix this? DETAILS PythonPath: '/', '/base/' Files: /__init__.py base/__init__.py base/other_stuff/__init__.py base/other_stuff/BarCode.py base/stuff/__init__.py camp/__init__.py Text of base/stuff/FooCode.py: import other_stuff.BarCode as bc class Foo: def __init__(self, barThing): assert isinstance(barThing, bc.Bar) Text of camp/new_code.py: import base.stuff.FooCode as fc import base.other_stuff.BarCode as bc thisBar = bc.Bar() assert isinstance(thisBar, bc.Bar) thisFoo = fc.Foo(barThing=thisBar) This fails. It survives its assertion test, but blows up on the assertion in the initial code. However, it works when I modify new_code to import BarCode.py with: import other_stuff.BarCode as bc . . . because both base/ and base/other_stuff are on the PythonPath.
[ "It looks like you have <root>/ and <root>/base in your sys.path, which is always bad. When you do import other_stuff.BarCode as bc from base/stuff/FooCode.py it imports other_stuff as a root package, not as a subpackage of base. So after doing import base.other_stuff.BarCode as bc you get the BarCode module imported twice: as other_stuff.BarCode and base.other_stuff.BarCode.\nThe best solution would be:\n\nRemove <root>/base from sys.path (or $PYTHONPATH).\nUse relative import in base/stuff/FooCode.py: from ..other_stuff import BarCode as bc.\n\n", "Your code layout is seriously broken. You should not have package directories in sys.path.\nIn your situation, Python will use two different search paths to find BarCode.py, therefore loading it twice as separate modules, base.other_stuff.BarCode and other_stuff.BarCode. This means that every object in this module exists twice, wasting memory, and naturally the object identity will fail:\n>>> from base.other_stuff import BarCode as bc1\n>>> from other_stuff import BarCode as bc2\n>>> bc1\n<module 'base.other_stuff.BarCode' from '.../base/other_stuff/BarCode.pyc'>\n>>> bc2\n<module 'other_stuff.BarCode' from '.../other_stuff/BarCode.pyc'>\n>>> bc1 == bc2\nFalse\n>>> bc1 is bc2\nFalse\n\nAlthough they originate from the same source file, Python treats bc1 and bc2 as different modules.\nMake sure that every module you are using can be identified uniquely by its fully-qualified name, in your case: base.other_stuff.BarCode. If a module is part of a package, never add the package directory to sys.path.\n", "\"Notation\" is the least of issues -- different notations that are defined to semantically refer to the same module are guaranteed to produce the same object. E.g.:\n>>> import sys as foobar\n>>> import sys as zapzip\n>>> foobar is zapzip\nTrue\n\nThe problem is, rather, that it's surely possible to import the same file more than once, in ways that don't let the import mechanism fully know what you're doing, and thus end up with distinct module objects. Overlapping paths like you're using could easily produce that, for example.\nOne approach (if you insist on writing code, and/or laying out your filesystem, in such a potentially confusing/misleading way;-) is to set __builtin__.__import__ to your own function that, after calling the previous/normal version, checks the __file__ attribute of the newly imported module against those already in sys.modules (worth maintaining your dict of those, mapping file to canonical module object for that file) using os.path.normpath (or even stronger ways to detect synonyms for a single file, e.g. symlinks and hard links, via functionality in standard library module os). \nWith this hook, you can make sure that all imports of any single given file will always result in a single canonical module object, almost no matter what gyrations occur in the paths and filesystem in question (would still be possible for a clever attacker to foil the checks by installing a tricky filesystem of their own devising, but I don't think you're actually trying to guard against deliberate cunning attacks;-).\n", "You are having problems because you have both base and other_stuff in sys.path. \nTo the Python interpreter there are multiple BarCode modules: base.other_stuff.BarCode and other_stuff.BarCode; the first is located in the top level package base.other_stuff and the other is a separate top level package other_stuff.\nWhen the python interpreter searches the sys.path it finds two completely different modules. 
When you try use classes from these two separate modules interchangeably you get the errors you are seeing.\nYou need to clean up your python path probably putting only the parent folder on base on the path.\n" ]
[ 2, 1, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001650603_python.txt
Q: best way to parse a line in python to a dictionary I have a file with lines like
account = "TEST1"  Qty=100  price = 20.11  subject="some value" values="3=this, 4=that"

There is no special delimiter and each key has a value that is surrounded by double quotes if it is a string but not if it is a number. There is no key without a value, though there may exist blank strings which are represented as "" and there is no escape character for a quote as it is not needed
I want to know what is a good way to parse this kind of line with python and store the values as key-value pairs in a dictionary
A: We're going to need a regex for this.
import re, decimal
r= re.compile('([^ =]+) *= *("[^"]*"|[^ ]*)')

d= {}
for k, v in r.findall(line):
    if v[:1]=='"':
        d[k]= v[1:-1]
    else:
        d[k]= decimal.Decimal(v)

>>> d
{'account': 'TEST1', 'subject': 'some value', 'values': '3=this, 4=that', 'price': Decimal('20.11'), 'Qty': Decimal('100.0')}

You can use float instead of decimal if you prefer, but it's probably a bad idea if money is involved.
A: Maybe a bit simpler to follow is the pyparsing rendition:
from pyparsing import *

# define basic elements - use re's for numerics, faster and easier than
# composing from pyparsing objects
integer = Regex(r'[+-]?\d+')
real = Regex(r'[+-]?\d+\.\d*')
ident = Word(alphanums)
value = real | integer | quotedString.setParseAction(removeQuotes)

# define a key-value pair, and a configline as one or more of these
# wrap configline in a Dict so that results are accessible by given keys
kvpair = Group(ident + Suppress('=') + value)
configline = Dict(OneOrMore(kvpair))

src = 'account = "TEST1" Qty=100 price = 20.11 subject="some value" ' \
      'values="3=this, 4=that"'

configitems = configline.parseString(src)

Now you can access your pieces using the returned configitems ParseResults object:
>>> print configitems.asList()
[['account', 'TEST1'], ['Qty', '100'], ['price', '20.11'],
 ['subject', 'some value'], ['values', '3=this, 4=that']]

>>> print configitems.asDict()
{'account': 'TEST1', 'Qty': '100', 'values': '3=this, 4=that',
 'price': '20.11', 'subject': 'some value'}

>>> print configitems.dump()
[['account', 'TEST1'], ['Qty', '100'], ['price', '20.11'],
 ['subject', 'some value'], ['values', '3=this, 4=that']]
- Qty: 100
- account: TEST1
- price: 20.11
- subject: some value
- values: 3=this, 4=that

>>> print configitems.keys()
['account', 'subject', 'values', 'price', 'Qty']

>>> print configitems.subject
some value

A: A recursive variation of bobince's parses values with embedded equals as dictionaries:
>>> import re
>>> import pprint
>>>
>>> def parse_line(line):
...     d = {}
...     a = re.compile(r'\s*(\w+)\s*=\s*("[^"]*"|[^ ,]*),?')
...     float_re = re.compile(r'^\d.+$')
...     int_re = re.compile(r'^\d+$')
...     for k,v in a.findall(line):
...         if int_re.match(k):
...             k = int(k)
...         if v[-1] == '"':
...             v = v[1:-1]
...         if '=' in v:
...             d[k] = parse_line(v)
...         elif int_re.match(v):
...             d[k] = int(v)
...         elif float_re.match(v):
...             d[k] = float(v)
...         else:
...             d[k] = v
...     return d
...
>>> line = 'account = "TEST1" Qty=100 price = 20.11 subject="some value" values="3=this, 4=that"'
>>> pprint.pprint(parse_line(line))
{'Qty': 100,
 'account': 'TEST1',
 'price': 20.109999999999999,
 'subject': 'some value',
 'values': {3: 'this', 4: 'that'}}
A: If you don't want to use a regex, another option is just to read the string a character at a time:
string = 'account = "TEST1" Qty=100 price = 20.11 subject="some value" values="3=this, 4=that"'

inside_quotes = False
key = None
value = ""
dict = {}

for c in string:
    if c == '"':
        inside_quotes = not inside_quotes
    elif c == '=' and not inside_quotes:
        key = value
        value = ''
    elif c == ' ':
        if inside_quotes:
            value += ' ';
        elif key and value:
            dict[key] = value
            key = None
            value = ''
    else:
        value += c

dict[key] = value
print dict
best way to parse a line in python to a dictionary
I have a file with lines like
account = "TEST1"  Qty=100  price = 20.11  subject="some value" values="3=this, 4=that"

There is no special delimiter and each key has a value that is surrounded by double quotes if it is a string but not if it is a number. There is no key without a value, though there may exist blank strings which are represented as "" and there is no escape character for a quote as it is not needed
I want to know what is a good way to parse this kind of line with python and store the values as key-value pairs in a dictionary
[ "We're going to need a regex for this.\nimport re, decimal\nr= re.compile('([^ =]+) *= *(\"[^\"]*\"|[^ ]*)')\n\nd= {}\nfor k, v in r.findall(line):\n if v[:1]=='\"':\n d[k]= v[1:-1]\n else:\n d[k]= decimal.Decimal(v)\n\n>>> d\n{'account': 'TEST1', 'subject': 'some value', 'values': '3=this, 4=that', 'price': Decimal('20.11'), 'Qty': Decimal('100.0')}\n\nYou can use float instead of decimal if you prefer, but it's probably a bad idea if money is involved.\n", "Maybe a bit simpler to follow is the pyparsing rendition:\nfrom pyparsing import *\n\n# define basic elements - use re's for numerics, faster than easier than \n# composing from pyparsing objects\ninteger = Regex(r'[+-]?\\d+')\nreal = Regex(r'[+-]?\\d+\\.\\d*')\nident = Word(alphanums)\nvalue = real | integer | quotedString.setParseAction(removeQuotes)\n\n# define a key-value pair, and a configline as one or more of these\n# wrap configline in a Dict so that results are accessible by given keys\nkvpair = Group(ident + Suppress('=') + value)\nconfigline = Dict(OneOrMore(kvpair))\n\nsrc = 'account = \"TEST1\" Qty=100 price = 20.11 subject=\"some value\" ' \\\n 'values=\"3=this, 4=that\"'\n\nconfigitems = configline.parseString(src)\n\nNow you can access your pieces using the returned configitems ParseResults object:\n>>> print configitems.asList()\n[['account', 'TEST1'], ['Qty', '100'], ['price', '20.11'], \n ['subject', 'some value'], ['values', '3=this, 4=that']]\n\n>>> print configitems.asDict()\n{'account': 'TEST1', 'Qty': '100', 'values': '3=this, 4=that', \n 'price': '20.11', 'subject': 'some value'}\n\n>>> print configitems.dump()\n[['account', 'TEST1'], ['Qty', '100'], ['price', '20.11'], \n ['subject', 'some value'], ['values', '3=this, 4=that']]\n- Qty: 100\n- account: TEST1\n- price: 20.11\n- subject: some value\n- values: 3=this, 4=that\n\n>>> print configitems.keys()\n['account', 'subject', 'values', 'price', 'Qty']\n\n>>> print configitems.subject\nsome value\n\n", "A recursive variation of bobince's parses values with embedded equals as dictionaries:\n>>> import re\n>>> import pprint\n>>>\n>>> def parse_line(line):\n... d = {}\n... a = re.compile(r'\\s*(\\w+)\\s*=\\s*(\"[^\"]*\"|[^ ,]*),?')\n... float_re = re.compile(r'^\\d.+$')\n... int_re = re.compile(r'^\\d+$')\n... for k,v in a.findall(line):\n... if int_re.match(k):\n... k = int(k)\n... if v[-1] == '\"':\n... v = v[1:-1]\n... if '=' in v:\n... d[k] = parse_line(v)\n... elif int_re.match(v):\n... d[k] = int(v)\n... elif float_re.match(v):\n... d[k] = float(v)\n... else:\n... d[k] = v\n... return d\n...\n>>> line = 'account = \"TEST1\" Qty=100 price = 20.11 subject=\"some value\" values=\n\"3=this, 4=that\"'\n>>> pprint.pprint(parse_line(line))\n{'Qty': 100,\n 'account': 'TEST1',\n 'price': 20.109999999999999,\n 'subject': 'some value',\n 'values': {3: 'this', 4: 'that'}}\n\n", "If you don't want to use a regex, another option is just to read the string a character at a time:\nstring = 'account = \"TEST1\" Qty=100 price = 20.11 subject=\"some value\" values=\"3=this, 4=that\"'\n\ninside_quotes = False\nkey = None\nvalue = \"\"\ndict = {}\n\nfor c in string:\n if c == '\"':\n inside_quotes = not inside_quotes\n elif c == '=' and not inside_quotes:\n key = value\n value = ''\n elif c == ' ':\n if inside_quotes:\n value += ' ';\n elif key and value:\n dict[key] = value\n key = None\n value = ''\n else:\n value += c\n\ndict[key] = value\nprint dict\n\n" ]
[ 11, 5, 0, 0 ]
[]
[]
[ "delimiter", "parsing", "python" ]
stackoverflow_0001644362_delimiter_parsing_python.txt
Q: Get last function's call arguments from traceback? Can I get the parameters of the last function called in traceback? How? I want to make a catcher for standard errors to make readable code, yet provide detailed information to user.
In the following example I want GET_PARAMS to return me a tuple of parameters supplied to os.chown. Examining the inspect module, as advised by Alex Martelli, I couldn't find what I needed.
def catch_errors(fn):
    def decorator(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except (IOError, OSError):
            msg = sys.exc_info()[2].tb_frame.f_locals['error_message']
            quit(msg.format(SEQUENCE_OF_PARAMETERS_OF_THE_LAST_FUNCTION_CALLED)\
                + '\nError #{0[0]}: {0[1]}'.format(sys.exc_info()[1].args), 1)
    return decorator

@catch_errors
def do_your_job():
    error_message = 'Can\'t change folder ownership \'{0}\' (uid:{1}, gid:{2})'
    os.chown('/root', 1000, 1000) # note that params aren't named vars.

if __name__ == '__main__' and os.getenv('USERNAME') != 'root':
    do_your_job()

(Thanks to Jim Robert for the decorator)
A: For such inspection tasks, always think first of module inspect in the standard library. Here, inspect.getargvalues gives you the argument values given a frame, and inspect.getinnerframes gives you the frames of interest from a traceback object.
A: Here is an example of such a function and some problems that you can't get around:
import sys

def get_params(tb):
    while tb.tb_next:
        tb = tb.tb_next
    frame = tb.tb_frame
    code = frame.f_code
    argcount = code.co_argcount
    if code.co_flags & 4: # *args
        argcount += 1
    if code.co_flags & 8: # **kwargs
        argcount += 1
    names = code.co_varnames[:argcount]
    params = {}
    for name in names:
        params[name] = frame.f_locals.get(name, '<deleted>')
    return params


def f(a, b=2, c=3, *d, **e):
    del c
    c = 4
    e['g'] = 6
    assert False

try:
    f(1, f=5)
except:
    print get_params(sys.exc_info()[2])

The output is:
{'a': 1, 'c': 4, 'b': 2, 'e': {'g': 6, 'f': 5}, 'd': ()}

I didn't use inspect.getinnerframes() to show another way to get the needed frame. Although it simplifies a bit, it also does some extra work that is not needed for you while being relatively slow (inspect.getinnerframes() reads the source file for every module in the traceback; this is not important for one debugging call, but could be an issue in other cases).
A: The problem with using a decorator for what you're trying to achieve is that the frame the exception handler gets is do_your_job()'s, not os.listdir()'s, os.makedirs()'s or os.chown()'s. So the information you'll be printing out is the arguments to do_your_job(). In order to get the behavior I think you intend, you would have to decorate all the library functions you're calling.
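Following the first answer's pointers, a minimal sketch that pulls the innermost frame's argument values out of a traceback with inspect (the helper name is mine; Python 2.6 style, where getargvalues returns a named tuple):

import inspect, sys

def innermost_args():
    tb = sys.exc_info()[2]
    frame = inspect.getinnerframes(tb)[-1][0]      # deepest Python frame
    info = inspect.getargvalues(frame)
    return dict((name, info.locals[name]) for name in info.args)

def f(a, b=2):
    raise ValueError('boom')

try:
    f(1)
except ValueError:
    print innermost_args()                         # {'a': 1, 'b': 2}

Note the caveat implied by the last answer: a C-level call such as os.chown() contributes no Python frame, so its arguments are not recoverable this way; you would only see the frame of the Python function that called it.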
Get last function's call arguments from traceback?
Can I get the parameters of the last function called in traceback? How? I want to make a catcher for standard errors to make readable code, yet provide detailed information to user.
In the following example I want GET_PARAMS to return me a tuple of parameters supplied to os.chown. Examining the inspect module, as advised by Alex Martelli, I couldn't find what I needed.
def catch_errors(fn):
    def decorator(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except (IOError, OSError):
            msg = sys.exc_info()[2].tb_frame.f_locals['error_message']
            quit(msg.format(SEQUENCE_OF_PARAMETERS_OF_THE_LAST_FUNCTION_CALLED)\
                + '\nError #{0[0]}: {0[1]}'.format(sys.exc_info()[1].args), 1)
    return decorator

@catch_errors
def do_your_job():
    error_message = 'Can\'t change folder ownership \'{0}\' (uid:{1}, gid:{2})'
    os.chown('/root', 1000, 1000) # note that params aren't named vars.

if __name__ == '__main__' and os.getenv('USERNAME') != 'root':
    do_your_job()

(Thanks to Jim Robert for the decorator)
[ "For such inspection tasks, always think first of module inspect in the standard library. Here, inspect.getargvalues gives you the argument values given a frame, and inspect.getinnerframes gives you the frames of interest from a traceback object.\n", "Here is an example of such function and some problems that you can't get around:\nimport sys\n\ndef get_params(tb):\n while tb.tb_next:\n tb = tb.tb_next\n frame = tb.tb_frame\n code = frame.f_code\n argcount = code.co_argcount\n if code.co_flags & 4: # *args\n argcount += 1\n if code.co_flags & 8: # **kwargs\n argcount += 1\n names = code.co_varnames[:argcount]\n params = {}\n for name in names:\n params[name] = frame.f_locals.get(name, '<deleted>')\n return params\n\n\ndef f(a, b=2, c=3, *d, **e):\n del c\n c = 4\n e['g'] = 6\n assert False\n\ntry:\n f(1, f=5)\nexcept:\n print get_params(sys.exc_info()[2])\n\nThe output is:\n{'a': 1, 'c': 4, 'b': 2, 'e': {'g': 6, 'f': 5}, 'd': ()}\n\nI didn't used inspect.getinnerframes() to show another way to get needed frame. Although it simplifies a bit, it also do some extra work that is not needed for you while being relatively slow (inspect.getinnerframes() reads source file for every module in traceback; this is not important for one debugging call, but could be an issue in other cases). \n", "The problem with using a decorator for what you're trying to achieve is that the frame the exception handler gets is do_your_job()s, not os.listdir()s, os.makedirs()s or os.chown()s. So the information you'll be printing out is the arguments to do_your_job(). In order to get the behavior I think you intend, you would have to decorate all the library functions you're calling.\n" ]
[ 5, 3, 0 ]
[]
[]
[ "exception_handling", "python", "traceback" ]
stackoverflow_0001650713_exception_handling_python_traceback.txt
Q: Google Federated Login (OpenID+Oauth) for Hosted Apps - changing end points? I'm trying to integrate the Google Federated Login with a premier apps account, but I'm having some problems.
When I send the request to: https://www.google.com/accounts/o8/ud with all the parameters (see below), I get back both a request_token and list of attributes asked for by Attribute Exchange. This is perfect, as we need the email via attribute exchange (AX) to store the user in our application database, and we need the request token for future API requests to scopes (ie: calendar, contacts, etc).
However, using that URL (herein referred to as the endpoint) doesn't keep the user signed in to their hosted apps (gmail, calendar, et al), which is a problem.
Changing the endpoint to https://www.google.com/a/thedomain.com/o8/ud?be=o8 changes everything. I am automagically signed in to other google apps (gmail etc). However, using that endpoint, I only get the request token or the attributes via AX. Obviously that's not particularly hybrid. It's very much one or the other.
Example request to the endpoint https://www.google.com/accounts/o8/ud
parameters = {
    'openid.ns': 'http://specs.openid.net/auth/2.0',
    'openid.claimed_id': 'http://specs.openid.net/auth/2.0/identifier_select',
    'openid.identity': 'http://specs.openid.net/auth/2.0/identifier_select',
    'openid.return_to':'http://our.domain.com/accounts/callback/',
    'openid.realm': 'http://our.domain.com/',
    'openid.assoc_handle': assoc_handle,
    'openid.mode': 'checkid_setup',
    'openid.ns.ext2': 'http://specs.openid.net/extensions/oauth/1.0',
    'openid.ext2.consumer': 'our.domain.com',
    'openid.ext2.scope': 'https://mail.google.com/mail/feed/atom',
    'openid.ns.ax':'http://openid.net/srv/ax/1.0',
    'openid.ax.mode':'fetch_request',
    'openid.ax.required':'firstname,lastname,email',
    'openid.ax.type.firstname':'http://axschema.org/namePerson/first',
    'openid.ax.type.lastname':'http://axschema.org/namePerson/last',
    'openid.ax.type.email':'http://axschema.org/contact/email',
}
return HttpResponseRedirect(end_point + '?' + urllib.urlencode(parameters))

(assoc_handle is previously set successfully by the openid initial request)
I've been struggling for days trying to get this Hybrid approach working, fighting the most opaque error messages (This page is invalid ... thanks Google) and lack of consistent documentation. I've trawled every code sample I can to get to this point.
Any help would be appreciated ...
A: For the record, posterity, and anyone else who might come asunder of this, I'll document the (ridiculous) answer.
Ultimately, the problem was calling:
return HttpResponseRedirect(
    'https://www.google.com/a/thedomain.com/o8/ud?be=o8'
    + '?'
    + urllib.urlencode(parameters)
)

Can you spot it? Yeah, it was the explicit inclusion of the question mark that caused the problem: a URL can never carry two query strings at once.
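Given that the root cause turned out to be a stray '?', a tiny defensive helper along these lines (the name with_params is mine) avoids ever producing two query strings:

import urllib

def with_params(url, params):
    sep = '&' if '?' in url else '?'   # reuse an existing query string
    return url + sep + urllib.urlencode(params)

print with_params('https://www.google.com/a/thedomain.com/o8/ud?be=o8',
                  {'openid.mode': 'checkid_setup'})
# https://www.google.com/a/thedomain.com/o8/ud?be=o8&openid.mode=checkid_setup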
Google Federated Login (OpenID+Oauth) for Hosted Apps - changing end points?
I'm trying to integrate the Google Federated Login with a premier apps account, but I'm having some problems.
When I send the request to: https://www.google.com/accounts/o8/ud with all the parameters (see below), I get back both a request_token and list of attributes asked for by Attribute Exchange. This is perfect, as we need the email via attribute exchange (AX) to store the user in our application database, and we need the request token for future API requests to scopes (ie: calendar, contacts, etc).
However, using that URL (herein referred to as the endpoint) doesn't keep the user signed in to their hosted apps (gmail, calendar, et al), which is a problem.
Changing the endpoint to https://www.google.com/a/thedomain.com/o8/ud?be=o8 changes everything. I am automagically signed in to other google apps (gmail etc). However, using that endpoint, I only get the request token or the attributes via AX. Obviously that's not particularly hybrid. It's very much one or the other.
Example request to the endpoint https://www.google.com/accounts/o8/ud
parameters = {
    'openid.ns': 'http://specs.openid.net/auth/2.0',
    'openid.claimed_id': 'http://specs.openid.net/auth/2.0/identifier_select',
    'openid.identity': 'http://specs.openid.net/auth/2.0/identifier_select',
    'openid.return_to':'http://our.domain.com/accounts/callback/',
    'openid.realm': 'http://our.domain.com/',
    'openid.assoc_handle': assoc_handle,
    'openid.mode': 'checkid_setup',
    'openid.ns.ext2': 'http://specs.openid.net/extensions/oauth/1.0',
    'openid.ext2.consumer': 'our.domain.com',
    'openid.ext2.scope': 'https://mail.google.com/mail/feed/atom',
    'openid.ns.ax':'http://openid.net/srv/ax/1.0',
    'openid.ax.mode':'fetch_request',
    'openid.ax.required':'firstname,lastname,email',
    'openid.ax.type.firstname':'http://axschema.org/namePerson/first',
    'openid.ax.type.lastname':'http://axschema.org/namePerson/last',
    'openid.ax.type.email':'http://axschema.org/contact/email',
}
return HttpResponseRedirect(end_point + '?' + urllib.urlencode(parameters))

(assoc_handle is previously set successfully by the openid initial request)
I've been struggling for days trying to get this Hybrid approach working, fighting the most opaque error messages (This page is invalid ... thanks Google) and lack of consistent documentation. I've trawled every code sample I can to get to this point.
Any help would be appreciated ...
[ "For the record, posterity, and anyone else who might come asunder of this, I'll document the (ridiculous) answer.\nUltimately, the problem was calling:\nreturn HttpResponseRedirect(\n 'https://www.google.com/a/thedomain.com/o8/ud?be=o8'\n + '?'\n + urllib.urlencode(parameters)\n)\n\nCan you spot it? Yeah, it was the explicit inclusion of the question mark that caused the problem. Two query strings never exist at once. \n" ]
[ 7 ]
[]
[]
[ "hybridauthprovider", "oauth", "openid", "python" ]
stackoverflow_0001543123_hybridauthprovider_oauth_openid_python.txt
Q: pyGTK ComboBox List Height I'm just getting started with pyGtk programming, so bear with me. I have a dialog with a ComboBox. The list that shows up when I click on the combo box has 70+ items in it. It extends from the top of the screen to the bottom. I can live with it, but I'd rather have the ComboBox perform like an html select element (i.e. the top of the menu is at the combo box and extends down). Is there some inherited property that I can set or something? I'm not seeing anything in the API reference that has helped me so far.
A: You can use a gtk.ComboBoxEntry instead of gtk.ComboBox. I have tested a ComboBoxEntry with 100 items and it works how you want. The downside is that the user will be able to type whatever they want into it, but you just need to validate the input before you do anything with it. On the upside you could implement auto-complete. I have seen font selection done this way.
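A minimal sketch of the ComboBoxEntry suggestion (PyGTK 2.x API, written from memory, so double-check the names against your PyGTK version):

import gtk

combo = gtk.combo_box_entry_new_text()
for i in range(70):
    combo.append_text('item %d' % i)   # the popup now opens below the entry

window = gtk.Window()
window.add(combo)
window.connect('destroy', gtk.main_quit)
window.show_all()
gtk.main()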
pyGTK ComboBox List Height
I'm just getting started with pyGtk programming, so bear with me. I have a dialog with a ComboBox. The list that shows up when I click on the combo box has 70+ items in it. It extends from the top of the screen to the bottom. I can live with it, but I'd rather have the ComboBox perform like an html select element (i.e. the top of the menu is at the combo box and extends down). Is there some inherited property that I can set or something? I'm not seeing anything in the API reference that has helped me so far.
[ "You can use a gtk.ComboBoxEntry instead of gtk.ComboBox. I have tested a ComboBoxEntry with 100 items and it works how you want. The downside is that the user will be able to type whatever they want into it, but you just need to validate the input before you do anything with it. On the upside you could implement auto-complete. I have seen font selection done this way.\n" ]
[ 1 ]
[]
[]
[ "gtk", "pygtk", "python" ]
stackoverflow_0001635161_gtk_pygtk_python.txt
Q: IronPython to Original Python comparison. What can I expect from the first one? I wish to learn Python but I'm working all day in .Net as a C# developer, so I decided to download and install IronPython and the integrated IronPython Studio.
How different from, or similar to, the original Python is it?
As a .Net developer, can I expect to run conventional Python scripts in the .Net environment with no problems, or is this just the same old migration utopia?
What can I expect about? Thanks in advance.
EDIT: Dec. 2009 - IronPython has been upgraded to 2.6 recently. Please update your answers if possible.
A: In your situation it's perfectly reasonable to study IronPython (especially as this book does a great job helping you do that!). You'll have access to essentially all of Python 2.5 functionality (not sure when IronPython will upgrade to a 2.6 version of Python, but 2.5 is already quite usable), plus all the .Net libraries and assemblies you know and love, and tools such as Visual Studio add-ins.
The differences between CPython and IronPython (and Jython, for that matter, which applies the same concept as IronPython to the JVM -- Jim Hugunin was the originator of Jython long before he moved to Microsoft where he originated IronPython, both projects now thrive) are chiefly in garbage collection and threading: IronPython and Jython rely on their underlying platforms (so, you get mark-and-sweep garbage collection and free threading), CPython rolls its own (so, it's mostly reference-count GC, with mark-and-sweep once in a while to resolve reference loops, and threading hampered by a global interpreter lock).
A well-coded Python script does not rely on the implementation details in question (it never assumes that GC happens immediately, never assumes that an operation is atomic under threading except for the few, like Queue.Queue's methods, that are explicitly documented to be), but of course there's plenty of scripts out in the wild that are sloppy. For example:
data = open('x.txt').read()

this leaves the file object open until it's garbage collected; in a reference-count environment the collection happens immediately (so the file gets closed ASAP), in a mark-and-sweep environment that is not the case (so the process using such constructs often would erroneously keep some files, maybe many files, uselessly open for far longer than they need to be, wasting system resources &c).
So, proper Python coding is instead:
# needed in 2.5, unneeded but innocuous in 2.6
from __future__ import with_statement

with open('x.txt') as f: data = f.read()

which does guarantee immediate closure of the file in every implementation (the with statement is very very handy that way;-).
This doesn't affect your learning of Python, nor does it impede reuse of properly-coded Python code, but if and when you want to reuse sloppily-coded Python code (especially in a long-running server, service, daemon process, &c) you may in the future need to do some tightening up thereof. So, btw, will people who want to use newer and better CPython versions, such as Unladen Swallow &c, once those versions implement better garbage collection mechanisms, get rid of the GIL, and other enhancements; hopefully this is already changing the "culture" of the Python community towards more correct, less sloppy coding, but of course there's bazillions of lines of old sloppy code around, so some care is needed;-).
A: Most python scripts work perfectly well in IronPython.
Here is a list of packages and modules not included in IronPython in the latest release.
As long as your script doesn't rely on these, it will most likely work without changes. However, much of the "power" of IronPython really comes into play by migrating your scripts to use .NET framework classes instead of the python standard library.
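To make the ".NET classes instead of the standard library" point concrete, a tiny IronPython sketch (run under ipy.exe; the System namespace is available without an explicit clr.AddReference because mscorlib is preloaded):

from System import Math, DateTime

print Math.Abs(-42)        # 42, via .NET's System.Math
print DateTime.Now.Year    # current year, from the .NET base class library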
IronPython to Original Python comparison. What can I expect from the first one?
I wish to learn Python but I'm working all day in .Net as a C# developer, so I decided to download and install IronPython and the integrated IronPython Studio.
How different from, or similar to, the original Python is it?
As a .Net developer, can I expect to run conventional Python scripts in the .Net environment with no problems, or is this just the same old migration utopia?
What can I expect about? Thanks in advance.
EDIT: Dec. 2009 - IronPython has been upgraded to 2.6 recently. Please update your answers if possible.
[ "In your situation it's perfectly reasonable to study IronPython (especially as this book does a great job helping you do that!). You'll have access to essentially all of Python 2.5 functionality (not sure when IronPython will upgrade to a 2.6 version of Python, but 2.5 is already quite usable), plus all the .Net libraries and assemblies you know and love, and tools such as Visual Studio add-ins.\nThe differences between CPython and IronPython (and Jython, for that matter, which applies the same concept as IronPython to the JVM -- Jim Hugunin was the originator of Jython long before he moved to Microsoft where he originated IronPython, both projects now thrive) are chiefly in garbage collection and threading: IronPython and Jython rely on their underlying platforms (so, you get mark-and-sweep garbage collection and free threading), CPython rolls its own (so, it's mostly reference-count GC, with mark-and-sweep once in a while to resolve reference loops, and threading hampered by a global interpreter lock).\nA well-coded Python script does not rely on the implementation details in question (it never assumes that GC happens immediately, never assumes that an operation is atomic under threading except for the few, like Queue.Queue's methods, that are explicitly documented to be), but of course there's plenty of scripts out in the wild that are sloppy. For example:\ndata = open('x.txt').read()\n\nthis leaves the file object open until it's garbage collected; in a reference-count environment the collection happens immediately (so the file gets closed ASAP), in a mark-and-sweep environment that is not the case (so the process using such constructs often would erroneously keep some files, maybe many files, uselessly open for far longer than they need to be, wasting system resources &c).\nSo, proper Python coding is instead:\n# needed in 2.5, unneeded but innocuous in 2.6\nfrom __future__ import with_statement\n\nwith open('x.txt') as f: data = f.read()\n\nwhich does guarantee immediate closure of the file in every implementation (the with statement is very very handy that way;-).\nThis doesn't affect your learning of Python, nor does it impede reuse of properly-coded Python code, but if and when you want to reuse sloppily-coded Python code (especially in a long-running server, service, daemon process, &c) you may in the future need to do some tightening up thereof. So, btw, will people who want to use newer and better CPython versions, such as Unladen Swallow &c, once those versions implement better garbage collection mechanisms, get rid of the GIL, and other enhancements; hopefully this is already changing the \"culture\" of the Python community towards more correct, less sloppy coding, but of course there's bazillions of lines of old sloppy code around, so some care is needed;-).\n", "Most python scripts work perfectly well in IronPython. \nHere is a list of packages and modules not included in IronPython in the latest release.\nAs long as your script doesn't rely on these, it will most likely work without changes. However, much of the \"power\" of IronPython really comes into play by migrating your scripts to use .NET framework classes instead of the python standard library.\n" ]
[ 3, 1 ]
[]
[]
[ ".net", "comparison", "ironpython", "ironpython_studio", "python" ]
stackoverflow_0001650898_.net_comparison_ironpython_ironpython_studio_python.txt
Q: Create triplets from list of words Let's say I have a list of words, something like this:
['The', 'Quick', 'Brown', 'Fox', 'Jumps', 'Over', 'The', 'Lazy', 'Dog']

I'd like to generate a list of lists, with each inner list containing 3 of the words, covering every possible consecutive triplet. So it should look something like this:
['The', 'Quick', 'Brown']
['Quick', 'Brown', 'Fox']
['Brown', 'Fox', 'Jumps']

and so on. What would be the best way to get this result?
A: >>> words
['The', 'Quick', 'Brown', 'Fox', 'Jumps', 'Over', 'The', 'Lazy', 'Dog']
>>> [words[i:i+3] for i in range(len(words) - 2)]
[['The', 'Quick', 'Brown'], ['Quick', 'Brown', 'Fox'], ['Brown', 'Fox', 'Jumps'], ['Fox', 'Jumps', 'Over'], ['Jumps', 'Over', 'The'], ['Over', 'The', 'Lazy'], ['The', 'Lazy', 'Dog']]

A: b = [a[i:i+3] for i in range(len(a)-2)]

A: With sliceable sequences such as lists, the answers already given work fine. For the general case in which the words come in any iterable (be it a sequence, a file, whatever):
def NbyN(seq, N=3):
    it = iter(seq)
    window = [next(it) for _ in range(N)] 
    while True:
        yield window
        window = window[1:] + [next(it)]
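For completeness, an itertools-based variant of the same sliding window (izip and the two-argument next() are the Python 2.6 spellings):

from itertools import tee, izip

def triplets(words):
    a, b, c = tee(iter(words), 3)   # three independent iterators
    next(b, None)                   # advance b by one position
    next(c, None)
    next(c, None)                   # advance c by two positions
    return izip(a, b, c)

print [list(t) for t in triplets(['The', 'Quick', 'Brown', 'Fox'])]
# [['The', 'Quick', 'Brown'], ['Quick', 'Brown', 'Fox']]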
Create triplets from list of words
Let's say I have a list of words, something like this:
['The', 'Quick', 'Brown', 'Fox', 'Jumps', 'Over', 'The', 'Lazy', 'Dog']

I'd like to generate a list of lists, with each inner list containing 3 of the words, covering every possible consecutive triplet. So it should look something like this:
['The', 'Quick', 'Brown']
['Quick', 'Brown', 'Fox']
['Brown', 'Fox', 'Jumps']

and so on. What would be the best way to get this result?
[ ">>> words\n['The', 'Quick', 'Brown', 'Fox', 'Jumps', 'Over', 'The', 'Lazy', 'Dog']\n>>> [words[i:i+3] for i in range(len(words) - 2)]\n[['The', 'Quick', 'Brown'], ['Quick', 'Brown', 'Fox'], ['Brown', 'Fox', 'Jumps'], ['Fox', 'Jumps', 'Over'], ['Jumps', 'Over', 'The'], ['Over', 'The', 'Lazy'], ['The', 'Lazy', 'Dog']]\n\n", "b = [a[i:i+3] for i in range(len(a)-2)]\n\n", "With sliceable sequences such as lists, the answers already given work fine. For the general case in which the words come in any iterable (be it a sequence, a file, whatever):\ndef NbyN(seq, N=3):\n it = iter(seq)\n window = [next(it) for _ in range(N)] \n while True:\n yield window\n window = window[1:] + [next(it)]\n\n" ]
[ 7, 5, 3 ]
[ "Without knowing why you want to do this or how many times it needs to be done, I'd say just slice it. \n" ]
[ -1 ]
[ "list", "python" ]
stackoverflow_0001651386_list_python.txt
Q: Python Array is read-only, can't append values I am new to Python. The following code is causing an error when it attempts to append values to an array. What am I doing wrong?
import re
from array import array

freq_pattern = re.compile("Frequency of Incident[\(\)A-Za-z\s]*\.*\s*([\.0-9]*)")
col_pattern = re.compile("([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)")
e_rcs = array('f')

f = open('example.4.out', 'r')

for line in f:
    print line,

    result = freq_pattern.search(line)
    if result:
        freq = float(result.group(1))

    cols = col_pattern.search(line)
    if cols:
        e_rcs.append = float(cols.group(2))

f.close()

Error
Traceback (most recent call last):
  File "D:\workspace\CATS Parser\cats-post.py", line 31, in <module>
    e_rcs.append = float(cols.group(2))
AttributeError: 'array.array' object attribute 'append' is read-only
A: Do you want to append to the array?
e_rcs.append( float(cols.group(2)) )

Doing this: e_rcs.append = float(cols.group(2)) replaces the append method of the array e_rcs with a floating-point value. Rarely something you want to do.
A: You are assigning to the append() function; instead, you want to call .append(float(cols.group(2))).
A: append is a method. You're trying to overwrite it instead of calling it.
e_rcs.append(float(cols.group(2)))

A: Try this instead:
import re

freq_pattern = re.compile("Frequency of Incident[\(\)A-Za-z\s]*\.*\s*([\.0-9]*)")
col_pattern = re.compile("([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)")
e_rcs = [] # make an empty list

f = open('example.4.out', 'r')

for line in f:
    print line,

    result = freq_pattern.search(line)
    if result:
        freq = float(result.group(1))

    cols = col_pattern.search(line)
    if cols:
        e_rcs.append( float(cols.group(2)) ) # add another float to the list

f.close()

In Python you would only use array.array when you need to control the binary layout of your storage, i.e. a plain array of bytes in RAM.
If you are going to be doing a lot of scientific data analysis, then you should have a look at the NumPy module which supports n-dimensional arrays. Think of NumPy as a replacement for FORTRAN in doing mathematics and data analysis.
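If you take up the NumPy suggestion from the last answer, the idiomatic pattern is to collect into a plain list and convert once at the end, since numpy.append copies the whole array on every call:

import numpy as np

values = []
for token in ['1.5', '2.25', '20.11']:
    values.append(float(token))              # cheap list appends while parsing

e_rcs = np.array(values, dtype=np.float32)   # one conversion at the end
print e_rcs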
Python Array is read-only, can't append values
I am new to Python. The following code is causing an error when it attempts to append values to an array. What am I doing wrong?
import re
from array import array

freq_pattern = re.compile("Frequency of Incident[\(\)A-Za-z\s]*\.*\s*([\.0-9]*)")
col_pattern = re.compile("([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)\s+([-\.0-9]+)")
e_rcs = array('f')

f = open('example.4.out', 'r')

for line in f:
    print line,

    result = freq_pattern.search(line)
    if result:
        freq = float(result.group(1))

    cols = col_pattern.search(line)
    if cols:
        e_rcs.append = float(cols.group(2))

f.close()

Error
Traceback (most recent call last):
  File "D:\workspace\CATS Parser\cats-post.py", line 31, in <module>
    e_rcs.append = float(cols.group(2))
AttributeError: 'array.array' object attribute 'append' is read-only
[ "Do you want to append to the array?\ne_rcs.append( float(cols.group(2)) )\n\nDoing this: e_rcs.append = float(cols.group(2)) replaces the append method of the array e-rcs with a floating-point value. Rarely something you want to do.\n", "You are assigning to the append() function, you want instead to call .append(float(cols.group(2))).\n", "append is a method. You're trying to overwrite it instead of calling it.\ne_rcs.append(float(cols.group(2)))\n\n", "Try this instead:\nimport re\n\nfreq_pattern = re.compile(\"Frequency of Incident[\\(\\)A-Za-z\\s]*\\.*\\s*([\\.0-9]*)\")\ncol_pattern = re.compile(\"([-\\.0-9]+)\\s+([-\\.0-9]+)\\s+([-\\.0-9]+)\\s+([-\\.0-9]+)\\s+([-\\.0-9]+)\")\ne_rcs = [] # make an empty list\n\nf = open('example.4.out', 'r')\n\nfor line in f:\n print line,\n\n result = freq_pattern.search(line)\n if result:\n freq = float(result.group(1))\n\n cols = col_pattern.search(line)\n if cols:\n e_rcs.append( float(cols.group(2)) ) # add another float to the list\n\nf.close()\n\nIn Python you would only use array.array when you need to control the binary layout of your storage, i.e. a plain array of bytes in RAM.\nIf you are going to be doing a lot of scientific data analysis, then you should have a look at the NumPy module which supports n-dimensional arrays. Think of NumPy as a replacement for FORTRAN in doing mathematics and data analysis.\n" ]
[ 6, 6, 3, 0 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0001651430_arrays_python.txt
Q: Extracting ALL matches of a nested regular expression in python I am trying to parse a list of items which satisfies the python regex
r'\A(("[\w\s]+"|\w+)\s+)*\Z'

that is, it's a space separated list except that spaces are allowed inside quoted strings. I would like to get a list of items in the list (that is, of items matched by the r'("[\w\s]+"|\w+)' part). So, for example
>>> parse('foo "bar baz" "bob" ')
['foo', '"bar baz"', '"bob"']

Is there any nice way to do this with python re? Many things don't quite work. For example
>>> re.match(r'\A(("[\w\s]+"|\w+)\s+)*\Z', 'foo "bar baz" "bob" ').group(2)
'"bob"'

only returns the last one it matched. On the other hand
>>> re.findall(r'("[\w\s]+"|\w+)', 'foo "bar baz" "bob" ')
['foo', '"bar baz"', '"bob"']

but it also accepts malformed expressions like
>>> re.findall(r'("[\w\s]+"|\w+)', 'foo "bar b-&&az" "bob" ')
['foo', 'bar', 'b', 'az', '" "', 'bob']

So is there any way to use the original regex and get all of the items that matched group 2? Something like
>>> re.match_multigroup(r'\A(("[\w\s]+"|\w+)\s+)*\Z', 'foo "bar baz" "bob" ').group(2)
['foo', '"bar baz"', '"bob"']

>>> re.match_multigroup(r'("[\w\s]+"|\w+)', 'foo "bar b-&&az" "bob" ')
None

Edit: It is important that I preserve the quotes in the output, thus I don't want
>>> re.match_multigroup(r'\A(("[\w\s]+"|\w+)\s+)*\Z', 'foo "bar baz" "bob" ').group(2)
['foo', 'bar baz', 'bob']

because then I don't know if bob was quoted or not.
A: I don't think that regex is the right tool here. Try csv module:
>>> s = 'foo "bar baz" "bob" '
>>> for i in csv.reader([s], delimiter=' '):
    print(i)


['foo', 'bar baz', 'bob', '']

A: Here's a solution that splits on any whitespace that isn't inside a pair of quotation marks:
re.split('\s+(?=[^"]*(?:"[^"]*"[^"]*)*$)', target)

The lookahead succeeds only if there's an even number of quotation marks ahead of the just-matched whitespace. If quoted sections in your text can contain escaped quotes, you may need a more complicated regex, depending on how the escaping is done.
A: Alright, I ended up deciding to do this in two steps.
First I check that the expression is syntactically valid and second I break it into individual pieces:
def parse(expr):
    if re.match(r'\A(("[\w\s]+"|\w+)\s+)*\Z', expr):
        return re.findall(r'("[\w\s]+"|\w+)', expr)

So:
>>> parse('foo "bar baz" "bob" ')
['foo', '"bar baz"', '"bob"']
>>> parse('foo "bar b-&&az" "bob" ')
>>> parse('foo "bar" ')
['foo', '"bar"']
>>> parse('"foo" bar ')
['"foo"', 'bar']
>>> parse('foo"bar baz" "bob" ')
>>> parse('&&')

I'm about 90% sure that this method works appropriately for all strings, but I would still be interested if anyone had a more general solution, this seems sort of kludgey to me.
Thanks SilentGhost and Alan Moore for the help. I did not know about python csv or regex lookaheads before, it might be helpful to me to learn about those.
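Another standard-library option worth knowing: shlex in its default non-POSIX mode keeps the quote characters in the tokens (verify on your Python version; note it tokenizes on its own wordchars set, so by itself it does not reject malformed input like b-&&az, and you would still want the validation step above):

import shlex

print list(shlex.shlex('foo "bar baz" "bob" '))
# ['foo', '"bar baz"', '"bob"']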
Extracting ALL matches of a nested regular expression in python
I am trying to parse a list of items which satisfies the python regex
r'\A(("[\w\s]+"|\w+)\s+)*\Z'

that is, it's a space separated list except that spaces are allowed inside quoted strings. I would like to get a list of items in the list (that is, of items matched by the r'("[\w\s]+"|\w+)' part). So, for example
>>> parse('foo "bar baz" "bob" ')
['foo', '"bar baz"', '"bob"']

Is there any nice way to do this with python re? Many things don't quite work. For example
>>> re.match(r'\A(("[\w\s]+"|\w+)\s+)*\Z', 'foo "bar baz" "bob" ').group(2)
'"bob"'

only returns the last one it matched. On the other hand
>>> re.findall(r'("[\w\s]+"|\w+)', 'foo "bar baz" "bob" ')
['foo', '"bar baz"', '"bob"']

but it also accepts malformed expressions like
>>> re.findall(r'("[\w\s]+"|\w+)', 'foo "bar b-&&az" "bob" ')
['foo', 'bar', 'b', 'az', '" "', 'bob']

So is there any way to use the original regex and get all of the items that matched group 2? Something like
>>> re.match_multigroup(r'\A(("[\w\s]+"|\w+)\s+)*\Z', 'foo "bar baz" "bob" ').group(2)
['foo', '"bar baz"', '"bob"']

>>> re.match_multigroup(r'("[\w\s]+"|\w+)', 'foo "bar b-&&az" "bob" ')
None

Edit: It is important that I preserve the quotes in the output, thus I don't want
>>> re.match_multigroup(r'\A(("[\w\s]+"|\w+)\s+)*\Z', 'foo "bar baz" "bob" ').group(2)
['foo', 'bar baz', 'bob']

because then I don't know if bob was quoted or not.
[ "I don't think that regex is the right tool here. Try csv module:\n>>> s = 'foo \"bar baz\" \"bob\" '\n>>> for i in csv.reader([s], delimiter=' '):\n print(i)\n\n\n['foo', 'bar baz', 'bob', '']\n\n", "Here's a solution that splits on any whitespace that isn't inside a pair of quotation marks:\nre.split('\\s+(?=[^\"]*(?:\"[^\"]*\"[^\"]*)*$)', target)\n\nThe lookahead succeeds only if there's an even number of quotation marks ahead of the just-matched whitespace. If quoted sections in your text can contain escaped quotes, you may need a more complicated regex, depending on how the escaping is done.\n", "Alright, I ended up deciding to do this in two steps.\nFirst I check that the expression is syntactically valid and second I break it into individual pieces:\ndef parse(expr):\n if re.match(r'\\A((\"[\\w\\s]+\"|\\w+)\\s+)*\\Z', expr):\n return re.findall(r'(\"[\\w\\s]+\"|\\w+)', expr)\n\nSo:\n>>> parse('foo \"bar baz\" \"bob\" ')\n['foo', '\"bar baz\"', '\"bob\"']\n>>> parse('foo \"bar b-&&az\" \"bob\" ')\n>>> parse('foo \"bar\" ')\n['foo', '\"bar\"']\n>>> parse('\"foo\" bar ')\n['\"foo\"', 'bar']\n>>> parse('foo\"bar baz\" \"bob\" ')\n>>> parse('&&')\n\nI'm about 90% sure that this method works appropriately for all strings, but I would still be interested if anyone had a more general solution, this seems sort of kludgey to me.\nThanks SilentGhost and Alan Moore for the help. I did not know about python csv or regex lookaheads before, it might be helpful to me to learn about those.\n" ]
[ 2, 1, 1 ]
[]
[]
[ "parsing", "python", "regex" ]
stackoverflow_0001633984_parsing_python_regex.txt
Q: Calling types via their name as a string in Python I'm aware of using globals(), locals() and getattr to reference things in Python by string (as in this question) but unless I'm missing something obvious I can't seem to use this with calling types.
e.g.:
In [12]: locals()['int']
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)

e:\downloads_to_access\<ipython console> in <module>()

KeyError: 'int'

In [13]: globals()['int']
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)

e:\downloads_to_access\<ipython console> in <module>()

KeyError: 'int'

getattr(???, 'int')...

What's the best way of doing this?
A: There are locals, globals, and then builtins.
Perhaps you are looking for the builtin:
import __builtin__
getattr(__builtin__,'int')

A: You've already gotten a solution using builtins, but another worthwhile technique to hold in your toolbag is a dispatch table. If your CSV is designed to be used by multiple applications written in multiple languages, it might look like this:
Integer,15
String,34
Float,1.0
Integer,8

In such a case you might want something like this, where csv is a list of tuples containing the data above:
mapping = {
    'Integer': int,
    'String': str,
    'Float': float,
    'Unicode': unicode
}
results = []
for row in csv:
    datatype = row[0]
    val_string = row[1]
    results.append(mapping[datatype](val_string))
return results

That gives you the flexibility of allowing arbitrary strings to map to useful types. You don't have to massage your data to give you the exact values python expects.
A: getattr(__builtins__,'int')

A: The issue here is that int is part of the __builtins__ module, not just part of the global namespace. You can get a built-in type, such as int, using the following bit of code:
int_gen = getattr(globals()["__builtins__"], "int")
i = int_gen(4)
# >>> i = 4

Similarly, you can access any other (imported) module by passing the module's name as a string index to globals(), and then using getattr to extract the desired attributes.
A: Comments suggest that you are unhappy with the idea of using eval to generate data. Looking for a function in __builtins__ allows you to find eval.
The most basic solution given looks like this:
import __builtin__

def parseInput(typename, value):
    return getattr(__builtin__, typename)(value)

You would use it like so:
>>> parseInput("int", "123")
123

cool. works pretty ok. how about this one though?
>>> parseInput("eval", 'eval(compile("print \'Code injection?\'","","single"))')
Code injection?

does this do what you expect? Unless you explicitly want this, you need to do something to prevent untrustworthy inputs from poking about in your namespace. I'd strongly recommend a simple whitelist, gracefully raising some sort of exception in the case of invalid input, like so:
import __builtin__

def parseInput(typename, value):
    return {"int":int, "float":float, "str":str}[typename](value)

but if you just can't bear that, you can still add just a bit of armor by verifying that the requested function is actually a type:
import __builtin__

def parseInput(typename, value):
    typector = getattr(__builtin__, typename)
    if type(typector) is type:
        return typector(value)
    else:
        return None

A: If you have a string that is the name of a thing, and you want the thing, you can also use:
thing = 'int'
eval(thing)

Keep in mind, though, that this is very powerful, and you need to understand what thing might contain, and where it came from. For example, if you accept user input as thing, a malicious user could do unlimited damage to your machine with this code.
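One footnote on the eval-safety discussion: when the goal is only to turn a literal string into a value (rather than to look a type up by name), ast.literal_eval in Python 2.6+ is the safe middle ground:

import ast

print ast.literal_eval('123')      # 123
print ast.literal_eval('1.5')      # 1.5
print ast.literal_eval("'abc'")    # 'abc'
# ast.literal_eval("__import__('os')") raises ValueError instead of executing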
Calling types via their name as a string in Python
I'm aware of using globals(), locals() and getattr to reference things in Python by string (as in this question) but unless I'm missing something obvious I can't seem to use this with calling types.
e.g.:
In [12]: locals()['int']
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)

e:\downloads_to_access\<ipython console> in <module>()

KeyError: 'int'

In [13]: globals()['int']
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)

e:\downloads_to_access\<ipython console> in <module>()

KeyError: 'int'

getattr(???, 'int')...

What's the best way of doing this?
[ "There are locals,globals, and then builtins.\nPerhaps you are looking for the builtin:\nimport __builtin__\ngetattr(__builtin__,'int')\n\n", "You've already gotten a solution using builtins, but another worthwhile technique to hold in your toolbag is a dispatch table. If your CSV is designed to be used by multiple applications written in multiple languages, it might look like this:\nInteger,15\nString,34\nFloat,1.0\nInteger,8\n\nIn such a case you might want something like this, where csv is a list of tuples containing the data above:\nmapping = {\n 'Integer': int,\n 'String': str,\n 'Float': float,\n 'Unicode': unicode\n}\nresults = []\nfor row in csv:\n datatype = row[0]\n val_string = row[1]\n results.append(mapping[datatype](val_string))\nreturn results\n\nThat gives you the flexibility of allowing arbitrary strings to map to useful types. You don't have to massage your data to give you the exact values python expects.\n", "getattr(__builtins__,'int')\n\n", "The issue here is that int is part of the __builtins__ module, not just part of the global namespace. You can get a built-in type, such as int, using the following bit of code:\nint_gen = getattr(globals()[\"__builtins__\"], \"int\")\ni = int_gen(4)\n# >>> i = 4\n\nSimilarly, you can access any other (imported) module by passing the module's name as a string index to globals(), and then using getattr to extract the desired attributes.\n", "Comments suggest that you are unhappy with the idea of using eval to generate data. looking for a function in __builtins__ allows you to find eval. \nthe most basic solution given looks like this:\nimport __builtin__\n\ndef parseInput(typename, value):\n return getattr(__builtins__,typename)(value)\n\nYou would use it like so:\n>>> parseInput(\"int\", \"123\")\n123\n\ncool. works pretty ok. how about this one though?\n>>> parseInput(\"eval\", 'eval(compile(\"print \\'Code injection?\\'\",\"\",\"single\"))')\nCode injection?\n\ndoes this do what you expect? Unless you explicitly want this, you need to do something to prevent untrustworthy inputs from poking about in your namespace. I'd strongly recommend a simple whitelist, gracefully raising some sort of exception in the case of invalid input, like so:\nimport __builtin__\n\ndef parseInput(typename, value):\n return {\"int\":int, \"float\":float, \"str\":str}[typename](value)\n\nbut if you just can't bear that, you can still add just a bit of armor by verifying that the requested function is actually a type:\nimport __builtin__\n\ndef parseInput(typename, value):\n typector = getattr(__builtins__,typename)\n if type(typector) is type:\n return typector(value)\n else:\n return None\n\n", "If you have a string that is the name of a thing, and you want the thing, you can also use:\nthing = 'int'\neval(thing)\n\nKeep in mind though, that this is very powerful, and you need to understand what thing might contain, and where it came from. For example, if you accept user input as thing, a malicious user could do unlimited damage to your machine with this code.\n" ]
[ 12, 7, 3, 2, 2, 1 ]
[]
[]
[ "getattr", "python" ]
stackoverflow_0001650338_getattr_python.txt
Q: Is it still Python 2.6 versus Python 3? G'day, I'm wanting to go back to Python after not using it for a while and I saw this question "Python Version for a Newbie" while wondering about getting back into Python 2.6 or Python 3.
Almost all of the question's answers were along the lines that most of the code out there, libraries, legacy systems, etc., is 2.5 or 2.6 rather than 3 so start with 2.x now and then head towards 3 later on.
Given that the question and all answers date from early December 2008, I was wondering: is this still the case? Should someone who wants to get back into Python maybe start off with 2.6 and then head towards 3 later on?
A: Yes. Virtually all live production systems will use 2.5/2.6 for a long time yet. There's no point learning 3.0, only to have to downgrade it because your host doesn't support it.
95% of what you will learn in 2.5/2.6 is applicable to 3 anyway.
A: Depends on the amount of libraries you're going to use.

Raw Python, or all libs are available for Py3k - go for it without any doubts.
Python code distributed as standalone app (using PyInstaller), relying on some GUI lib, XML-lib, win32api etc - double check if all libs are available at least as betas for Py3k. Chances are still quite high that some older lib is not available for Python 3.x, and either you port it by yourself to the new Python version, or you switch to some other lib or - stick to Python 2.6 for a while.

A: If you want to use only the standard library then try Python 3.1. If you want to use other libraries/frameworks then they dictate the version to use. For example the web2py framework will work best on 2.5.
A: I would say that Python 2.4 is the safest to learn, but the changes from 2.4->2.5->2.6 make some small progress towards Python 3.x, even if they may never make it (if I recall there will be some more steps?).
Python 3.1 can be used if you own a dedicated server and intend to build your own applications from the ground up. WSGI does support this, but I wouldn't recommend it.
As has already been said, I would learn the Python 2.5 or Python 2.6 style, but I would make a few changes.
Look at the Python 3 style regarding brackets.
e.g. The print statement in 2.x has always been just
print "Hello World"

Whereas in 3.x you need to enclose it
print("Hello World")

This is probably a good practice to pick up on, but things like Exceptions will cause issues if you use 3.x in 2.x. I know it's probably a bit confusing, but if you make sure you wrap your functions (additional brackets shouldn't really hurt most things) so that nothing is bare (bare like the first code snippet above), then it'll help with the transition.
A: The problem is, if you started with 2.4 or later it is better to start from there so you'll get on track faster; after some time, when you feel comfortable with your code, you can try 3.0 and find out what they changed and learn the new style.
I for one still code in 2.6 style and follow those guidelines, and still haven't seen the changes in 3.0
Is it still Python 2.6 versus Python 3?
G'day, I'm wanting to go back to Python after not using it for a while and I saw this question "Python Version for a Newbie" while wondering about getting back into Python 2.6 or Python 3.
Almost all of the question's answers were along the lines that most of the code out there, libraries, legacy systems, etc., is 2.5 or 2.6 rather than 3 so start with 2.x now and then head towards 3 later on.
Given that the question and all answers date from early December 2008, I was wondering: is this still the case? Should someone who wants to get back into Python maybe start off with 2.6 and then head towards 3 later on?
[ "Yes. Virtually all live production systems will use 2.5/2.6 for a long time yet. There's no point learning 3.0, only to have to downgrade it because your host doesn't support it.\n95% of what you will learn in 2.5/2.6 is applicable to 3 anyway.\n", "Depends on the amount of libraries you're going to use.\n\nRaw Python, or all libs are available for Py3k - go for it without any doubts.\nPython code distributed as standalone app (using PyInstaller), relying on some GUI lib, XML-lib, win32api etc - double check if all libs are available at least as betas for Py3k. Chances are still quite high that some older lib is not available for Python 3.x, and either you port it by yourself to new Python version, or you switch to some other lib or - stick to Python 2.6 for a while.\n\n", "If you want to use only standard library then try Python 3.1. If you want to use others libraries/frameworks then they dictate the version to use. For example web2py framework will work best on 2.5.\n", "I would say that Python 2.4 is the safest to learn, but the changes from 2.4->2.5->2.6 make some small progress towards Python 3.x, even if they may never make it (if I recall there will be some more steps?).\nPython 3.1 can be used if you own a dedicated server and intend to build your own applications from the ground up. WSGI does support this, but I wouldn't recommend it.\nAs has already been said, I would learn the Python 2.5 or Python 2.6 style, but I would make a few changes.\nLook at the Python 3 style regarding brackets.\ne.g. The print function in 2.x has always been just\nprint \"Hello World\"\n\nWhere as in 3.x you need to enclose it\nprint(\"Hello World\")\n\nThis is probably a good practice to pick up on, but things like Exceptions will cause issues if you use 3.x in 2.x. I know it's probably a bit confusing, but if you make sure you wrap your functions (additional brackets shouldn't really hurt most things) so that nothing is bare (bare like the first code snippet above), then it'll help with the transition.\n", "The problem is, if you started with 2.4 or more it is better if you start from there, so you'll get on track faster, after some time when you feel comfortable with you code you can try 3.0 and find out what did they change and learn the new style.\nI for once still code in 2.6 style and follow those guidelines, still haven't seen the changes in 3.0\n" ]
[ 9, 3, 2, 2, 1 ]
[]
[]
[ "python", "python_3.x", "version" ]
stackoverflow_0001649391_python_python_3.x_version.txt
Q: Close all opened xml tags I have a file which changes its content over a short time. But I'd like to read it before it is ready. The problem is that it is an XML file (a log). So when you read it, it could be that not all tags are closed.
I would like to know if there is a way to close all opened tags correctly, so that there are no problems showing it in the browser (with an XSLT stylesheet). This should be done using included features of Python.
A: Some XML parsers allow incremental parsing of XML documents; that is, the parser can start working on the document without needing it to be fully loaded. The XMLTreeBuilder from the xml.etree.ElementTree module in the Python standard library is one such parser: Element Tree
As you can see in the example below you can feed data to the parser bit by bit as you read it from your input source. The appropriate hook methods in your handler class will get called when various XML "events" happen (tag started, tag data read, tag ended) allowing you to process the data as the XML document is loaded:
from xml.etree.ElementTree import XMLTreeBuilder
class MyHandler(object):
    def start(self, tag, attrib):
        # Called for each opening tag.
        print tag + " started"
    def end(self, tag):
        # Called for each closing tag.
        print tag + " ended"
    def data(self, data):
        # Called when data is read from a tag
        print data + " data read"
    def close(self): 
        # Called when all data has been parsed.
        print "All data read"

handler = MyHandler()

parser = XMLTreeBuilder(target=handler)

parser.feed('<sometag>')
parser.feed('<sometag-child-tag>text')
parser.feed('</sometag-child-tag>')
parser.feed('</sometag>')
parser.close()

In this example the handler would receive five events plus the final close and print:
sometag started
sometag-child-tag started
text data read
sometag-child-tag ended
sometag ended
All data read
A: If I am understanding your question correctly, you have a log file that is always being appended to so you get something like:
<root>
<entry> ... </entry>
<entry> ... </entry>
...
<entry> ... </entry
<!-- no closing root -->

In this case you DON'T want to use a DOM parser because it tries to read a complete document and would choke on the missing tag. Instead, a SAX or Pull parser would work because it reads the document like a stream of data rather than a complete tree. As Denis replied above, you could either close the missing tag at the end or ignore any incomplete tags before writing it out.
XML parsing on Wikipedia
A: You can use any SAX parser by feeding data available so far to it. Use a SAX handler that just reconstructs the source XML, keep a stack of opened tags, and close them in reverse order at the end.
A: You could use BeautifulStoneSoup (XML part of BeautifulSoup).
www.crummy.com/software/BeautifulSoup
It's not ideal, but it would circumvent the problem if you cannot fix the file's output...
It's basically a previously implemented version of what Denis said.
You can just join whatever you need into the soup and it will do its best to fix it.
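A minimal sketch of the "keep a stack of open tags and close them in reverse order" idea from the third answer (a naive regex scan; it assumes no '>' inside attribute values and ignores comments/CDATA, so treat it as illustration only):

import re

TAG = re.compile(r'<(/?)([A-Za-z_][-\w.:]*)[^>]*?(/?)>')

def close_open_tags(fragment):
    stack = []
    for closing, name, selfclosing in TAG.findall(fragment):
        if selfclosing:
            continue              # <tag/> opens and closes itself
        if closing:
            if stack and stack[-1] == name:
                stack.pop()
        else:
            stack.append(name)
    return fragment + ''.join('</%s>' % t for t in reversed(stack))

print close_open_tags('<log><entry><msg>hi</msg>')
# <log><entry><msg>hi</msg></entry></log>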
Close all opened xml tags
I have a file whose content changes over a short time, but I'd like to read it before it is complete. The problem is that it is an XML file (a log), so when you read it, some tags may not be closed yet. I would like to know if there is a way to close all opened tags correctly, so that there are no problems displaying it in the browser (with an XSLT stylesheet). This should be done using only features included with Python.
[ "Some XML parsers allow incremental parsing of XML documents that is the parser can start working on the document without needing it to be fully loaded. The XMLTreeBuilder from the xml.etree.ElementTree module in the Python standard library is one such parser: Element Tree\nAs you can see in the example below you can feed data to the parser bit by bit as you read it from your input source. The appropriate hook methods in your handler class will get called when various XML \"events\" happen (tag started, tag data read, tag ended) allowing you to process the data as the XML document is loaded:\nfrom xml.etree.ElementTree import XMLTreeBuilder\nclass MyHandler(object):\n def start(self, tag, attrib):\n # Called for each opening tag.\n print tag + \" started\"\n def end(self, tag):\n # Called for each closing tag.\n print tag + \" ended\"\n def data(self, data):\n # Called when data is read from a tag\n print data + \" data read\"\n def close(self): \n # Called when all data has been parsed.\n print \"All data read\"\n\nhandler = MyHandler()\n\nparser = XMLTreeBuilder(target=handler)\n\nparser.feed(<sometag>)\nparser.feed(<sometag-child-tag>text)\nparser.feed(</sometag-child-tag>)\nparser.feed(</sometag>)\nparser.close()\n\nIn this example the handler would receive five events and print:\nsometag started\nsometag-child started\n\"text\" data read\nsometag-child ended\nsometag ended\nAll data read\n", "If I am understanding your question correctly, you have a log file that is always being appended to so you get something like:\n<root>\n<entry> ... </entry>\n<entry> ... </entry>\n...\n<entry> ... </entry\n<!-- no closing root -->\n\nIn this case you DON'T want to use a DOM parser because it tries to read a complete document and would choke on the missing tag. Instead, a SAX or Pull parser would work because it reads the document like a stream of data rather than a complete tree. As Denis replied above, you could either close the missing tag at the end or ignore any incomplete tags before writing it out.\nXML parsing on Wikipedia\n", "You can use any SAX parser by feeding data available so far to it. Use SAX handler that just reconstructs source XML, keep the stack of tags opened and close them in reverse order at the end.\n", "You could use BeautifulStoneSoup (XML part of BeautifulSoup).\nwww.crummy.com/software/BeautifulSoup\nIt's not ideal, but it would circumvent the problem if you cannot fix the file's output...\nIt's basically a previously implemented version of what Denis said.\nYou can just join whatever you need into the soup and it will do its best to fix it.\n" ]
[ 6, 1, 0, 0 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0001644994_python_xml.txt
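A minimal sketch of the stack-of-open-tags idea from the SAX answer above, using expat from the standard library, which tolerates truncated input when told more data may follow. It assumes the log is cut between tags, not in the middle of one:

import xml.parsers.expat

def close_open_tags(partial_xml):
    # Return partial_xml with closing tags appended for any
    # elements that were still open when the input ended.
    stack = []  # names of the currently open elements

    parser = xml.parsers.expat.ParserCreate()
    parser.StartElementHandler = lambda name, attrs: stack.append(name)
    parser.EndElementHandler = lambda name: stack.pop()

    # isfinal=0 tells expat that more data could follow, so a
    # truncated document is not an error by itself.
    parser.Parse(partial_xml, 0)

    # Close whatever is still open, innermost first.
    return partial_xml + ''.join('</%s>' % name for name in reversed(stack))

print(close_open_tags('<log><entry>first</entry><entry>second'))
# -> <log><entry>first</entry><entry>second</entry></log>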
Q: Widget Transparency in PyGTK? What is the best way to have transparency of specific widgets in a PyGTK application? I do not want to use themes because the transparency of each of the widgets will be changing through animation. The only thing I can find is to use cairo to draw widgets with an Alpha, but I can't figure out how to do this. Is there perhaps a better way to do this as well? Thanks! A: Assuming that your program runs under composition manager, you could get per-widget transparency by manipulating widget's X window. Look at gtk.gdk.Window.set_opacity(). Note, it is not gtk.Window; you can get this object by getting its window property (buttonWidget.window), but only when widget is realized and only when widget does handle events -- gtk.Label does not have its own X window for instance. If you need to work also when you don't have composition manager, drawing your widgets by yourself is the only option -- but you don't necessarily have to use cairo; drawing pixel by pixel on the bare X window will also work.
Widget Transparency in PyGTK?
What is the best way to have transparency of specific widgets in a PyGTK application? I do not want to use themes because the transparency of each of the widgets will be changing through animation. The only thing I can find is to use cairo to draw widgets with an Alpha, but I can't figure out how to do this. Is there perhaps a better way to do this as well? Thanks!
[ "Assuming that your program runs under composition manager, you could get per-widget transparency by manipulating widget's X window. Look at gtk.gdk.Window.set_opacity().\nNote, it is not gtk.Window; you can get this object by getting its window property (buttonWidget.window), but only when widget is realized and only when widget does handle events -- gtk.Label does not have its own X window for instance.\nIf you need to work also when you don't have composition manager, drawing your widgets by yourself is the only option -- but you don't necessarily have to use cairo; drawing pixel by pixel on the bare X window will also work.\n" ]
[ 3 ]
[]
[]
[ "cairo", "gtk", "linux", "pygtk", "python" ]
stackoverflow_0001652779_cairo_gtk_linux_pygtk_python.txt
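A minimal sketch of the set_opacity() approach described in the answer above. It assumes GTK+ 2.12 or later and a running composition manager; the EventBox is there because a plain gtk.Label has no X window of its own:

import gtk

win = gtk.Window()
box = gtk.EventBox()           # an EventBox has its own X window,
box.add(gtk.Label("fading"))   # unlike a bare gtk.Label
win.add(box)
win.show_all()                 # the widget must be realized first

box.window.set_opacity(0.5)    # alpha runs from 0.0 (clear) to 1.0 (opaque)

win.connect("destroy", gtk.main_quit)
gtk.main()

For animation, you would update the opacity value repeatedly from a gobject.timeout_add callback.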
Q: Datastore datetimeproperty iterable? I have model class info(db.Model): user = db.UserProperty() last_update_date = db.DateTimeProperty() I need to retrieve last_update_date for specific user. It is working good, i can retrieve this value, i can even pass it to another variable if results: for result in results: data = result.last_update_date Problem lies when i try to assign it to feed_uri = contacts.GetFeedUri() feed_query = gdata.contacts.service.ContactsQuery(feed_uri) feed_query.updated_min = data This is done outside any loops so i do not see why it says that datetime is not iterable. Error message i receive is Traceback (most recent call last): File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp__init__.py", line 507, in call handler.get(*groups) File "C:\Users\mklich\workspace\google_contacts_webapp\src\contacts-list.py", line 266, in get listc = checkUserPrivateContacts(user) File "C:\Users\mklich\workspace\google_contacts_webapp\src\contacts-list.py", line 189, in checkUserPrivateContacts feed = contacts.GetContactsFeed(feed_query.ToUri()) File "C:\Users\mklich\workspace\google_contacts_webapp\src\gdata\service.py", line 1718, in ToUri return atom.service.BuildUri(q_feed, self) File "C:\Users\mklich\workspace\google_contacts_webapp\src\atom\service.py", line 584, in BuildUri parameter_list = DictionaryToParamList(url_params, escape_params) File "C:\Users\mklich\workspace\google_contacts_webapp\src\atom\service.py", line 551, in DictionaryToParamList for param, value in (url_parameters or {}).items()] File "C:\Python25\lib\urllib.py", line 1210, in quote_plus if ' ' in s: TypeError: argument of type 'datetime.datetime' is not iterable Am i doing something wrong or is it a bug? Thank you for responses. A: An example from the contacts API documentation: updated_min = raw_input('Enter updated min (example: 2007-03-16T00:00:00): ') query = gdata.contacts.service.ContactsQuery() query.updated_min = updated_min I think the updated_min property takes a string, not a datetime object.
Datastore datetimeproperty iterable?
I have a model:
class info(db.Model):
 user = db.UserProperty()
 last_update_date = db.DateTimeProperty()

I need to retrieve last_update_date for a specific user. It works fine: I can retrieve this value, and I can even pass it to another variable:
if results:
 for result in results:
 data = result.last_update_date

The problem arises when I try to assign it:
feed_uri = contacts.GetFeedUri()
feed_query = gdata.contacts.service.ContactsQuery(feed_uri)
feed_query.updated_min = data

This is done outside any loops, so I do not see why it says that datetime is not iterable. The error message I receive is:
Traceback (most recent call last):
 File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\webapp\__init__.py", line 507, in __call__
 handler.get(*groups)
 File "C:\Users\mklich\workspace\google_contacts_webapp\src\contacts-list.py", line 266, in get
 listc = checkUserPrivateContacts(user)
 File "C:\Users\mklich\workspace\google_contacts_webapp\src\contacts-list.py", line 189, in checkUserPrivateContacts
 feed = contacts.GetContactsFeed(feed_query.ToUri())
 File "C:\Users\mklich\workspace\google_contacts_webapp\src\gdata\service.py", line 1718, in ToUri
 return atom.service.BuildUri(q_feed, self)
 File "C:\Users\mklich\workspace\google_contacts_webapp\src\atom\service.py", line 584, in BuildUri
 parameter_list = DictionaryToParamList(url_params, escape_params)
 File "C:\Users\mklich\workspace\google_contacts_webapp\src\atom\service.py", line 551, in DictionaryToParamList
 for param, value in (url_parameters or {}).items()]
 File "C:\Python25\lib\urllib.py", line 1210, in quote_plus
 if ' ' in s:
TypeError: argument of type 'datetime.datetime' is not iterable

Am I doing something wrong, or is it a bug? Thank you for your responses.
[ "An example from the contacts API documentation:\nupdated_min = raw_input('Enter updated min (example: 2007-03-16T00:00:00): ')\nquery = gdata.contacts.service.ContactsQuery()\nquery.updated_min = updated_min\n\nI think the updated_min property takes a string, not a datetime object.\n" ]
[ 1 ]
[]
[]
[ "datetime", "google_app_engine", "google_cloud_datastore", "python" ]
stackoverflow_0001642441_datetime_google_app_engine_google_cloud_datastore_python.txt
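Building on the answer above, the fix is to format the datetime into the string form the API example shows before assigning it. A small sketch; the value here is hard-coded where the question reads it from the datastore:

import datetime

data = datetime.datetime(2007, 3, 16, 0, 0, 0)   # stands in for last_update_date

# The Contacts API example passes '2007-03-16T00:00:00', i.e. a string,
# so format the datetime rather than assigning the object directly.
updated_min = data.strftime('%Y-%m-%dT%H:%M:%S')
print(updated_min)                                # -> 2007-03-16T00:00:00
# feed_query.updated_min = updated_min            # then assign the string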
Q: Lightweight crash recovery for Python What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
You may assume that the keys and values in the dictionary are easily convertible to strings, i.e. using either str or the pickle module.
I want this to be completely cross platform - well, at least as cross platform as Python is
I don't want to simply write out each value to a file and load it back in; my program might crash while I am writing the file
UPDATE: This is intended to be a lightweight module so a DBMS is out of the question.
UPDATE: Alex is correct in that I don't actually need to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
UPDATE: Added a highly limited solution using standard input below

A: There's no good way to guard against "your program crashing while writing a checkpoint to a file", but why should you worry so much about that?! What ELSE is your program doing at that time BESIDES "saving checkpoint to a file", that could easily cause it to crash?!
It's hard to beat pickle (or cPickle) for portability of serialization in Python, but, that's just about "turning your keys and values to strings". For saving key-value pairs (once stringified), few approaches are safer than just appending to a file (don't pickle to files if your crashes are far, far more frequent than normal, as you suggest they are).
If your environment is incredibly crash-prone for whatever reason (very cheap HW?-), just make sure you close the file (and fflush if the OS is also crash-prone;-), then reopen it for append. This way, the worst that can happen is that the very latest append will be incomplete (due to a crash in the middle of things) -- then you just catch the exception raised by unpickling that incomplete record and redo only the things that weren't saved (because they weren't completed due to a crash, OR because they were completed but not fully saved due to a crash, which comes to much the same thing in the end).
If you have the option of checkpointing to a database engine (instead of just doing so to files), consider it seriously! The DB engine will keep transaction logs and ensure ACID properties, making your application-side programming much easier IF you can count on that!-)

A: The pickle module supports serializing objects to a file (and loading from file):
http://docs.python.org/library/pickle.html

A: One possibility would be to create a number of smaller files ... each representing a subset of the state that you're trying to preserve and each with a checksum or tag indicating that it's complete as the last line/datum of the file (just before the file is closed).
If the checksum/tag is good then the rest of the data can be considered valid ... though the program would then have to find all of these files, open and read all of them, and use the meta data you've provided (in their headers or their names?) to determine which ones constitute the most recent cohesive state representation (or checkpoint) from which you can continue processing.
Without knowing more about the nature of the data that you're working with it's impossible to be more specific.
You can use files, of course, or you could use a DBMS system just about as easily. Any decent DBMS (PostgreSQL, MySQL if you're using the proper storage back-ends) can give you ACID guarantees and transactional support. So the data you read back should always be consistent with the constraints that you put in your schema and/or with the transactions (BEGIN, COMMIT, ROLLBACK) that you processed.
A possible advantage of posting your serialized data to a DBMS is that you can host the DBMS on a separate system (which is unlikely to suffer the same instabilities as your test host at the same times).

A: Pickle/cPickle have problems.
I use the JSON module to serialize objects out. I like it because not only does it work on any OS, but it will work fine in other programming languages, too; many other languages and platforms have readily-accessible JSON deserialization support, which makes it easy to use the same objects in different programs.

A: Solution with severe restrictions
If I don't worry about it crashing while writing out and I only want to allow manual termination, I can use standard input to control this.
Unfortunately, this can only terminate the program when a control point is reached. This could be solved by creating a new thread to read standard input. This thread could use a global lock to check if the main thread is inside a critical section (writing to a file) and terminate the program if this is not the case.
Downsides:

This is reasonably complex
It adds an extra thread
It stops me using standard input for anything else
Lightweight crash recovery for Python
What would be the best way to handle lightweight crash recovery for my program?
I have a Python program that runs a number of test cases and the results are stored in a dictionary which serves as a cache. If I could save (and then restore) each item that is added to the dictionary, I could simply run the program again and the caching would provide suitable crash recovery.
You may assume that the keys and values in the dictionary are easily convertible to strings, i.e. using either str or the pickle module.
I want this to be completely cross platform - well, at least as cross platform as Python is
I don't want to simply write out each value to a file and load it back in; my program might crash while I am writing the file
UPDATE: This is intended to be a lightweight module so a DBMS is out of the question.
UPDATE: Alex is correct in that I don't actually need to protect against crashes while writing out, but there are circumstances where I would like to be able to manually terminate it in a recoverable state.
UPDATE: Added a highly limited solution using standard input below
[ "There's no good way to guard against \"your program crashing while writing a checkpoint to a file\", but why should you worry so much about that?! What ELSE is your program doing at that time BESIDES \"saving checkpoint to a file\", that could easily cause it to crash?!\nIt's hard to beat pickle (or cPickle) for portability of serialization in Python, but, that's just about \"turning your keys and values to strings\". For saving key-value pairs (once stringified), few approaches are safer than just appending to a file (don't pickle to files if your crashes are far, far more frequent than normal, as you suggest tjey are).\nIf your environment is incredibly crash-prone for whatever reason (very cheap HW?-), just make sure you close the file (and fflush if the OS is also crash-prone;-), then reopen it for append. This way, worst that can happen is that the very latest append will be incomplete (due to a crash in the middle of things) -- then you just catch the exception raised by unpickling that incomplete record and redo only the things that weren't saved (because they weren't completed due to a crash, OR because they were completed but not fully saved due to a crash, comes to much the same thing in the end).\nIf you have the option of checkpointing to a database engine (instead of just doing so to files), consider it seriously! The DB engine will keep transaction logs and ensure ACID properties, making your application-side programming much easier IF you can count on that!-)\n", "The pickle module supports serializing objects to a file (and loading from file):\nhttp://docs.python.org/library/pickle.html\n", "One possibility would be to create a number of smaller files ... each representing a subset of the state that you're trying to preserve and each with a checksum or tag indicating that it's complete as the last line/datum of the file (just before the file is closed).\nIf the checksum/tag is good then the rest of the data can be considered valid ... though program would then have to find all of these files, open and read all of them, and use meta data you've provided (in their headers or their names?) to determine which ones constitute the most recent cohesive state representation (or checkpoint) from which you can continue processing.\nWithout knowing more about the nature of the data that you're working with it's impossible to be more specific.\nYou can use files, of course, or you could use a DBMS system just about as easily. Any decent DBMS (PostgreSQL, MySQL if you're using the proper storage back-ends) can give you ACID guarantees and transactional support. So the data you read back should always be consistent with the constraints that you put in your schema and/or with the transactions (BEGIN, COMMIT, ROLLBACK) that you processed.\nA possible advantage of posting your serialized date to a DBMS is that you can host the DBMS on a separate system (which is unlikely to suffer the same instabilities as your test host at the same times).\n", "Pickle/cPickle have problems. \nI use the JSON module to serialize objects out. I like it because not only does it work on any OS, but it will work fine in other programming languages, too; many other languages and platforms have readily-accessible JSON deserialization support, which makes it easy to use the same objects in different programs.\n", "Solution with severe restrictions\nIf I don't worry about it crashing while writing out and I only want to allow manual termination, I can use standard output to control this. 
Unfortunately, this can only terminate the program when a control point is reached. This could be solved by creating a new thread to read standard input. This thread could use a global lock to check if the main thread is inside a critical section (writing to a file) and terminate the program if this is not the case.\nDownsides:\n\nThis is reasonably complex\nIt adds an extra thread\nIt stops me using standard input for anything else\n\n" ]
[ 2, 1, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001653460_python.txt
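A short sketch of the append-and-reopen checkpointing scheme from the first answer above; the file name is made up, and real code might also want the fflush/close discipline mentioned there:

import cPickle as pickle

CHECKPOINT = 'results.pck'   # hypothetical checkpoint file

def save_item(key, value):
    # Append one key/value pair, then close so it hits the disk.
    f = open(CHECKPOINT, 'ab')
    pickle.dump((key, value), f)
    f.close()

def load_items():
    # Rebuild the cache; a record truncated by a crash just ends the loop.
    cache = {}
    try:
        f = open(CHECKPOINT, 'rb')
    except IOError:
        return cache          # no checkpoint yet
    try:
        while True:
            key, value = pickle.load(f)
            cache[key] = value
    except (EOFError, pickle.UnpicklingError):
        pass                  # incomplete final record; redo that item later
    f.close()
    return cache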
Q: Trapping signals in Python According to the documentation: There is no way to β€œblock” signals temporarily from critical sections (since this is not supported by all Unix flavors). What stops me using signal.signal(signum,SIG_IGN) to block it, then adding the signal back? A: What stops you is that, if the signal actually arrives while SIG_IGN is in place, then it will be ignored and thrown away. When you add the signal back later, it's too late because it's gone and you'll never get to learn that it happened. Thus, you will have "ignored" (= thrown away) the signal rather than "blocked" it (= kept it for handling at the end of the critical section). Your confusion here might just arise from not knowing specifically what "blocking" a signal means: it means the OS hangs onto the signal, letting it wait to strike until your critical section is complete. See (as a great reference for all sorts of questions like this) W. Richard Steven's Advanced Programming in the UNIX Environment. Section 10.8 in the edition I have, "Reliable Signal Terminology and Semantics", is the one I just checked before answering to be sure of my answer. Update: on my Ubuntu laptop, "man sigprocmask" (if manpages-dev is installed) seems to be the man page to start with for learning about signal blocking. Again, as the Python docs note, this isn't available under all Unixes, so don't expect your old Irix or AIX box to run your Python program if you actually use "sigprocmask". But maybe you're not worried about that. :-)
Trapping signals in Python
According to the documentation: There is no way to β€œblock” signals temporarily from critical sections (since this is not supported by all Unix flavors). What stops me using signal.signal(signum,SIG_IGN) to block it, then adding the signal back?
[ "What stops you is that, if the signal actually arrives while SIG_IGN is in place, then it will be ignored and thrown away. When you add the signal back later, it's too late because it's gone and you'll never get to learn that it happened.\nThus, you will have \"ignored\" (= thrown away) the signal rather than \"blocked\" it (= kept it for handling at the end of the critical section). Your confusion here might just arise from not knowing specifically what \"blocking\" a signal means: it means the OS hangs onto the signal, letting it wait to strike until your critical section is complete.\nSee (as a great reference for all sorts of questions like this) W. Richard Steven's Advanced Programming in the UNIX Environment. Section 10.8 in the edition I have, \"Reliable Signal Terminology and Semantics\", is the one I just checked before answering to be sure of my answer.\nUpdate: on my Ubuntu laptop, \"man sigprocmask\" (if manpages-dev is installed) seems to be the man page to start with for learning about signal blocking. Again, as the Python docs note, this isn't available under all Unixes, so don't expect your old Irix or AIX box to run your Python program if you actually use \"sigprocmask\". But maybe you're not worried about that. :-)\n" ]
[ 11 ]
[]
[]
[ "python", "signals" ]
stackoverflow_0001654215_python_signals.txt
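Since SIG_IGN throws the signal away, a common workaround in pure Python is to install a temporary handler that records the signal during the critical section and acts on it afterwards. A sketch, with the critical section as a placeholder:

import os
import signal

pending = []

def remember(signum, frame):
    pending.append(signum)        # note it, don't lose it

old = signal.signal(signal.SIGTERM, remember)
try:
    pass                          # ... critical section ...
finally:
    signal.signal(signal.SIGTERM, old)
    for signum in pending:        # now deliver what arrived meanwhile
        os.kill(os.getpid(), signum)

This defers the signal rather than truly blocking it, but unlike SIG_IGN nothing is discarded.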
Q: How can I parse text in Python? Sample Text: SUBJECT = 'NETHERLANDS MUSIC EPA' CONTENT = 'Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK ' Expected result: " NETHERLANDS MUSIC EPA | 36 before Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK " How can I accomplish this in Python? A: Looks like you want something like...: import re x = re.compile(r'^([^\|]*?)\s*\|[^\n]*\n\s*(.*?)\s*$') s = """NETHERLANDS MUSIC EPA | 36 before Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK""" mo = x.match(s) subject, content = mo.groups() print 'SUBJECT =', repr(subject) print 'CONTENT =', repr(content) which does emit, as you require, SUBJECT = 'NETHERLANDS MUSIC EPA' CONTENT = "Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK" Or maybe you want to do the reverse (as a comment suggested)? then they key RE could be y = re.compile(r'^.*SUBJECT\s*=\s*\'([^\']*)\'.*CONTENT\s*=\s*"([^"]*)"', re.DOTANY) and you can use this similarly to get a match-object, extract subject and content as its groups, and format them for display as you wish. In either case it's possible that you may need tweaks -- since you haven't given precise specs, just one single example!, it's hard to generalize reliably. A: Here's a simple solution. I am using Python 3 but I think this code would be identical in 2: >>> import re >>> pair = re.compile("SUBJECT = '([^\n]*)'\nCONTENT = '([^\n]*)'\n", re.MULTILINE) >>> s = """SUBJECT = 'NETHERLANDS MUSIC EPA' ... CONTENT = 'Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK ' ... """ >>> m = pair.match(s) >>> m.group(1) + "\n" + m.group(2) "NETHERLANDS MUSIC EPA\nMichael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK "
How can I parse text in Python?
Sample Text: SUBJECT = 'NETHERLANDS MUSIC EPA' CONTENT = 'Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK ' Expected result: " NETHERLANDS MUSIC EPA | 36 before Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK " How can I accomplish this in Python?
[ "Looks like you want something like...:\nimport re\n\nx = re.compile(r'^([^\\|]*?)\\s*\\|[^\\n]*\\n\\s*(.*?)\\s*$')\n\ns = \"\"\"NETHERLANDS MUSIC EPA | 36 before\nMichael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK\"\"\"\n\nmo = x.match(s)\n\nsubject, content = mo.groups()\n\nprint 'SUBJECT =', repr(subject)\nprint 'CONTENT =', repr(content)\n\nwhich does emit, as you require,\nSUBJECT = 'NETHERLANDS MUSIC EPA'\nCONTENT = \"Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK\"\n\nOr maybe you want to do the reverse (as a comment suggested)? then they key RE could be\ny = re.compile(r'^.*SUBJECT\\s*=\\s*\\'([^\\']*)\\'.*CONTENT\\s*=\\s*\"([^\"]*)\"',\n re.DOTANY)\n\nand you can use this similarly to get a match-object, extract subject and content as its groups, and format them for display as you wish.\nIn either case it's possible that you may need tweaks -- since you haven't given precise specs, just one single example!, it's hard to generalize reliably.\n", "Here's a simple solution. I am using Python 3 but I think this code would be identical in 2:\n>>> import re\n>>> pair = re.compile(\"SUBJECT = '([^\\n]*)'\\nCONTENT = '([^\\n]*)'\\n\", re.MULTILINE)\n>>> s = \"\"\"SUBJECT = 'NETHERLANDS MUSIC EPA'\n... CONTENT = 'Michael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK '\n... \"\"\"\n>>> m = pair.match(s)\n>>> m.group(1) + \"\\n\" + m.group(2)\n\"NETHERLANDS MUSIC EPA\\nMichael Buble performs in Amsterdam Canadian singer Michael Buble performs during a concert in Amsterdam, The Netherlands, 30 October 2009. Buble released his new album entitled 'Crazy Love'. EPA/OLAF KRAAK \"\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0001653509_python_regex.txt
Q: Run command pipes with subprocess.Popen How can I run the following command using subprocess.Popen? mysqldump database_name table_name | bzip2 > filename I know os.system() can do the job but I dont want to wait for the dump to finish in main program. A: You want the shell=True option to make it execute shell commands: import subprocess subprocess.Popen("sleep 4s && echo right thar, right thar",shell=True); print 'i like it when you put it' which yields: I like it when you put it [4 seconds later] right thar, right thar
Run command pipes with subprocess.Popen
How can I run the following command using subprocess.Popen?
mysqldump database_name table_name | bzip2 > filename

I know os.system() can do the job, but I don't want to wait for the dump to finish in the main program.
[ "You want the shell=True option to make it execute shell commands:\nimport subprocess\nsubprocess.Popen(\"sleep 4s && echo right thar, right thar\",shell=True);\nprint 'i like it when you put it'\n\nwhich yields:\n I like it when you put it\n [4 seconds later]\n right thar, right thar\n\n" ]
[ 6 ]
[]
[]
[ "command", "pipe", "python" ]
stackoverflow_0001654600_command_pipe_python.txt
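Putting the shell=True point above together with the original command: Popen, unlike subprocess.call, returns as soon as the pipeline is started, so the main program does not wait for the dump. A sketch with the command from the question:

import subprocess

# The shell sets up both the pipe and the output redirection.
proc = subprocess.Popen(
    'mysqldump database_name table_name | bzip2 > filename',
    shell=True)

# ... the main program carries on; later, if needed:
# proc.poll() is None while the pipeline is still running,
# proc.wait() blocks until it finishes.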
Q: Find a path in Windows relative to another This problem should be a no-brainer, but I haven't yet been able to nail it. I need a function that takes two parameters, each a file path, relative or absolute, and returns a filepath which is the first path (target) resolved relative to the second path (start). The resolved path may be relative to the current directory or may be absolute (I don't care). Here as an attempted implementation, complete with several doc tests, that exercises some sample uses cases (and demonstrates where it fails). A runnable script is also available on my source code repository, but it may change. The runnable script will run the doctest if no parameters are supplied or will pass one or two parameters to findpath if supplied. def findpath(target, start=os.path.curdir): r""" Find a path from start to target where target is relative to start. >>> orig_wd = os.getcwd() >>> os.chdir('c:\\windows') # so we know what the working directory is >>> findpath('d:\\') 'd:\\' >>> findpath('d:\\', 'c:\\windows') 'd:\\' >>> findpath('\\bar', 'd:\\') 'd:\\bar' >>> findpath('\\bar', 'd:\\foo') # fails with '\\bar' 'd:\\bar' >>> findpath('bar', 'd:\\foo') 'd:\\foo\\bar' >>> findpath('bar\\baz', 'd:\\foo') 'd:\\foo\\bar\\baz' >>> findpath('\\baz', 'd:\\foo\\bar') # fails with '\\baz' 'd:\\baz' Since we're on the C drive, findpath may be allowed to return relative paths for targets on the same drive. I use abspath to confirm that the ultimate target is what we expect. >>> os.path.abspath(findpath('\\bar')) 'c:\\bar' >>> os.path.abspath(findpath('bar')) 'c:\\windows\\bar' >>> findpath('..', 'd:\\foo\\bar') 'd:\\foo' >>> findpath('..\\bar', 'd:\\foo') 'd:\\bar' The parent of the root directory is the root directory. >>> findpath('..', 'd:\\') 'd:\\' restore the original working directory >>> os.chdir(orig_wd) """ return os.path.normpath(os.path.join(start, target)) As you can see from the comments in the doctest, this implementation fails when the start specifies a drive letter and the target is relative to the root of the drive. This brings up a few questions Is this behavior a limitation of os.path.join? In other words, should os.path.join('d:\foo', '\bar') resolve to 'd:\bar'? As a Windows user, I tend to think so, but I hate to think that a mature function like path.join would need alteration to handle this use case. Is there an example of an existing target path resolver such as findpath that will work in all of these test cases? If 'no' to the above questions, how would you implement this desired behavior? A: I agree with you: this seems like a deficiency in os.path.join. Looks like you have to deal with the drives separately. This code passes all your tests: def findpath(target, start=os.path.curdir): sdrive, start = os.path.splitdrive(start) tdrive, target = os.path.splitdrive(target) rdrive = tdrive or sdrive return os.path.normpath(os.path.join(rdrive, os.path.join(start, target))) (and yes, I had to nest two os.path.join's to get it to work...)
Find a path in Windows relative to another
This problem should be a no-brainer, but I haven't yet been able to nail it. I need a function that takes two parameters, each a file path, relative or absolute, and returns a filepath which is the first path (target) resolved relative to the second path (start). The resolved path may be relative to the current directory or may be absolute (I don't care).
Here is an attempted implementation, complete with several doctests, that exercises some sample use cases (and demonstrates where it fails). A runnable script is also available on my source code repository, but it may change. The runnable script will run the doctest if no parameters are supplied or will pass one or two parameters to findpath if supplied.
def findpath(target, start=os.path.curdir):
 r"""
 Find a path from start to target where target is relative to start.

 >>> orig_wd = os.getcwd()
 >>> os.chdir('c:\\windows') # so we know what the working directory is

 >>> findpath('d:\\')
 'd:\\'

 >>> findpath('d:\\', 'c:\\windows')
 'd:\\'

 >>> findpath('\\bar', 'd:\\')
 'd:\\bar'

 >>> findpath('\\bar', 'd:\\foo') # fails with '\\bar'
 'd:\\bar'

 >>> findpath('bar', 'd:\\foo')
 'd:\\foo\\bar'

 >>> findpath('bar\\baz', 'd:\\foo')
 'd:\\foo\\bar\\baz'

 >>> findpath('\\baz', 'd:\\foo\\bar') # fails with '\\baz'
 'd:\\baz'

 Since we're on the C drive, findpath may be allowed to return
 relative paths for targets on the same drive. I use abspath to
 confirm that the ultimate target is what we expect.
 >>> os.path.abspath(findpath('\\bar'))
 'c:\\bar'

 >>> os.path.abspath(findpath('bar'))
 'c:\\windows\\bar'

 >>> findpath('..', 'd:\\foo\\bar')
 'd:\\foo'

 >>> findpath('..\\bar', 'd:\\foo')
 'd:\\bar'

 The parent of the root directory is the root directory.
 >>> findpath('..', 'd:\\')
 'd:\\'

 restore the original working directory
 >>> os.chdir(orig_wd)
 """
 return os.path.normpath(os.path.join(start, target))

As you can see from the comments in the doctest, this implementation fails when the start specifies a drive letter and the target is relative to the root of the drive.
This brings up a few questions:

Is this behavior a limitation of os.path.join? In other words, should os.path.join('d:\foo', '\bar') resolve to 'd:\bar'? As a Windows user, I tend to think so, but I hate to think that a mature function like path.join would need alteration to handle this use case.
Is there an example of an existing target path resolver such as findpath that will work in all of these test cases?
If 'no' to the above questions, how would you implement this desired behavior?
[ "I agree with you: this seems like a deficiency in os.path.join. Looks like you have to deal with the drives separately. This code passes all your tests:\ndef findpath(target, start=os.path.curdir):\n sdrive, start = os.path.splitdrive(start)\n tdrive, target = os.path.splitdrive(target)\n rdrive = tdrive or sdrive\n return os.path.normpath(os.path.join(rdrive, os.path.join(start, target)))\n\n(and yes, I had to nest two os.path.join's to get it to work...)\n" ]
[ 3 ]
[]
[]
[ "filesystems", "python", "relative_path", "windows" ]
stackoverflow_0001654659_filesystems_python_relative_path_windows.txt
Q: erlang on google app engine? I know Python can be run on GAE. What is the difference between Erlang and Python in layman's terms? Can Erlang run on Google App Engine?
A: Erlang and Python are programming languages, and each language has one or more "runtimes" that allow you to run programs written in those languages. GAE supplies a Python runtime.
GAE has no support for Erlang programs.
erlang on google app engine?
I know Python can be run on GAE. What is the difference between Erlang and Python in layman's terms? Can Erlang run on Google App Engine?
[ "Erlang and Python are programming languages, and each language has one or more \"runtimes\" that allow you to run programs written in those languages. GAE supplies a Python runtime.\nGAE has no support for Erlang programs.\n" ]
[ 11 ]
[]
[]
[ "erlang", "google_app_engine", "python" ]
stackoverflow_0001654759_erlang_google_app_engine_python.txt
Q: If pickling was interrupted, will unpickling necessarily always fail? - Python Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?
A: Contra the other answers offered, I believe that we can make a strong argument about the recoverability of a pickle. That answer is: "Yes, an incomplete pickle always leads to an exception."
Why are we able to do this? Because the "pickle" format is in fact a small stack-based language. In a stack-based language you write code that pushes item after item on a stack, then invoke an operator that does something with the data you've accumulated. And it just so happens that a pickle has to end with the command ".", which says: "take the item now at the bottom of the stack and return it as the value of this pickle." If your pickle is chopped off early, it will not end with this command, and you will get an EOF error.
If you want to try recovering some of the data, you might have to write your own interpreter, or call into pickle.py somewhere that gets around its wanting to raise EOFError when done interpreting the stack without finding a ".". The main thing to keep in mind is that, as in most stack-based languages, big data structures are built "backwards": first you put lots of little strings or numbers on the stack, then you invoke an operation that says "put those together into a list" or "grab pairs of items on the stack and make a dictionary". So, if a pickle is interrupted, you'll find the stack full of pieces of the object that was going to be built, but you'll be missing that final code that tells you what was going to be built from the pieces.
A: Pickling an object returns an str object, or writes an str object to a file ... it doesn't modify the original object. If a "crash" (exception) happens inside a pickling call, the result won't be returned to the caller, so you don't have anything that you could try to unpickle. Besides, why would you want to unpickle some dud rubbish left over after an exception?
A: This is a development of S. Lott's answer, with my suggestion: Append a hash or checksum to your data, that you check before unpickling again.
Here is a (simple) implementation of safepickle/safeunpickle to show how you can pad the pickled data with a hash (cryptographically strong hash in this case):
import hashlib
import cPickle as pickle

_HASHLEN = 20

def safepickle(obj):
    s = pickle.dumps(obj)
    s += hashlib.sha1(s).digest()
    return s

def safeunpickle(pstr):
    data, checksum = pstr[:-_HASHLEN], pstr[-_HASHLEN:]
    if hashlib.sha1(data).digest() != checksum:
        raise ValueError("Pickle hash does not match!")
    return pickle.loads(data)


l = range(20)
p = safepickle(l)
new_l = safeunpickle(p)
print new_l == l

This method is to ensure that what you unpickle matches what you pickled and wrote to disk previously, but it does not protect against mixing up different pickles or malicious attacks, of course.
(This method can be generalized to the pattern safe_write_file and safe_read_file for any whole-file data.)
A: I doubt you could make a claim that it will always lead to an exception. Pickles are actually programs written in a specialized stack language. The internal details of pickles change from version to version, and new pickle protocols are added occasionally.
The state of the pickle after a crash, and the resulting effects on the unpickler, would be very difficult to summarize in a simple statement like "it will always lead to an exception".
A: To be sure that you have a "complete" pickle file, you need to pickle three things.

Pickle a header of some kind that claims how many objects and what the end-of-file flag will look like. A tuple of an integer and the EOF string, for example.
Pickle the objects you actually care about. The count is given by the header.
Pickle a tail object that you don't actually care about, but which simply matches the claim made in the header. This can be simply a string that matches what was in the header.

When you unpickle this file, you have to unpickle three things:

The header. You care about the count and the form of the tail.
The objects you actually care about.
The tail object. Check that it matches the header. Other than that, it doesn't convey much except that the file was written in its entirety.
If pickling was interrupted, will unpickling necessarily always fail? - Python
Suppose my attempt to write a pickle object out to disk is incomplete due to a crash. Will an attempt to unpickle the object always lead to an exception or is it possible that the fragment that was written out may be interpreted as valid pickle and the error go unnoticed?
[ "Contra the other answers offered, I believe that we can make a strong argument about the recoverability of a pickle. That answer is: \"Yes, an incomplete pickle always leads to an exception.\"\nWhy are we able to do this? Because the \"pickle\" format is in fact a small stack-based language. In a stack-based language you write code that pushes item after item on a stack, then invoke an operator that does something with the data you've accumulated. And it just so happens that a pickle has to end with the command \".\", which says: \"take the item now at the bottom of the stack and return it as the value of this pickle.\" If your pickle is chopped off early, it will not end with this command, and you will get an EOF error.\nIf you want to try recovering some of the data, you might have to write your own interpreter, or call into pickle.py somewhere that gets around its wanting to raise EOFError when done interpreting the stack without finding a \".\". The main thing to keep in mind is that, as in most stack-based languages, big data structures are built \"backwards\": first you put lots of little strings or numbers on the stack, then you invoke an operation that says \"put those together into a list\" or \"grab pairs of items on the stack and make a dictionary\". So, if a pickle is interrupted, you'll find the stack full of pieces of the object that was going to be built, but you'll be missing that final code that tells you what was going to be built from the pieces.\n", "Pickling an object returns an str object, or writes an str object to a file ... it doesn't modify the original object. If a \"crash\" (exception) happens inside a pickling call, the result won't be returned to the caller, so you don't have anything that you could try to unpickle. Besides, why would you want to unpickle some dud rubbish left over after an exception?\n", "This is a development of S. Lott's answer, with my suggestion: Append a hash or checksum to your data, that you check before unpickling again.\nHere is a (simple) implementation of safepickle/safeunpickle to show how you can pad the pickled data with a hash (cryptographically strong hash in this case):\nimport hashlib\nimport cPickle as pickle\n\n_HASHLEN = 20\n\ndef safepickle(obj):\n s = pickle.dumps(obj)\n s += hashlib.sha1(s).digest()\n return s\n\ndef safeunpickle(pstr):\n data, checksum = pstr[:-_HASHLEN], pstr[-_HASHLEN:]\n if hashlib.sha1(data).digest() != checksum:\n raise ValueError(\"Pickle hash does not match!\")\n return pickle.loads(data)\n\n\nl = range(20)\np = safepickle(l)\nnew_l = safeunpickle(p)\nprint new_l == l\n\nThis method is to ensure that what you unpickle matches what you pickled and wrote to disk previously, but it does not protect against mixing up different pickles or malicious attacks, of course.\n(This method can be generalized to the pattern safe_write_file and safe_read_file for any whole-file data.)\n", "I doubt you could make a claim that it will always lead to an exception. Pickles are actually programs written in a specialized stack language. The internal details of pickles change from version to version, and new pickle protocols are added occasionally. 
The state of the pickle after a crash, and the resulting effects on the unpickler, would be very difficult to summarize in a simple statement like \"it will always lead to an exception\".\n", "To be sure that you have a \"complete\" pickle file, you need to pickle three things.\n\nPickle a header of some kind that claims how many objects and what the end-of-file flag will look like. A tuple of an integer and the EOF string, for example.\nPickle the objects you actually care about. The count is given by the header.\nPickle a tail object that you don't actually care about, but which simply matches the claim made in the header. This can be simply a string that matches what was in the header.\n\nWhen you unpickle this file, you have to unpickle three things:\n\nThe header. You care about the count and the form of the tail.\nThe objects you actually care about.\nThe tail object. Check that it matches the header. Other than that, it doesn't convey much except that the file was written in it's entirety.\n\n" ]
[ 8, 2, 2, 1, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001653897_python.txt
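The stack-language argument in the first answer above is easy to check: chop the final '.' opcode off a pickle and loads() fails. Depending on where the data is cut you may get EOFError or UnpicklingError, so a defensive reader catches both. A small sketch:

import pickle

data = pickle.dumps({'a': 1, 'b': 2})
truncated = data[:-1]            # drop the terminating '.' opcode

try:
    pickle.loads(truncated)
except (EOFError, pickle.UnpicklingError):
    print('incomplete pickle detected')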
Q: Besides NLTK, what is the best information retrieval library for Python? For use to analyze documents on the Internet! A: Alternatively, R has many tools available for text mining, and it's easy to integrate with Python using RPy2. Have a look at the Natural Language Processing view on CRAN. In particular, look at the tm package. Here are some relevant links: Paper about the package in the Journal of Statistical Computing: http://www.jstatsoft.org/v25/i05/paper. The paper includes a nice example of an analysis of the R-devel mailing list (https://stat.ethz.ch/pipermail/r-devel/) newsgroup postings from 2006. Package homepage: http://cran.r-project.org/web/packages/tm/index.html Look at the introductory vignette: http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf In addition, R provides many tools for parsing HTML or XML. Have a look at this question for an example using the RCurl and XML packages. A: Could you please provide more information why NLTK is insufficient or what features you need to consider some framework the "best"? Nevertheless, there is the builtin shlex lexical parsing library. There is also a recent book on the subject, Natural Language Processing with Python. It looks like at least part of it covers NLTK. You might also want to look at this list of tutorials and libraries on the awaretek website, which also points to the NLQ.py framework. Natural Language Processing with Python http://ecx.images-amazon.com/images/I/41NBqj7NyGL._BO2.jpg
Besides NLTK, what is the best information retrieval library for Python?
For use in analyzing documents on the Internet.
[ "Alternatively, R has many tools available for text mining, and it's easy to integrate with Python using RPy2.\nHave a look at the Natural Language Processing view on CRAN. In particular, look at the tm package. Here are some relevant links:\n\nPaper about the package in the Journal of Statistical Computing: http://www.jstatsoft.org/v25/i05/paper. The paper includes a nice example of an analysis of the R-devel\nmailing list (https://stat.ethz.ch/pipermail/r-devel/) newsgroup postings from 2006.\nPackage homepage: http://cran.r-project.org/web/packages/tm/index.html\nLook at the introductory vignette: http://cran.r-project.org/web/packages/tm/vignettes/tm.pdf\n\nIn addition, R provides many tools for parsing HTML or XML. Have a look at this question for an example using the RCurl and XML packages.\n", "Could you please provide more information why NLTK is insufficient or what features you need to consider some framework the \"best\"?\nNevertheless, there is the builtin shlex lexical parsing library.\nThere is also a recent book on the subject, Natural Language Processing with Python. It looks like at least part of it covers NLTK.\nYou might also want to look at this list of tutorials and libraries on the awaretek website, which also points to the NLQ.py framework.\nNatural Language Processing with Python http://ecx.images-amazon.com/images/I/41NBqj7NyGL._BO2.jpg\n" ]
[ 5, 3 ]
[]
[]
[ "information_retrieval", "python", "text_mining" ]
stackoverflow_0001635014_information_retrieval_python_text_mining.txt
Q: Python MultiThreading With Urllib2 Issue I can download multiple files quite fast with many threads at once but the problem is that after a few minutes it tends to slow down gradually to almost a full stop, I have no idea why. There's nothing wrong with my code that I can see and my RAM/CPU is fine.. The only thing I can think of is that urllib2 isn't handling the massive amount of connections correctly. If it helps, I am using proxies but I had this issue without them as well. Does anyone have any suggestions or insight to this issue? Thanks! A: Can you confirm that doing the same number of simultaneous downloads without python continues to download fast? Perhaps the issue is not with your code, but with your connection getting throttled or with the site serving the files. If that's not the issue you could try the pyprocessing library to implement a multi process version instead of a multi threaded version. If you're using python 2.6 pyprocessing is included in the distribution as multiprocessing. It's quite easy to convert threaded code to multi process code, so it's worth a try if only to confirm the issue is with the threading. A: Like another answer suggested, the problem might be with your connection or the site that is serving the files. If you can run your code against a test server locally then you will be able to eliminate this. If the problem goes away when using the test server then the problem lies with your connection or the remote server. If the problem persists when using the test server then it's most like something in your code, but then you will at least have the server logs to give you more insight in to what is happening. As for another avenue you can explore, this thread suggests using httplib2 instead of urllib2.
Python MultiThreading With Urllib2 Issue
I can download multiple files quite quickly with many threads at once, but the problem is that after a few minutes it tends to slow down gradually to almost a full stop. I have no idea why. There's nothing wrong with my code that I can see, and my RAM/CPU usage is fine. The only thing I can think of is that urllib2 isn't handling the massive number of connections correctly. If it helps, I am using proxies, but I had this issue without them as well. Does anyone have any suggestions or insight into this issue? Thanks!
[ "Can you confirm that doing the same number of simultaneous downloads without python continues to download fast? Perhaps the issue is not with your code, but with your connection getting throttled or with the site serving the files.\nIf that's not the issue you could try the pyprocessing library to implement a multi process version instead of a multi threaded version. If you're using python 2.6 pyprocessing is included in the distribution as multiprocessing. It's quite easy to convert threaded code to multi process code, so it's worth a try if only to confirm the issue is with the threading.\n", "Like another answer suggested, the problem might be with your connection or the site that is serving the files. If you can run your code against a test server locally then you will be able to eliminate this.\nIf the problem goes away when using the test server then the problem lies with your connection or the remote server.\nIf the problem persists when using the test server then it's most like something in your code, but then you will at least have the server logs to give you more insight in to what is happening.\nAs for another avenue you can explore, this thread suggests using httplib2 instead of urllib2.\n" ]
[ 3, 1 ]
[]
[]
[ "multithreading", "python", "sockets", "urllib" ]
stackoverflow_0001654721_multithreading_python_sockets_urllib.txt
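A sketch of the multiprocessing suggestion above, for Python 2.6 where it ships in the standard library; the URL list is a placeholder:

import urllib2
from multiprocessing import Pool

def fetch(url):
    # Each worker is a separate process with its own urllib2 state.
    return url, len(urllib2.urlopen(url, timeout=30).read())

if __name__ == '__main__':
    urls = ['http://example.com/'] * 8        # stand-in URLs
    pool = Pool(processes=4)
    for url, size in pool.imap_unordered(fetch, urls):
        print url, size

If the slowdown disappears here, the threads were the problem; if it persists, look at the connection or the server.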
Q: Install PyObjC on Python 2.6 on OS X 10.5? OS X 10.5.8 came with Python 2.5, and had PyObjC already installed. I installed Python 2.6 from the python.org site, and PyObjC isn't there. I can't find a download to install PyObjC on my Python 2.6 install. Is checking out the PyObjC trunk and trying to build it my only choice? Will that work "out of the box"? A: Apple includes PyObjC with their Pythons that come with OS X 10.5 and 10.6. It is not part of the python.org installers. But it should be easy enough to install. Just install setuptools to the python.org python 2.6 following the instructions here. Then use easy_install-2.6 (which will have been installed in /Library/Frameworks/Python.framework/Versions/2.6/bin and may already be on your $PATH) to do: easy_install-2.6 pyobjc==2.2b2 as described here. If you want to live on the bleeding edge, you could try installing directly from the svn repository as there has been a lot of work recently, primarily in support of 10.6. If that seems like too much work, you could install an older version and all dependencies including python via MacPorts: sudo port install py26-pyobjc2 A: You should probably try to build PyObjC from trunk, which will work fine on the official Python 2.6 distribution, but not on Python 2.5. There are quite a lot of fixes in the trunk right now that weren't in 2.2b2, which afaik. is the most current version available through easy_install. There are some little snags that you may run into when building with py2app on 10.5 + 2.6 + PyObjC 2.2 (which for a lot of reasons is what you should probably do, instead of using the Xcode templates from 10.5 that build differently), especially if you still have Python 2.5 installed somewhere, so you'll probably want to build and install py2app from trunk as well, this particular issue I ran into with PyObjC 2.2 on 2.6 on 10.5 has been fixed by now. A: If your goal is to write software that will work on other people's computers, you shouldn't touch the default Python installation. If you simply cannot live without 2.6, then you're responsible for re-creating everything on your own, and that's not going to be a point-and-click process by any means.
Install PyObjC on Python 2.6 on OS X 10.5?
OS X 10.5.8 came with Python 2.5, and had PyObjC already installed. I installed Python 2.6 from the python.org site, and PyObjC isn't there. I can't find a download to install PyObjC on my Python 2.6 install. Is checking out the PyObjC trunk and trying to build it my only choice? Will that work "out of the box"?
[ "Apple includes PyObjC with their Pythons that come with OS X 10.5 and 10.6. It is not part of the python.org installers. But it should be easy enough to install. Just install setuptools to the python.org python 2.6 following the instructions here. Then use easy_install-2.6 (which will have been installed in /Library/Frameworks/Python.framework/Versions/2.6/bin and may already be on your $PATH) to do:\neasy_install-2.6 pyobjc==2.2b2\n\nas described here. If you want to live on the bleeding edge, you could try installing directly from the svn repository as there has been a lot of work recently, primarily in support of 10.6.\nIf that seems like too much work, you could install an older version and all dependencies including python via MacPorts:\nsudo port install py26-pyobjc2\n\n", "You should probably try to build PyObjC from trunk, which will work fine on the official Python 2.6 distribution, but not on Python 2.5. There are quite a lot of fixes in the trunk right now that weren't in 2.2b2, which afaik. is the most current version available through easy_install.\nThere are some little snags that you may run into when building with py2app on 10.5 + 2.6 + PyObjC 2.2 (which for a lot of reasons is what you should probably do, instead of using the Xcode templates from 10.5 that build differently), especially if you still have Python 2.5 installed somewhere, so you'll probably want to build and install py2app from trunk as well, this particular issue I ran into with PyObjC 2.2 on 2.6 on 10.5 has been fixed by now.\n", "If your goal is to write software that will work on other people's computers, you shouldn't touch the default Python installation. If you simply cannot live without 2.6, then you're responsible for re-creating everything on your own, and that's not going to be a point-and-click process by any means. \n" ]
[ 1, 1, 0 ]
[]
[]
[ "installation", "macos", "pyobjc", "python" ]
stackoverflow_0001653153_installation_macos_pyobjc_python.txt
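A quick way to verify where a PyObjC install landed is to import it and print its version. A minimal sketch, assuming the python.org 2.6 binary is the one being run and that PyObjC's importable package (named objc) built correctly:

    import sys
    print sys.executable      # should point into /Library/Frameworks/Python.framework/Versions/2.6
    print sys.version_info    # should report (2, 6, ...)

    import objc               # PyObjC's importable package is named "objc"
    print objc.__version__    # e.g. "2.2b2" after the easy_install above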
Q: Output file redirection in Python I'm writing a backup script I intend to execute in a cronjob every night. The script sets sys.stdout and sys.stderr to an output file to keep a log of what happens. To do the backup I use the following code cmd = 'rsync -av --del --stats --filter "- .thumbnails/" ' + \ '--filter "- *~" --filter "- *.iso" --filter "- lost+found/" ' + \ '--filter "- .cache/" --filter "- tmp/" --filter "- *.mp3" ' + \ '--filter "- *.log" ' + srcDir + ' ' + dstDir print "Executing '"+cmd+"' ..." try: sys.stdout.flush() sys.stderr.flush() retcode = subprocess.call( cmd, stdin = sys.stdin, stdout = sys.stdout, stderr=sys.stderr, shell=False ) if retcode < 0: print >>sys.stderr, "Command was terminated by signal", -retcode elif retcode > 0: print >>sys.stderr, "Command returned code ", retcode except OSError, e: print >>sys.stderr, "Execution failed:", e I added print statements before and after the subprocess call. The problem is that the output of the subprocess call appears in the log before the output of the print statements that precede the call. I added the flush() calls but they have no effect. Why is this happening and how could I change this behaviour? A: I just found the solution here in a Stackoverflow answer. Replace sys.stderr = sys.stdout = logFile = open( tmpLogFileName, 'a' ) with sys.stderr = sys.stdout = logFile = open( tmpLogFileName, 'a', 0 ) This tells Python not to assign any output buffer to the file (the third argument to open() sets the buffer size; 0 means unbuffered). A: Have you tried putting the flush calls outside the try block? A: Why are you printing to stderr? If the subprocess is writing to stdout while you are writing to stderr, that could explain the odd interleaving.
Output file redirection in Python
I'm writing a backup script I intend to execute in a cronjob every night. The script sets sys.stdout and sys.stderr to an output file to keep a log of what happens. To do the backup I use the following code cmd = 'rsync -av --del --stats --filter "- .thumbnails/" ' + \ '--filter "- *~" --filter "- *.iso" --filter "- lost+found/" ' + \ '--filter "- .cache/" --filter "- tmp/" --filter "- *.mp3" ' + \ '--filter "- *.log" ' + srcDir + ' ' + dstDir print "Executing '"+cmd+"' ..." try: sys.stdout.flush() sys.stderr.flush() retcode = subprocess.call( cmd, stdin = sys.stdin, stdout = sys.stdout, stderr=sys.stderr, shell=False ) if retcode < 0: print >>sys.stderr, "Command was terminated by signal", -retcode elif retcode > 0: print >>sys.stderr, "Command returned code ", retcode except OSError, e: print >>sys.stderr, "Execution failed:", e I added print statements before and after the subprocess call. The problem is that the output of the subprocess call appears in the log before the output of the print statements that precede the call. I added the flush() calls but they have no effect. Why is this happening and how could I change this behaviour?
[ "I just found the solution here in a Stackoverflow answer.\nReplace \nsys.stderr = sys.stdout = logFile = open( tmpLogFileName, 'a' )\n\nwith \nsys.stderr = sys.stdout = logFile = open( tmpLogFileName, 'a', 0 )\n\nThis tells python to not assign any output buffer to file.\n", "Have you tried putting the flush calls outside the try block?\n", "Why are you printing to stderr? If the subprocess is writing to stdout while you are writing to stderr, that could explain the odd interleaving.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "file", "python" ]
stackoverflow_0001654875_file_python.txt
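A self-contained sketch of the unbuffered-log fix described in the accepted answer; the log path and command here are placeholders for illustration:

    import subprocess
    import sys

    # Opening the log with buffering=0 keeps Python's own prints and the
    # child process's output in chronological order in the file.
    log = open('/tmp/backup.log', 'a', 0)
    sys.stdout = sys.stderr = log

    print "Executing command ..."
    retcode = subprocess.call(['rsync', '--version'], stdout=log, stderr=log)
    print "Command returned code ", retcode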
Q: Explanation of PyAPI_DATA() macro? I've searched all over the web and can't seem to find documentation or even a simple explanation of what PyAPI_DATA() does (even though it is used in the Python header files and cited on python.org). Would anyone care to explain what this is or point me to documentation I am overlooking? Thanks. A: It's used to mark public API variables (as Python's core is usually a dynamic library), e.g. on Windows, it's expanded to extern __declspec(dllexport) RTYPE when core is compiled and to extern __declspec(dllimport) RTYPE when e.g. modules are compiled. It's defined in Include/pyport.h.
Explanation of PyAPI_DATA() macro?
I've searched all over the web and can't seem to find documentation or even a simple explanation of what PyAPI_DATA() does (even though it is used in the Python header files and cited on python.org). Would anyone care to explain what this is or point me to documentation I am overlooking? Thanks.
[ "It's used to mark public API variables (as Python's core is usually a dynamic library), e.g. on Windows, it's expanded to extern __declspec(dllexport) RTYPE when core is compiled and to extern __declspec(dllimport) RTYPE when e.g. modules are compiled. It's defined in Include/pyport.h.\n" ]
[ 9 ]
[]
[]
[ "api", "c", "python", "python_c_api" ]
stackoverflow_0001655271_api_c_python_python_c_api.txt
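To see the effect of a PyAPI_DATA symbol from Python itself, ctypes can bind to an exported data variable of the running interpreter. A sketch, assuming a CPython 2.x build where Py_OptimizeFlag is one such exported variable:

    import ctypes

    # ctypes.pythonapi is the already-loaded Python core; in_dll() binds to a
    # data symbol (not a function), i.e. something declared with PyAPI_DATA.
    flag = ctypes.c_int.in_dll(ctypes.pythonapi, 'Py_OptimizeFlag')
    print flag.value   # 0 normally, 1 when the interpreter runs with -O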
Q: Subclassing python's dict, override of __setitem__ doesn't retain new value I'm subclassing dict, but ran into a problem with setitem where one assignment works, but another assignment does not. I've boiled it down to the following basic problem: class CustomDict(dict): def __setitem__(self, key, value): super(CustomDict, self).__setitem__(key, value) Test 1 fails: data = {"message":"foo"} CustomDict(data)["message"] = "bar" print CustomDict(data) # Expected "{'message': 'bar'}". Actual is "{'message': 'foo'}". print data # Expected "{'message': 'bar'}". Actual is "{'message': 'foo'}". Test 2 succeeds: data = CustomDict({"message":"foo"}) data["message"] = "bar" print CustomDict(data) # Expected "{'message': 'bar'}". Actual matches expected. print data # Expected "{'message': 'bar'}". Actual matches expected. I looked online but couldn't tell whether the subclass constructor copies the dictionary so operations are performed on a different instance of the dictionary. Any advice? A: You are constructing new instances of CustomDict on each line. CustomDict(data) makes a new instance, which copies data. Try this: cd = CustomDict({"message":"foo"}) cd["message"] = "bar" print cd # prints "{'message': 'bar'}".
Subclassing python's dict, override of __setitem__ doesn't retain new value
I'm subclassing dict, but ran into a problem with setitem where one assignment works, but another assignment does not. I've boiled it down to the following basic problem: class CustomDict(dict): def __setitem__(self, key, value): super(CustomDict, self).__setitem__(key, value) Test 1 fails: data = {"message":"foo"} CustomDict(data)["message"] = "bar" print CustomDict(data) # Expected "{'message': 'bar'}". Actual is "{'message': 'foo'}". print data # Expected "{'message': 'bar'}". Actual is "{'message': 'foo'}". Test 2 succeeds: data = CustomDict({"message":"foo"}) data["message"] = "bar" print CustomDict(data) # Expected "{'message': 'bar'}". Actual matches expected. print data # Expected "{'message': 'bar'}". Actual matches expected. I looked online but couldn't tell whether the subclass constructor copies the dictionary so operations are performed on a different instance of the dictionary. Any advice?
[ "You are constructing new instances of CustomDict on each line. CustomDict(data) makes a new instance, which copies data.\nTry this:\ncd = CustomData({\"message\":\"foo\"})\ncd[\"message\"] = \"bar\"\nprint cd # prints \"{'message': 'bar'}\".\n\n" ]
[ 10 ]
[]
[]
[ "dictionary", "python", "subclass" ]
stackoverflow_0001655422_dictionary_python_subclass.txt
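A short sketch restating the accepted answer's point: keep one instance and mutate it, instead of building a fresh copy on every line:

    class CustomDict(dict):
        def __setitem__(self, key, value):
            print "setting %r -> %r" % (key, value)   # proof the override runs
            super(CustomDict, self).__setitem__(key, value)

    data = {"message": "foo"}
    cd = CustomDict(data)    # copies data once, here
    cd["message"] = "bar"    # goes through the overridden __setitem__
    print cd                 # {'message': 'bar'}
    print data               # still {'message': 'foo'}; the copy is independent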
Q: Can I access the __dict__ object for the local scope? Here is my situation... I am trying to dynamically generate a bunch of stuff in my settings.py file on a Django site. I am setting up several sites (via the sites framework) and I want to have some values I plug in to a function that will generate a portion of the settings file for each site. For example: from universal_settings import * SITE_NAME = 'First Site' SITE_SLUG = 'firstsite' DEFAULT_FROM_EMAIL = '%s <[email protected]>' % SITE_NAME ROOT_URLCONF = 'mysite.urls.%s' % SITE_SLUG TEMPLATE_DIRS += ( os.path.join(PROJECT_ROOT, "templates", SITE_SLUG), ) Obviously it's a huge violation of DRY to have those last 3 lines in the settings file for every site running this code. So I want to do something like this: from universal_settings import * from utils import get_dynamic_settings SITE_NAME = 'First Site' SITE_SLUG = 'firstsite' get_dynamic_settings( locals() ) And here is the function: # WARNING: THIS CODE DOES NOT WORK! def get_dynamic_settings(context_dict): global DEFAULT_FROM_EMAIL global ROOT_URLCONF global TEMPLATE_DIRS DEFAULT_FROM_EMAIL = '%s <[email protected]>' % context_dict['SITE_NAME'] ROOT_URLCONF = 'mysite.urls.%s' % context_dict['SITE_SLUG'] TEMPLATE_DIRS += ( os.path.join(PROJECT_ROOT, "templates", context_dict['SITE_SLUG']), ) So my question is... how do I add things to the scope of the settings file? It doesn't seem to have a dict object available to the variables within it. Maybe I'm going about this all wrong? Thanks for your help! PS - my understanding of the global keyword is that it tells the compiler that the function means to manipulate a global variable within its own file - is there such a thing for the file from which the function is called? A: Dict returned by locals() (or globals()) is mutable, so you could do: def get_dynamic_settings(context_dict): context_dict['DEFAULT_FROM_EMAIL'] = '%s <[email protected]>' % context_dict['SITE_NAME'] context_dict['ROOT_URLCONF'] = 'mysite.urls.%s' % context_dict['SITE_SLUG'] context_dict['TEMPLATE_DIRS'] += (os.path.join(PROJECT_ROOT, "templates", context_dict['SITE_SLUG']),) A: You might want to look into the various schemes people have used to configure many django sites without duplication: How to manage local vs production settings in Django? and Elegantly handle site-specific settings/configuration in svn/hg/git/etc?
Can I access the __dict__ object for the local scope?
Here is my situation... I am trying to dynamically generate a bunch of stuff in my settings.py file on a Django site. I am setting up several sites (via the sites framework) and I want to have some values I plug in to a function that will generate a portion of the settings file for each site. For example: from universal_settings import * SITE_NAME = 'First Site' SITE_SLUG = 'firstsite' DEFAULT_FROM_EMAIL = '%s <[email protected]>' % SITE_NAME ROOT_URLCONF = 'mysite.urls.%s' % SITE_SLUG TEMPLATE_DIRS += ( os.path.join(PROJECT_ROOT, "templates", SITE_SLUG), ) Obviously it's a huge violation of DRY to have those last 3 lines in the settings file for every site running this code. So I want to do something like this: from universal_settings import * from utils import get_dynamic_settings SITE_NAME = 'First Site' SITE_SLUG = 'firstsite' get_dynamic_settings( locals() ) And here is the function: # WARNING: THIS CODE DOES NOT WORK! def get_dynamic_settings(context_dict): global DEFAULT_FROM_EMAIL global ROOT_URLCONF global TEMPLATE_DIRS DEFAULT_FROM_EMAIL = '%s <[email protected]>' % context_dict['SITE_NAME'] ROOT_URLCONF = 'mysite.urls.%s' % context_dict['SITE_SLUG'] TEMPLATE_DIRS += ( os.path.join(PROJECT_ROOT, "templates", context_dict['SITE_SLUG']), ) So my question is... how do I add things to the scope of the settings file? It doesn't seem to have a dict object available to the variables within it. Maybe I'm going about this all wrong? Thanks for your help! PS - my understanding of the global keyword is that it tells the compiler that the function means to manipulate a global variable within its own file - is there such a thing for the file from which the function is called?
[ "Dict returned by locals() (or globals()) is mutable, so you could do:\ndef get_dynamic_settings(context_dict):\n context_dict['DEFAULT_FROM_EMAIL'] = '%s <[email protected]>' % context_dict['SITE_NAME']\n context_dict['ROOT_URLCONF'] = 'mysite.urls.%s' % context_dict['SITE_SLUG']\n context_dict['TEMPLATE_DIRS'] += (os.path.join(PROJECT_ROOT, \"templates\", context_dict['SITE_SLUG']),)\n\n", "You might want to look into the various schemes people have used to configure many django sites without duplication: How to manage local vs production settings in Django? and Elegantly handle site-specific settings/configuration in svn/hg/git/etc?\n" ]
[ 3, 3 ]
[]
[]
[ "django", "python", "scope", "settings" ]
stackoverflow_0001655509_django_python_scope_settings.txt
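The reason the mutation trick works in a settings file: at module level, locals() returns the module's real namespace (the very same dict as globals()), so writes through it are visible afterwards. Inside a function body that is not guaranteed. A sketch with illustrative names:

    # settings.py-style module
    SITE_SLUG = 'firstsite'

    def add_dynamic_settings(ns):
        ns['ROOT_URLCONF'] = 'mysite.urls.%s' % ns['SITE_SLUG']

    add_dynamic_settings(locals())   # at module scope, locals() is globals()
    print ROOT_URLCONF               # mysite.urls.firstsite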
Q: Converting Python code to PHP What is the equivalent of the following Python code in PHP? import sys li = range(1,777); def countFigure(li, n): m = str(n); return str(li).count(m); # counting figures for substr in range(1,10): print substr, " ", countFigure(li, substr); Wanted output for 777 1 258 2 258 3 258 4 258 5 258 6 258 7 231 8 147 9 147 A: It's been a while since I did any Python but I think this should do it. If you could clarify what str(li) looks like it would help. <?php $li = implode('', range(1, 776)); function countFigure($li, $n) { return substr_count($li, $n); } // counting figures foreach (range(1, 9) as $substr) echo $substr, " ", countFigure($li, $substr), "\n";
Converting Python code to PHP
What is the equivalent of the following Python code in PHP? import sys li = range(1,777); def countFigure(li, n): m = str(n); return str(li).count(m); # counting figures for substr in range(1,10): print substr, " ", countFigure(li, substr); Wanted output for 777 1 258 2 258 3 258 4 258 5 258 6 258 7 231 8 147 9 147
[ "It's been a while since I did any Python but I think this should do it.\nIf you could clarify what str(li) looks like it would help.\n<?php\n\n$li = implode('', range(1, 776));\n\nfunction countFigure($li, $n)\n{\n return substr_count($li, $n);\n}\n\n// counting figures\n\nforeach (range(1, 9) as $substr)\n echo $substr, \" \", countFigure($li, $substr), \"\\n\";\n\n" ]
[ 1 ]
[]
[]
[ "php", "python" ]
stackoverflow_0001655556_php_python.txt
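The answerer's open question about str(li) matters less than it looks: li is a Python list, so str(li) renders as '[1, 2, 3, ...]', and the brackets, commas and spaces contribute no digits, which is why joining the numbers with no separator (as the PHP port does) yields the same per-digit counts. The unambiguous Python form, as a sketch:

    digits = ''.join(str(n) for n in range(1, 777))
    for d in '123456789':
        print d, digits.count(d)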
Q: How can I generate random numbers in Python? Are there any built-in libraries in Python or Numpy to generate random numbers based on various common distributions, such as: Normal Poisson Exponential Bernoulli And various others? Are there any such libraries with multi-variate distributions? A: #!/usr/bin/env python from scipy.stats import bernoulli,poisson,norm,expon bernoulli, poisson, norm, expon and many others are documented here print(norm.rvs(size=30)) print(bernoulli.rvs(.3,size=30)) print(poisson.rvs(1,2,size=30)) print(expon.rvs(5,size=30)) All the distributions defined in scipy.stats have a common interface to the pdf, cdf, rvs (random variates). More info here. A: The random module has tons of functions for generating random numbers in lots of way. Not sure it has multi-variate. Numpy.random would be the next place to look.
How can I generate random numbers in Python?
Are there any built-in libraries in Python or Numpy to generate random numbers based on various common distributions, such as: Normal Poisson Exponential Bernoulli And various others? Are there any such libraries with multi-variate distributions?
[ "#!/usr/bin/env python\nfrom scipy.stats import bernoulli,poisson,norm,expon\n\nbernoulli, poisson, norm, expon and many others are documented here \nprint(norm.rvs(size=30))\nprint(bernoulli.rvs(.3,size=30))\nprint(poisson.rvs(1,2,size=30))\nprint(expon.rvs(5,size=30))\n\nAll the distributions defined in scipy.stats have a common interface to the pdf, cdf, rvs (random variates). More info here. \n", "The random module has tons of functions for generating random numbers in lots of way. Not sure it has multi-variate.\nNumpy.random would be the next place to look.\n" ]
[ 27, 5 ]
[]
[]
[ "python", "random" ]
stackoverflow_0001655559_python_random.txt
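For the multi-variate part of the question, numpy.random has a direct call. A minimal sketch:

    import numpy as np

    mean = [0.0, 1.0]
    cov = [[1.0, 0.5],
           [0.5, 2.0]]
    samples = np.random.multivariate_normal(mean, cov, 1000)
    print samples.shape         # (1000, 2)
    print samples.mean(axis=0)  # roughly [0.0, 1.0]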
Q: Help with an AppEngine Handler Regex? I've been trying to design a Google AppEngine Python handler regex and haven't been too successful in getting it to work. I'm trying to handle API calls similar to OpenStreetMap's. My current regex looks like this: /api/0.6/(.*?)/(.*?)\/?(.*?) But when this comes in: /api/0.6/changeset/723/close It incorrectly groups 723/close and changeset, when I wanted it to group it into three things: changeset, 723, and close. The last slash and group are optional, thus the /?. A: Try this: ^/api/0.6/([^/]+)/([^/]+)/?([^/]*)$ My Python tests: >>> regex = re.compile(r"^/api/0.6/([^/]+)/([^/]+)/?([^/]*)$") >>> regex.match("/api/0.6/changeset") is None True >>> regex.match("/api/0.6/changeset/723").groups() ('changeset', '723', '') >>> regex.match("/api/0.6/changeset/723/close").groups() ('changeset', '723', 'close') >>> regex.match("/api/0.6/changeset/723/close/extragroup") is None True
Help with an AppEngine Handler Regex?
I've been trying to design a Google AppEngine Python handler regex and haven't been too successful in getting it to work. I'm trying to handle API calls similar to OpenStreetMap's. My current regex looks like this: /api/0.6/(.*?)/(.*?)\/?(.*?) But when this comes in: /api/0.6/changeset/723/close It incorrectly groups 723/close and changeset, when I wanted it to group it into three things: changeset, 723, and close. The last slash and group are optional, thus the /?.
[ "Try this:\n^/api/0.6/([^/]+)/([^/]+)/?([^/]*)$\n\nMy Python tests:\n>>> regex = re.compile(r\"^/api/0.6/([^/]+)/([^/]+)/?([^/]*)$\")\n>>> regex.match(\"/api/0.6/changeset\") is None\nTrue\n>>> regex.match(\"/api/0.6/changeset/723\").groups()\n('changeset', '723', '')\n>>> regex.match(\"/api/0.6/changeset/723/close\").groups()\n('changeset', '723', 'close')\n>>> regex.match(\"/api/0.6/changeset/723/close/extragroup\") is None\nTrue\n\n" ]
[ 3 ]
[]
[]
[ "google_app_engine", "python", "regex" ]
stackoverflow_0001655745_google_app_engine_python_regex.txt
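A sketch of how that pattern slots into an App Engine webapp application, where each capturing group becomes a positional handler argument; the handler class name is illustrative:

    from google.appengine.ext import webapp

    class ApiHandler(webapp.RequestHandler):
        def get(self, kind, ident, action):
            # e.g. kind='changeset', ident='723', action='close' (or '' if absent)
            self.response.out.write('%s %s %s' % (kind, ident, action))

    application = webapp.WSGIApplication([
        (r'/api/0.6/([^/]+)/([^/]+)/?([^/]*)', ApiHandler),
    ])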
Q: Querying the connecting device for usb devices in OS X Ok, so here's the setup. In OS X (>= 10.5), is it possible, given a mounted usb device with a known location, say /Volumes/FLASHDRIVE, to find out whether this device is connecting through another usb device (a card reader for example) and if so, which one? Ideally, this could all be done in python, but if not that's ok too. A: You're confusing the term device with the term volume--in this example (and in most real world situations) there would only be one device involved. The state of most hardware falls under the purview of IOKit, and the only way you can possibly get to this information from Python is through careful parsing of the ioreg tool's output.
Querying the connecting device for usb devices in OS X
Ok, so here's the setup. In OS X (>= 10.5), is it possible, given a mounted usb device with a known location, say /Volumes/FLASHDRIVE, to find out whether this device is connecting through another usb device (a card reader for example) and if so, which one? Ideally, this could all be done in python, but if not that's ok too.
[ "You're confusing the term device with the term volume--in this example (and in most real world situations) there would only be one device involved. \nThe state of most hardware falls under the purview of IOKit, and the only way you can possibly get to this information from Python is through careful parsing of the ioreg tool's output.\n" ]
[ 0 ]
[]
[]
[ "macos", "python", "usb" ]
stackoverflow_0001655927_macos_python_usb.txt
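A rough sketch of the ioreg-parsing idea from the answer: shell out, restrict the dump to the USB plane, and scan for device nodes. The string matching is heuristic, not a stable API:

    import subprocess

    # -p IOUSB limits the registry dump to the USB plane; -l prints properties
    output = subprocess.Popen(['ioreg', '-p', 'IOUSB', '-l'],
                              stdout=subprocess.PIPE).communicate()[0]

    for line in output.splitlines():
        if '+-o ' in line:        # tree nodes look like "+-o DeviceName@14200000"
            print line.strip()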
Q: Avoid C style comments while reading a file I am parsing a C file for the LOC in a function using Python. I am starting from the first line of the function definition and skipping all lines until I meet the first "{". The issue is that "{" can also appear as part of a comment. I just want to skip all "{" present inside comments. E.g. 100: int func( 102: int i, // some comment { .... 103: float f, 104: char c /* some comment here { ... 105: .... more comment */ 106: ) 107:{ What's the best Pythonic way to achieve this? A: Here is a comment stripper that should also comprehend comment introducers within quoted strings: from pyparsing import cppStyleComment,dblQuotedString cppStyleComment.ignore(dblQuotedString) src = cppStyleComment.suppress().transformString(src) print src With your original snippet as src, this prints: int func( int i, float f, char c ) { You can do all this in memory, so you don't have to create a comment-less file first. A: You're going to find that getting this right is very difficult without a real lexer and parser. This will find the opening brace you're looking for: f = open("myfile.c") for l in f.readlines(): l = l.split('//')[0] if '{' in l: break But for example, you could have double-slashes inside string literals, etc. A: If you have gcc, you can use gcc -E input_file as a preprocessor which will strip-off comments (but also expand macros - might change LOC). For your example the output would be: # 1 "_.c" # 1 "" # 1 "" # 1 "_.c" int func( int i, float f, char c ) {
Avoid C style comments while reading a file
I am parsing a C file for the LOC in a function using Python. I am starting from the first line of the function definition and skipping all lines until I meet the first "{". The issue is that "{" can also appear as part of a comment. I just want to skip all "{" present inside comments. E.g. 100: int func( 102: int i, // some comment { .... 103: float f, 104: char c /* some comment here { ... 105: .... more comment */ 106: ) 107:{ What's the best Pythonic way to achieve this?
[ "Here is a comment stripper that should also comprehend comment introducers within quoted strings:\nfrom pyparsing import cppStyleComment,dblQuotedString\n\ncppStyleComment.ignore(dblQuotedString)\nsrc = cppStyleComment.suppress().transformString(src)\n\nprint src\n\nWith your original snippet as src, this prints:\nint func(\n int i, \n float f,\n char c \n )\n {\n\nYou can do all this in memory, so you don't have to create a comment-less file first.\n", "You're going to find that getting this right is very difficult without a real lexer and parser.\nThis will find the opening brace you're looking for:\nf = open(\"myfile.c\")\nfor l in f.readlines():\n l = l.split('//')[0]\n if '{' in l:\n break\n\nBut for example, you could have double-slashes inside string literals, etc.\n", "If you have gcc, you can use gcc -E input_file as a preprocessor which will strip-off comments (but also expand macros - might change LOC). For your example the output would be:\n\n# 1 \"_.c\"\n# 1 \"\"\n# 1 \"\"\n# 1 \"_.c\"\n\n\n int func(\n int i,\n float f,\n char c\n\n )\n{\n\n" ]
[ 7, 3, 0 ]
[]
[]
[ "python" ]
stackoverflow_0001654649_python.txt
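If pyparsing or gcc feels like overkill, a small hand-rolled scanner covers the cases in the question. A sketch that tracks //, /* */ and double-quoted strings while looking for the first real '{' (character literals are deliberately ignored):

    def find_open_brace(text):
        i, n = 0, len(text)
        in_line = in_block = in_str = False
        while i < n:
            c = text[i]
            nxt = text[i + 1] if i + 1 < n else ''
            if in_line:
                if c == '\n': in_line = False
            elif in_block:
                if c == '*' and nxt == '/': in_block = False; i += 1
            elif in_str:
                if c == '\\': i += 1        # skip the escaped character
                elif c == '"': in_str = False
            elif c == '/' and nxt == '/': in_line = True; i += 1
            elif c == '/' and nxt == '*': in_block = True; i += 1
            elif c == '"': in_str = True
            elif c == '{': return i         # first brace outside comments/strings
            i += 1
        return -1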
Q: an auth method over HTTP(s) and/or REST with libraries for Python and/or C++ Because I don't exactly know how any auth method works, I want to write my own. So, what I want to do is the following. A client sends username+password (or SHA1(username+password)) over HTTPS; the server gets the username+password, generates a big random number, and stores it in a table called TOKENS (in some database) along with his IP, then it gives the client that exact number. From now on, all the requests made by the client are accompanied by that TOKEN, and if the TOKEN is not in the table TOKENS then any such request will fail. If the user hasn't made any requests in 2 hours, the TOKEN will expire. If the user wants to log out, he makes a request to '/logout' on the server and the server deletes from the table TOKENS the entry containing his token, but ONLY if the request to '/logout' originates from his IP. Maybe I am reinventing the wheel... this wouldn't be very good, so my question is: if there is some auth system that already works like this, what is its name, and does it have any OSS C++ or Python libraries available? I am not sure if finding such an auth system and configuring it would take longer than writing it myself; on the other hand, I know security is a delicate problem, so I am approaching this with some doubt that I am capable of writing something secure enough. Also, is there a good OSS C++ HTTP library? I'm planning to write a RESTful desktop client for a web app. Depending on the available libraries, I will choose whether I'll write it in C++ or Python. A: If you are implementing such an authentication system over ordinary HTTP, you are vulnerable to replay attacks. An attacker could sniff out the SHA1(username+password) and just resend it every time he/she wants to log in. To make such an authentication system work, you will need to use a nonce. You might want to look at HTTP Digest authentication for tips. A: Because I don't exactly know how any auth method works, I want to write my own How could you ever write something you don't understand? Learn at least one, the underlying concepts are similar in every library. Python has repoze.what. A: I would highly recommend OAuth here, for which many open source libraries are available.
an auth method over HTTP(s) and/or REST with libraries for Python and/or C++
Because I don't exactly know how any auth method works, I want to write my own. So, what I want to do is the following. A client sends username+password (or SHA1(username+password)) over HTTPS; the server gets the username+password, generates a big random number, and stores it in a table called TOKENS (in some database) along with his IP, then it gives the client that exact number. From now on, all the requests made by the client are accompanied by that TOKEN, and if the TOKEN is not in the table TOKENS then any such request will fail. If the user hasn't made any requests in 2 hours, the TOKEN will expire. If the user wants to log out, he makes a request to '/logout' on the server and the server deletes from the table TOKENS the entry containing his token, but ONLY if the request to '/logout' originates from his IP. Maybe I am reinventing the wheel... this wouldn't be very good, so my question is: if there is some auth system that already works like this, what is its name, and does it have any OSS C++ or Python libraries available? I am not sure if finding such an auth system and configuring it would take longer than writing it myself; on the other hand, I know security is a delicate problem, so I am approaching this with some doubt that I am capable of writing something secure enough. Also, is there a good OSS C++ HTTP library? I'm planning to write a RESTful desktop client for a web app. Depending on the available libraries, I will choose whether I'll write it in C++ or Python.
[ "If you are implementing such authentication system over ordinary HTTP, you are vulnerable to replay attacks. Attacker could sniff out the SHA1(username+password) and just resend it every time he/she wants to log in. To make such authentication system work, you will need to use a nonce.\nYou might want to look at HTTP Digest authentication for tips.\n", "\nBecause I don't exactly know how any auth method works I want to write my own\n\nHow could you ever write something you don't understand? Learn at least one, the underlaying concepts are similar in every library.\nPython has repoze.what.\n", "I would highly recommend OAuth here, for which many open source libraries are available.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "authentication", "c++", "http", "python", "rest" ]
stackoverflow_0001486056_authentication_c++_http_python_rest.txt
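For comparison, a sketch of the token issue/check logic the question describes, with os.urandom as the random source; the storage is an in-memory dict here, standing in for the TOKENS table:

    import binascii, os, time

    TOKENS = {}            # token -> (ip, last_used)
    TTL = 2 * 60 * 60      # two hours of inactivity, as in the question

    def issue_token(ip):
        token = binascii.hexlify(os.urandom(20))
        TOKENS[token] = (ip, time.time())
        return token

    def check_token(token, ip):
        entry = TOKENS.get(token)
        if entry is None or entry[0] != ip:
            return False
        if time.time() - entry[1] > TTL:
            del TOKENS[token]                 # expired
            return False
        TOKENS[token] = (ip, time.time())     # refresh the idle timer
        return True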
Q: appending successfully to a python list This may seem like the world's simplest Python question... but I'm going to give explaining it a go. Basically I have to loop through pages of JSON results from a query. The standard result is this: {'result': [{result 1}, {result 2}], 'next_page': '2'} I need the loop to keep looping, appending the list in the result key to a variable that can be accessed later, so the number of results in it can be counted. However, I require it to loop only while next_page exists, since after a while, when there are no more pages, the next_page key is dropped from the dict. Currently I have this: next_page = True while next_page == True: try: next_page_result = get_results['next_page'] # this gets the next page next_url = urllib2.urlopen("http://search.twitter.com/search.json" + next_page_result) # this opens the next page json_loop = simplejson.load(next_url) # this puts the results into json new_result = result.append(json_loop['results']) # this grabs the result and "should" put it into the list except KeyError: next_page = False result_count = len(new_result) A: Alternate (cleaner) approach, making one big list: results = [] res = { "next_page": "magic_token_to_get_first_page" } while "next_page" in res: fp = urllib2.urlopen("http://search.twitter.com/search.json" + res["next_page"]) res = simplejson.load(fp) fp.close() results.extend(res["results"]) A: new_result = result.append(json_loop['results']) The list is appended as a side-effect of the method call. append() actually returns None, so new_result is now a reference to None. A: You want to use result.append(json_loop['results']) # this grabs the result and "should" put it into the list new_result = result if you insist on doing it that way. As Bastien said, result.append(whatever) == None A: AFAICS, you don't need the variable new_result at all. result_count = len(result) will give you the answer you need. A: You cannot append to a dict, but you can append to the list inside your dict. You should do it like this: result['result'].append(json_loop['results']) If you want to check that there is no next page value in your result dict and delete the key from the dict, do it like this: if not result['next_page']: del result['next_page']
appending successfully to a python list
This may seem like the world's simplest Python question... but I'm going to give explaining it a go. Basically I have to loop through pages of JSON results from a query. The standard result is this: {'result': [{result 1}, {result 2}], 'next_page': '2'} I need the loop to keep looping, appending the list in the result key to a variable that can be accessed later, so the number of results in it can be counted. However, I require it to loop only while next_page exists, since after a while, when there are no more pages, the next_page key is dropped from the dict. Currently I have this: next_page = True while next_page == True: try: next_page_result = get_results['next_page'] # this gets the next page next_url = urllib2.urlopen("http://search.twitter.com/search.json" + next_page_result) # this opens the next page json_loop = simplejson.load(next_url) # this puts the results into json new_result = result.append(json_loop['results']) # this grabs the result and "should" put it into the list except KeyError: next_page = False result_count = len(new_result)
[ "Alternate (cleaner) approach, making one big list:\nresults = []\nres = { \"next_page\": \"magic_token_to_get_first_page\" }\nwhile \"next_page\" in res:\n fp = urllib2.urlopen(\"http://search.twitter.com/search.json\" + res[\"next_page\"])\n res = simplejson.load(fp)\n fp.close()\n results.extend(res[\"results\"])\n\n", "new_result = result.append(json_loop['results'])\n\nThe list is appended as a side-effect of the method call.\nappend() actually returns None, so new_result is now a reference to None.\n", "You want to use \nresult.append(json_loop['results']) # this grabs the result and \"should\" put it into the list\nnew_result = result\n\nif you insist on doing it that way. As Bastien said, result.append(whatever) == None\n", "AFAICS, you don't need the variable new_result at all.\nresult_count = len(result)\n\nwill give you the answer you need.\n", "you cannot append into a dict..you can append into your list inside your dict,you should do like this\nresult['result'].append(json_loop['results'])\n\nif you want to check if there is no next page value in your result dict,and you want to delete the key from the dict,just do like this\nif not result['next_page']:\n del result['next_page']\n\n" ]
[ 4, 2, 1, 0, 0 ]
[]
[]
[ "append", "dictionary", "list", "python" ]
stackoverflow_0001656059_append_dictionary_list_python.txt
Q: Shortest total path among set of Latitude/Longitudes I have a set of 52 or so latitude/longitude pairs. I simply need to find the shortest path through all of them; it doesn't matter where the starting point or ending point is. I've implemented Dijkstra's algorithm by hand multiple times before and don't really have the time to do it again. I've found a couple of things that come close, but most require raw graphs with pre-computed weights for each edge. Do you know of any libraries or existing scripts/applications which will compute the shortest path in this manner? The code/libraries would preferably use Python or Clojure but it really doesn't matter. Thanks A: If this is a closed path, it is the Traveling Salesman Problem, and a sub-optimal but quite effective way to resolve it is to use Simulated Annealing A: In python, the best graph handling library I was able to put my hands on is networkx. It supports a broad range of different algos for short path search. Go for it. It's really complete and well designed. A: Isn't this the Traveling Salesman Problem, and therefore there is no efficient way to solve it?
Shortest total path among set of Latitude/Longitudes
I have a set of 52 or so latitude/longitude pairs. I simply need to find the shortest path through all of them; it doesn't matter where the starting point or ending point is. I've implemented Dijkstra's algorithm by hand multiple times before and don't really have the time to do it again. I've found a couple of things that come close, but most require raw graphs with pre-computed weights for each edge. Do you know of any libraries or existing scripts/applications which will compute the shortest path in this manner? The code/libraries would preferably use Python or Clojure but it really doesn't matter. Thanks
[ "If this is a closed path, it is the Traveling Salesman Problem, and a sub-optimal but quite effective way to resolve it is to use Simulated Annealing\n", "In python, the best graph handling library I was able to put my hands on is networkx. It supports a broad range of different algos for short path search.\nGo for it. It's really complete and well designed.\n", "Isn't this the Traveling Salesman Problem, and therefore there is no efficient way to solve it?\n" ]
[ 3, 2, 0 ]
[]
[]
[ "clojure", "graph_theory", "mapping", "python" ]
stackoverflow_0001656112_clojure_graph_theory_mapping_python.txt
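For only 52 points, even a greedy nearest-neighbour pass (optionally refined by the simulated annealing suggested above) is workable. A sketch using a haversine distance on (lat, lon) tuples in degrees:

    import math

    def haversine(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
        h = (math.sin((lat2 - lat1) / 2) ** 2 +
             math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))   # kilometres

    def nearest_neighbour(points):
        path = [points[0]]
        rest = list(points[1:])
        while rest:
            nxt = min(rest, key=lambda p: haversine(path[-1], p))
            rest.remove(nxt)
            path.append(nxt)
        return path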
Q: Would it be possible to write a 3D game as large as World of Warcraft in pure Python? Would it be possible to write a 3D game as large as World of Warcraft in pure Python? Assuming the use of DirectX / D3D bindings or OpenGL bindings. If not, what would be the largest hold-up to doing such a project in Python? I know games tend to fall into the realm of C and C++ but sometimes people do things out of habit! Any information would help satisfy my curiosity. Edit: Would the GIL pose a major issue on 3d client performance? And what is the general performance penalty for using, say, OpenGL or DirectX bindings vs natively using the libraries? A: Yes. How it will perform is another question. A good development pattern would be to develop it in pure python, and then profile it, and rewrite performance-critical bottlenecks, either in C/C++/Cython or even python itself but with more efficient code. A: While I don't know all the technical details of World of Warcraft, I would say that an MMO of its size could be built in Stackless Python. EVE Online uses it and they have one server for 200,000 users. A: Technically, anything is possible in any Turing Complete programming language. Practically though, you will run into trouble making the networking stack out of a high level language, because the server will have to be VERY fast to handle so many players. The gaming side of things on the client, there should be no problem, because there is nothing too complicated about GUIs or quests or keyboard input and what have you. The problems will be in whatever is computationally intensive up on the server. Anything that happens in human-time like logging on will probably be just fine, but if something needs to be instantaneous over ten thousand users, you might want to go for an external library done up in C. Now some Python guru is going to come out of the woodwork and rip my head off because, as I said at the top, technically, anything can be done with enough effort. A: The game Minions of Mirth is a full MMO more or less on the scale of WoW, and was done mostly in Python. The client side used the Torque Game Engine, which is written in C++, but the server code and behaviours were all Python. A: Yes, you could write it in assembly, or Java, or Python, or brainfuck. It's just how much time you are willing to put into it. Language performance isn't a major issue anymore; it's more about which algorithms you use, not what language you use. A: There are some additional real industry examples in addition to Eve Online. The server backend on the Ultima Online 2 project at Origin in the late 90's was mostly Python over a C++ server infrastructure, and the late Tabula Rasa game from NCSoft (with most of the same dev team) had the same architecture. The Twisted Matrix python server framework was created originally with these exact goals - actually by a developer on the UO2 project at the time, and there was a company called Ninjaneering that attempted to commercialize that code-base through MMO projects. There has been a move towards lua as a scripting engine (eg EQ2) as it is easier to embed and instance. The problems with python in this environment tend to be in the interface between languages. When you do the inevitable optimization of moving some high performance system from python to C/C++, then pushing data back and forth across language boundaries and calling functions across language boundaries becomes an issue. The cost of serialization can be high if done poorly. For example, using early versions of SWIG would serialize pointers into their string representation and then parse the string back into a pointer on the other side!! Check out this paper from the mid-90's: But, in the long term, I think it is possible. A: Since your main question has already been answered well, I'll answer your latter questions: Would the GIL pose a major issue on 3d client performance? In Python 2.6, the multiprocessing library was introduced, so you can take advantage of multiple processor cores without worrying about the GIL. Stackless Python also has some pretty cool related stuff. And what is the general performance penalty for using, say, OpenGL or DirectX bindings vs natively using the libraries? I don't have any benchmarks to back it up, but the penalty for using the bindings vs. the native libraries is small enough that you don't need to worry about it. A: To what I think your specific question is, "... in pure Python ...", the answer is NO. Python is not fast enough to call OpenGL or DirectX efficiently enough to re-create World Of Warcraft at an acceptable frame rate. Like many others have answered, given some high level framework, it would be possible to use Python as the scripting language but at a minimum you'd need some kind of graphics system written in another language like C++ to handle the graphics. For networking, given that WoW is not an action game, you might be able to get away with pure python but most likely that part as well would need to be some non-python library. A: I have been trying my hand at writing 3D games in Python, and given a good rendering framework (my favourite is OGRE) and decent bindings, it is amazing what you can get away with. However, especially with games, you are always trying to squeeze as much as you can out of the hardware. The performance disadvantage of python eventually will make itself felt. The main problem I ran into using python is its massive call overhead. Calling python functions, even from other python functions, is very expensive. In a way, it's the price you pay for the dynamic nature of python. When you use the function call operator "()" on a symbol, it has to work out whether it's a function or a class, look over the method resolution order, handle the keyword arguments, etc etc. All these things are done ahead of time in less dynamic (compiled) languages. I have seen people trying to overcome this problem by manually inlining function calls. I do not have to tell you that this medicine is worse than the ailment. A: Just because it might give an interesting read, Civilization is partly written using Python. A google on it returns interesting reading material. A: As a technologist I know: If it can be written in C\C++ it can be written in assembly (though it will take longer). If it can be written in C\C++ and is not a low-level code - it can be written in any managed environment. WoW is a high-level program that is written in C\C++ python is a managed environment Therefore: WoW can be written in python and so can any other MMORPG in 3D... The hardest part will be the 3d engine for it is the "heaviest" part of code - you will need to use an outside engine (written in C\C++\Assembly) or to write one and optimize it (not recommended) A: Because Python is interpreted there would be a performance hit, as opposed to C/C++, but, you would want to use something like PyOpenGL instead of DirectX though, to run on more operating systems. But, I don't see why you couldn't write such a game in Python. A: Python is not interpreted - it is tokenized/'just in time' bytecode 'interpreted' and it doesn't have a VM like Java does. This means, in English, it can be daaaaaamnfast. Not all the time though, it depends on the problem and the libraries, but python is not slow; this is a common misconception even among knowledgeable people (and that includes deep java engine folks who have just not gone and tried python).
Would it be possible to write a 3D game as large as World of Warcraft in pure Python? Assuming the use of DirectX / D3D bindings or OpenGL bindings. If not, what would be the largest hold-up to doing such a project in Python? I know games tend to fall into the realm of C and C++ but sometimes people do things out of habit! Any information would help satisfy my curiosity. Edit: Would the GIL post a major issue on 3d client performance? And what is the general performance penalty for using say, OpenGL or DirectX bindings vs natively using the libraries?
[ "Yes. How it will perform is another question.\nA good development pattern would be to develop it in pure python, and then profile it, and rewrite performance-critical bottlenecks, either in C/C++/Cython or even python itself but with more efficient code.\n", "While I don't know all the technical details of World of Warcraft, I would say that an MMO of its size could be built in Stackless Python.\nEVE Online uses it and they have one server for 200,000 users.\n", "Technically, anything is possible in any Turing Complete programming language.\nPractically though, you will run into trouble making the networking stack out of a high level language, because the server will have to be VERY fast to handle so many players. \nThe gaming side of things on the client, there should be no problem, because there is nothing too complicated about GUIs or quests or keyboard input and what have you. \nThe problems will be in whatever is computationally intensive up on the server. Anything that happens in human-time like logging on will probably be just fine, but if somemthing needs to be instantaneous over ten thousand users, you might want to go for an external library done up in C. \nNow some Python guru is going to come out of the woodwork and rip my head off because, as I said at the top, technically, anything can be done with enough effort.\n", "The game Minions of Mirth is a full MMO more or less on the scale of WoW, and was done mostly in Python. The client side used the Torque Game Engine, which is written in C++, but the server code and behaviours were all Python.\n", "Yes, you could write it in assembly, or Java, or Python, or brainfuck. It's just how much time you are willing to put into it. Language performance's aren't a major issue anymore, it's more about which algorithms you use, not what language you use.\n", "There are some additional real industry examples in addition to Eve Online. The server backend on the Ultima Online 2 project at Origin in the late 90's was mostly Python over a C++ server infrastructure, and the late Tablua Rasa game from NCSoft (with most of the same dev team) had the same architecture.\nThe Twisted Matrix python server framework was created originally with this exact goals - actually from a developer on the UO2 project at the time, and there was a company called Ninjaneering that attempted to commercialize that code-base through MMO projects.\nThere has been a move towards lua as a scripting engine (eg EQ2) as is easier to embed and instance.\nThe problems with python in this environment tend to be in the interface between languages. When you do the inevitable optimization of moving some high performance system from python to C/C++, then pushing data back and forth across language boundaries and calling functions across language boundaries becomes an issue. The cost of serialization can be high if done poorly. For example, using early versions of SWIG would serialize pointers into their string representation and then parse the string back into a pointer on the other side!!\nCheck out this paper from the mid-90's:\nBut, in the long term, i think it is possible.\n", "Since your main question has already been answered well, I'll answer your latter questions:\n\nWould the GIL post a major issue on 3d client performance?\n\nIn Python 2.6, the multiprocessing library was introduced, so you can take advantage of multiple processor cores without worrying about the GIL. 
Stackless Python also has some pretty cool related stuff.\n\nAnd what is the general performance penalty for using say, OpenGL or DirectX bindings vs natively using the libraries?\n\nI don't have any benchmarks to back it up, but the penalty for using the bindings vs. the native libraries is small enough that you don't need to worry about it.\n", "The answer to what I think your specific question is, \"... in pure Python ...\" the answer is NO.\nPython is not fast enough to call OpenGL or DirectX efficently enough to re-create World Of Warcraft at an exceptable frame rate.\nLike many others have answered, given some high level frame work, it would be possible to use Python has the scripting language but at a minimum you'd need some kind of graphics system written in another language like C++ to handle the graphics. For networking, given that WoW is not an action game, you might be able to get away with pure python but most likely that part as well would need to be some non-python library.\n", "I have been trying my hand at writing 3D games in Python, and given a good rendering framework (my favourite is OGRE) and decent bindings, it is amazing what you can get away with. However, especially with games, you are always trying to squeeze as much as you can out of the hardware. The performance disadvantage of python eventually will make itself felt.\nThe main problem I ran into using python is its massive call overhead. Calling python functions, even from other python functions is very expensive. In a way, it's the price you pay for the dynamic nature of python. When you use the function call operator \"()\" on a symbol, it has to work out whether it's a function or a class, look over the method resolution order, handle the keyword arguments, etc etc. All these things are done ahead of time in less dynamic (compiled) languages.\nI have seen people trying to overcome this problem by manually inlining function calls. I do not have to tell you that this medicine is worse than the ailment.\n", "Just because it might give an interesting read, Civilization is partly written using Python.\nA google on it returns interesting reading material.\n", "As a technologist I know:\nIf it can be written in C\\C++ it can be written in assembly (though it will take longer).\nIf it can be written in C\\C++ and is not a low-level code - it can be written in any managed environment.\nWoW is a high-level program that is written in C\\C++\npython is a managed environment\nThere for:\nWoW can be written in python and so any other MMORPG in 3D...\n\nThe hardest part will be the 3d engine for it is the \"heaviest\" part of code - you will need to use an outside engine (written in C\\C++\\Assebly) or to write one and optimize it (not recommended)\n", "Because Python is interpreted there would be a performance hit, as opposed to C/C++, but, you would want to use something like PyOpenGL instead of DirectX though, to run on more operating systems.\nBut, I don't see why you couldn't write such a game in Python.\n", "Python is not interpreted - it is tokenized/'just in time' bytecode 'interpreted' and it doesn't have a VM like Java does. This means, in english, it can be daaaaaamnfast. Not all the time though, it depends on the problem and the libraries, but python is not slow, this is a common misconception even among knowledgable people (and that includes deep java engine folks who have just not gone and tried python).\n" ]
[ 17, 14, 9, 7, 6, 6, 5, 5, 4, 2, 2, 0, 0 ]
[]
[]
[ "3d", "direct3d", "python" ]
stackoverflow_0000916663_3d_direct3d_python.txt
Q: Grab some ofx data with python I was trying to use http://www.jongsma.org/gc/scripts/ofx-ba.py to grab my bank account information from Wachovia. Having no luck, I decided that I would just try to manually construct some request data using this example. So, I have this file that I want to use as the request data. Let's call it req.ofxsgml: OFXHEADER:100 DATA:OFXSGML VERSION:102 SECURITY:NONE ENCODING:USASCII CHARSET:1252 COMPRESSION:NONE OLDFILEUID:NONE NEWFILEUID:NONE <OFX> <SIGNONMSGSRQV1> <SONRQ> <DTCLIENT>20071015021529.000[-8:PST] <USERID>TheNameIuseForOnlineBanking <USERPASS>MySecretPassword <LANGUAGE>ENG <FI> <ORG>Wachovia <FID>4309 </FI> <APPID>Money <APPVER>1700 </SONRQ> </SIGNONMSGSRQV1> <BANKMSGSRQV1> <STMTTRNRQ> <TRNUID>438BD6F4-2106-4C88-8DE5-7625915A2FC0 <STMTRQ> <BANKACCTFROM> <BANKID>061000227 <ACCTID>101555555555 <ACCTTYPE>CHECKING </BANKACCTFROM> <INCTRAN> <INCLUDE>Y </INCTRAN> </STMTRQ> </STMTTRNRQ> </BANKMSGSRQV1> </OFX> Then, in Python, I try: >>> import urllib2 >>> query = open('req.ofxsgml').read() >>> request = urllib2.Request('https://pfmpw.wachovia.com/cgi-forte/fortecgi?servicename=ofx&pagename=PFM', query, { "Content-type": "application/x-ofx", "Accept": "*/*, application/x-ofx" }) >>> f = urllib2.urlopen(request) This command gives me a 500 and this traceback. I wonder what is wrong with my request. Visiting the url with no data and no concern for headers, >>> f = urllib2.urlopen('https://pfmpw.wachovia.com/cgi-forte/fortecgi?servicename=ofx&pagename=PFM') yields the same thing as visiting that url directly, HTTPError: HTTP Error 403: <BODY><H1>Request not allowed</H1></BODY>. This is pretty obvious but just an observation. Everything on the subject seems to be pretty outdated. I am hoping to write a simple Python OFX module to open-source. Maybe there is already something developed that I have not managed to find? EDIT - If I make a flat mapping of the above information: d = {'ACCTID': '10555555', 'ACCTTYPE': 'CHECKING', 'APPID': 'Money', 'APPVER': '1700', 'BANKID': '061000227', 'DTCLIENT': '20071015021529.000[-8:PST]', 'FID': '4309', 'INCLUDE': 'Y', 'LANGUAGE': 'ENG', 'ORG': 'Wachovia', 'TRNUID': 'I18BD6F4-2006-4C88-8DE5-7625915A2FC0', 'USERID': 'm48m40', 'USERPASS': '12397'} and then urlencode it and make the request with that as the data query=urllib.urlencode(d) request = urllib2.Request('https://pfmpw.wachovia.com/cgi-forte/fortecgi?servicename=ofx&pagename=PFM', query, { "Content-type": "application/x-ofx", "Accept": "*/*, application/x-ofx" }) f = urllib2.urlopen(request) HTTP Error 403: <BODY><H1>Request not allowed</H1></BODY> A: The problem was that you were previously passing in the data from your file directly as the data parameter to the Request. The file you were reading in contains both the headers and the data that you should be sending. You needed to supply the headers and the data separately as you have now done. HTTP error 403 means the request was correct but the server is refusing to respond to it. Have you already signed up and arranged permission to use the web service you are trying to access? If so is there some authentication that you need to do before making the request? A: could just be authentication? (or lack thereof?)
Grab some ofx data with python
I was trying to use http://www.jongsma.org/gc/scripts/ofx-ba.py to grab my bank account information from Wachovia. Having no luck, I decided that I would just try to manually construct some request data using this example. So, I have this file that I want to use as the request data. Let's call it req.ofxsgml: OFXHEADER:100 DATA:OFXSGML VERSION:102 SECURITY:NONE ENCODING:USASCII CHARSET:1252 COMPRESSION:NONE OLDFILEUID:NONE NEWFILEUID:NONE <OFX> <SIGNONMSGSRQV1> <SONRQ> <DTCLIENT>20071015021529.000[-8:PST] <USERID>TheNameIuseForOnlineBanking <USERPASS>MySecretPassword <LANGUAGE>ENG <FI> <ORG>Wachovia <FID>4309 </FI> <APPID>Money <APPVER>1700 </SONRQ> </SIGNONMSGSRQV1> <BANKMSGSRQV1> <STMTTRNRQ> <TRNUID>438BD6F4-2106-4C88-8DE5-7625915A2FC0 <STMTRQ> <BANKACCTFROM> <BANKID>061000227 <ACCTID>101555555555 <ACCTTYPE>CHECKING </BANKACCTFROM> <INCTRAN> <INCLUDE>Y </INCTRAN> </STMTRQ> </STMTTRNRQ> </BANKMSGSRQV1> </OFX> Then, in Python, I try: >>> import urllib2 >>> query = open('req.ofxsgml').read() >>> request = urllib2.Request('https://pfmpw.wachovia.com/cgi-forte/fortecgi?servicename=ofx&pagename=PFM', query, { "Content-type": "application/x-ofx", "Accept": "*/*, application/x-ofx" }) >>> f = urllib2.urlopen(request) This command gives me a 500 and this traceback. I wonder what is wrong with my request. Visiting the url with no data and no concern for headers, >>> f = urllib2.urlopen('https://pfmpw.wachovia.com/cgi-forte/fortecgi?servicename=ofx&pagename=PFM') yields the same thing as visiting that url directly, HTTPError: HTTP Error 403: <BODY><H1>Request not allowed</H1></BODY>. This is pretty obvious but just an observation. Everything on the subject seems to be pretty outdated. I am hoping to write a simple Python OFX module to open-source. Maybe there is already something developed that I have not managed to find? EDIT - If I make a flat mapping of the above information: d = {'ACCTID': '10555555', 'ACCTTYPE': 'CHECKING', 'APPID': 'Money', 'APPVER': '1700', 'BANKID': '061000227', 'DTCLIENT': '20071015021529.000[-8:PST]', 'FID': '4309', 'INCLUDE': 'Y', 'LANGUAGE': 'ENG', 'ORG': 'Wachovia', 'TRNUID': 'I18BD6F4-2006-4C88-8DE5-7625915A2FC0', 'USERID': 'm48m40', 'USERPASS': '12397'} and then urlencode it and make the request with that as the data query=urllib.urlencode(d) request = urllib2.Request('https://pfmpw.wachovia.com/cgi-forte/fortecgi?servicename=ofx&pagename=PFM', query, { "Content-type": "application/x-ofx", "Accept": "*/*, application/x-ofx" }) f = urllib2.urlopen(request) HTTP Error 403: <BODY><H1>Request not allowed</H1></BODY>
[ "The problem was that you were previously passing in the data from your file directly as the data parameter to the Request. The file you were reading in contains both the headers and the data that you should be sending. You needed to supply the headers and the data separately as you have now done.\nHTTP error 403 means the request was correct but the server is refusing to respond to it. Have you already signed up and arranged permission to use the web service you are trying to access? If so is there some authentication that you need to do before making the request?\n", "could just be authentication? (or lack therof?)\n" ]
[ 2, 0 ]
[]
[]
[ "ofx", "python", "sgml", "urllib2", "xml" ]
stackoverflow_0001656611_ofx_python_sgml_urllib2_xml.txt
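Following the accepted answer's separation of concerns, a sketch of the request shape: the OFX header block plus SGML body travel as the HTTP payload, while the transport headers go on the Request. The endpoint here is a placeholder, not a working URL:

    import urllib2

    headers = {"Content-Type": "application/x-ofx",
               "Accept": "*/*, application/x-ofx"}

    body = open('req.ofxsgml').read()   # OFXHEADER block + SGML, no HTTP headers inside

    req = urllib2.Request("https://ofx.example.com/ofx", body, headers)
    print urllib2.urlopen(req).read()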
Q: Python Scoping/Static Misunderstanding I'm really stuck on why the following code block 1 results in output 1 instead of output 2? Code block 1: class FruitContainer: def __init__(self,arr=[]): self.array = arr def addTo(self,something): self.array.append(something) def __str__(self): ret = "[" for item in self.array: ret = "%s%s," % (ret,item) return "%s]" % ret arrayOfFruit = ['apple', 'banana', 'pear'] arrayOfFruitContainers = [] while len(arrayOfFruit) > 0: tempFruit = arrayOfFruit.pop(0) tempB = FruitContainer() tempB.addTo(tempFruit) arrayOfFruitContainers.append(tempB) for container in arrayOfFruitContainers: print container **Output 1 (actual):** [apple,banana,pear,] [apple,banana,pear,] [apple,banana,pear,] **Output 2 (desired):** [apple,] [banana,] [pear,] The goal of this code is to iterate through an array and wrap each in a parent object. This is a reduction of my actual code which adds all apples to a bag of apples and so forth. My guess is that, for some reason, it's either using the same object or acting as if the fruit container uses a static array. I have no idea how to fix this. A: You should never use a mutable value (like []) for a default argument to a method. The value is computed once, and then used for every invocation. When you use an empty list as a default value, that same list is used every time the method is invoked without the argument, even as the value is modified by previous function calls. Do this instead: def __init__(self,arr=None): self.array = arr or [] A: Your code has a default argument to initialize the class. The value of the default argument is evaluated once, when the function is defined, so every instance is initialized with the same list. Change it like so: def __init__(self, arr=None): if arr is None: self.array = [] else: self.array = arr I discussed this more fully here: How to define a class in Python A: As Ned says, the problem is you are using a list as a default argument. There is more detail here. The solution is to change the __init__ function as below: def __init__(self,arr=None): if arr is not None: self.array = arr else: self.array = [] A: A better solution than passing in None — in this particular instance, rather than in general — is to treat the arr parameter to __init__ as an enumerable set of items to pre-initialize the FruitContainer with, rather than an array to use for internal storage: class FruitContainer: def __init__(self, arr=()): self.array = list(arr) ... This will allow you to pass in other enumerable types to initialize your container, which more advanced Python users will expect to be able to do: myFruit = ('apple', 'pear') # Pass a tuple myFruitContainer = FruitContainer(myFruit) myOtherFruit = file('fruitFile', 'r') # Pass a file myOtherFruitContainer = FruitContainer(myOtherFruit) It will also defuse another potential aliasing bug: myFruit = ['apple', 'pear'] myFruitContainer1 = FruitContainer(myFruit) myFruitContainer2 = FruitContainer(myFruit) myFruitContainer1.addTo('banana') 'banana' in str(myFruitContainer2) With all other implementations on this page, this will return True, because you have accidentally aliased the internal storage of your containers. Note: This approach is not always the right answer: "if not None" is better in other cases. Just ask yourself: am I passing in a set of objects, or a mutable container? If the class/function I'm passing my objects in to changes the storage I gave it, would that be (a) surprising or (b) desirable? In this case, I would argue that it is (a); thus, the list(...) call is the best solution. If (b), "if not None" would be the right approach.
Python Scoping/Static Misunderstanding
I'm really stuck on why the following code block 1 results in output 1 instead of output 2? Code block 1: class FruitContainer: def __init__(self,arr=[]): self.array = arr def addTo(self,something): self.array.append(something) def __str__(self): ret = "[" for item in self.array: ret = "%s%s," % (ret,item) return "%s]" % ret arrayOfFruit = ['apple', 'banana', 'pear'] arrayOfFruitContainers = [] while len(arrayOfFruit) > 0: tempFruit = arrayOfFruit.pop(0) tempB = FruitContainer() tempB.addTo(tempFruit) arrayOfFruitContainers.append(tempB) for container in arrayOfFruitContainers: print container **Output 1 (actual):** [apple,banana,pear,] [apple,banana,pear,] [apple,banana,pear,] **Output 2 (desired):** [apple,] [banana,] [pear,] The goal of this code is to iterate through an array and wrap each in a parent object. This is a reduction of my actual code which adds all apples to a bag of apples and so forth. My guess is that, for some reason, it's either using the same object or acting as if the fruit container uses a static array. I have no idea how to fix this.
[ "You should never use a mutable value (like []) for a default argument to a method. The value is computed once, and then used for every invocation. When you use an empty list as a default value, that same list is used every time the method is invoked without the argument, even as the value is modified by previous function calls.\nDo this instead:\ndef __init__(self,arr=None):\n self.array = arr or []\n\n", "Your code has a default argument to initialize the class. The value of the default argument is evaluated once, at compile time, so every instance is initialized with the same list. Change it like so:\ndef __init__(self, arr=None):\n if arr is None:\n self.array = []\n else:\n self.array = arr\n\nI discussed this more fully here: How to define a class in Python\n", "As Ned says, the problem is you are using a list as a default argument. There is more detail here. The solution is to change __init__ function as below: \n def __init__(self,arr=None):\n if arr is not None:\n self.array = arr\n else:\n self.array = []\n\n", "A better solution than passing in None β€” in this particular instance, rather than in general β€” is to treat the arr parameter to __init__ as an enumerable set of items to pre-initialize the FruitContainer with, rather than an array to use for internal storage:\nclass FruitContainer:\n def __init__(self, arr=()):\n self.array = list(arr)\n ...\n\nThis will allow you to pass in other enumerable types to initialize your container, which more advanced Python users will expect to be able to do:\nmyFruit = ('apple', 'pear') # Pass a tuple\nmyFruitContainer = FruitContainer(myFruit)\nmyOtherFruit = file('fruitFile', 'r') # Pass a file\nmyOtherFruitContainer = FruitContainer(myOtherFruit)\n\nIt will also defuse another potential aliasing bug:\nmyFruit = ['apple', 'pear']\nmyFruitContainer1 = FruitContainer(myFruit)\nmyFruitContainer2 = FruitContainer(myFruit)\nmyFruitContainer1.addTo('banana')\n'banana' in str(myFruitContainer2)\n\nWith all other implementations on this page, this will return True, because you have accidentally aliased the internal storage of your containers.\nNote: This approach is not always the right answer: \"if not None\" is better in other cases. Just ask yourself: am I passing in a set of objects, or a mutable container? If the class/function I'm passing my objects in to changes the storage I gave it, would that be (a) surprising or (b) desirable? In this case, I would argue that it is (a); thus, the list(...) call is the best solution. If (b), \"if not None\" would be the right approach.\n" ]
[ 8, 2, 1, 0 ]
[]
[]
[ "class", "iteration", "python", "scope", "static_members" ]
stackoverflow_0001654967_class_iteration_python_scope_static_members.txt
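A minimal runnable sketch for the record above, applying the default-argument fix the answers converge on (the print() call form also runs unchanged on Python 3):

class FruitContainer:
    def __init__(self, arr=None):
        # A fresh list is built per instance, so containers no longer share state.
        self.array = list(arr) if arr is not None else []
    def addTo(self, something):
        self.array.append(something)
    def __str__(self):
        return "[%s]" % "".join("%s," % item for item in self.array)

for fruit in ['apple', 'banana', 'pear']:
    container = FruitContainer()
    container.addTo(fruit)
    print(container)  # [apple,] then [banana,] then [pear,]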
Q: Testing twisted application - Load client I've written a Twisted based server and I'd like to test it using twisted as well. But I'd like to write a load test starting a bunch of requests at the same time. But I believe that I didn't get the concepts of Twisted, mainly client side, because I'm stuck on this problem: from twisted.internet import reactor, protocol from threading import Thread from twisted.protocols.basic import LineReceiver __author__="smota" __date__ ="$30/10/2009 17:17:50$" class SquitterClient(LineReceiver): def connectionMade(self): self.sendLine("message from " % threading.current_thread().name); pass def connectionLost(self, reason): print "connection lost" def sendMessage(self, msg): for m in [ "a", "b", "c", "d", "e"]: self.sendLine(msg % " - " % m); class SquitterClientFactory(protocol.ClientFactory): protocol = SquitterClient def clientConnectionFailed(self, connector, reason): print "Connection failed - goodbye!" reactor.stop() def clientConnectionLost(self, connector, reason): print "Connection lost - goodbye!" reactor.stop() def createAndRun(): f = SquitterClientFactory() reactor.connectTCP("localhost", 4010, f) reactor.run(installSignalHandlers=0) # this connects the protocol to a server running on port 8000 def main(): for n in range(0,10): th=Thread(target=createAndRun) th.start() # this only runs if the module was *not* imported if __name__ == '__main__': main() socket_client.py:35: DeprecationWarning: Reactor already running! This behavior is deprecated since Twisted 8.0 reactor.run(installSignalHandlers=0) What am I missing? How to test it? Thank you, Samuel A: The direct cause for your failure is that you attempt to call run() on the reactor multiple times. You are only ever supposed to call run() once. I think you are expecting to have multiple reactors, each in its own thread, but actually you only have one. The bad thing is that having multiple reactors is difficult or impossible - the good thing is that it's also unnecessary. In fact you don't even need multiple threads. You can multiplex multiple client connections in one reactor almost as easily as you can listen for multiple connections. Modifying your sample code, something like the following should work. The key idea is that you don't need multiple reactors to do things concurrently. The only thing that could ever be concurrent with the regular Python implementation is I/O anyway. from twisted.internet import reactor, protocol from twisted.protocols.basic import LineReceiver __author__="smota" __date__ ="$30/10/2009 17:17:50$" class SquitterClient(LineReceiver): def connectionMade(self): self.messageCount = 0 # The factory provides a reference to itself, we'll use it to enumerate the clients self.factory.n += 1 self.name = "Client %d" %self.factory.n # Send initial message, and more messages a bit later self.sendLine("Client %s starting!" % self.name); reactor.callLater(0.5, self.sendMessage, "Message %d" %self.messageCount) def connectionLost(self, reason): print "connection lost" def sendMessage(self, msg): for m in [ "a", "b", "c", "d", "e"]: self.sendLine("Copy %s of message %s from client %s!" % (m, msg, self.name)) if self.factory.stop: self.sendLine("Client %s disconnecting!" % self.name) self.transport.loseConnection() else: self.messageCount += 1 reactor.callLater(0.5, self.sendMessage, "Message %d" %self.messageCount) class SquitterClientFactory(protocol.ClientFactory): protocol = SquitterClient def __init__(self): self.n = 0 self.stop = False def stopTest(self): self.stop = True def clientConnectionFailed(self, connector, reason): print "Connection failed - goodbye!" def clientConnectionLost(self, connector, reason): print "Connection lost - goodbye!" # this connects the protocol to a server running on port 8000 def main(): # Create 10 clients f = SquitterClientFactory() for i in range(10): reactor.connectTCP("localhost", 8000, f) # Schedule end of test in 10 seconds reactor.callLater(10, f.stopTest) # And let loose the dogs of war reactor.run() # this only runs if the module was *not* imported if __name__ == '__main__': main()
Testing twisted application - Load client
I've written a Twisted based server and I'd like to test it using twisted as well. But I'd like to write a load test starting a bunch of requests at the same time. But I believe that I didn't get the concepts of Twisted, mainly client side, because I'm stuck on this problem: from twisted.internet import reactor, protocol from threading import Thread from twisted.protocols.basic import LineReceiver __author__="smota" __date__ ="$30/10/2009 17:17:50$" class SquitterClient(LineReceiver): def connectionMade(self): self.sendLine("message from " % threading.current_thread().name); pass def connectionLost(self, reason): print "connection lost" def sendMessage(self, msg): for m in [ "a", "b", "c", "d", "e"]: self.sendLine(msg % " - " % m); class SquitterClientFactory(protocol.ClientFactory): protocol = SquitterClient def clientConnectionFailed(self, connector, reason): print "Connection failed - goodbye!" reactor.stop() def clientConnectionLost(self, connector, reason): print "Connection lost - goodbye!" reactor.stop() def createAndRun(): f = SquitterClientFactory() reactor.connectTCP("localhost", 4010, f) reactor.run(installSignalHandlers=0) # this connects the protocol to a server running on port 8000 def main(): for n in range(0,10): th=Thread(target=createAndRun) th.start() # this only runs if the module was *not* imported if __name__ == '__main__': main() socket_client.py:35: DeprecationWarning: Reactor already running! This behavior is deprecated since Twisted 8.0 reactor.run(installSignalHandlers=0) What am I missing? How to test it? Thank you, Samuel
[ "The direct cause for your failure is that you attemp to call run() on the reactor multiple times. You are supposed to ever only call run() once. I think you are expecting to have multiple reactors, each in its own thread, but actually you only have one. The bad thing is that having multiple reactors is difficult or impossible - the good thing is that it's also unnecessary. In fact you don't even need multiple threads. You can multiplex multiple client connections in one reactor almost as easily as you can listen for multiple connections.\nModifying your sample code, something like the following should work. The key idea is that you don't need multiple reactors to do things concurrently. The only thing that could ever be concurrent with the regular Python implementation is I/O anyway.\nfrom twisted.internet import reactor, protocol\nfrom twisted.protocols.basic import LineReceiver\n\n__author__=\"smota\"\n__date__ =\"$30/10/2009 17:17:50$\"\n\nclass SquitterClient(LineReceiver):\n def connectionMade(self):\n self.messageCount = 0\n # The factory provides a reference to itself, we'll use it to enumerate the clients\n self.factory.n += 1\n self.name = \"Client %d\" %self.factory.n\n\n # Send initial message, and more messages a bit later\n self.sendLine(\"Client %s starting!\" % self.name);\n reactor.callLater(0.5, self.sendMessage, \"Message %d\" %self.messageCount)\n\n def connectionLost(self, reason):\n print \"connection lost\"\n\n def sendMessage(self, msg):\n for m in [ \"a\", \"b\", \"c\", \"d\", \"e\"]:\n self.sendLine(\"Copy %s of message %s from client %s!\" % (m, msg, self.name))\n if self.factory.stop:\n self.sendLine(\"Client %s disconnecting!\" % self.name)\n self.transport.loseConnection()\n else:\n self.messageCount += 1\n reactor.callLater(0.5, self.sendMessage, \"Message %d\" %self.messageCount)\n\nclass SquitterClientFactory(protocol.ClientFactory):\n protocol = SquitterClient\n\n def __init__(self):\n self.n = 0\n self.stop = False\n\n def stopTest():\n self.stop = True\n\n def clientConnectionFailed(self, connector, reason):\n print \"Connection failed - goodbye!\"\n\n def clientConnectionLost(self, connector, reason):\n print \"Connection lost - goodbye!\"\n\n# this connects the protocol to a server running on port 8000\ndef main():\n # Create 10 clients\n\n f = SquitterClientFactory()\n for i in range(10):\n reactor.connectTCP(\"localhost\", 8000, f)\n\n # Schedule end of test in 10 seconds\n reactor.callLater(10, f.stopTest)\n\n # And let loose the dogs of war\n reactor.run()\n\n# this only runs if the module was *not* imported\nif __name__ == '__main__':\n main()\n\n" ]
[ 9 ]
[]
[]
[ "multithreading", "python", "twisted" ]
stackoverflow_0001654566_multithreading_python_twisted.txt
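For the record above, a smaller self-contained variant of the same idea (class and message names here are illustrative, not from the answer): one reactor multiplexes all the connections, and it is stopped only once the last client has finished.

from twisted.internet import reactor, protocol
from twisted.protocols.basic import LineReceiver

class OneShotClient(LineReceiver):
    def connectionMade(self):
        self.sendLine("hello from a load-test client")
        self.transport.loseConnection()

class CountingFactory(protocol.ClientFactory):
    protocol = OneShotClient
    def __init__(self, total):
        self.remaining = total
    def clientConnectionLost(self, connector, reason):
        self.remaining -= 1
        if self.remaining == 0:
            reactor.stop()  # every client is done, so stopping is now safe
    clientConnectionFailed = clientConnectionLost

f = CountingFactory(total=10)
for _ in range(10):
    reactor.connectTCP("localhost", 4010, f)
reactor.run()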
Q: how to open a URL with non utf-8 arguments Using Python I need to transfer non utf-8 encoded data (specifically shift-jis) to a URL via the query string. How should I transfer the data? Quote it? Encode in utf-8? Thanks A: Query string parameters are byte-based. Whilst IRI-to-URI and typed non-ASCII characters will typically use UTF-8, there is nothing forcing you to send or receive your own parameters in that encoding. So for Shift-JIS (actually typically cp932, the Windows extension of that encoding): foo= u'\u65E5\u672C\u8A9E' # 日本語 url= 'http://www.example.jp/something?foo='+urllib.quote(foo.encode('cp932')) In Python 3 you do it in the quote function itself: foo= '\u65E5\u672C\u8A9E' url= 'http://www.example.jp/something?foo='+urllib.parse.quote(foo, encoding= 'cp932') A: I don't know what unicode has to do with this, since the query string is a string of bytes. You can use the quoting functions in urllib to quote plain strings so that they can be passed within query strings. A: By the »query string« you mean HTTP GET like in http://{URL}?data=XYZ? You have the option of encoding whatever data you have via base64.b64encode, using -_ as alternative characters to be URL safe. See here.
how to open a URL with non utf-8 arguments
Using Python I need to transfer non utf-8 encoded data (specifically shift-jis) to a URL via the query string. How should I transfer the data? Quote it? Encode in utf-8? Thanks
[ "Query string parameters are byte-based. Whilst IRI-to-URI and typed non-ASCII characters will typically use UTF-8, there is nothing forcing you to send or receive your own parameters in that encoding.\nSo for Shift-JIS (actually typically cp932, the Windows extension of that encoding):\nfoo= u'\\u65E5\\u672C\\u8A9E' # ζ—₯本θͺž\nurl= 'http://www.example.jp/something?foo='+urllib.quote(foo.encode('cp932'))\n\nIn Python 3 you do it in the quote function itself:\nfoo= '\\u65E5\\u672C\\u8A9E'\nurl= 'http://www.example.jp/something?foo='+urllib.parse.quote(foo, encoding= 'cp932')\n\n", "I don't know what unicode has to do with this, since the query string is a string of bytes. You can use the quoting functions in urllib to quote plain strings so that they can be passed within query strings.\n", "By the Β»query stringΒ« you mean HTTP GET like in http:/{URL}?data=XYZ?\nYou have encoding what ever data you have via base64.b64encode using -_ as alternative character to be URL safe as an option. See here.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "python", "quotes", "shift_jis", "unicode", "urllib" ]
stackoverflow_0001657201_python_quotes_shift_jis_unicode_urllib.txt
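A short round-trip sketch for the record above (Python 2.6+; the parameter name foo follows the first answer): the sender percent-encodes cp932 bytes, and the receiver parses them back out and decodes explicitly.

import urllib, urlparse

text = u'\u65E5\u672C\u8A9E'  # the Japanese sample from the first answer
url = 'http://www.example.jp/something?foo=' + urllib.quote(text.encode('cp932'))

# Receiving side: recover the raw cp932 bytes, then decode them explicitly.
params = urlparse.parse_qs(urlparse.urlparse(url).query)
assert params['foo'][0].decode('cp932') == text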
Q: Help sorting: first by this, and then by that I have a list of tuples I am trying to sort and could use some help. The field I want to sort by in the tuples looks like "XXX_YYY". First, I want to group the XXX values in reverse order, and then, within those groups, I want to place the YYY values in normal sort order. (NOTE: I am just as happy, actually, sorting the second item in the tuple in this way, reverse order first word, normal order second.) Here is an example of what I have and what I would like in the end ... not sure how to do it. mylist = [ (u'community_news', u'Community: News & Information'), (u'kf_video', u'KF: Video'), (u'community_video', u'Community: Video'), (u'kf_news', u'KF: News & Information'), (u'kf_magazine', u'KF: Magazine') ] I would like to perform some sort of sort() on this list that will change the output to: sorted = [ (u'kf_magazine', u'KF: Magazine'), (u'kf_news', u'KF: News & Information'), (u'kf_video', u'KF: Video'), (u'community_news', u'Community: News & Information'), (u'community_video', u'Community: Video'), ] I suspect there may be a pythonic way to handle this but am not able to wrap my head around it. A: def my_cmp(x, y): x1, x2 = x[0].split('_') y1, y2 = y[0].split('_') return -cmp(x1, y1) or cmp(x2, y2) my_list = [ (u'community_news', u'Community: News & Information'), (u'kf_video', u'KF: Video'), (u'community_video', u'Community: Video'), (u'kf_news', u'KF: News & Information'), (u'kf_magazine', u'KF: Magazine') ] sorted_list = [ (u'kf_magazine', u'KF: Magazine'), (u'kf_news', u'KF: News & Information'), (u'kf_video', u'KF: Video'), (u'community_news', u'Community: News & Information'), (u'community_video', u'Community: Video'), ] my_list.sort(cmp=my_cmp) assert my_list == sorted_list A: Custom comparison functions for sorting, as suggested in existing answers, do make it easy to sort in a mix of ascending and descending orders -- but they have serious performance issues and have been removed in Python 3, leaving only the preferred customization approach -- custom key-extraction functions... much speedier, though more delicate to use for the relatively rare use case of mixed ascending/descending sorts. In Python 2.*, which supports either kind of customization (not both in the same call to sort or sorted:-), a custom comparison function can be passed as a cmp= named argument; or, a custom key-extraction function can be passed as a key= named argument. In Python 3.*, only the latter option is available. It's definitely worth understanding the key-extraction approach, even if you think you've just solved your problem with a custom-comparison approach instead: not just for performance, but for future-proofness (Python 3) and for generality (the key= approach also applies to min, max, itertools.groupby... much more general than the cmp= approach!). Key-extraction is very simple when all the key subfields are to be sorted the same way (all ascending, or all descending) -- you just extract them; it's still pretty easy if the subfields that go "the other way" are numbers (you just change their sign while extracting); the delicate case is exactly the one you have -- multiple string fields that must be compared in opposite ways. 
A reasonably simple approach to solving your problem is a tiny shim class: class Reverser(object): def __init__(self, s): self.s = s def __lt__(self, other): return other.s < self.s def __eq__(self, other): return other.s == self.s Note that you only have to supply __lt__ and __eq__ (the < and == operators) -- sort and friends synthesize all other comparisons, if needed, based on those two. So, armed with this little auxiliary tool, we can proceed easily...: def getkey(tup): a, b = tup[0].split('_') return Reverser(a), b my_list.sort(key=getkey) As you see, once you "get" the reverser and key extraction concepts, you pay essentially no price for using key extraction instead of custom comparison: the code I suggest is 4 statements for the reverser class (which you can write once and put into your "goodies bag" module somewhere), three for the key extraction function, and of course one for the sort or sorted call -- a total of eight vs the 4 + 1 == 5 of the custom comparison approach in the most compact form (i.e. the one using either cmp with a sign change, or cmp with swapped arguments). Three statements are not much of a price to pay for key-extraction's advantages!-) Performance is clearly not a big issue with such a short list, but with an even modestly longer (10 times) one...: # my_list as in the Q, my_cmp as per top A, getkey as here def bycmp(): return sorted(my_list*10, cmp=my_cmp) def bykey(): return sorted(my_list*10, key=getkey) ... $ python -mtimeit -s'import so' 'so.bykey()' 1000 loops, best of 3: 548 usec per loop $ python -mtimeit -s'import so' 'so.bycmp()' 1000 loops, best of 3: 995 usec per loop I.e., the key= approach is already showing a performance gain of almost two times (sorting the list twice as fast) when working on a 50-items list -- well worth the modest price of "8 lines rather than 5", particularly with all the other advantages I already mentioned! A: >>> def my_cmp(tuple_1, tuple_2): xxx_1, yyy_1 = tuple_1[0].split('_') xxx_2, yyy_2 = tuple_2[0].split('_') if xxx_1 > xxx_2: return -1 elif xxx_1 < xxx_2: return 1 else: return cmp(yyy_1, yyy_2) >>> import pprint >>> pprint.pprint(sorted(mylist, my_cmp)) [(u'kf_magazine', u'KF: Magazine'), (u'kf_news', u'KF: News & Information'), (u'kf_video', u'KF: Video'), (u'community_news', u'Community: News & Information'), (u'community_video', u'Community: Video')] Not the prettiest solution in the world...
Help sorting: first by this, and then by that
I have a list of tuples I am trying to sort and could use some help. The field I want to sort by in the tuples looks like "XXX_YYY". First, I want to group the XXX values in reverse order, and then, within those groups, I want to place the YYY values in normal sort order. (NOTE: I am just as happy, actually, sorting the second item in the tuple in this way, reverse order first word, normal order second.) Here is an example of what I have and what I would like in the end ... not sure how to do it. mylist = [ (u'community_news', u'Community: News & Information'), (u'kf_video', u'KF: Video'), (u'community_video', u'Community: Video'), (u'kf_news', u'KF: News & Information'), (u'kf_magazine', u'KF: Magazine') ] I would like to perform some sort of sort() on this list that will change the output to: sorted = [ (u'kf_magazine', u'KF: Magazine'), (u'kf_news', u'KF: News & Information'), (u'kf_video', u'KF: Video'), (u'community_news', u'Community: News & Information'), (u'community_video', u'Community: Video'), ] I suspect there may be a pythonic way to handle this but am not able to wrap my head around it.
[ "def my_cmp(x, y):\n x1, x2 = x[0].split('_')\n y1, y2 = y[0].split('_')\n return -cmp(x1, y1) or cmp(x2, y2)\n\nmy_list = [\n (u'community_news', u'Community: News & Information'), \n (u'kf_video', u'KF: Video'), \n (u'community_video', u'Community: Video'), \n (u'kf_news', u'KF: News & Information'), \n (u'kf_magazine', u'KF: Magazine')\n]\n\nsorted_list = [\n (u'kf_magazine', u'KF: Magazine'),\n (u'kf_news', u'KF: News & Information'), \n (u'kf_video', u'KF: Video'), \n (u'community_news', u'Community: News & Information'), \n (u'community_video', u'Community: Video'), \n]\n\nmy_list.sort(cmp=my_cmp)\nassert my_list == sorted_list\n\n", "Custom comparison functions for sorting, as suggested in existing answers, do make it easy to sort in a mix of ascending and descending orders -- but they have serious performance issues and have been removed in Python 3, leaving only the preferred customization approach -- custom key-extraction functions... much speedier, though more delicate to use for the relatively rare use case of mixed ascending/descending sorts.\nIn Python 2.*, which supports either kind of customization (not both in the same call to sort or sorted:-), a custom comparison function can be passed as a cmp= named argument; or, a custom key-extraction function can be passed as a key= named argument. In Python 3.*, only the latter option is available.\nIt's definitely worth understanding the key-extraction approach, even if you think you've just solved your problem with a custom-comparison approach instead: not just for performance, but for future-proofness (Python 3) and for generality (the key= approach also applies to min, max, itertools.groupby... much more general than the cmp= approach!).\nKey-extraction is very simple when all the key subfields are to be sorted the same way (all ascending, or all descending) -- you just extract them; it's still pretty easy if the subfields that go \"the other way\" are numbers (you just change their sign while extracting); the delicate case is exactly the one you have -- multiple string fields that must be compared in oppposite ways.\nA reasonably simple approach to solving your problem is a tiny shim class:\nclass Reverser(object):\n def __init__(self, s): self.s = s\n def __lt__(self, other): return other.s < self.s\n def __eq__(self, other): return other.s == self.s\n\nNote that you only have to supply __lt__ and __eq__ (the < and == operators) -- sort and friends synthesize all other comparisons, if needed, based on those two.\nSo, armed with this little auxiliary tool, we can proceed easily...:\ndef getkey(tup):\n a, b = tup[0].split('_')\n return Reverser(a), b\n\nmy_list.sort(key=getkey)\n\nAs you see, once you \"get\" the reverser and key extraction concepts, you pay essentially no price for using key extraction instead of custom comparison: the code I suggest is 4 statements for the reverser class (which you can write once and put into your \"goodies bag\" module somewhere), three for the key extraction function, and of course one for the sort or sorted call -- a total of eight vs the 4 + 1 == 5 of the custom comparison approach in the most compact form (i.e. the one using either cmp with a sign change, or cmp with swapped arguments). 
Three statements are not much of a price to pay for key-extraction's advantages!-)\nPerformance is clearly not a big issue with such a short list, but with an even modestly longer (10 times) one...:\n# my_list as in the Q, my_cmp as per top A, getkey as here\n\ndef bycmp():\n return sorted(my_list*10, cmp=my_cmp)\n\ndef bykey():\n return sorted(my_list*10, key=getkey)\n\n...\n\n$ python -mtimeit -s'import so' 'so.bykey()'\n1000 loops, best of 3: 548 usec per loop\n$ python -mtimeit -s'import so' 'so.bycmp()'\n1000 loops, best of 3: 995 usec per loop\n\nI.e., the key= approach is already showing a performance gain of almost two times (sorting the list twice as fast) when working on a 50-items list -- well worth the modest price of \"8 lines rather than 5\", particularly with all the other advantages I already mentioned!\n", ">>> def my_cmp(tuple_1, tuple_2):\n xxx_1, yyy_1 = tuple_1[0].split('_')\n xxx_2, yyy_2 = tuple_2[0].split('_')\n if xxx_1 > xxx_2:\n return -1\n elif xxx_1 < xxx_2:\n return 1\n else:\n return cmp(yyy_1, yyy_2)\n\n\n>>> import pprint\n>>> pprint.pprint(sorted(mylist, my_cmp))\n[(u'kf_magazine', u'KF: Magazine'),\n (u'kf_news', u'KF: News & Information'),\n (u'kf_video', u'KF: Video'),\n (u'community_news', u'Community: News & Information'),\n (u'community_video', u'Community: Video')]\n\nNot the prettiest solution in the world...\n" ]
[ 10, 8, 2 ]
[]
[]
[ "python", "sorting" ]
stackoverflow_0001657242_python_sorting.txt
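For the record above, the same result can also be had with two stable passes and plain key extraction, with no comparison function or shim class (a sketch using the question's own data):

mylist = [
    (u'community_news', u'Community: News & Information'),
    (u'kf_video', u'KF: Video'),
    (u'community_video', u'Community: Video'),
    (u'kf_news', u'KF: News & Information'),
    (u'kf_magazine', u'KF: Magazine'),
]
# Sort by the secondary key (YYY) first, then stably by the primary key (XXX)
# descending; Python's sort is stable, so the YYY order survives within groups.
mylist.sort(key=lambda pair: pair[0].split('_')[1])
mylist.sort(key=lambda pair: pair[0].split('_')[0], reverse=True)
for item in mylist:
    print item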
Q: Elegant structured text file parsing I need to parse a transcript of a live chat conversation. My first thought on seeing the file was to throw regular expressions at the problem but I was wondering what other approaches people have used. I put elegant in the title as I've previously found that this type of task has a danger of getting hard to maintain just relying on regular expressions. The transcripts are being generated by www.providesupport.com and emailed to an account; I then extract a plain text transcript attachment from the email. The reason for parsing the file is to extract the conversation text for later but also to identify visitors' and operators' names so that the information can be made available via a CRM. Here is an example of a transcript file: Chat Transcript Visitor: Random Website Visitor Operator: Milton Company: Initech Started: 16 Oct 2008 9:13:58 Finished: 16 Oct 2008 9:45:44 Random Website Visitor: Where do i get the cover sheet for the TPS report? * There are no operators available at the moment. If you would like to leave a message, please type it in the input field below and click "Send" button * Call accepted by operator Milton. Currently in room: Milton, Random Website Visitor. Milton: Y-- Excuse me. You-- I believe you have my stapler? Random Website Visitor: I really just need the cover sheet, okay? Milton: it's not okay because if they take my stapler then I'll, I'll, I'll set the building on fire... Random Website Visitor: oh i found it, thanks anyway. * Random Website Visitor is now off-line and may not reply. Currently in room: Milton. Milton: Well, Ok. But… that's the last straw. * Milton has left the conversation. Currently in room: room is empty. Visitor Details --------------- Your Name: Random Website Visitor Your Question: Where do i get the cover sheet for the TPS report? IP Address: 255.255.255.255 Host Name: 255.255.255.255 Referrer: Unknown Browser/OS: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727) A: No and in fact, for the specific type of task you describe, I doubt there's a "cleaner" way to do it than regular expressions. It looks like your files have embedded line breaks so typically what we'll do here is make the line your unit of decomposition, applying per-line regexes. Meanwhile, you create a small state machine and use regex matches to trigger transitions in that state machine. This way you know where you are in the file, and what types of character data you can expect. Also, consider using named capture groups and loading the regexes from an external file. That way if the format of your transcript changes, it's a simple matter of tweaking the regex, rather than writing new parse-specific code. A: With Perl, you can use Parse::RecDescent It is simple, and your grammar will be maintainable later on. A: You might want to consider a full parser generator. Regular expressions are good for searching text for small substrings but they're woefully under-powered if you're really interested in parsing the entire file into meaningful data. They are especially insufficient if the context of the substring is important. Most people throw regexes at everything because that's what they know. They've never learned any parser generating tools and they end up coding a lot of the production rule composition and semantic action handling that you can get for free with a parser generator. Regexes are great and all, but if you need a parser they're no substitute. 
A: Here are two parsers based on the lepl parser generator library. They both produce the same result. from pprint import pprint from lepl import AnyBut, Drop, Eos, Newline, Separator, SkipTo, Space # field = name , ":" , value name, value = AnyBut(':\n')[1:,...], AnyBut('\n')[::'n',...] with Separator(~Space()[:]): field = name & Drop(':') & value & ~(Newline() | Eos()) > tuple header_start = SkipTo('Chat Transcript' & Newline()[2]) header = ~header_start & field[1:] > dict server_message = Drop('* ') & AnyBut('\n')[:,...] & ~Newline() > 'Server' conversation = (server_message | field)[1:] > list footer_start = 'Visitor Details' & Newline() & '-'*15 & Newline() footer = ~footer_start & field[1:] > dict chat_log = header & ~Newline() & conversation & ~Newline() & footer pprint(chat_log.parse_file(open('chat.log'))) Stricter Parser from pprint import pprint from lepl import And, Drop, Newline, Or, Regexp, SkipTo def Field(name, value=Regexp(r'\s*(.*?)\s*?\n')): """'name , ":" , value' matcher""" return name & Drop(':') & value > tuple Fields = lambda names: reduce(And, map(Field, names)) header_start = SkipTo(Regexp(r'^Chat Transcript$') & Newline()[2]) header_fields = Fields("Visitor Operator Company Started Finished".split()) server_message = Regexp(r'^\* (.*?)\n') > 'Server' footer_fields = Fields(("Your Name, Your Question, IP Address, " "Host Name, Referrer, Browser/OS").split(', ')) with open('chat.log') as f: # parse header to find Visitor and Operator's names headers, = (~header_start & header_fields > dict).parse_file(f) # only Visitor, Operator and Server may take part in the conversation message = reduce(Or, [Field(headers[name]) for name in "Visitor Operator".split()]) conversation = (message | server_message)[1:] messages, footers = ((conversation > list) & Drop('\nVisitor Details\n---------------\n') & (footer_fields > dict)).parse_file(f) pprint((headers, messages, footers)) Output: ({'Company': 'Initech', 'Finished': '16 Oct 2008 9:45:44', 'Operator': 'Milton', 'Started': '16 Oct 2008 9:13:58', 'Visitor': 'Random Website Visitor'}, [('Random Website Visitor', 'Where do i get the cover sheet for the TPS report?'), ('Server', 'There are no operators available at the moment. If you would like to leave a message, please type it in the input field below and click "Send" button'), ('Server', 'Call accepted by operator Milton. Currently in room: Milton, Random Website Visitor.'), ('Milton', 'Y-- Excuse me. You-- I believe you have my stapler?'), ('Random Website Visitor', 'I really just need the cover sheet, okay?'), ('Milton', "it's not okay because if they take my stapler then I'll, I'll, I'll set the building on fire..."), ('Random Website Visitor', 'oh i found it, thanks anyway.'), ('Server', 'Random Website Visitor is now off-line and may not reply. Currently in room: Milton.'), ('Milton', "Well, Ok. But… that's the last straw."), ('Server', 'Milton has left the conversation. Currently in room: room is empty.')], {'Browser/OS': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727)', 'Host Name': '255.255.255.255', 'IP Address': '255.255.255.255', 'Referrer': 'Unknown', 'Your Name': 'Random Website Visitor', 'Your Question': 'Where do i get the cover sheet for the TPS report?'}) A: Build a parser? I can't decide if your data is regular enough for that, but it might be worth looking into. A: Using multiline, commented regexes can mitigate the maintenance problem somewhat. Try and avoid the one line super regex! 
Also, consider breaking the regex down into individual tasks, one for each 'thing' you want to get. eg. visitor = text.find(/Visitor:(.*)/) operator = text.find(/Operator:(.*)/) body = text.find(/whatever....) instead of text.match(/Visitor:(.*)\nOperator:(.*)...whatever to giant regex/m) do visitor = $1 operator = $2 etc. end Then it makes it easy to change how any particular item is parsed. As far as parsing through a file with many "chat blocks", just have a single simple regex that matches a single chat block, iterate over the text and pass the match data from this to your group of other matchers. This will obviously affect performance, but unless you're processing enormous files I wouldn't worry. A: Consider using Ragel https://www.colm.net/open-source/ragel/ That's what powers mongrel under the hood. Parsing a string multiple times is going to slow things down dramatically. A: I have used Paul McGuire's pyParsing class library and I continue to be impressed by it, in that it's well-documented, easy to get started, and the rules are easy to tweak and maintain. BTW, the rules are expressed in your python code. It certainly appears that the log file has enough regularity to parse each line as a stand-alone unit. A: Just a quick post, I've only glanced at your transcript example but I've recently also had to look into text parsing and hoped to avoid going the route of hand rolled parsing. I did happen across Ragel which I've only started to get my head around but it's looking to be pretty useful.
Elegant structured text file parsing
I need to parse a transcript of a live chat conversation. My first thought on seeing the file was to throw regular expressions at the problem but I was wondering what other approaches people have used. I put elegant in the title as I've previously found that this type of task has a danger of getting hard to maintain just relying on regular expressions. The transcripts are being generated by www.providesupport.com and emailed to an account; I then extract a plain text transcript attachment from the email. The reason for parsing the file is to extract the conversation text for later but also to identify visitors' and operators' names so that the information can be made available via a CRM. Here is an example of a transcript file: Chat Transcript Visitor: Random Website Visitor Operator: Milton Company: Initech Started: 16 Oct 2008 9:13:58 Finished: 16 Oct 2008 9:45:44 Random Website Visitor: Where do i get the cover sheet for the TPS report? * There are no operators available at the moment. If you would like to leave a message, please type it in the input field below and click "Send" button * Call accepted by operator Milton. Currently in room: Milton, Random Website Visitor. Milton: Y-- Excuse me. You-- I believe you have my stapler? Random Website Visitor: I really just need the cover sheet, okay? Milton: it's not okay because if they take my stapler then I'll, I'll, I'll set the building on fire... Random Website Visitor: oh i found it, thanks anyway. * Random Website Visitor is now off-line and may not reply. Currently in room: Milton. Milton: Well, Ok. But… that's the last straw. * Milton has left the conversation. Currently in room: room is empty. Visitor Details --------------- Your Name: Random Website Visitor Your Question: Where do i get the cover sheet for the TPS report? IP Address: 255.255.255.255 Host Name: 255.255.255.255 Referrer: Unknown Browser/OS: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727)
[ "No and in fact, for the specific type of task you describe, I doubt there's a \"cleaner\" way to do it than regular expressions. It looks like your files have embedded line breaks so typically what we'll do here is make the line your unit of decomposition, applying per-line regexes. Meanwhile, you create a small state machine and use regex matches to trigger transitions in that state machine. This way you know where you are in the file, and what types of character data you can expect. Also, consider using named capture groups and loading the regexes from an external file. That way if the format of your transcript changes, it's a simple matter of tweaking the regex, rather than writing new parse-specific code.\n", "With Perl, you can use Parse::RecDescent\nIt is simple, and your grammar will be maintainable later on.\n", "You might want to consider a full parser generator. \nRegular expressions are good for searching text for small substrings but they're woefully under-powered if you're really interested in parsing the entire file into meaningful data. \nThey are especially insufficient if the context of the substring is important.\nMost people throw regexes at everything because that's what they know. They've never learned any parser generating tools and they end up coding a lot of the production rule composition and semantic action handling that you can get for free with a parser generator. \nRegexes are great and all, but if you need a parser they're no substitute.\n", "Here's two parsers based on lepl parser generator library. They both produce the same result.\nfrom pprint import pprint\nfrom lepl import AnyBut, Drop, Eos, Newline, Separator, SkipTo, Space\n\n# field = name , \":\" , value\nname, value = AnyBut(':\\n')[1:,...], AnyBut('\\n')[::'n',...] \nwith Separator(~Space()[:]):\n field = name & Drop(':') & value & ~(Newline() | Eos()) > tuple\n\nheader_start = SkipTo('Chat Transcript' & Newline()[2])\nheader = ~header_start & field[1:] > dict\nserver_message = Drop('* ') & AnyBut('\\n')[:,...] 
& ~Newline() > 'Server'\nconversation = (server_message | field)[1:] > list\nfooter_start = 'Visitor Details' & Newline() & '-'*15 & Newline()\nfooter = ~footer_start & field[1:] > dict\nchat_log = header & ~Newline() & conversation & ~Newline() & footer\n\npprint(chat_log.parse_file(open('chat.log')))\n\nStricter Parser\nfrom pprint import pprint\nfrom lepl import And, Drop, Newline, Or, Regexp, SkipTo\n\ndef Field(name, value=Regexp(r'\\s*(.*?)\\s*?\\n')):\n \"\"\"'name , \":\" , value' matcher\"\"\"\n return name & Drop(':') & value > tuple\n\nFields = lambda names: reduce(And, map(Field, names))\n\nheader_start = SkipTo(Regexp(r'^Chat Transcript$') & Newline()[2])\nheader_fields = Fields(\"Visitor Operator Company Started Finished\".split())\nserver_message = Regexp(r'^\\* (.*?)\\n') > 'Server'\nfooter_fields = Fields((\"Your Name, Your Question, IP Address, \"\n \"Host Name, Referrer, Browser/OS\").split(', '))\n\nwith open('chat.log') as f:\n # parse header to find Visitor and Operator's names\n headers, = (~header_start & header_fields > dict).parse_file(f)\n # only Visitor, Operator and Server may take part in the conversation\n message = reduce(Or, [Field(headers[name])\n for name in \"Visitor Operator\".split()])\n conversation = (message | server_message)[1:]\n messages, footers = ((conversation > list)\n & Drop('\\nVisitor Details\\n---------------\\n')\n & (footer_fields > dict)).parse_file(f)\n\npprint((headers, messages, footers))\n\nOutput:\n({'Company': 'Initech',\n 'Finished': '16 Oct 2008 9:45:44',\n 'Operator': 'Milton',\n 'Started': '16 Oct 2008 9:13:58',\n 'Visitor': 'Random Website Visitor'},\n [('Random Website Visitor',\n 'Where do i get the cover sheet for the TPS report?'),\n ('Server',\n 'There are no operators available at the moment. If you would like to leave a message, please type it in the input field below and click \"Send\" button'),\n ('Server',\n 'Call accepted by operator Milton. Currently in room: Milton, Random Website Visitor.'),\n ('Milton', 'Y-- Excuse me. You-- I believe you have my stapler?'),\n ('Random Website Visitor', 'I really just need the cover sheet, okay?'),\n ('Milton',\n \"it's not okay because if they take my stapler then I'll, I'll, I'll set the building on fire...\"),\n ('Random Website Visitor', 'oh i found it, thanks anyway.'),\n ('Server',\n 'Random Website Visitor is now off-line and may not reply. Currently in room: Milton.'),\n ('Milton', \"Well, Ok. But… that's the last straw.\"),\n ('Server',\n 'Milton has left the conversation. Currently in room: room is empty.')],\n {'Browser/OS': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727)',\n 'Host Name': '255.255.255.255',\n 'IP Address': '255.255.255.255',\n 'Referrer': 'Unknown',\n 'Your Name': 'Random Website Visitor',\n 'Your Question': 'Where do i get the cover sheet for the TPS report?'})\n\n", "Build a parser? I can't decide if your data is regular enough for that, but it might be worth looking into.\n", "Using multiline, commented regexs can mitigate the maintainance problem somewhat. Try and avoid the one line super regex!\nAlso, consider breaking the regex down into individual tasks, one for each 'thing' you want to get. 
eg.\nvisitor = text.find(/Visitor:(.*)/)\noperator = text.find(/Operator:(.*)/)\nbody = text.find(/whatever....)\n\ninstead of \ntext.match(/Visitor:(.*)\\nOperator:(.*)...whatever to giant regex/m) do\n visitor = $1\n operator = $2\n etc.\nend\n\nThen it makes it easy to change how any particular item is parsed. As far as parsing through a file with many \"chat blocks\", just have a single simple regex that matches a single chat block, iterate over the text and pass the match data from this to your group of other matchers.\nThis will obviously affect performance, but unless you processing enormous files i wouldnt worry.\n", "Consider using Ragel https://www.colm.net/open-source/ragel/\nThat's what powers mongrel under the hood. Parsing a string multiple times is going to slow things down dramatically.\n", "I have used Paul McGuire's pyParsing class library and I continue to be impressed by it, in that it's well-documented, easy to get started, and the rules are easy to tweak and maintain. BTW, the rules are expressed in your python code. It certainly appears that the log file has enough regularity to parse each line as a stand-alone unit.\n", "Just a quick post, I've only glanced at your transcript example but I've recently also had to look into text parsing and hoped to avoid going the route of hand rolled parsing. I did happen across Ragel which I've only started to get my head around but it's looking to be pretty useful.\n" ]
[ 12, 11, 8, 6, 5, 4, 2, 2, 0 ]
[]
[]
[ "perl", "python", "ruby", "text_parsing" ]
stackoverflow_0000223866_perl_python_ruby_text_parsing.txt
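A compact sketch of the per-line regex plus state-machine idea from the first answer above, reduced to pulling header fields and chat messages out of a transcript shaped like the sample (the regex and function names are mine, and the file name is assumed):

import re

FIELD = re.compile(r'^(?P<name>[^:]+):\s*(?P<value>.*)$')
EVENT = re.compile(r'^\* (?P<text>.*)$')

def parse_transcript(lines):
    headers, messages, state = {}, [], 'header'
    for raw in lines:
        line = raw.rstrip('\n')
        if state == 'header':
            if not line.strip():
                if headers:  # the blank line after the fields ends the header
                    state = 'body'
                continue
            m = FIELD.match(line)
            if m:
                headers[m.group('name')] = m.group('value')
        else:  # body: events and "speaker: text" lines until the footer
            if line.startswith('Visitor Details'):
                break
            m = EVENT.match(line) or FIELD.match(line)
            if m:
                messages.append(m.groupdict())
    return headers, messages

headers, messages = parse_transcript(open('transcript.txt'))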
Q: Django syntax highlighting causing character escaping issues I've been working on my own django based blog (like everyone, I know) to sharpen up my python, and I thought adding some syntax highlighting would be pretty great. I looked at some of the snippets out there and decided to combine a few and write my own syntax highlighting template filter using Beautiful Soup and Pygments. It looks like this: from django import template from BeautifulSoup import BeautifulSoup import pygments import pygments.lexers as lexers import pygments.formatters as formatters register = template.Library() @register.filter(name='pygmentize') def pygmentize(value): try: formatter = formatters.HtmlFormatter(style='trac') tree = BeautifulSoup(value) for code in tree.findAll('code'): if not code['class']: code['class'] = 'text' lexer = lexers.get_lexer_by_name(code['class']) new_content = pygments.highlight(code.contents[0], lexer, formatter) new_content += u"<style>%s</style>" % formatter.get_style_defs('.highlight') code.replaceWith ( "%s\n" % new_content ) content = str(tree) return content except KeyError: return value It looks for a code block like this and highlights and adds the relevant styles: <code class="python"> print "Hello World" </code> This was all working fine until a block of code I included had some html in it. Now, I know all the html I need, so I write my blog posts directly in it and when rendering to the template, just mark the post body as safe: {{ post.body|pygmentize|safe }} This approach results in any html in a code block just rendering as html (ie, not showing up). I've been playing around with using the django escape function on the code extracted from body by my filter, but I can never quite seem to get it right. I think my understanding of the content escaping just isn't complete enough. I've also tried writing the escaped version in the post body (eg &lt;), but it just comes out as text. What is the best way to mark the html for display? Am I going about this all wrong? Thanks. A: I've finally found some time to figure it out. When beautiful soup pulls in the content and it contains a tag, the tag is listed as a sub node of a list. This line is the culprit: new_content = pygments.highlight(code.contents[0], lexer, formatter) The [0] cuts off the other part of the code, it isn't being decoded incorrectly. Poor bug spotting on my part. That line needs to be replaced with: new_content = pygments.highlight(code.decodeContents(), lexer, formatter) The lessons here are make sure you know what the problem is, and know how your libraries work.
Django syntax highlighting causing character escaping issues
I've been working on my own django based blog (like everyone, I know) to sharpen up my python, and I thought adding some syntax highlighting would be pretty great. I looked at some of the snippets out there and decided to combine a few and write my own syntax highlighting template filter using Beautiful Soup and Pygments. It looks like this: from django import template from BeautifulSoup import BeautifulSoup import pygments import pygments.lexers as lexers import pygments.formatters as formatters register = template.Library() @register.filter(name='pygmentize') def pygmentize(value): try: formatter = formatters.HtmlFormatter(style='trac') tree = BeautifulSoup(value) for code in tree.findAll('code'): if not code['class']: code['class'] = 'text' lexer = lexers.get_lexer_by_name(code['class']) new_content = pygments.highlight(code.contents[0], lexer, formatter) new_content += u"<style>%s</style>" % formatter.get_style_defs('.highlight') code.replaceWith ( "%s\n" % new_content ) content = str(tree) return content except KeyError: return value It looks for a code block like this and highlights and adds the relevant styles: <code class="python"> print "Hello World" </code> This was all working fine until a block of code I included had some html in it. Now, I know all the html I need, so I write my blog posts directly in it and when rendering to the template, just mark the post body as safe: {{ post.body|pygmentize|safe }} This approach results in any html in a code block just rendering as html (ie, not showing up). I've been playing around with using the django escape function on the code extracted from body by my filter, but I can never quite seem to get it right. I think my understanding of the content escaping just isn't complete enough. I've also tried writing the escaped version in the post body (eg &lt;), but it just comes out as text. What is the best way to mark the html for display? Am I going about this all wrong? Thanks.
[ "I've finally found some time to figure it out. When beautiful soup pulls in the content and it contains a tag, the tag is listed as a sub node of a list. This line is the culprit:\nnew_content = pygments.highlight(code.contents[0], lexer, formatter)\n\nThe [0] cuts off the other part of the code, it isn't being decoded incorrectly. Poor bug spotting on my part. That line needs to be replaced with:\nnew_content = pygments.highlight(code.decodeContents(), lexer, formatter)\n\nThe lessons here are make sure you know what the problem is, and know how your libraries work.\n" ]
[ 1 ]
[]
[]
[ "django", "escaping", "pygments", "python" ]
stackoverflow_0001607979_django_escaping_pygments_python.txt
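Putting the accepted fix back into the loop from the question above gives something like the following (everything except the decodeContents() call is kept from the original filter, so tree, lexers and formatter are as defined there):

for code in tree.findAll('code'):
    if not code['class']:
        code['class'] = 'text'
    lexer = lexers.get_lexer_by_name(code['class'])
    # decodeContents() keeps nested tags intact, unlike code.contents[0]
    new_content = pygments.highlight(code.decodeContents(), lexer, formatter)
    new_content += u"<style>%s</style>" % formatter.get_style_defs('.highlight')
    code.replaceWith("%s\n" % new_content)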
Q: Problems using PyQt's Resource System I am trying to use PyQt's Resource System but it appears I have no clue what I am doing! I already have the application created, along with its GUI; I am just trying to import some images to use with the program. I used the QtDesigner to create the resource file and I compiled it using pyrcc4.exe. But when I attempt to import the resource file I get this error: Traceback (most recent call last): File "C:\Projects\main.py", line 14, in <module> import main_rc File "C:\Projects\main_rc.py", line 482, in <module> qInitResources() File "C:\Projects\main_rc.py", line 477, in qInitResources QtCore.qRegisterResourceData(0x01, qt_resource_struct, qt_resource_name, qt_resource_data) TypeError: argument 2 of qRegisterResourceData() has an invalid type What am I doing wrong? A: pyrcc generates Python 2.x code by default. Try regenerating your resource files using pyrcc with flag '-py3'
Problems using PyQt's Resource System
I am trying to use PyQt's Resource System but it appears I have no clue what I am doing! I already have the application created, along with its GUI; I am just trying to import some images to use with the program. I used the QtDesigner to create the resource file and I compiled it using pyrcc4.exe. But when I attempt to import the resource file I get this error: Traceback (most recent call last): File "C:\Projects\main.py", line 14, in <module> import main_rc File "C:\Projects\main_rc.py", line 482, in <module> qInitResources() File "C:\Projects\main_rc.py", line 477, in qInitResources QtCore.qRegisterResourceData(0x01, qt_resource_struct, qt_resource_name, qt_resource_data) TypeError: argument 2 of qRegisterResourceData() has an invalid type What am I doing wrong?
[ "pyrcc generates Python 2.x code by default.\nTry regenerating your resource files using pyrcc with flag '-py3'\n" ]
[ 19 ]
[]
[]
[ "pyqt", "pyqt4", "python" ]
stackoverflow_0001619574_pyqt_pyqt4_python.txt
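For the record above, a minimal sketch of the Python side once the resource file has been regenerated (for example with pyrcc4 -py3 main.qrc -o main_rc.py); the ":/images/logo.png" alias is an assumption, not taken from the question:

from PyQt4 import QtGui

import main_rc  # importing the compiled module registers the resources

app = QtGui.QApplication([])
pixmap = QtGui.QPixmap(":/images/logo.png")  # load an image by resource path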
Q: Why is there a need to explicitly delete the sys.exc_info() traceback? I've seen this in different code bases and just read on PyMOTW (see the first Note here). The explanation says that a cycle will be created in case the traceback is assigned to a variable from sys.exc_info()[2], but why is that? How big of a problem is this? Should I search for all uses of exc_info in my code base and make sure the traceback is deleted? A: Python 3 (update to original answer): In Python 3, the advice quoted in the question has been removed from the Python documentation. My original answer (which follows) applies only to versions of Python that include the quote in their documentation. Python 2: The Python garbage collector will, eventually, find and delete circular references like the one created by referring to a traceback stack from inside one of the stack frames themselves, so don't go back and rewrite your code. But, going forward, you could follow the advice of http://docs.python.org/library/sys.html (where it documents exc_info()) and say: exctype, value = sys.exc_info()[:2] when you need to grab the exception. Two more thoughts: First, why are you running exc_info() at all? If you want to catch an exception shouldn't you just say: try: ... except Exception as e: # or "Exception, e" in old Pythons ... do things with e ... instead of mucking about with objects inside the sys module? Second: Okay, I've given a lot of advice but haven't really answered your question. :-) Why is a cycle created? Well, in simple cases, a cycle is created when an object refers to itself: a = [1,2,3] a.append(a) Or when two objects refer to each other: a = [1,2,3] b = [4,5,a] a.append(b) In both of these cases, when the function ends the variable values will still exist because they're locked in a reference-count embrace: neither can go away until the other has gone away first! Only the modern Python garbage collector can resolve this, by eventually noticing the loop and breaking it. And so the key to understanding this situation is that a "traceback" object — the third thing (at index #2) returned by exc_info() — contains a "stack frame" for each function that was active when the exception was called. And those stack frames are not "dead" objects showing what was true when the exception was called; the frames are still alive! The function that's caught the exception is still alive, so its stack frame is a living thing, still growing and losing variable references as its code executes to handle the exception (and do whatever else it does as it finishes the "except" clause and goes about its work). So when you say t = sys.exc_info()[2], one of those stack frames inside of the traceback — the frame, in fact, belonging to the very function that's currently running — now has a variable in it named t that points back to the stack frame itself, creating a loop just like the ones that I showed above. A: The traceback contains references to all the active frames, which in turn contain references to all the local variables in those various frames -- those references are a big part of the very job of traceback and frame objects, so that's hardly surprising. So, if you add a reference back to the traceback (or fail to remove it promptly having temporarily added it), you inevitably form a big loop of references -- which interferes with garbage collection (and may stop it altogether if any of the objects in the loop belong to classes that override __del__, the finalizer method). 
Especially in a long-running program, interfering with garbage collection is not the best of ideas, because you'll be holding on to memory you don't really need (for longer than necessary, or indefinitely if you've essentially blocked garbage collection on such loops by having them include objects with finalizers). So, it's definitely best to get rid of tracebacks as soon as feasible, whether they come from exc_info or not!
Why is there a need to explicitly delete the sys.exc_info() traceback?
I've seen this in different code bases and just read on PyMOTW (see the first Note here). The explanation says that a cycle will be created in case the traceback is assigned to a variable from sys.exc_info()[2], but why is that? How big of a problem is this? Should I search for all uses of exc_info in my code base and make sure the traceback is deleted?
[ "Python 3 (update to original answer):\nIn Python 3, the advice quoted in the question has been removed from the Python documentation. My original answer (which follows) applies only to versions of Python that include the quote in their documentation.\nPython 2:\nThe Python garbage collector will, eventually, find and delete circular references like the one created by referring to a traceback stack from inside one of the stack frames themselves, so don't go back and rewrite your code. But, going forward, you could follow the advice of\nhttp://docs.python.org/library/sys.html\n(where it documents exc_info()) and say:\nexctype, value = sys.exc_info()[:2]\n\nwhen you need to grab the exception.\nTwo more thoughts:\nFirst, why are you running exc_info() at all?\nIf you want to catch an exception shouldn't you just say:\ntry:\n ...\nexcept Exception as e: # or \"Exception, e\" in old Pythons\n ... do with with e ...\n\ninstead of mucking about with objects inside the sys module?\nSecond: Okay, I've given a lot of advice but haven't really answered your question. :-)\nWhy is a cycle created? Well, in simple cases, a cycle is created when an object refers to itself:\na = [1,2,3]\na.append(a)\n\nOr when two objects refer to each other:\na = [1,2,3]\nb = [4,5,a]\na.append(b)\n\nIn both of these cases, when the function ends the variable values will still exist because they're locked in a reference-count embrace: neither can go away until the other has gone away first! Only the modern Python garbage collector can resolve this, by eventually noticing the loop and breaking it.\nAnd so the key to understanding this situation is that a \"traceback\" object β€” the third thing (at index #2) returned by exc_info() β€” contains a \"stack frame\" for each function that was active when the exception was called. And those stack frames are not \"dead\" objects showing what was true when the execption was called; the frames are still alive! The function that's caught the exception is still alive, so its stack frame is a living thing, still growing and losing variable references as its code executes to handle the exception (and do whatever else it does as it finishes the \"except\" clause and goes about its work).\nSo when you say t = sys.exc_info()[2], one of those stack frames inside of the traceback β€” the frame, in fact, belonging to the very function that's currently running β€” now has a variable in it named t that points back to the stack frame itself, creating a loop just like the ones that I showed above.\n", "The traceback contains references to all the active frames, which in turn contain references to all the local variables in those various frames -- those references are a big part of the very job of traceback and frame objects, so that's hardly surprising. 
So, if you add a reference back to the traceback (or fail to remove it promptly having temporarily added it), you inevitably form a big loop of references -- which interferes with garbage collection (and may stop it altogether if any of the objects in the loop belong to classes that override __del__, the finalizer method).\nEspecially in a long-running program, interfering with garbage collection is not the best of ideas, because you'll be holding on to memory you don't really need (for longer than necessary, or indefinitely if you've essentially blocked garbage collection on such loops by having them include objects with finalizers).\nSo, it's definitely best to get rid of tracebacks as soon as feasible, whether they come from exc_info or not!\n" ]
[ 23, 12 ]
[]
[]
[ "python" ]
stackoverflow_0001658293_python.txt
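A minimal runnable sketch of the cycle-avoiding pattern both answers converge on (the function names risky and handler are illustrative; the explicit del is the Python 2 idiom, since Python 3 cleans this up on its own):

import sys

def risky():
    return 1 / 0

def handler():
    try:
        risky()
    except ZeroDivisionError:
        exctype, value, tb = sys.exc_info()
        try:
            # tb references handler's own frame, and that frame references
            # the local name tb: exactly the cycle described above.
            print("%s: %s" % (exctype.__name__, value))
        finally:
            del tb  # break the cycle promptly instead of waiting on the GC

handler()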
Q: How to get the range of valid Numpy data types? I'm interested in finding for a particular Numpy type (e.g. np.int64, np.uint32, np.float32, etc.) what the range of all possible valid values is (e.g. np.int32 can store numbers up to 2**31-1). Of course, I guess one can theoretically figure this out for each type, but is there a way to do this at run time to ensure more portable code? A: Quoting from a numpy discussion list: That information is available via numpy.finfo() and numpy.iinfo(): In [12]: finfo('d').max Out[12]: 1.7976931348623157e+308 In [13]: iinfo('i').max Out[13]: 2147483647 In [14]: iinfo('uint8').max Out[14]: 255 Link here. A: You can use numpy.iinfo(arg).max to find the max value for integer types of arg, and numpy.finfo(arg).max to find the max value for float types of arg. >>> numpy.iinfo(numpy.uint64).min 0 >>> numpy.iinfo(numpy.uint64).max 18446744073709551615L >>> numpy.finfo(numpy.float64).max 1.7976931348623157e+308 >>> numpy.finfo(numpy.float64).min -1.7976931348623157e+308 iinfo only offers min and max, but finfo also offers useful values such as eps (the smallest number > 0 representable) and resolution (the approximate decimal number resolution of the type of arg).
How to get the range of valid Numpy data types?
I'm interested in finding for a particular Numpy type (e.g. np.int64, np.uint32, np.float32, etc.) what the range of all possible valid values is (e.g. np.int32 can store numbers up to 2**31-1). Of course, I guess one can theoretically figure this out for each type, but is there a way to do this at run time to ensure more portable code?
[ "Quoting from a numpy discussion list:\n\nThat information is available via numpy.finfo() and numpy.iinfo():\nIn [12]: finfo('d').max\nOut[12]: 1.7976931348623157e+308\n\nIn [13]: iinfo('i').max\nOut[13]: 2147483647\n\nIn [14]: iinfo('uint8').max\nOut[14]: 255\n\n\nLink here.\n", "You can use numpy.iinfo(arg).max to find the max value for integer types of arg, and numpy.finfo(arg).max to find the max value for float types of arg.\n>>> numpy.iinfo(numpy.uint64).min\n0\n>>> numpy.iinfo(numpy.uint64).max\n18446744073709551615L\n>>> numpy.finfo(numpy.float64).max\n1.7976931348623157e+308\n>>> numpy.finfo(numpy.float64).min\n-1.7976931348623157e+308\n\niinfo only offers min and max, but finfo also offers useful values such as eps (the smallest number > 0 representable) and resolution (the approximate decimal number resolution of the type of arg).\n" ]
[ 73, 59 ]
[]
[]
[ "numpy", "python", "types" ]
stackoverflow_0001658714_numpy_python_types.txt
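A small helper built on the iinfo()/finfo() calls above (dtype_range is a made-up name; it assumes NumPy is importable and dispatches on the dtype's single-character kind code):

import numpy as np

def dtype_range(dtype):
    """Return the (min, max) values representable by a NumPy dtype."""
    dtype = np.dtype(dtype)
    if dtype.kind in 'iu':       # signed or unsigned integer
        info = np.iinfo(dtype)
    elif dtype.kind == 'f':      # floating point
        info = np.finfo(dtype)
    else:
        raise TypeError("no numeric range for kind %r" % dtype.kind)
    return info.min, info.max

print(dtype_range(np.int32))    # (-2147483648, 2147483647)
print(dtype_range('uint8'))     # (0, 255)
print(dtype_range(np.float32))  # (-3.4028235e+38, 3.4028235e+38)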
Q: Python: how to make a function visible throughout a program I have two functions like the following: def fitnesscompare(x, y): if x.fitness>y.fitness: return 1 elif x.fitness==y.fitness: return 0 else: #x.fitness<y.fitness return -1 that are used with 'sort' to sort on different attributes of class instances. These are used from within other functions and methods in the program. Can I make them visible everywhere rather than having to pass them to each object in which they are used? Thanks A: The best approach (to get the visibility you ask about) is to put this def statement in a module (say fit.py), import fit from any other module that needs access to items defined in this one, and use fit.fitnesscompare in any of those modules as needed. What you ask, and what you really need, may actually be different...: as I explained in another post earlier today, custom comparison functions are not the best way to customize sorting in Python (which is why in Python 3 they're not even allowed any more): rather, a custom key-extraction function will serve you much better (future-proof, more general, faster). I.e., instead of calling, say somelist.sort(cmp=fit.fitnesscompare) call somelist.sort(key=fit.fitnessextract) where def fitnessextract(x): return x.fitness or, for really blazing speed, import operator somelist.sort(key=operator.attrgetter('fitness')) A: Defining a function with def makes that function available within whatever scope you've defined it in. At module level, using def will make that function available to any other function inside that module. Can you perhaps post an example of what is not working for you? The code you've posted appears to be unrelated to your actual problem.
Python: how to make a function visible throughout a program
I have two functions like the following: def fitnesscompare(x, y): if x.fitness>y.fitness: return 1 elif x.fitness==y.fitness: return 0 else: #x.fitness<y.fitness return -1 that are used with 'sort' to sort on different attributes of class instances. These are used from within other functions and methods in the program. Can I make them visible everywhere rather than having to pass them to each object in which they are used? Thanks
[ "The best approach (to get the visibility you ask about) is to put this def statement in a module (say fit.py), import fit from any other module that needs access to items defined in this one, and use fit.fitnesscompare in any of those modules as needed.\nWhat you ask, and what you really need, may actually be different...:\nas I explained in another post earlier today, custom comparison functions are not the best way to customize sorting in Python (which is why in Python 3 they're not even allowed any more): rather, a custom key-extraction function will serve you much better (future-proof, more general, faster). I.e., instead of calling, say\nsomelist.sort(cmp=fit.fitnesscompare)\n\ncall\nsomelist.sort(key=fit.fitnessextract)\n\nwhere\ndef fitnessextract(x):\n return x.fitness\n\nor, for really blazing speed,\nimport operator\nsomelist.sort(key=operator.attrgetter('fitness'))\n\n", "Defining a function with def makes that function available within whatever scope you've defined it in. At module level, using def will make that function available to any other function inside that module.\nCan you perhaps post an example of what is not working for you? The code you've posted appears to be unrelated to your actual problem.\n" ]
[ 6, 1 ]
[]
[]
[ "python" ]
stackoverflow_0001658722_python.txt
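Putting the two answers together, a self-contained sketch of the recommended approach (Critter is a stand-in for the asker's class, and the fit.py comment shows where the shared helper would live so any module can import it):

# fit.py -- define shared helpers once, then "import fit" anywhere
import operator

fitness_key = operator.attrgetter('fitness')

# elsewhere in the program:
class Critter(object):
    def __init__(self, fitness):
        self.fitness = fitness

population = [Critter(3), Critter(1), Critter(2)]
population.sort(key=fitness_key)                # ascending fitness
print([c.fitness for c in population])          # [1, 2, 3]
population.sort(key=fitness_key, reverse=True)  # descending, no cmp needed
print([c.fitness for c in population])          # [3, 2, 1]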
Q: searching within nested list in python I have a list: l = [['en', 60, 'command'],['sq', 34, 'komand']] I want to search for komand or sq and get l[1] returned. Can I somehow define my own matching function for list searches? A: An expression like: next(subl for subl in l if 'sq' in subl) will give you exactly the sublist you're searching for (or raise StopIteration if there is no such sublist; if the latter behavior is not what you want, pass next a second argument [[e.g., [] or None, depending on what exactly you want!]] to return in that case). So, just use this result value, or assign it to whatever name you wish, and so forth. Of course, you can easily dress this expression up into any kind of function you like, e.g.: def gimmethesublist(thelist, anitem, adef=None): return next((subl for subl in thelist if anitem in subl), adef) but if you're working with specific variables or values, coding the expression in-line may often be preferable. Edit: if you want to search for multiple items in order to find a sublist containing any one (or more) of your items, its = set(['blah', 'bluh']) next(subl for subl in l if its.intersection(subl)) and if you want to find a sublist containing all of your items, next(subl for subl in l if its.issubset(subl)) A: You can do it this way: def find(value, seq): for index, item in enumerate(seq): if value in item: return index, item In [10]: find('sq', [['en', 60, 'command'],['sq', 34, 'komand']]) Out[10]: (1, ['sq', 34, 'komand']) Or if you want a general solution: def find(fun, seq): for index, item in enumerate(seq): if fun(item): return index, item def contain(value): return lambda l: value in l In [14]: find(contain('komand'), [['en', 60, 'command'],['sq', 34, 'komand']]) Out[14]: (1, ['sq', 34, 'komand']) A: If all you're trying to do is return the first list that contains a match for any of the values then this will work. def search(inlist, matches): for li in inlist: for m in matches: if m in li: return li return None >>> l = [['en', 60, 'command'],['sq', 34, 'komand']] >>> search(l, ('sq', 'komand')) ['sq', 34, 'komand'] A: Yes: import itertools has_oneof = lambda *patterns: lambda values: any(p in values for p in patterns) result = itertools.ifilter(has_oneof('komand', 'sq'), l).next() print result # prints ['sq', 34, 'komand']
searching within nested list in python
I have a list: l = [['en', 60, 'command'],['sq', 34, 'komand']] I want to search for komand or sq and get l[1] returned. Can I somehow define my own matching function for list searches?
[ "An expression like:\nnext(subl for subl in l if 'sq' in subl)\n\nwill give you exactly the sublist you're searching for (or raise StopIteration if there is no such sublist; if the latter behavior is not what you want, pass next a second argument [[e.g, [] or None, depending on what exactly you want!]] to return in that case). So, just use this result value, or assign it to whatever name you wish, and so forth.\nOf course, you can easily dress this expression up into any kind of function you like, e.g.:\ndef gimmethesublist(thelist, anitem, adef=None):\n return next((subl for subl in thelist if anitem in subl), adef)\n\nbut if you're working with specific variables or values, coding the expression in-line may often be preferable.\nEdit: if you want to search for multiple items in order to find a sublist containing any one (or more) of your items,\nits = set(['blah', 'bluh'])\nnext(subl for subl in l if its.intersection(subl))\n\nand if you want to find a sublist containing all of your items,\nnext(subl for subl in l if its.issubset(subl))\n\n", "You can do it this way:\ndef find(value, seq):\n for index, item in enumerate(seq):\n if value in item: \n return index, item\n\nIn [10]: find('sq', [['en', 60, 'command'],['sq', 34, 'komand']])\nOut[10]: (1, ['sq', 34, 'komand'])\n\nOr if you want a general solution:\ndef find(fun, seq):\n for index, item in enumerate(seq):\n if fun(item): \n return index, item\n\ndef contain(value):\n return lambda l: value in l\n\nIn [14]: find(contain('komand'), [['en', 60, 'command'],['sq', 34, 'komand']])\nOut[14]: (1, ['sq', 34, 'komand'])\n\n", "If all you're trying to do is return the first list that contains a match for any of the values then this will work.\ndef search(inlist, matches):\n for li in inlist:\n for m in matches:\n if m in li:\n return li\n return None\n\n>>> l = [['en', 60, 'command'],['sq', 34, 'komand']]\n>>> search(l, ('sq', 'komand'))\n['sq', 34, 'komand']\n\n", "Yes:\nhas_oneof = lambda *patterns: lambda: values any(p in values for p in patterns)\nresult = itertools.ifilter(has_oneof('komand', 'sq'), l).next()\nprint result # prints ['sq', 34, 'komand']\n\n" ]
[ 11, 1, 0, 0 ]
[]
[]
[ "list", "nested", "python", "search" ]
stackoverflow_0001658505_list_nested_python_search.txt
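Combining the set-based ideas above into one hedged helper (find_sublist is an invented name) that returns the first row matching any of the given terms, or None:

l = [['en', 60, 'command'], ['sq', 34, 'komand']]

def find_sublist(rows, *terms):
    """Return the first sublist containing any of the terms, else None."""
    wanted = set(terms)
    return next((row for row in rows if wanted.intersection(row)), None)

print(find_sublist(l, 'sq'))             # ['sq', 34, 'komand']
print(find_sublist(l, 'komand', 'xx'))   # ['sq', 34, 'komand']
print(find_sublist(l, 'missing'))        # None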
Q: How can I generate a screenshot of a webpage using a server-side script? I need a server-side script (PHP, Python) to capture a webpage to a PNG, JPG, Tiff, GIF image and resize them to a thumbnail. What is the best way to accomplish this? See also: Web Page Screenshots with PHP? How can I take a screenshot of a website with PHP and GD? How might I obtain a Snapshot or Thumbnail of a web page using PHP? A: You can probably write something similar to webkit2png, unless your server already runs Mac OS X. UPDATE: I just saw the link to its Linux equivalent: khtml2png See also: Create screenshots of a web page using Python and QtWebKit Taking automated webpage screenshots with embedded Mozilla A: What needs to happen is for a program to render the page and then take an image of the page. This is a very slow and heavy process but it can be done in PHP on Windows. Also check the comments in the documentation article. For python I'd recommend reading this article. It highlights some of the solutions. There are services you can also call (via some API) that will return you an image. But usually they cost (WebShots for example) A: You'll need to: read the webpage and all the its multimedia content (images, flash, etc) utilize a browser rendering engine to render the webpage take a screenshot and save it as image first and third steps are easy, the second step is more challenging ;) A: If you are using php, you could use imagegrabscreen (PHP 5 >= 5.2.2). Imagegrabscreen: captures the whole screen.
How can I generate a screenshot of a webpage using a server-side script?
I need a server-side script (PHP, Python) to capture a webpage to a PNG, JPG, Tiff, GIF image and resize them to a thumbnail. What is the best way to accomplish this? See also: Web Page Screenshots with PHP? How can I take a screenshot of a website with PHP and GD? How might I obtain a Snapshot or Thumbnail of a web page using PHP?
[ "You can probably write something similar to webkit2png, unless your server already runs Mac OS X.\nUPDATE: I just saw the link to its Linux equivalent: khtml2png\nSee also:\n\nCreate screenshots of a web page using Python and QtWebKit\nTaking automated webpage screenshots with embedded Mozilla\n\n", "What needs to happen is for a program to render the page and then take an image of the page. This is a very slow and heavy process but it can be done in PHP on Windows.\nAlso check the comments in the documentation article.\nFor python I'd recommend reading this article. It highlights some of the solutions.\nThere are services you can also call (via some API) that will return you an image. But usually they cost (WebShots for example)\n", "You'll need to:\n\nread the webpage and all the its multimedia content (images, flash, etc)\nutilize a browser rendering engine to render the webpage\ntake a screenshot and save it as image\n\nfirst and third steps are easy, the second step is more challenging ;)\n", "If you are using php, you could use imagegrabscreen (PHP 5 >= 5.2.2). Imagegrabscreen: captures the whole screen.\n" ]
[ 14, 7, 2, 0 ]
[]
[]
[ "php", "python", "screenshot", "server_side_scripting" ]
stackoverflow_0000713938_php_python_screenshot_server_side_scripting.txt
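None of the answers show the resize step, so here is one hedged end-to-end sketch in Python: it assumes a command-line renderer such as wkhtmltoimage is installed on the server's PATH (any of the tools mentioned above could be swapped in) and uses PIL for the thumbnail; capture_thumbnail is a made-up name:

import subprocess
from PIL import Image

def capture_thumbnail(url, page_png, thumb_png, size=(200, 150)):
    # Render the page with an external headless tool (assumed installed).
    subprocess.check_call(['wkhtmltoimage', url, page_png])
    img = Image.open(page_png)
    img.thumbnail(size)        # shrinks in place, keeping aspect ratio
    img.save(thumb_png, 'PNG')

# capture_thumbnail('http://example.com/', 'page.png', 'thumb.png')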